---
title: Resource Configuration
---
Administrators can define [Resource Policies](../resource_policies.md) that implement resource quotas and
reservations for different user groups, prioritizing workload usage across the available resources.
Under the **Resource Configuration** section, administrators define the available resources and the way in which they
will be allocated to different workloads.
![Resource configuration page](../../img/resource_configuration.png)
The Resource Configuration settings page shows the [currently provisioned](#applying-resource-configuration) configuration:
the defined resource pools, resource profiles, and the resource allocation architecture.
## Resource Pools
A resource pool is an aggregation of resources available for use, such as a Kubernetes cluster or a GPU superpod.
Administrators specify the total number of resources available in each pool, and the resource policy manager
assigns workloads to a pool only up to that number.
Administrators control the execution priority within a pool across the resource profiles making use of it (e.g. if jobs
of profile A and jobs of profile B currently need to run in a pool, allocate resources for profile A jobs first or vice
versa).
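The pool-priority behavior described above can be sketched roughly as follows. This is an illustrative Python sketch: `Pool`, `allocate`, and the allocation rule are assumptions made for clarity, not the product's actual scheduler or API.

```python
from dataclasses import dataclass

@dataclass
class Pool:
    name: str
    capacity: int     # total resources in the pool
    used: int = 0     # resources currently assigned to jobs

    @property
    def available(self) -> int:
        return self.capacity - self.used

def allocate(pool, profiles_by_priority, pending):
    """Walk profiles in execution-priority order, starting each pending job
    that still fits in the pool. `pending` maps profile name -> list of
    per-job resource requirements. Returns the (profile, need) pairs started."""
    started = []
    for profile in profiles_by_priority:
        for need in pending.get(profile, []):
            if need <= pool.available:        # never exceed the pool's capacity
                pool.used += need
                started.append((profile, need))
    return started

pool = Pool("H100", capacity=8)
runs = allocate(pool, ["A", "B"], {"A": [4, 4], "B": [4]})
print(runs)
# Profile A's two 4-GPU jobs fill the pool; profile B's job stays pending
```

Because profile A is listed first, its jobs consume the pool before profile B's jobs are considered.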
The resource pool cards are displayed at the top of the Resource Configuration settings page. Each card displays the
following information:
![Resource pool card](../../img/resource_configuration_pool_card.png)
* Pool name
* Number of resources currently in use out of the total available resources
* Execution Priority - List of [linked profiles](#connecting-profiles-to-pools) in order of execution priority.
## Resource Profiles
Resource profiles represent the resource consumption requirements of jobs, such as the number of GPUs needed. They are
the interface that administrators use to provide users with access to the available resource pools based on their job
resource requirements via [Resource Policies](../resource_policies.md).
Administrators can control the resource pool allocation precedence within a profile (e.g. only run jobs on `pool B` if
`pool A` cannot currently satisfy the profile's resource requirements).
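As a rough sketch of that precedence rule (the `place_job` function and its data shapes are illustrative assumptions, not the product's API):

```python
def place_job(need, pools_in_precedence):
    """Try each linked pool in precedence order; run the job in the first
    pool with enough free resources, or leave it pending (None)."""
    for i, (name, free) in enumerate(pools_in_precedence):
        if need <= free:
            pools_in_precedence[i] = (name, free - need)
            return name
    return None

pools = [("pool A", 2), ("pool B", 8)]
placed = place_job(4, pools)
print(placed)  # pool A cannot fit a 4-resource job, so it lands in pool B
```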
Administrators can control the queuing priority within a profile across resource policies making use of it (e.g. if the
R&D team and DevOps team both have pending jobs - run the R&D team's jobs first or vice versa).
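The queuing-priority rule can be sketched as a stable sort of the pending queue by policy priority (illustrative only; `order_queue` and the policy names are assumptions):

```python
def order_queue(pending_jobs, policies_by_priority):
    """pending_jobs: (policy, job_id) pairs; returns them reordered so that
    higher-priority policies' jobs run first (stable within a policy)."""
    rank = {policy: i for i, policy in enumerate(policies_by_priority)}
    return sorted(pending_jobs, key=lambda job: rank[job[0]])

queue = order_queue([("DevOps", "job-1"), ("R&D", "job-2"), ("DevOps", "job-3")],
                    ["R&D", "DevOps"])   # R&D is higher priority
print(queue)
# R&D's job-2 moves to the front; DevOps jobs keep their relative order
```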
The resource profile cards are displayed at the bottom of the Resource Configuration settings page. Each card displays
the following information:
![Resource profile card](../../img/resource_configuration_profile_card.png)
* Profile name
* Number of resources allocated to jobs in this profile
* List of [pool links](#connecting-profiles-to-pools)
* Number of currently pending jobs
* Number of currently running jobs
* Number of resource policies. Click to open the resource policy list and to order queuing priority.
## Example Workflow
You have GPUs spread across a local H100 superpod and additional bare metal servers, as well as on AWS (managed
by an autoscaler). Assume that most of your resources are already assigned to jobs, and only 16 resources are currently
available: 8 in the H100 resource pool and 8 in the Bare Metal pool:
![Example resource pools](../../img/resource_example_pools.png)
Teams' jobs have varying resource requirements of 0.5, 2, 4, and 8 GPUs. Resource profiles are defined to reflect these:
![Example resource profiles](../../img/resource_example_profile.png)
The different jobs will be routed to different resource pools by connecting the profiles to the resource pools. Jobs
enqueued through the profiles will be run in the pools where there are available resources in order of their priority.
For example, the H100 pool will run jobs with the following precedence: 2 GPU jobs first, then 4 GPU ones, then 8 GPU,
and lastly 0.5 GPU.
![Example profile priority](../../img/resource_example_profile_priority.png)
Resource policies are implemented for two teams:
* Dev team
* Research Team
Each team has a resource policy configured with 8 reserved resources and a 16-resource limit. Both teams make use of the
`4xGPU` profile (i.e. each job running through this profile requires 4 resources).
![Example resource policy](../../img/resource_example_policy.png)
The Dev team is prioritized over the Research team by placing it higher in the Resource Profile's Policies Priority list:
![Example resource policy priority](../../img/resource_example_policy_priority.png)
Both the Dev team and the Research team enqueue four 4-resource jobs each; the Dev team's jobs will be allocated
resources first. The `4xGPU` resource profile is connected to two resource pools: `Bare Metal Low END GPUs` (with the
`4 GPU Low End` link) and `H100 Half a Superpod` (with the `4 GPU H100` link).
![Example resource profile-pool connections](../../img/resource_example_profile_pool_links.png)
Resources are assigned from the `Bare Metal` pool first (precedence set on the resource profile card):
![Example resource pool precedence](../../img/resource_example_pool_priority.png)
If the first pool cannot currently satisfy the profile's resource requirements, resources are assigned from the next
listed pool. Notice that the first pool in the image below has 8 available resources, so it can run two 4-resource jobs.
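Putting the pieces together, one plausible outcome under these assumptions can be sketched in Python. This is illustrative only: `schedule` and its reservation and precedence rules are simplified assumptions, not the product's scheduler, and here the two 8-resource reservations claim all 16 available resources, so neither team can exceed its reservation.

```python
def schedule(jobs, pools, reserved):
    """jobs: (team, need) pairs in queue order (higher-priority policy first);
    pools: ordered [name, free] lists; reserved: guaranteed resources per team."""
    used = {team: 0 for team in reserved}
    placements = []
    for team, need in jobs:
        if used[team] + need > reserved[team]:
            continue  # cap each team at its reservation: the other team's
                      # reservation leaves no spare resources in this scenario
        for pool in pools:
            if need <= pool[1]:           # first pool in precedence with room
                pool[1] -= need
                used[team] += need
                placements.append((team, pool[0]))
                break
    return placements

pools = [["Bare Metal", 8], ["H100", 8]]         # precedence: Bare Metal first
jobs = [("Dev", 4)] * 4 + [("Research", 4)] * 4  # Dev-first queue order
placements = schedule(jobs, pools, reserved={"Dev": 8, "Research": 8})
print(placements)
# Dev's first two jobs fill Bare Metal; Research's first two fill H100
```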