partition - Partition parallel pool - MATLAB
Partition parallel pool
Since R2025a
Syntax
newPool = partition(pool,"MaxNumWorkersPerHost",MaxNumWorkersPerHost)
newPool = partition(pool,"MaxNumWorkersPerGPU",MaxNumWorkersPerGPU)
newPool = partition(pool,"Workers",Workers)
[newPool,remainingPool] = partition(pool,___)
Description
partition divides an existing parallel pool into pool partitions, allowing you to use specific resources from the existing pool. Use the partition function to create pools that target specific resources or to create multiple pools to execute more than one parallel workflow at a time.
newPool = partition(pool,"MaxNumWorkersPerHost",MaxNumWorkersPerHost) creates a pool object, newPool, with up to MaxNumWorkersPerHost workers from each host contributing workers to the input pool, pool.

A host is a machine that runs MATLAB® workers. On a local pool, the host is your computer. On a cluster pool, the hosts are the machines within the cluster. A cluster pool can have workers running on multiple hosts.

If a host runs fewer pool workers than the number of workers you request, the partition function allocates all available workers from that host to newPool.
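For instance, this sketch keeps at most two workers from each host; the myCluster profile here is a hypothetical cluster profile, and any host that contributes fewer than two workers to the input pool gives all of its workers to newPool.

pool = parpool("myCluster",8);                       % hypothetical cluster profile
newPool = partition(pool,"MaxNumWorkersPerHost",2);  % at most 2 workers per host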
newPool = partition(pool,"MaxNumWorkersPerGPU",MaxNumWorkersPerGPU) creates a pool object, newPool, with up to MaxNumWorkersPerGPU workers for each unique GPU in the input pool, pool.

A GPU must be attached to the same host as a MATLAB worker to be assigned to the worker. If the partition function is unable to assign the requested number of workers to a GPU, the function allocates all workers that it can assign to the GPU to newPool. On hosts with multiple GPUs, the number of MATLAB workers and GPUs on the host determines the maximum number of workers that the partition function can assign to each GPU. On hosts that do not have a GPU, the partition function returns an empty pool unless other hosts in the pool have GPUs.

This partitioning method uses the result of the gpuDevice function on each parallel pool worker to determine which worker is associated with which GPU. The partition function considers only workers with an allocated GPU and does not modify the property values of any GPU. The partition function executes code on all workers in pool to gather gpuDevice information. If any worker is busy executing a parfeval computation, the partition function waits until the parfeval computation completes.
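As a minimal sketch, assuming every worker in the pool has access to a GPU, you can check which GPU each worker has selected before partitioning. parfevalOnAll runs the query on every worker and concatenates the results.

f = parfevalOnAll(pool,@() getfield(gpuDevice,'Index'),1);  % GPU index per worker
gpuIndices = fetchOutputs(f)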
newPool = partition(pool,"Workers",Workers) partitions the input pool object, pool, to create a new pool object, newPool, containing only the workers specified by the parallel.Worker object or array of parallel.Worker objects, Workers. Every worker in Workers must be a member of the Workers property of the input pool.
[newPool,remainingPool] = partition(pool,___) also returns remainingPool, a pool object that contains the set of workers in pool but not in newPool.
Examples
Partition a parallel pool to allocate one worker per GPU, which is useful for GPU-intensive computations.
For this example, the myCluster profile requests a parallel pool with four GPUs. Start a parallel pool of 12 workers using the myCluster profile.
pool = parpool("myCluster",12);
Starting parallel pool (parpool) using the 'myCluster' profile ...
Connected to parallel pool with 12 workers.
Partition the pool to assign one worker per GPU.
[gpuWorkers,cpuWorkers] = partition(pool,"MaxNumWorkersPerGPU",1);
Display the number of workers in the new pools.
fprintf("Number of GPU workers: %d\n" + ... "Number of CPU workers: %d\n",gpuWorkers.NumWorkers,cpuWorkers.NumWorkers);
Number of GPU workers: 4
Number of CPU workers: 8
You can use the CPU pool to execute work at the same time as the GPU pool.
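For instance, you can submit independent tasks to both pools with parfeval, and the tasks run concurrently. Here trainOnGPU, preprocessOnCPU, and their input data are hypothetical placeholders.

fGPU = parfeval(gpuWorkers,@trainOnGPU,1,trainingData);  % hypothetical GPU task
fCPU = parfeval(cpuWorkers,@preprocessOnCPU,1,rawData);  % hypothetical CPU task
gpuResult = fetchOutputs(fGPU);
cpuResult = fetchOutputs(fCPU);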
Allocating one worker per host can be beneficial for tasks that require a significant amount of memory or CPU resources per worker.
Start a parallel pool with 15 workers using the remote cluster profile myMJSCluster. The myMJSCluster profile creates a pool with three hosts.
myClusterPool = parpool("myMJSCluster",15);
Starting parallel pool (parpool) using the 'myMJSCluster' profile ...
Connected to parallel pool with 15 workers.
Partition the myClusterPool pool to have one worker per host.
[perHostPool,remainingPool] = partition(myClusterPool,"MaxNumWorkersPerHost",1);
Display the number of workers in the new pools.
fprintf("Number of workers in perHostPool: %d\n" + ... "Number of workers in remainingPool: %d\n",perHostPool.NumWorkers,remainingPool.NumWorkers)
Number of workers in perHostPool: 3
Number of workers in remainingPool: 12
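With one worker per host, each task in perHostPool has the full resources of its host available. As a sketch, parfevalOnAll runs a task once on every worker in the partition; processLargeFile and its input file are hypothetical.

f = parfevalOnAll(perHostPool,@processLargeFile,1,"data.bin");  % hypothetical memory-intensive task
hostResults = fetchOutputs(f);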
Isolating a single worker can be useful when you need to run tasks on a dedicated worker.
Start a parallel pool, and use the Workers property of the pool object to obtain the parallel.Worker objects for the workers in the pool.
pool = parpool("Processes");
allWorkers = pool.Workers;
Partition the pool to isolate the first worker in the allWorkers
array.
[singleWkrPool,remainingPool] = partition(pool,"Workers",allWorkers(1));
Inspect the partitioned single-worker pool.
singleWkrPool
singleWkrPool =
ProcessPool with properties:
Connected: true
NumWorkers: 1
Busy: false
Cluster: Processes (Local Cluster)
AttachedFiles: {}
AutoAddClientPath: true
FileStore: [1x1 parallel.FileStore]
ValueStore: [1x1 parallel.ValueStore]
IdleTimeout: 30 minutes (26 minutes remaining)
SpmdEnabled: true
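You can then send work that must run on a dedicated worker to singleWkrPool while remainingPool stays free for other tasks. writeLog here is a hypothetical function.

f = parfeval(singleWkrPool,@writeLog,0,"run started");  % hypothetical dedicated task
wait(f)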
Input Arguments
pool — Input pool
Input pool object, specified as a parallel.Pool object.
Example: pool = parpool;
MaxNumWorkersPerGPU — Maximum number of workers per GPU
Maximum number of workers per GPU to partition, specified as a positive integer.
The partition function attempts to assign up to MaxNumWorkersPerGPU workers to each unique GPU of the input pool. If a GPU has fewer than MaxNumWorkersPerGPU assigned workers, partition selects all available workers.
MaxNumWorkersPerHost — Maximum number of workers per host
Maximum number of workers per host to partition, specified as a positive integer.
The partition function attempts to assign up to MaxNumWorkersPerHost workers from each unique host of the input pool. If a host has fewer than MaxNumWorkersPerHost available workers, partition selects all available workers.
Workers — Workers to partition
Workers to partition, specified as a parallel.Worker object or an array of parallel.Worker objects.
To obtain parallel.Worker objects, you can use the Workers property of the input pool. You can also use the getCurrentWorker function on the pool workers you want to target to return their parallel.Worker objects.
Example: allWorkers = pool.Workers;
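As a sketch, you can capture the parallel.Worker object of whichever worker runs a parfeval task, then partition on that worker:

f = parfeval(pool,@getCurrentWorker,1);  % returns the worker that ran the task
targetWorker = fetchOutputs(f);
targetPool = partition(pool,"Workers",targetWorker);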
Output Arguments
newPool — Parallel pool with partitioned workers
Parallel pool with partitioned workers, returned as a parallel.Pool object.
remainingPool — Parallel pool with remaining workers
Parallel pool with remaining workers, returned as a parallel.Pool object.
Tips
- The pool partition contains some workers from the input pool; therefore, both the input pool and its partitions can schedule work on the same workers. Consequently, using one pool can delay the execution of work on other pools. Deleting any pool instance deletes the underlying collection of workers and any parallel job running on the pool, and all pools using the same collection of resources become invalid. If you no longer need a pool partition, allow the instance to go out of scope rather than explicitly deleting it.
- The NumWorkers property value of the pool partition reflects the number of workers the pool can use.
- The new pool shares some properties with the input pool, and changes to these properties affect all pools; see the sketch after this list. The shared properties are:
  - AttachedFiles
  - FileStore
  - ValueStore
  - IdleTimeout
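For example, because IdleTimeout is one of the shared properties, setting it on a partition also changes it on the input pool. A minimal sketch:

newPool = partition(pool,"MaxNumWorkersPerHost",1);
newPool.IdleTimeout = 60;  % also changes pool.IdleTimeout
clear newPool              % let the partition go out of scope rather than calling delete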
Version History
Introduced in R2025a