Run LeaderWorkerSet
This page shows how to leverage Kueue’s scheduling and resource management capabilities when running LeaderWorkerSet.
We demonstrate how to support scheduling LeaderWorkerSets where a group of Pods constitutes a unit of admission represented by a Workload. This allows you to scale LeaderWorkerSets up and down group by group.
This integration is based on the Plain Pod Group integration.
This guide is for serving users who have a basic understanding of Kueue. For more information, see Kueue’s overview.
Before you begin
- Learn how to install Kueue with a custom manager configuration.
- Ensure that you have the leaderworkerset.x-k8s.io/leaderworkerset integration enabled, for example as in the configuration sketch after this list. Also, follow the steps in Run Plain Pods to learn how to enable and configure the pod integration.
- Check Administer cluster quotas for details on the initial Kueue setup.
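The snippet below is a minimal sketch of the relevant part of the manager configuration; it assumes the v1beta1 Configuration API and shows only the integrations section:

```yaml
apiVersion: config.kueue.x-k8s.io/v1beta1
kind: Configuration
integrations:
  frameworks:
  - "leaderworkerset.x-k8s.io/leaderworkerset"
  - "pod"
```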
Running a LeaderWorkerSet admitted by Kueue
When running a LeaderWorkerSet on Kueue, take into consideration the following aspects:
a. Queue selection
The target local queue should be specified in the metadata.labels section of the LeaderWorkerSet configuration.
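For instance, the label could look like the following (the queue name user-queue is a placeholder; use an existing LocalQueue in the LeaderWorkerSet’s namespace):

```yaml
metadata:
  labels:
    kueue.x-k8s.io/queue-name: user-queue
```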
b. Configure the resource needs
The resource needs of the workload can be configured in the container specs of the Pod templates under spec.leaderWorkerTemplate (that is, in leaderTemplate and workerTemplate).
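For example, CPU requests for the worker Pods could be set as follows (the container name and values are illustrative):

```yaml
spec:
  leaderWorkerTemplate:
    workerTemplate:
      spec:
        containers:
        - name: worker
          resources:
            requests:
              cpu: "100m"
```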
c. Scaling
You can scale a LeaderWorkerSet up or down by changing its .spec.replicas.
The unit of scaling is an LWS group: by changing the number of replicas in the LWS you create or delete entire groups of Pods. After a scale-up, each newly created group of Pods is held back by a scheduling gate until the corresponding Workload is admitted.
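For example, assuming a LeaderWorkerSet named sample-leaderworkerset and that the scale subresource is available in your installation, you could scale it to three groups as shown below (patching .spec.replicas directly is equivalent):

```shell
kubectl scale leaderworkerset/sample-leaderworkerset --replicas=3
```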
Example
Here is a sample LeaderWorkerSet:
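The manifest below is an illustrative sketch rather than a canonical sample; the name, image, group size, and resource values are placeholders, and it assumes a LocalQueue named user-queue exists in the namespace:

```yaml
apiVersion: leaderworkerset.x-k8s.io/v1
kind: LeaderWorkerSet
metadata:
  name: sample-leaderworkerset
  labels:
    kueue.x-k8s.io/queue-name: user-queue
spec:
  replicas: 2
  leaderWorkerTemplate:
    size: 3
    leaderTemplate:
      spec:
        containers:
        - name: leader
          image: nginx:1.27
          resources:
            requests:
              cpu: "100m"
    workerTemplate:
      spec:
        containers:
        - name: worker
          image: nginx:1.27
          resources:
            requests:
              cpu: "100m"
```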
You can create the LeaderWorkerSet using the following command:
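Assuming the manifest above is saved as sample-leaderworkerset.yaml:

```shell
kubectl create -f sample-leaderworkerset.yaml
```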