vllm.config.scheduler
SchedulerConfig ¶
Scheduler configuration.
Source code in vllm/config/scheduler.py
async_scheduling class-attribute
instance-attribute
¶
async_scheduling: bool = False
EXPERIMENTAL: If set to True, perform async scheduling. This may help reduce CPU overheads, leading to better latency and throughput. However, async scheduling is currently not supported with some features such as structured outputs, speculative decoding, and pipeline parallelism.
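For example, assuming your vLLM version exposes this field as an async_scheduling engine argument (check your version), it can be enabled as follows; the model name is a placeholder:

```python
# Sketch only: async_scheduling is assumed to be exposed as an engine
# argument that populates SchedulerConfig.async_scheduling.
from vllm import LLM

llm = LLM(
    model="facebook/opt-125m",  # placeholder model
    async_scheduling=True,      # EXPERIMENTAL: may reduce CPU scheduling overheads
)
```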
chunked_prefill_enabled class-attribute
instance-attribute
¶
True if chunked prefill is enabled.
cuda_graph_sizes class-attribute
instance-attribute
¶
CUDA graph capture sizes:
1. If none is provided, the default is set to [min(max_num_seqs * 2, 512)].
2. If one value is provided, the capture list follows the pattern: [1, 2, 4] + [i for i in range(8, cuda_graph_sizes + 1, 8)].
3. If more than one value is provided (e.g. 1 2 128), the capture list follows the provided list.
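The three rules above, written out as a small standalone sketch (the helper name is illustrative, not part of vLLM):

```python
# Illustration only: the capture list implied by the three rules above.
def capture_sizes(cuda_graph_sizes: list[int], max_num_seqs: int) -> list[int]:
    if not cuda_graph_sizes:                      # rule 1: default
        return [min(max_num_seqs * 2, 512)]
    if len(cuda_graph_sizes) == 1:                # rule 2: pattern up to the given size
        size = cuda_graph_sizes[0]
        return [1, 2, 4] + [i for i in range(8, size + 1, 8)]
    return list(cuda_graph_sizes)                 # rule 3: use the list as given

print(capture_sizes([], max_num_seqs=128))            # [256]
print(capture_sizes([16], max_num_seqs=128))          # [1, 2, 4, 8, 16]
print(capture_sizes([1, 2, 128], max_num_seqs=128))   # [1, 2, 128]
```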
delay_factor class-attribute
instance-attribute
¶
delay_factor: float = 0.0
Apply a delay (of delay_factor multiplied by the previous prompt's latency) before scheduling the next prompt.
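In other words, the wait time is simply the product of the two quantities; a trivial illustration (names are placeholders, not vLLM internals):

```python
# Illustration of the delay formula: delay = delay_factor * previous prompt latency.
delay_factor = 0.5
previous_prompt_latency_s = 0.2          # latency of the previous prompt, in seconds
delay_s = delay_factor * previous_prompt_latency_s
print(delay_s)                           # 0.1 -> wait ~100 ms before the next prompt
```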
disable_chunked_mm_input class-attribute
instance-attribute
¶
disable_chunked_mm_input: bool = False
If set to True and chunked prefill is enabled, a multimodal item will not be partially scheduled. Only used in V1. This ensures that if a request has a mixed prompt (such as text tokens TTTT followed by image tokens IIIIIIIIII) where only some image tokens can be scheduled (e.g. TTTTIIIII, leaving IIIII), the request will be scheduled as TTTT in one step and IIIIIIIIII in the next.
disable_hybrid_kv_cache_manager class-attribute
instance-attribute
¶
disable_hybrid_kv_cache_manager: bool = False
If set to True, the KV cache manager will allocate the same size of KV cache for all attention layers, even if there are multiple types of attention layers, such as full attention and sliding window attention.
enable_chunked_prefill class-attribute
instance-attribute
¶
enable_chunked_prefill: SkipValidation[bool] = None
If True, prefill requests can be chunked based on the remaining max_num_batched_tokens.
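A schematic sketch of what chunking against the remaining budget means (illustrative only; this is not the scheduler's actual code):

```python
# Illustrative only: a long prompt is split across steps so that each step
# stays within the per-iteration token budget (max_num_batched_tokens).
def chunk_prompt(prompt_len: int, remaining_budget_per_step: list[int]) -> list[int]:
    chunks = []
    for budget in remaining_budget_per_step:
        if prompt_len == 0:
            break
        scheduled = min(prompt_len, budget)
        chunks.append(scheduled)
        prompt_len -= scheduled
    return chunks

# A 10_000-token prompt with 4_096 tokens of budget left in each step:
print(chunk_prompt(10_000, [4_096, 4_096, 4_096]))  # [4096, 4096, 1808]
```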
encoder_cache_size class-attribute
instance-attribute
¶
Multimodal encoder cache size, only used in V1.
NOTE: This is not currently configurable. It will be overridden by max_num_batched_tokens if the maximum multimodal embedding size is larger.
is_multimodal_model class-attribute
instance-attribute
¶
is_multimodal_model: bool = False
True if the model is multimodal.
long_prefill_token_threshold class-attribute
instance-attribute
¶
long_prefill_token_threshold: int = 0
For chunked prefill, a request is considered long if the prompt is longer than this number of tokens.
max_long_partial_prefills class-attribute
instance-attribute
¶
max_long_partial_prefills: int = 1
For chunked prefill, the maximum number of prompts longer than long_prefill_token_threshold that will be prefilled concurrently. Setting this to a value less than max_num_partial_prefills allows shorter prompts to jump ahead of longer prompts in some cases, improving latency.
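A sketch of how this limit interacts with long_prefill_token_threshold and max_num_partial_prefills (documented below); the admission check here is illustrative, not vLLM's actual scheduler logic:

```python
# Illustrative admission check: a prompt counts as "long" when it exceeds
# long_prefill_token_threshold, and at most max_long_partial_prefills long
# prompts (out of max_num_partial_prefills total partial prefills) run
# concurrently.
def can_admit(prompt_len: int,
              active_partial: int, active_long_partial: int,
              long_prefill_token_threshold: int = 512,
              max_num_partial_prefills: int = 4,
              max_long_partial_prefills: int = 1) -> bool:
    if active_partial >= max_num_partial_prefills:
        return False
    is_long = prompt_len > long_prefill_token_threshold
    if is_long and active_long_partial >= max_long_partial_prefills:
        return False  # short prompts can still jump ahead of long ones
    return True

print(can_admit(8_000, active_partial=1, active_long_partial=1))  # False (long slot taken)
print(can_admit(200,   active_partial=1, active_long_partial=1))  # True  (short prompt)
```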
max_model_len class-attribute
instance-attribute
¶
max_model_len: SkipValidation[int] = None
Maximum length of a sequence (including prompt and generated text). This is primarily set in ModelConfig, and that value should be manually duplicated here.
max_num_batched_tokens class-attribute
instance-attribute
¶
max_num_batched_tokens: SkipValidation[int] = None
Maximum number of tokens to be processed in a single iteration.
This config has no static default. If left unspecified by the user, it will be set in EngineArgs.create_engine_config based on the usage context.
max_num_encoder_input_tokens class-attribute
instance-attribute
¶
Multimodal encoder compute budget, only used in V1.
NOTE: This is not currently configurable. It will be overridden by max_num_batched_tokens if the maximum multimodal embedding size is larger.
max_num_partial_prefills class-attribute
instance-attribute
¶
max_num_partial_prefills: int = 1
For chunked prefill, the maximum number of sequences that can be partially prefilled concurrently.
max_num_seqs class-attribute
instance-attribute
¶
max_num_seqs: SkipValidation[int] = None
Maximum number of sequences to be processed in a single iteration.
This config has no static default. If left unspecified by the user, it will be set in EngineArgs.create_engine_config based on the usage context.
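These budgets are normally supplied through the engine arguments, which then populate this config; a minimal sketch where the model name and values are placeholders:

```python
# Minimal sketch: scheduler budgets supplied via engine arguments.
# Model name and values are placeholders, not recommendations.
from vllm import LLM

llm = LLM(
    model="facebook/opt-125m",
    max_model_len=4096,           # mirrors ModelConfig.max_model_len
    max_num_seqs=256,             # sequences per iteration
    max_num_batched_tokens=8192,  # tokens per iteration
    enable_chunked_prefill=True,  # allow long prompts to be split across steps
)
```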
num_lookahead_slots class-attribute
instance-attribute
¶
num_lookahead_slots: int = 0
The number of slots to allocate per sequence per step, beyond the known token ids. This is used in speculative decoding to store KV activations of tokens which may or may not be accepted.
NOTE: This will be replaced by speculative config in the future; it is present to enable correctness tests until then.
policy class-attribute
instance-attribute
¶
policy: SchedulerPolicy = 'fcfs'
The scheduling policy to use:
- "fcfs" means first come first served, i.e. requests are handled in order of arrival.
- "priority" means requests are handled based on given priority (lower value means earlier handling), with time of arrival deciding any ties.
preemption_mode class-attribute
instance-attribute
¶
preemption_mode: Optional[PreemptionMode] = None
Whether to perform preemption by swapping or recomputation. If not specified, we determine the mode as follows: We use recomputation by default since it incurs lower overhead than swapping. However, when the sequence group has multiple sequences (e.g., beam search), recomputation is not currently supported. In such a case, we use swapping instead.
runner_type class-attribute
instance-attribute
¶
runner_type: RunnerType = 'generate'
The runner type to launch for the model.
scheduler_cls class-attribute
instance-attribute
¶
The scheduler class to use. "vllm.core.scheduler.Scheduler" is the default scheduler. Can be a class directly or the path to a class in the form "mod.custom_class".
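A sketch of pointing the engine at a custom scheduler by dotted path, assuming scheduler_cls is exposed as an engine argument in your vLLM version; the module and class below are hypothetical:

```python
# Sketch only: "my_pkg.my_scheduler.MyScheduler" is a hypothetical path to a
# custom scheduler class; a class object can be passed instead of a string.
from vllm import LLM

llm = LLM(
    model="facebook/opt-125m",  # placeholder model
    scheduler_cls="my_pkg.my_scheduler.MyScheduler",
)
```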
send_delta_data class-attribute
instance-attribute
¶
send_delta_data: bool = False
Private API. If used, the scheduler sends delta data to workers instead of the entire data. It should only be enabled when the SPMD worker architecture is enabled, i.e. VLLM_USE_RAY_SPMD_WORKER=1.
__post_init__ ¶
Source code in vllm/config/scheduler.py
_verify_args ¶
_verify_args() -> Self
Source code in vllm/config/scheduler.py
compute_hash ¶
compute_hash() -> str
WARNING: Whenever a new field is added to this config, ensure that it is included in the factors list if it affects the computation graph.
Provide a hash that uniquely identifies all the configs that affect the structure of the computation graph from input ids/embeddings to the final hidden states, excluding anything before input ids/embeddings and after the final hidden states.
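The usual shape of such a method is to collect the graph-affecting fields into a factors list and hash their string representation; a schematic sketch, not the exact vLLM implementation:

```python
# Schematic sketch of a factors-based config hash (not the exact vLLM code).
import hashlib

def compute_hash_sketch(config) -> str:
    # Only fields that affect the computation graph belong in this list;
    # the two below are examples, not the authoritative set.
    factors = [
        getattr(config, "max_num_batched_tokens", None),
        getattr(config, "enable_chunked_prefill", None),
    ]
    return hashlib.sha256(repr(factors).encode()).hexdigest()
```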