vllm.utils
Modules:
Name | Description |
---|---|
deep_gemm | Compatibility wrapper for DeepGEMM API changes. |
flashinfer | Compatibility wrapper for FlashInfer API changes. |
jsontree | Helper functions to work with nested JSON structures. |
tensor_schema | |
MULTIMODAL_MODEL_MAX_NUM_BATCHED_TOKENS module-attribute ¶
POOLING_MODEL_MAX_NUM_BATCHED_TOKENS module-attribute ¶
STR_DTYPE_TO_TORCH_DTYPE module-attribute ¶
STR_DTYPE_TO_TORCH_DTYPE = {
"float32": float32,
"half": half,
"bfloat16": bfloat16,
"float": float,
"fp8": uint8,
"fp8_e4m3": uint8,
"fp8_e5m2": uint8,
"int8": int8,
"fp8_inc": float8_e4m3fn,
}
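This mapping is how string dtype names (e.g. from CLI flags) are resolved to concrete torch dtypes. A quick illustration:

```python
import torch

from vllm.utils import STR_DTYPE_TO_TORCH_DTYPE

assert STR_DTYPE_TO_TORCH_DTYPE["bfloat16"] is torch.bfloat16
# The fp8 variants map to uint8: the bytes are stored raw and
# reinterpreted by the kernels that consume them.
assert STR_DTYPE_TO_TORCH_DTYPE["fp8_e4m3"] is torch.uint8
```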
STR_DUAL_CHUNK_FLASH_ATTN_VAL module-attribute ¶
STR_DUAL_CHUNK_FLASH_ATTN_VAL: str = "DUAL_CHUNK_FLASH_ATTN"
STR_NOT_IMPL_ENC_DEC_BACKEND module-attribute ¶
STR_NOT_IMPL_ENC_DEC_BACKEND = "XFormers and Flash-Attention are the only backends currently supported with encoder/decoder models."
STR_NOT_IMPL_ENC_DEC_CHUNKED_PREFILL module-attribute ¶
STR_NOT_IMPL_ENC_DEC_CHUNKED_PREFILL = (
"Chunked prefill for encoder/decoder models "
+ "is not currently supported."
)
STR_NOT_IMPL_ENC_DEC_ERR_STRS module-attribute ¶
STR_NOT_IMPL_ENC_DEC_ERR_STRS = {
"STR_NOT_IMPL_ENC_DEC_SWA": STR_NOT_IMPL_ENC_DEC_SWA,
"STR_NOT_IMPL_ENC_DEC_PREFIX_CACHE": STR_NOT_IMPL_ENC_DEC_PREFIX_CACHE,
"STR_NOT_IMPL_ENC_DEC_CHUNKED_PREFILL": STR_NOT_IMPL_ENC_DEC_CHUNKED_PREFILL,
"STR_NOT_IMPL_ENC_DEC_LOGIT_SOFTCAP": STR_NOT_IMPL_ENC_DEC_LOGIT_SOFTCAP,
"STR_NOT_IMPL_ENC_DEC_LORA": STR_NOT_IMPL_ENC_DEC_LORA,
"STR_NOT_IMPL_ENC_DEC_PP": STR_NOT_IMPL_ENC_DEC_PP,
"STR_NOT_IMPL_ENC_DEC_MM": STR_NOT_IMPL_ENC_DEC_MM,
"STR_NOT_IMPL_ENC_DEC_SPEC_DEC": STR_NOT_IMPL_ENC_DEC_SPEC_DEC,
"STR_NOT_IMPL_ENC_DEC_BACKEND": STR_NOT_IMPL_ENC_DEC_BACKEND,
}
STR_NOT_IMPL_ENC_DEC_LOGIT_SOFTCAP module-attribute ¶
STR_NOT_IMPL_ENC_DEC_LOGIT_SOFTCAP = "Models with logits_soft_cap require FlashInfer backend, which is currently not supported for encoder/decoder models."
STR_NOT_IMPL_ENC_DEC_LORA module-attribute ¶
STR_NOT_IMPL_ENC_DEC_MM module-attribute ¶
STR_NOT_IMPL_ENC_DEC_PP module-attribute ¶
STR_NOT_IMPL_ENC_DEC_PP = "Pipeline parallelism is not currently supported with encoder/decoder models."
STR_NOT_IMPL_ENC_DEC_PREFIX_CACHE module-attribute ¶
STR_NOT_IMPL_ENC_DEC_PREFIX_CACHE = (
"Prefix caching for encoder/decoder models "
+ "is not currently supported."
)
STR_NOT_IMPL_ENC_DEC_SPEC_DEC module-attribute ¶
STR_NOT_IMPL_ENC_DEC_SPEC_DEC = "Speculative decoding is not currently supported with encoder/decoder models."
STR_NOT_IMPL_ENC_DEC_SWA module-attribute ¶
STR_NOT_IMPL_ENC_DEC_SWA = (
"Sliding window attention for encoder/decoder models "
+ "is not currently supported."
)
TORCH_DTYPE_TO_NUMPY_DTYPE module-attribute ¶
TORCH_DTYPE_TO_NUMPY_DTYPE = {
float16: float16,
float32: float32,
float64: float64,
uint8: uint8,
int32: int32,
int64: int64,
}
AsyncMicrobatchTokenizer ¶
Asynchronous tokenizer with micro-batching.
Pulls pending encode/decode requests from a queue and batches them to reduce overhead. A single-threaded ThreadPoolExecutor is used so the event loop stays responsive.
Source code in vllm/utils/__init__.py
_queues instance-attribute ¶
__call__ async ¶
Source code in vllm/utils/__init__.py
__del__ ¶
Source code in vllm/utils/__init__.py
__init__ ¶
Source code in vllm/utils/__init__.py
_batch_decode_loop async ¶
_batch_decode_loop(queue: Queue)
Batch incoming decode requests for efficiency.
Source code in vllm/utils/__init__.py
_batch_encode_loop async ¶
Batch incoming encode requests for efficiency.
Source code in vllm/utils/__init__.py
_get_queue ¶
_get_queue(
loop: AbstractEventLoop, key: tuple
) -> Queue[
Union[
tuple[str, dict, Future], tuple[list[int], Future]
]
]
Get the request queue for the given operation key, creating a new queue and batcher task if needed.
Source code in vllm/utils/__init__.py
_queue_key ¶
Return a normalized key describing operation + kwargs.
add_special_tokens: {True/False}
truncation: {True/False}
- If truncation is False (max_length is None), returns a key for a can_batch queue.
- If truncation is True and max_length is None or equals tokenizer.model_max_length, returns a key for a can_batch queue.
- Otherwise, returns a key for a cannot_batch queue.
Examples:
- Decode: ("decode",)
- Encode typical: ("encode", add_special_tokens, bool_truncation, max_length_label)
- Fallback: ("encode", "other")
Source code in vllm/utils/__init__.py
decode async ¶
Source code in vllm/utils/__init__.py
AtomicCounter ¶
An atomic, thread-safe counter
Source code in vllm/utils/__init__.py
CacheInfo ¶
ClassRegistry ¶
Source code in vllm/utils/__init__.py
Counter ¶
Device ¶
DeviceMemoryProfiler ¶
Source code in vllm/utils/__init__.py
FlexibleArgumentParser ¶
Bases: ArgumentParser
ArgumentParser that allows both underscore and dash in names.
Source code in vllm/utils/__init__.py
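A small usage sketch of the dash/underscore equivalence (the argument name is illustrative):

```python
from vllm.utils import FlexibleArgumentParser

parser = FlexibleArgumentParser()
parser.add_argument("--tensor-parallel-size", type=int, default=1)

# Underscores and dashes are interchangeable on the command line:
args = parser.parse_args(["--tensor_parallel_size", "4"])
assert args.tensor_parallel_size == 4
```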
_json_tip class-attribute instance-attribute ¶
_json_tip: str = 'When passing JSON CLI arguments, the following sets of arguments are equivalent:\n --json-arg \'{"key1": "value1", "key2": {"key3": "value2"}}\'\n --json-arg.key1 value1 --json-arg.key2.key3 value2\n\nAdditionally, list elements can be passed individually using +:\n --json-arg \'{"key4": ["value3", "value4", "value5"]}\'\n --json-arg.key4+ value3 --json-arg.key4+=\'value4,value5\'\n\n'
__init__ ¶
Source code in vllm/utils/__init__.py
_load_config_file ¶
Loads a YAML file and returns the key-value pairs as a flattened list with an argparse-like pattern.
Returns: processed_args: list[str] = ['--port', '12323', '--tensor-parallel-size', '4']
Source code in vllm/utils/__init__.py
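A minimal standalone sketch of the flattening behaviour described above (a hypothetical helper, not the vLLM implementation itself):

```python
import yaml

def flatten_config(path: str) -> list[str]:
    """Turn {"port": 12323, "tensor_parallel_size": 4} into
    ['--port', '12323', '--tensor-parallel-size', '4']."""
    with open(path) as f:
        config = yaml.safe_load(f) or {}
    processed_args: list[str] = []
    for key, value in config.items():
        processed_args.append("--" + key.replace("_", "-"))
        processed_args.append(str(value))
    return processed_args
```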
_pull_args_from_config ¶
Method to pull arguments specified in the config file into the command-line args variable.
The arguments from the config file will be inserted into the command-line argument list.
example:
$: vllm {serve,chat,complete} "facebook/opt-12B" --config config.yaml -tp 2
$: args = [
"serve,chat,complete",
"facebook/opt-12B",
'--config', 'config.yaml',
'-tp', '2'
]
$: args = [
"serve,chat,complete",
"facebook/opt-12B",
'--port', '12323',
'--tensor-parallel-size', '4',
'-tp', '2'
]
Note how the config args are inserted after the subcommand; this way the order of priorities is maintained when the args are parsed by super().
Source code in vllm/utils/__init__.py
add_argument ¶
add_argument_group ¶
check_port ¶
Source code in vllm/utils/__init__.py
parse_args ¶
Source code in vllm/utils/__init__.py
parse_known_args ¶
Source code in vllm/utils/__init__.py
LRUCache ¶
Bases: LRUCache[_K, _V], Generic[_K, _V]
Source code in vllm/utils/__init__.py
__delitem__ ¶
__delitem__(key: _K) -> None
Source code in vllm/utils/__init__.py
__getitem__ ¶
__init__ ¶
Source code in vllm/utils/__init__.py
_on_remove ¶
_remove_old_if_needed ¶
clear ¶
get ¶
Source code in vllm/utils/__init__.py
pin ¶
pin(key: _K) -> None
Pins a key in the cache preventing it from being evicted in the LRU order.
popitem ¶
popitem(remove_pinned: bool = False)
Remove and return the (key, value) pair least recently used.
Source code in vllm/utils/__init__.py
put ¶
stat ¶
Gets the cumulative number of hits and queries against this cache.
If delta=True, instead gets these statistics since the last call that also passed delta=True.
Source code in vllm/utils/__init__.py
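A usage sketch; the capacity-style constructor is an assumption based on the cachetools base class:

```python
from vllm.utils import LRUCache

cache: LRUCache[str, int] = LRUCache(2)  # capacity of 2 entries (assumed ctor)
cache.put("a", 1)
cache.put("b", 2)
cache.pin("a")       # "a" is now exempt from LRU eviction
cache.put("c", 3)    # evicts "b", the least recently used unpinned entry
assert cache.get("a") == 1 and cache.get("b") is None
```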
LayerBlockType ¶
LazyDict ¶
Bases: Mapping[str, T], Generic[T]
Source code in vllm/utils/__init__.py
LazyLoader ¶
Bases: ModuleType
LazyLoader module borrowed from TensorFlow (https://github.com/tensorflow/tensorflow/blob/main/tensorflow/python/util/lazy_loader.py) with the addition of "module caching".
Lazily import a module, mainly to avoid pulling in large dependencies. Modules such as xgrammar may have additional side effects, so we only want to import them when needed, delaying all eager effects.
Source code in vllm/utils/__init__.py
__dir__ ¶
__getattr__ ¶
__init__ ¶
Source code in vllm/utils/__init__.py
_load ¶
_load() -> ModuleType
Source code in vllm/utils/__init__.py
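Usage follows the TensorFlow original; a sketch that defers importing xgrammar until it is actually used:

```python
from vllm.utils import LazyLoader

# No import happens here; "xgrammar" is only imported on first attribute access.
xgr = LazyLoader("xgr", globals(), "xgrammar")

# Something like the following would trigger the real import:
# tokenizer_info = xgr.TokenizerInfo.from_huggingface(hf_tokenizer)
```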
MemoryProfilingResult dataclass ¶
Memory profiling result. All numbers are in bytes.
Source code in vllm/utils/__init__.py
after_profile class-attribute instance-attribute ¶
after_profile: MemorySnapshot = field(
default_factory=MemorySnapshot
)
before_create class-attribute instance-attribute ¶
before_create: MemorySnapshot = field(
default_factory=MemorySnapshot
)
before_profile class-attribute instance-attribute ¶
before_profile: MemorySnapshot = field(
default_factory=MemorySnapshot
)
__init__ ¶
__init__(
non_kv_cache_memory: int = 0,
torch_peak_increase: int = 0,
non_torch_increase: int = 0,
weights_memory: float = 0,
before_create: MemorySnapshot = MemorySnapshot(),
before_profile: MemorySnapshot = MemorySnapshot(),
after_profile: MemorySnapshot = MemorySnapshot(),
profile_time: float = 0.0,
) -> None
__repr__ ¶
__repr__() -> str
Source code in vllm/utils/__init__.py
MemorySnapshot dataclass ¶
Memory snapshot.
Source code in vllm/utils/__init__.py
__init__ ¶
__init__(
torch_peak: int = 0,
free_memory: int = 0,
total_memory: int = 0,
cuda_memory: int = 0,
torch_memory: int = 0,
non_torch_memory: int = 0,
timestamp: float = 0.0,
auto_measure: bool = True,
) -> None
__post_init__ ¶
__sub__ ¶
__sub__(other: MemorySnapshot) -> MemorySnapshot
Source code in vllm/utils/__init__.py
measure ¶
Source code in vllm/utils/__init__.py
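Since snapshots auto-measure on construction by default (auto_measure=True) and support subtraction, a before/after delta can be taken as in this sketch (requires a CUDA device):

```python
import torch

from vllm.utils import MemorySnapshot

before = MemorySnapshot()              # auto_measure=True measures immediately
x = torch.empty(1 << 20, device="cuda")
after = MemorySnapshot()
delta = after - before                 # field-wise difference
print(delta.torch_memory, delta.non_torch_memory)
```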
PlaceholderModule ¶
Bases: _PlaceholderBase
A placeholder object to use when a module does not exist.
This enables more informative errors when trying to access attributes of a module that does not exist.
Source code in vllm/utils/__init__.py
__getattr__ ¶
__getattr__(key: str)
Source code in vllm/utils/__init__.py
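The typical pattern, sketched here with an illustrative module name:

```python
from vllm.utils import PlaceholderModule

try:
    import datasets
except ImportError:
    datasets = PlaceholderModule("datasets")

# Any later attribute access raises an informative error naming the
# missing module instead of failing with a bare NameError:
# datasets.load_dataset("c4")
```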
PyObjectCache ¶
Used to cache python objects to avoid object allocations across scheduler iterations.
Source code in vllm/utils/__init__.py
SortedHelpFormatter ¶
Bases: ArgumentDefaultsHelpFormatter, RawDescriptionHelpFormatter
SortedHelpFormatter that sorts arguments by their option strings.
Source code in vllm/utils/__init__.py
_split_lines ¶
- Sentences split across lines have their single newlines removed.
- Paragraphs and explicit newlines are split into separate lines.
- Each line is wrapped to the specified width (width of terminal).
Source code in vllm/utils/__init__.py
StoreBoolean ¶
Bases: Action
Source code in vllm/utils/__init__.py
__call__ ¶
Source code in vllm/utils/__init__.py
_MappingOrderCacheView ¶
Source code in vllm/utils/__init__.py
_PlaceholderBase ¶
Disallows downstream usage of placeholder modules.
We need to explicitly override each dunder method because __getattr__ is not called when they are accessed.
Source code in vllm/utils/__init__.py
__abs__ ¶
__bool__ ¶
__call__ ¶
__ceil__ ¶
__enter__ ¶
__exit__ ¶
__floor__ ¶
__getattr__ ¶
The main class should implement this to throw an error for attribute accesses representing downstream usage.
__hash__ ¶
__index__ ¶
__invert__ ¶
__len__ ¶
__neg__ ¶
__pos__ ¶
__pow__ ¶
__setitem__ ¶
__trunc__ ¶
_PlaceholderModuleAttr ¶
Bases: _PlaceholderBase
Source code in vllm/utils/__init__.py
__init__ ¶
__init__(module: PlaceholderModule, attr_path: str) -> None
_StreamPlaceholder ¶
Source code in vllm/utils/__init__.py
_add_prefix ¶
Prepend each output line with a process-specific prefix
Source code in vllm/utils/__init__.py
_cuda_device_count_stateless cached ¶
Source code in vllm/utils/__init__.py
_generate_random_fp8 ¶
Source code in vllm/utils/__init__.py
_get_open_port ¶
_get_open_port() -> int
Source code in vllm/utils/__init__.py
_get_precision_level ¶
_has_module cached ¶
Return True if module_name can be found in the current environment.
The result is cached so that subsequent queries for the same module incur no additional overhead.
Source code in vllm/utils/__init__.py
_is_torch_equal_or_newer ¶
_maybe_force_spawn ¶
Check if we need to force the use of the spawn multiprocessing start method.
Source code in vllm/utils/__init__.py
_next_task ¶
_next_task(
iterator: AsyncGenerator[T, None],
loop: AbstractEventLoop,
) -> Task
_run_task_with_lock async ¶
as_list ¶
async_tensor_h2d ¶
async_tensor_h2d(
data: list,
dtype: dtype,
target_device: Union[str, device],
pin_memory: bool,
) -> Tensor
Asynchronously create a tensor and copy it from host to device.
Source code in vllm/utils/__init__.py
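A sketch (requires a CUDA device; pinned host memory enables the non-blocking copy):

```python
import torch

from vllm.utils import async_tensor_h2d

t = async_tensor_h2d(
    [1, 2, 3],
    dtype=torch.int64,
    target_device="cuda",
    pin_memory=True,   # stage through pinned memory for an async copy
)
```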
bind_kv_cache ¶
bind_kv_cache(
ctx: dict[str, Any],
kv_cache: list[list[Tensor]],
shared_kv_cache_layers: Optional[dict[str, str]] = None,
) -> None
Source code in vllm/utils/__init__.py
cdiv ¶
check_use_alibi ¶
check_use_alibi(model_config: ModelConfig) -> bool
Source code in vllm/utils/__init__.py
chunk_list ¶
close_sockets ¶
collect_from_async_generator async ¶
collect_from_async_generator(
iterator: AsyncGenerator[T, None],
) -> list[T]
Collect all items from an async generator into a list.
common_broadcastable_dtype ¶
common_broadcastable_dtype(dtypes: Collection[dtype])
Get the common dtype where all of the other dtypes can be cast to it without losing any information.
Source code in vllm/utils/__init__.py
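For example, assuming the usual lossless-cast lattice in which float16 widens to float32:

```python
import torch

from vllm.utils import common_broadcastable_dtype

# float16 can be cast to float32 without losing information,
# so float32 is the common dtype of the pair:
assert common_broadcastable_dtype([torch.float16, torch.float32]) == torch.float32
```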
cprofile ¶
Decorator to profile a Python method using cProfile.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
save_file | Optional[str] | Path to save the profile result. If "1", None, or "", results will be printed to stdout. | None |
enabled | bool | Set to false to turn this into a no-op | True |
Source code in vllm/utils/__init__.py
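A sketch of decorating a hot function (the file name is illustrative):

```python
from vllm.utils import cprofile

@cprofile(save_file="squares.prof")  # None/"1"/"" would print to stdout instead
def sum_of_squares(n: int) -> int:
    return sum(i * i for i in range(n))

sum_of_squares(100_000)  # each call is profiled; results go to save_file
```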
cprofile_context ¶
Run cProfile within a context manager.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
save_file | Optional[str] | path to save the profile result. "1" or None will result in printing to stdout. | None |
Source code in vllm/utils/__init__.py
create_kv_caches_with_random ¶
create_kv_caches_with_random(
num_blocks: int,
block_size: int,
num_layers: int,
num_heads: int,
head_size: int,
cache_dtype: Optional[Union[str, dtype]],
model_dtype: Optional[Union[str, dtype]] = None,
seed: Optional[int] = None,
device: Optional[str] = "cuda",
) -> tuple[list[Tensor], list[Tensor]]
Source code in vllm/utils/__init__.py
create_kv_caches_with_random_flash ¶
create_kv_caches_with_random_flash(
num_blocks: int,
block_size: int,
num_layers: int,
num_heads: int,
head_size: int,
cache_dtype: Optional[Union[str, dtype]],
model_dtype: Optional[Union[str, dtype]] = None,
seed: Optional[int] = None,
device: Optional[str] = "cuda",
cache_layout: Optional[str] = "NHD",
) -> tuple[list[Tensor], list[Tensor]]
Source code in vllm/utils/__init__.py
cuda_device_count_stateless ¶
cuda_device_count_stateless() -> int
Get number of CUDA devices, caching based on the value of CUDA_VISIBLE_DEVICES at the time of call.
This should be used instead of torch.cuda.device_count() unless CUDA_VISIBLE_DEVICES has already been set to the desired value.
Source code in vllm/utils/__init__.py
cuda_get_device_properties ¶
Get specified CUDA device property values without initializing CUDA in the current process.
Source code in vllm/utils/__init__.py
current_stream ¶
current_stream() -> Stream
Replace torch.cuda.current_stream() with vllm.utils.current_stream(). It turns out that torch.cuda.current_stream() is quite expensive, as it constructs a new stream object at each call. Here we patch torch.cuda.set_stream to keep track of the current stream directly, so that we can avoid calling torch.cuda.current_stream().
The underlying hypothesis is that we do not call torch._C._cuda_setStream from C/C++ code.
Source code in vllm/utils/__init__.py
decorate_logs ¶
Adds a process-specific prefix to each line of output written to stdout and stderr.
This function is intended to be called before initializing the api_server, engine_core, or worker classes, so that all subsequent output from the process is prefixed with the process name and PID. This helps distinguish log output from different processes in multi-process environments.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
process_name | Optional[str] | Optional; the name of the process to use in the prefix. If not provided, the current process name from the multiprocessing context is used. | None |
Source code in vllm/utils/__init__.py
deprecate_args ¶
deprecate_args(
start_index: int,
is_deprecated: Union[bool, Callable[[], bool]] = True,
additional_message: Optional[str] = None,
) -> Callable[[F], F]
Source code in vllm/utils/__init__.py
deprecate_kwargs ¶
deprecate_kwargs(
*kws: str,
is_deprecated: Union[bool, Callable[[], bool]] = True,
additional_message: Optional[str] = None,
) -> Callable[[F], F]
Source code in vllm/utils/__init__.py
direct_register_custom_op ¶
direct_register_custom_op(
op_name: str,
op_func: Callable,
mutates_args: list[str],
fake_impl: Optional[Callable] = None,
target_lib: Optional[Library] = None,
dispatch_key: str = "CUDA",
tags: tuple[Tag, ...] = (),
)
torch.library.custom_op
can have significant overhead because it needs to consider complicated dispatching logic. This function directly registers a custom op and dispatches it to the CUDA backend. See https://gist.github.com/youkaichao/ecbea9ec9fc79a45d2adce1784d7a9a5 for more details.
By default, the custom op is registered to the vLLM library. If you want to register it to a different library, you can pass the library object to the target_lib
argument.
IMPORTANT: the lifetime of the operator is tied to the lifetime of the library object. If you want to bind the operator to a different library, make sure the library object is alive when the operator is used.
Source code in vllm/utils/__init__.py
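A minimal sketch of registering a pure op with a fake (meta) implementation; the op name and function are illustrative:

```python
import torch

from vllm.utils import direct_register_custom_op

def scaled_add(x: torch.Tensor, y: torch.Tensor, alpha: float) -> torch.Tensor:
    return x + alpha * y

def scaled_add_fake(x: torch.Tensor, y: torch.Tensor, alpha: float) -> torch.Tensor:
    # Shape/dtype-only implementation used for tracing and torch.compile.
    return torch.empty_like(x)

direct_register_custom_op(
    op_name="scaled_add_demo",
    op_func=scaled_add,
    mutates_args=[],           # the op is pure
    fake_impl=scaled_add_fake,
)
# With the default target library, the op should be callable as
# torch.ops.vllm.scaled_add_demo.
```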
enable_trace_function_call_for_thread ¶
enable_trace_function_call_for_thread(
vllm_config: VllmConfig,
) -> None
Set up function tracing for the current thread, if enabled via the VLLM_TRACE_FUNCTION environment variable
Source code in vllm/utils/__init__.py
find_library cached ¶
Find the library file in the system. lib_name is the full filename, with both prefix and suffix. This function resolves lib_name to the full path of the library.
Source code in vllm/utils/__init__.py
find_nccl_library ¶
find_nccl_library() -> str
We either use the library file specified by the VLLM_NCCL_SO_PATH environment variable, or we find the library file brought by PyTorch. After importing torch, libnccl.so.2 or librccl.so.1 can be found by ctypes automatically.
Source code in vllm/utils/__init__.py
find_process_using_port ¶
Source code in vllm/utils/__init__.py
flatten_2d_lists ¶
full_groupby ¶
full_groupby(
values: Iterable[_K" backlink-type="used-by" backlink-anchor="vllm.utils.full_groupby" optional hover>_K], *, key: Callable[[_K" backlink-type="used-by" backlink-anchor="vllm.utils.full_groupby" optional hover>_K], _K]
)
Unlike itertools.groupby, groups are not broken by non-contiguous data.
Source code in vllm/utils/__init__.py
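A sketch of the non-contiguous grouping; the return structure is assumed to mirror itertools.groupby, i.e. (key, group) pairs:

```python
from vllm.utils import full_groupby

words = ["apple", "banana", "avocado", "blueberry"]
# No pre-sorting is required, unlike itertools.groupby:
groups = {k: list(v) for k, v in full_groupby(words, key=lambda w: w[0])}
# {"a": ["apple", "avocado"], "b": ["banana", "blueberry"]}
```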
get_allowed_kwarg_only_overrides ¶
get_allowed_kwarg_only_overrides(
callable: Callable[..., object],
overrides: Optional[Mapping[str, object]],
*,
requires_kw_only: bool = True,
allow_var_kwargs: bool = False,
) -> dict[str, Any]
Given a callable which has one or more keyword only params and a dict mapping param names to values, drop values that cannot be kwarg-expanded to overwrite one or more keyword-only args. This is used in a few places to handle custom processor overrides for multimodal models, e.g., for profiling when processor options provided by the user may affect the number of mm tokens per instance.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
callable | Callable[..., object] | Callable which takes 0 or more keyword only arguments. If None is provided, all overrides names are allowed. | required |
overrides | Optional[Mapping[str, object]] | Potential overrides to be used when invoking the callable. | required |
allow_var_kwargs | bool | Allows overrides that are expandable for var kwargs. | False |
Returns:
Type | Description |
---|---|
dict[str, Any] | Dictionary containing the kwargs to be leveraged which may be used |
dict[str, Any] | to overwrite one or more keyword only arguments when invoking the |
dict[str, Any] | callable. |
Source code in vllm/utils/__init__.py
get_cuda_view_from_cpu_tensor ¶
Get a CUDA view of a CPU tensor using Unified Virtual Addressing (UVA).
Source code in vllm/utils/__init__.py
get_distributed_init_method ¶
get_dtype_size ¶
get_exception_traceback ¶
get_hash_fn_by_name ¶
Get a hash function by name, or raise an error if the function is not found.
Args: hash_fn_name: Name of the hash function.
Returns: A hash function.
Source code in vllm/utils/__init__.py
get_ip ¶
get_ip() -> str
Source code in vllm/utils/__init__.py
get_kv_cache_torch_dtype ¶
get_kv_cache_torch_dtype(
cache_dtype: Optional[Union[str, dtype]],
model_dtype: Optional[Union[str, dtype]] = None,
) -> dtype
Source code in vllm/utils/__init__.py
get_loopback_ip ¶
get_loopback_ip() -> str
Source code in vllm/utils/__init__.py
get_max_shared_memory_bytes cached ¶
Returns the maximum shared memory per thread block in bytes.
Source code in vllm/utils/__init__.py
get_mp_context ¶
Get a multiprocessing context with a particular method (spawn or fork). By default we follow the value of the VLLM_WORKER_MULTIPROC_METHOD environment variable to determine the multiprocessing method (default is fork). However, under certain conditions, we may enforce spawn and override the value of VLLM_WORKER_MULTIPROC_METHOD.
Source code in vllm/utils/__init__.py
get_open_port ¶
get_open_port() -> int
Get an open port for the vLLM process to listen on. One edge case to handle: when running data parallel, we need to avoid ports that are potentially used by the data parallel master process. Right now we reserve 10 ports for the data parallel master process, though it currently uses only 2.
Source code in vllm/utils/__init__.py
get_open_ports_list ¶
get_tcp_uri ¶
get_vllm_optional_dependencies cached ¶
Source code in vllm/utils/__init__.py
identity ¶
import_from_path ¶
Import a Python file according to its file path.
Based on the official recipe: https://docs.python.org/3/library/importlib.html#importing-a-source-file-directly
Source code in vllm/utils/__init__.py
import_pynvml ¶
Historical comments:
libnvml.so is the library behind nvidia-smi, and pynvml is a Python wrapper around it. We use it to get GPU status without initializing a CUDA context in the current process. Historically, there are two packages that provide pynvml:
- nvidia-ml-py (https://pypi.org/project/nvidia-ml-py/): The official wrapper. It is a dependency of vLLM, installed when users install vLLM. It provides a Python module named pynvml.
- pynvml (https://pypi.org/project/pynvml/): An unofficial wrapper. Prior to version 12.0, it also provided a Python module named pynvml, and therefore conflicted with the official one. What's worse, that module is a Python package with higher priority than the official one, which is a standalone Python file. This caused errors when both were installed. Starting from version 12.0, it migrated to a new module named pynvml_utils to avoid the conflict.
It is so confusing that many packages in the community use the unofficial one by mistake, and we have to handle this case. For example, nvcr.io/nvidia/pytorch:24.12-py3 uses the unofficial one, which causes errors; see https://github.com/vllm-project/vllm/issues/12847 for an example. After all these troubles, we decided to copy the official pynvml module into our codebase and use it directly.
Source code in vllm/utils/__init__.py
in_loop ¶
in_loop(event_loop: AbstractEventLoop) -> bool
init_cached_hf_modules ¶
is_list_of ¶
is_list_of(
value: object,
typ: Union[type[T], tuple[type[T], ...]],
*,
check: Literal["first", "all"] = "first",
) -> TypeIs[list[T]]
Source code in vllm/utils/__init__.py
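For example (check="first", the default, inspects only the first element; check="all" inspects every element):

```python
from vllm.utils import is_list_of

assert is_list_of([1, 2, 3], int, check="all")
assert is_list_of([1, "x"], int)                 # only the first element checked
assert not is_list_of([1, "x"], int, check="all")
```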
is_lossless_cast ¶
Test whether it is lossless to cast a tensor from src_dtype to tgt_dtype.
Source code in vllm/utils/__init__.py
is_torch_equal_or_newer ¶
Check if the installed torch version is >= the target version.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
target | str | a version string, like "2.6.0". | required |
Returns:
Type | Description |
---|---|
bool | Whether the condition is met. |
Source code in vllm/utils/__init__.py
is_uva_available cached ¶
is_uva_available() -> bool
Check if Unified Virtual Addressing (UVA) is available.
is_valid_ipv6_address ¶
join_host_port ¶
kill_process_tree ¶
kill_process_tree(pid: int)
Kills all descendant processes of the given pid by sending SIGKILL.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
pid | int | Process ID of the parent process | required |
Source code in vllm/utils/__init__.py
make_async ¶
make_async(
func: Callable[P, T],
executor: Optional[Executor] = None,
) -> Callable[P, Awaitable[T]]
Take a blocking function and run it in an executor thread.
This function prevents the blocking function from blocking the asyncio event loop. The code in this function needs to be thread safe.
Source code in vllm/utils/__init__.py
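A quick sketch:

```python
import asyncio
import time

from vllm.utils import make_async

def blocking_work(x: int) -> int:
    time.sleep(0.1)  # stands in for CPU- or IO-bound work
    return x * 2

async_work = make_async(blocking_work)

async def main() -> None:
    assert await async_work(21) == 42  # runs in an executor thread

asyncio.run(main())
```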
make_ndarray_with_pad ¶
make_ndarray_with_pad(
x: list[list[T]],
pad: T,
dtype: DTypeLike,
*,
max_len: Optional[int] = None,
) -> NDArray
Make a padded array from 2D inputs.
The padding is applied to the end of each inner list until it reaches max_len.
Source code in vllm/utils/__init__.py
make_tensor_with_pad ¶
make_tensor_with_pad(
x: list[list[T]],
pad: T,
dtype: dtype,
*,
max_len: Optional[int] = None,
device: Optional[Union[str, device]] = None,
pin_memory: bool = False,
) -> Tensor
Make a padded tensor from 2D inputs.
The padding is applied to the end of each inner list until it reaches max_len.
Source code in vllm/utils/__init__.py
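A quick sketch of the padding behaviour:

```python
import torch

from vllm.utils import make_tensor_with_pad

t = make_tensor_with_pad([[1, 2, 3], [4]], pad=0, dtype=torch.long)
# tensor([[1, 2, 3],
#         [4, 0, 0]])
```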
make_zmq_path ¶
Make a ZMQ path from its parts.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
scheme | str | The ZMQ transport scheme (e.g. tcp, ipc, inproc). | required |
host | str | The host - can be an IPv4 address, IPv6 address, or hostname. | required |
port | Optional[int] | Optional port number, only used for TCP sockets. | None |
Returns:
Type | Description |
---|---|
str | A properly formatted ZMQ path string. |
Source code in vllm/utils/__init__.py
make_zmq_socket ¶
make_zmq_socket(
ctx: Union[Context, Context],
path: str,
socket_type: Any,
bind: Optional[bool] = None,
identity: Optional[bytes] = None,
linger: Optional[int] = None,
) -> Union[Socket, Socket]
Make a ZMQ socket with the proper bind/connect semantics.
Source code in vllm/utils/__init__.py
memory_profiling ¶
memory_profiling(
baseline_snapshot: MemorySnapshot, weights_memory: int
) -> Generator[MemoryProfilingResult, None, None]
Memory profiling context manager.
baseline_snapshot: the memory snapshot before the current vLLM instance.
weights_memory: memory used by PyTorch when loading the model weights. Note that, before loading the model weights, we also initialize the device and distributed environment, which may consume some memory. This part is not included in weights_memory because PyTorch does not control it.
The memory in one GPU can be classified into 3 categories:
1. memory used by anything other than the current vLLM instance.
2. memory used by torch in the current vLLM instance.
3. memory used in the current vLLM instance, but not by torch.
A quantitative example:
Before creating the current vLLM instance: category 1: 1 GiB, category 2: 0 GiB, category 3: 0 GiB.
After creating the current vLLM instance and loading the model (i.e. before profiling): category 1: 1 GiB, category 2: 2 GiB (model weights take 2 GiB), category 3: 0.5 GiB (memory used by NCCL).
During profiling (peak): category 1: 1 GiB, category 2: 4 GiB (peak activation tensors take 2 GiB), category 3: 1 GiB (memory used by NCCL + buffers for some attention backends).
After profiling: category 1: 1 GiB, category 2: 3 GiB (after garbage-collecting activation tensors), category 3: 1 GiB (memory used by NCCL + buffers for some attention backends).
In this case, non-KV-cache memory takes 5 GiB in total: a. 2 GiB used by the model weights (category 2); b. 2 GiB reserved for the peak activation tensors (category 2); c. 1 GiB used by non-torch components (category 3).
The memory used for loading weights (a.) is given directly by the argument weights_memory.
The increase of torch.cuda.memory_stats()["allocated_bytes.all.peak"] during profiling gives (b.).
The increase of non_torch_memory from creating the current vLLM instance until after profiling gives (c.).
Source code in vllm/utils/__init__.py
merge_async_iterators async ¶
merge_async_iterators(
*iterators: AsyncGenerator[T, None],
) -> AsyncGenerator[tuple[int, T], None]
Merge multiple asynchronous iterators into a single iterator.
This method handles the case where some iterators finish before others. When it yields, it yields a tuple (i, item) where i is the index of the iterator that yielded the item.
Source code in vllm/utils/__init__.py
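A sketch; each yielded pair carries the index of the generator it came from:

```python
import asyncio

from vllm.utils import merge_async_iterators

async def numbers(prefix: str, n: int):
    for i in range(n):
        await asyncio.sleep(0)
        yield f"{prefix}{i}"

async def main() -> None:
    async for idx, item in merge_async_iterators(numbers("a", 2), numbers("b", 2)):
        print(idx, item)  # interleaving order is not guaranteed

asyncio.run(main())
```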
prev_power_of_2 ¶
resolve_obj_by_qualname ¶
Resolve an object by its fully-qualified class name.
Source code in vllm/utils/__init__.py
round_down ¶
round_up ¶
run_in_loop ¶
run_in_loop(
loop: AbstractEventLoop, function: Callable, *args
)
run_method ¶
run_method(
obj: Any,
method: Union[str, bytes, Callable],
args: tuple[Any],
kwargs: dict[str, Any],
) -> Any
Run a method of an object with the given arguments and keyword arguments. If the method is a string, it will be converted to a method using getattr. If the method is serialized bytes, it will be deserialized using cloudpickle. If the method is a callable, it will be called directly.
Source code in vllm/utils/__init__.py
run_once ¶
Source code in vllm/utils/__init__.py
set_default_torch_num_threads ¶
set_default_torch_num_threads(num_threads: int)
Sets the default number of threads for PyTorch to the given value.
Source code in vllm/utils/__init__.py
set_process_title ¶
Set the current process title to a specific name with an optional suffix.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
name | str | The title to assign to the current process. | required |
suffix | str | An optional suffix to append to the base name. | '' |
append | bool | Whether to append to the existing process title. | False |
Source code in vllm/utils/__init__.py
set_ulimit ¶
Source code in vllm/utils/__init__.py
sha256 ¶
sha256(input) -> int
Hash any picklable Python object using SHA-256.
The input is serialized using pickle before hashing, which allows arbitrary Python objects to be used. Note that this function does not use a hash seed—if you need one, prepend it explicitly to the input.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input | Any picklable Python object. | required |
Returns:
Type | Description |
---|---|
int | An integer representing the SHA-256 hash of the serialized input. |
Source code in vllm/utils/__init__.py
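A quick sketch, including the explicit-seed pattern noted above:

```python
from vllm.utils import sha256

seed = "my-seed"  # no built-in seeding; prepend one explicitly if needed
digest = sha256((seed, ("some", "picklable", "object")))
assert isinstance(digest, int)
```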
sha256_cbor_64bit ¶
sha256_cbor_64bit(input) -> int
Hash objects using CBOR serialization and SHA-256, then truncate to 64 bits.
This option is useful for non-Python-dependent serialization and hashing.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input | Object to be serialized and hashed. Supported types include basic Python types and complex structures like lists, tuples, and dictionaries. Custom classes must implement CBOR serialization methods. | required |
Returns:
Type | Description |
---|---|
int | An integer in the range [0, 2^64-1] representing the lower 64 bits |
int | of the SHA-256 hash of the CBOR serialized input. |
Source code in vllm/utils/__init__.py
split_host_port ¶
Source code in vllm/utils/__init__.py
split_zmq_path ¶
Split a zmq path into its parts.
Source code in vllm/utils/__init__.py
supports_kw ¶
supports_kw(
callable: Callable[..., object],
kw_name: str,
*,
requires_kw_only: bool = False,
allow_var_kwargs: bool = True,
) -> bool
Check if a keyword is a valid kwarg for a callable; if requires_kw_only is True, keyword names that can also be passed positionally are disallowed.
Source code in vllm/utils/__init__.py
swap_dict_values ¶
Helper function to swap the values of two keys in a dict.
Source code in vllm/utils/__init__.py
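A quick sketch (the dict contents are illustrative):

```python
from vllm.utils import swap_dict_values

d = {"a": 1, "b": 2}
swap_dict_values(d, "a", "b")
assert d == {"a": 2, "b": 1}
```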
test_loopback_bind ¶
update_environment_variables ¶
Source code in vllm/utils/__init__.py
warn_for_unimplemented_methods ¶
A replacement for abc.ABC. When we use abc.ABC, subclasses will fail to instantiate if they do not implement all abstract methods. Here, we only require the base class methods to raise NotImplementedError, and log a warning if a method is not implemented in the subclass.
Source code in vllm/utils/__init__.py
weak_bind ¶
Make an instance method that weakly references its associated instance and no-ops once that instance is collected.
Source code in vllm/utils/__init__.py
weak_ref_tensor ¶
Create a weak reference to a tensor. The new tensor will share the same data as the original tensor, but will not keep the original tensor alive.
Source code in vllm/utils/__init__.py
weak_ref_tensors ¶
weak_ref_tensors(
tensors: Union[Tensor, list[Tensor], tuple[Tensor]],
) -> Union[Tensor, list[Any], tuple[Any], Any]
Convenience function to create weak references to tensors, for a single tensor, a list of tensors, or a tuple of tensors.
Source code in vllm/utils/__init__.py
zmq_socket_ctx ¶
zmq_socket_ctx(
path: str,
socket_type: Any,
bind: Optional[bool] = None,
linger: int = 0,
identity: Optional[bytes] = None,
) -> Iterator[Socket]
Context manager for a ZMQ socket