vllm.entrypoints.openai.api_server
INVOCATION_TYPES module-attribute
¶
INVOCATION_TYPES: list[
tuple[RequestType, tuple[GetHandlerFn, EndpointFn]]
] = [
(ChatCompletionRequest, (chat, create_chat_completion)),
(CompletionRequest, (completion, create_completion)),
(EmbeddingRequest, (embedding, create_embedding)),
(ClassificationRequest, (classify, create_classify)),
(ScoreRequest, (score, create_score)),
(RerankRequest, (rerank, do_rerank)),
(PoolingRequest, (pooling, create_pooling)),
]
INVOCATION_VALIDATORS module-attribute
¶
INVOCATION_VALIDATORS = [
(TypeAdapter(request_type), (get_handler, endpoint))
for (
request_type,
(get_handler, endpoint),
) in INVOCATION_TYPES
]
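Each request type is wrapped in a pydantic TypeAdapter so an incoming JSON body can be validated against the candidate schemas in order and dispatched to the matching handler. A minimal, standalone sketch of that pattern (using stand-in models rather than vLLM's actual request classes):

```python
from pydantic import BaseModel, TypeAdapter, ValidationError


class ChatReq(BaseModel):   # stand-in for ChatCompletionRequest
    messages: list[dict]


class EmbedReq(BaseModel):  # stand-in for EmbeddingRequest
    input: str


# Analogue of INVOCATION_VALIDATORS: (validator, handler name) pairs.
validators = [(TypeAdapter(ChatReq), "chat"), (TypeAdapter(EmbedReq), "embedding")]

body = {"input": "hello"}
for adapter, handler_name in validators:
    try:
        request = adapter.validate_python(body)
    except ValidationError:
        continue  # body doesn't match this schema; try the next one
    print(f"dispatching to {handler_name}: {request!r}")
    break
```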
parser module-attribute
¶
parser = FlexibleArgumentParser(
description="vLLM OpenAI-Compatible RESTful API server."
)
AuthenticationMiddleware ¶
Pure ASGI middleware that authenticates each request by checking if the Authorization header exists and equals "Bearer {api_key}".
Notes¶
There are two cases in which authentication is skipped:

1. The HTTP method is OPTIONS.
2. The request path doesn't start with /v1 (e.g. /health).
Source code in vllm/entrypoints/openai/api_server.py
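For illustration, a minimal pure-ASGI middleware in the same spirit (not vLLM's exact implementation): it rejects /v1 requests that lack the expected bearer token, while letting OPTIONS requests and non-/v1 paths through.

```python
from starlette.responses import JSONResponse
from starlette.types import ASGIApp, Receive, Scope, Send


class SimpleAuthMiddleware:
    def __init__(self, app: ASGIApp, api_key: str) -> None:
        self.app = app
        self.expected = f"Bearer {api_key}".encode()

    async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
        # Authentication is skipped for non-HTTP scopes, OPTIONS preflights,
        # and paths outside /v1 (e.g. /health).
        if (
            scope["type"] != "http"
            or scope["method"] == "OPTIONS"
            or not scope["path"].startswith("/v1")
        ):
            return await self.app(scope, receive, send)

        headers = dict(scope["headers"])  # list of (name, value) byte pairs
        if headers.get(b"authorization") != self.expected:
            response = JSONResponse({"error": "Unauthorized"}, status_code=401)
            return await response(scope, receive, send)

        return await self.app(scope, receive, send)
```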
__call__ ¶
__call__(
scope: Scope, receive: Receive, send: Send
) -> Awaitable[None]
Source code in vllm/entrypoints/openai/api_server.py
PrometheusResponse ¶
SSEDecoder ¶
Robust Server-Sent Events decoder for streaming responses.
Source code in vllm/entrypoints/openai/api_server.py
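As a rough illustration of what robust SSE decoding involves (this is not the class's actual interface): buffer partial lines across chunks, pick out data: fields, and stop at the OpenAI-style [DONE] sentinel.

```python
import json


class TinySSEDecoder:
    """Toy SSE decoder; SSEDecoder's real API may differ."""

    def __init__(self) -> None:
        self._buffer = ""

    def decode_chunk(self, chunk: bytes) -> list[dict]:
        self._buffer += chunk.decode("utf-8", errors="replace")
        events = []
        # Only complete lines are consumed; partial lines stay buffered.
        while "\n" in self._buffer:
            line, self._buffer = self._buffer.split("\n", 1)
            line = line.strip()
            if not line.startswith("data:"):
                continue
            payload = line[len("data:"):].strip()
            if payload == "[DONE]":
                break
            events.append(json.loads(payload))
        return events


decoder = TinySSEDecoder()
print(decoder.decode_chunk(b'data: {"choices": [{"delta": {"content": "Hi"}}]}\n\n'))
```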
__init__ ¶
decode_chunk ¶
Decode a chunk of SSE data and return parsed events.
Source code in vllm/entrypoints/openai/api_server.py
extract_content ¶
ScalingMiddleware ¶
Middleware that checks if the model is currently scaling and returns a 503 Service Unavailable response if it is.
This middleware applies to all HTTP requests and prevents processing when the model is in a scaling state.
Source code in vllm/entrypoints/openai/api_server.py
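From a client's perspective, the practical consequence is a 503 that should be retried. A hedged sketch (the endpoint path and model name are placeholders for a standard OpenAI-compatible deployment):

```python
import time

import requests

payload = {"model": "my-model", "messages": [{"role": "user", "content": "Hi"}]}

for attempt in range(5):
    resp = requests.post("http://localhost:8000/v1/chat/completions", json=payload)
    if resp.status_code != 503:
        resp.raise_for_status()
        print(resp.json()["choices"][0]["message"]["content"])
        break
    # 503 here means the model is scaling; back off and retry.
    time.sleep(2 ** attempt)
```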
__call__ ¶
__call__(
scope: Scope, receive: Receive, send: Send
) -> Awaitable[None]
Source code in vllm/entrypoints/openai/api_server.py
XRequestIdMiddleware ¶
Middleware that sets the X-Request-Id header for each response to a random uuid4 (hex) value if the header isn't already present in the request; otherwise, it reuses the request id provided by the client.
Source code in vllm/entrypoints/openai/api_server.py
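An illustrative sketch of the same idea (not vLLM's exact code): echo the incoming X-Request-Id header if present, otherwise attach a fresh uuid4 hex value to the response.

```python
import uuid

from starlette.types import ASGIApp, Message, Receive, Scope, Send


class SimpleRequestIdMiddleware:
    def __init__(self, app: ASGIApp) -> None:
        self.app = app

    async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
        if scope["type"] != "http":
            return await self.app(scope, receive, send)

        headers = dict(scope["headers"])
        request_id = headers.get(b"x-request-id", uuid.uuid4().hex.encode())

        async def send_with_request_id(message: Message) -> None:
            # Attach the id to the response headers as they are being sent.
            if message["type"] == "http.response.start":
                message.setdefault("headers", [])
                message["headers"].append((b"x-request-id", request_id))
            await send(message)

        return await self.app(scope, receive, send_with_request_id)
```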
__call__ ¶
__call__(
scope: Scope, receive: Receive, send: Send
) -> Awaitable[None]
Source code in vllm/entrypoints/openai/api_server.py
_extract_content_from_chunk ¶
Extract content from a streaming response chunk.
Source code in vllm/entrypoints/openai/api_server.py
_log_non_streaming_response ¶
_log_non_streaming_response(response_body: list) -> None
Log non-streaming response.
Source code in vllm/entrypoints/openai/api_server.py
_log_streaming_response ¶
_log_streaming_response(
response, response_body: list
) -> None
Log streaming response with robust SSE parsing.
Source code in vllm/entrypoints/openai/api_server.py
base ¶
base(request: Request) -> OpenAIServing
build_app ¶
build_app(args: Namespace) -> FastAPI
Source code in vllm/entrypoints/openai/api_server.py
build_async_engine_client async
¶
build_async_engine_client(
args: Namespace,
*,
usage_context: UsageContext = OPENAI_API_SERVER,
disable_frontend_multiprocessing: Optional[bool] = None,
client_config: Optional[dict[str, Any]] = None,
) -> AsyncIterator[EngineClient]
Source code in vllm/entrypoints/openai/api_server.py
build_async_engine_client_from_engine_args async
¶
build_async_engine_client_from_engine_args(
engine_args: AsyncEngineArgs,
*,
usage_context: UsageContext = OPENAI_API_SERVER,
disable_frontend_multiprocessing: bool = False,
client_config: Optional[dict[str, Any]] = None,
) -> AsyncIterator[EngineClient]
Create an EngineClient, either:

- in-process, using the AsyncLLMEngine directly
- multiprocess, using AsyncLLMEngine RPC

Returns the client, or None if creation failed.
Source code in vllm/entrypoints/openai/api_server.py
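A hedged usage sketch: the AsyncIterator[EngineClient] return type suggests the builder is consumed as an async context manager, yielding an EngineClient that is either in-process or behind the multiprocess RPC frontend. The model name is a placeholder, and check_health() is assumed to be available on the client.

```python
import asyncio

from vllm.engine.arg_utils import AsyncEngineArgs
from vllm.entrypoints.openai.api_server import (
    build_async_engine_client_from_engine_args,
)


async def main() -> None:
    engine_args = AsyncEngineArgs(model="facebook/opt-125m")
    async with build_async_engine_client_from_engine_args(
        engine_args,
        disable_frontend_multiprocessing=True,  # keep the engine in-process
    ) as engine_client:
        await engine_client.check_health()  # raises if the engine is unhealthy


asyncio.run(main())
```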
cancel_responses async
¶
cancel_responses(response_id: str, raw_request: Request)
Source code in vllm/entrypoints/openai/api_server.py
chat ¶
chat(request: Request) -> Optional[OpenAIServingChat]
classify ¶
classify(
request: Request,
) -> Optional[ServingClassification]
collective_rpc async
¶
Source code in vllm/entrypoints/openai/api_server.py
completion ¶
completion(
request: Request,
) -> Optional[OpenAIServingCompletion]
create_chat_completion async
¶
create_chat_completion(
request: ChatCompletionRequest, raw_request: Request
)
Source code in vllm/entrypoints/openai/api_server.py
create_classify async
¶
create_classify(
request: ClassificationRequest, raw_request: Request
)
Source code in vllm/entrypoints/openai/api_server.py
create_completion async
¶
create_completion(
request: CompletionRequest, raw_request: Request
)
Source code in vllm/entrypoints/openai/api_server.py
create_embedding async
¶
create_embedding(
request: EmbeddingRequest, raw_request: Request
)
Source code in vllm/entrypoints/openai/api_server.py
create_pooling async
¶
create_pooling(
request: PoolingRequest, raw_request: Request
)
Source code in vllm/entrypoints/openai/api_server.py
create_responses async
¶
create_responses(
request: ResponsesRequest, raw_request: Request
)
Source code in vllm/entrypoints/openai/api_server.py
create_score async
¶
create_score(request: ScoreRequest, raw_request: Request)
Source code in vllm/entrypoints/openai/api_server.py
create_score_v1 async
¶
create_score_v1(
request: ScoreRequest, raw_request: Request
)
Source code in vllm/entrypoints/openai/api_server.py
create_server_socket ¶
Source code in vllm/entrypoints/openai/api_server.py
create_server_unix_socket ¶
create_transcriptions async
¶
create_transcriptions(
raw_request: Request,
request: Annotated[TranscriptionRequest, Form()],
)
Source code in vllm/entrypoints/openai/api_server.py
create_translations async
¶
create_translations(
request: Annotated[TranslationRequest, Form()],
raw_request: Request,
)
Source code in vllm/entrypoints/openai/api_server.py
detokenize async
¶
detokenize(
request: DetokenizeRequest, raw_request: Request
)
Source code in vllm/entrypoints/openai/api_server.py
do_rerank async
¶
do_rerank(request: RerankRequest, raw_request: Request)
Source code in vllm/entrypoints/openai/api_server.py
do_rerank_v1 async
¶
do_rerank_v1(request: RerankRequest, raw_request: Request)
Source code in vllm/entrypoints/openai/api_server.py
do_rerank_v2 async
¶
do_rerank_v2(request: RerankRequest, raw_request: Request)
Source code in vllm/entrypoints/openai/api_server.py
embedding ¶
embedding(
request: Request,
) -> Optional[OpenAIServingEmbedding]
engine_client ¶
engine_client(request: Request) -> EngineClient
get_server_load_metrics async
¶
Source code in vllm/entrypoints/openai/api_server.py
health async
¶
init_app_state async
¶
init_app_state(
engine_client: EngineClient,
vllm_config: VllmConfig,
state: State,
args: Namespace,
) -> None
Source code in vllm/entrypoints/openai/api_server.py
invocations async
¶
For SageMaker compatibility, routes requests to the appropriate handler based on the request type.
Source code in vllm/entrypoints/openai/api_server.py
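In practice this means a single /invocations route accepts any of the supported request shapes and forwards each to the handler whose schema the body validates against, per INVOCATION_VALIDATORS above. A hedged client-side illustration (the model name is a placeholder, and a given server only accepts the request types its model supports):

```python
import requests

# The body validates as a ChatCompletionRequest, so /invocations forwards it
# to the chat completion handler.
resp = requests.post(
    "http://localhost:8000/invocations",
    json={"model": "my-model", "messages": [{"role": "user", "content": "Hi"}]},
)
print(resp.status_code, resp.json())
```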
is_scaling_elastic_ep async
¶
is_sleeping async
¶
Source code in vllm/entrypoints/openai/api_server.py
lifespan async
¶
Source code in vllm/entrypoints/openai/api_server.py
load_log_config ¶
Source code in vllm/entrypoints/openai/api_server.py
load_lora_adapter async
¶
load_lora_adapter(
request: LoadLoRAAdapterRequest, raw_request: Request
)
Source code in vllm/entrypoints/openai/api_server.py
maybe_register_tokenizer_info_endpoint ¶
Register the tokenizer info endpoint if enabled.
Source code in vllm/entrypoints/openai/api_server.py
models ¶
models(request: Request) -> OpenAIServingModels
mount_metrics ¶
Mount prometheus metrics to a FastAPI app.
Source code in vllm/entrypoints/openai/api_server.py
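Not vLLM's exact implementation, but the general technique looks like this: expose a Prometheus ASGI app under /metrics on the FastAPI application.

```python
from fastapi import FastAPI
from prometheus_client import make_asgi_app

app = FastAPI()
# Serves Prometheus text-format metrics at GET /metrics.
app.mount("/metrics", make_asgi_app())
```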
ping async
¶
Ping check. Endpoint required for SageMaker.
pooling ¶
pooling(request: Request) -> Optional[OpenAIServingPooling]
rerank ¶
rerank(request: Request) -> Optional[ServingScores]
reset_prefix_cache async
¶
Reset the prefix cache. Note that the API server does not currently check whether the prefix cache was successfully reset.
Source code in vllm/entrypoints/openai/api_server.py
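A hedged usage sketch, assuming the route path matches the handler name (/reset_prefix_cache); as noted above, a successful response does not guarantee the cache was actually cleared.

```python
import requests

resp = requests.post("http://localhost:8000/reset_prefix_cache")
print(resp.status_code)
```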
responses ¶
responses(
request: Request,
) -> Optional[OpenAIServingResponses]
retrieve_responses async
¶
retrieve_responses(response_id: str, raw_request: Request)
Source code in vllm/entrypoints/openai/api_server.py
run_server async
¶
Run a single-worker API server.
Source code in vllm/entrypoints/openai/api_server.py
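A hedged launch sketch mirroring the module's __main__ path: build the argument parser, parse the CLI flags, and run the server under uvloop. The make_arg_parser import path and the model name are assumptions.

```python
import uvloop

from vllm.entrypoints.openai.api_server import run_server
from vllm.entrypoints.openai.cli_args import make_arg_parser  # assumed import path
from vllm.utils import FlexibleArgumentParser

parser = FlexibleArgumentParser(
    description="vLLM OpenAI-Compatible RESTful API server."
)
args = make_arg_parser(parser).parse_args(["--model", "facebook/opt-125m"])
uvloop.run(run_server(args))
```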
run_server_worker async
¶
Run a single API server worker.
Source code in vllm/entrypoints/openai/api_server.py
scale_elastic_ep async
¶
Source code in vllm/entrypoints/openai/api_server.py
score ¶
score(request: Request) -> Optional[ServingScores]
setup_server ¶
Validate the API server args, set up the signal handler, and create a socket ready to serve.
Source code in vllm/entrypoints/openai/api_server.py
show_available_models async
¶
show_server_info async
¶
show_version async
¶
sleep async
¶
Source code in vllm/entrypoints/openai/api_server.py
start_profile async
¶
Source code in vllm/entrypoints/openai/api_server.py
stop_profile async
¶
Source code in vllm/entrypoints/openai/api_server.py
tokenization ¶
tokenization(request: Request) -> OpenAIServingTokenization
tokenize async
¶
tokenize(request: TokenizeRequest, raw_request: Request)
Source code in vllm/entrypoints/openai/api_server.py
transcription ¶
transcription(
request: Request,
) -> OpenAIServingTranscription
translation ¶
translation(request: Request) -> OpenAIServingTranslation
unload_lora_adapter async
¶
unload_lora_adapter(
request: UnloadLoRAAdapterRequest, raw_request: Request
)