vllm.entrypoints.openai.serving_classification
ClassificationMixin ¶
Bases: OpenAIServing
Source code in vllm/entrypoints/openai/serving_classification.py
_build_response ¶
_build_response(
    ctx: ServeContext,
) -> Union[ClassificationResponse, ErrorResponse]
Convert model outputs to a formatted classification response with probabilities and labels.
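For orientation, the conversion this helper performs can be sketched as follows. This is a simplified stand-in, not vLLM's actual implementation: it assumes the pooled output is a plain list of class logits, and the id2label mapping is a hypothetical stand-in for the model's label table.

```python
# Simplified sketch of what _build_response computes per input:
# raw class logits -> softmax probabilities -> winning label.
import math

def build_classification_data(logits: list[float], id2label: dict[int, str]) -> dict:
    # Numerically stabilized softmax over the class logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Predicted label = argmax class, mapped through the label table.
    best = max(range(len(probs)), key=probs.__getitem__)
    return {"label": id2label.get(best), "probs": probs, "num_classes": len(probs)}

print(build_classification_data([0.2, 2.5, -1.0], {0: "negative", 1: "positive", 2: "neutral"}))
```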
_preprocess async ¶
_preprocess(ctx: ServeContext) -> Optional[ErrorResponse]
Process classification inputs: tokenize text, resolve adapters, and prepare model-specific inputs.
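Conceptually, the tokenization half of this step looks like the sketch below. It is a hypothetical, self-contained stand-in (plain arguments instead of a ServeContext, and Hugging Face's AutoTokenizer instead of vLLM's tokenizer plumbing), not the method's actual body.

```python
# Hypothetical stand-in for the tokenization performed during preprocessing.
from typing import Optional, Union
from transformers import AutoTokenizer

def tokenize_classification_inputs(
    model_name: str,
    inputs: Union[str, list[str]],
    max_length: Optional[int] = None,
) -> list[list[int]]:
    # A classification request may carry one string or a batch of strings.
    texts = [inputs] if isinstance(inputs, str) else inputs
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    # Truncate to the given limit when one is supplied.
    return [
        tokenizer.encode(t, truncation=max_length is not None, max_length=max_length)
        for t in texts
    ]
```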
ServingClassification ¶
Bases: ClassificationMixin
__init__ ¶
__init__(
    engine_client: EngineClient,
    model_config: ModelConfig,
    models: OpenAIServingModels,
    *,
    request_logger: Optional[RequestLogger],
) -> None
_create_pooling_params ¶
_create_pooling_params(
    ctx: ClassificationServeContext,
) -> Union[PoolingParams, ErrorResponse]
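The pooling parameters built here drive the same pooling machinery as vLLM's offline classification API, so the offline path is a convenient way to see the end-to-end behavior. A minimal sketch, with the model name taken from vLLM's classification examples (the task/runner argument may differ across vLLM versions):

```python
# Offline counterpart of the classification serving path.
from vllm import LLM

# Model name is illustrative; any sequence-classification model works.
llm = LLM(model="jason9693/Qwen2.5-1.5B-apeach", task="classify")
(output,) = llm.classify(["vLLM is wonderful!"])
probs = output.outputs.probs  # one probability per class
print(f"{len(probs)} classes, top prob {max(probs):.4f}")
```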
_validate_request ¶
_validate_request(
    ctx: ClassificationServeContext,
) -> Optional[ErrorResponse]
create_classify async ¶
create_classify(
    request: ClassificationRequest, raw_request: Request
) -> Union[ClassificationResponse, ErrorResponse]
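End to end, create_classify backs the server's classification route. A hedged client-side sketch, assuming a local vLLM server started with a classification model on the default port; the request and response field names follow the ClassificationRequest/ClassificationResponse types in vllm/entrypoints/openai/protocol.py:

```python
# Client-side sketch of calling the endpoint backed by create_classify.
import requests

resp = requests.post(
    "http://localhost:8000/classify",
    json={
        "model": "jason9693/Qwen2.5-1.5B-apeach",  # illustrative model name
        "input": ["vLLM is wonderful!", "The service timed out again."],
    },
)
resp.raise_for_status()
for item in resp.json()["data"]:
    print(item["index"], item["label"], f"{max(item['probs']):.4f}")
```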