vllm.entrypoints.score_utils
ScoreContentPartParam module-attribute ¶
ScoreContentPartParam: TypeAlias = Union[
ChatCompletionContentPartImageParam,
ChatCompletionContentPartImageEmbedsParam,
]
ScoreMultiModalParam ¶
Bases: TypedDict
A specialized parameter type for scoring multimodal content.

Reasons for not reusing CustomChatCompletionMessageParam directly:

1. Score tasks don't need the 'role' field (user/assistant/system) that's required in chat completions.
2. Including chat-specific fields would confuse users about their purpose in scoring.
3. This is a more focused interface that exposes only what's needed for scoring.
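A usage sketch, assuming `ScoreMultiModalParam` wraps its parts in a single `content` list (the field name is an assumption here; the content-part dicts follow `ScoreContentPartParam`):

```python
from typing import TypedDict

# Assumed minimal shape: one `content` field holding a list of
# ScoreContentPartParam-style dicts (simplified to plain dicts here).
class ScoreMultiModalParam(TypedDict):
    content: list[dict]

# A query carrying one image part; no 'role' field is involved.
query: ScoreMultiModalParam = {
    "content": [
        {"type": "image_url",
         "image_url": {"url": "https://example.com/query.png"}},
    ],
}
```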
Source code in vllm/entrypoints/score_utils.py
_cosine_similarity ¶
_cosine_similarity(
tokenizer: Union[
PreTrainedTokenizer, PreTrainedTokenizerFast
],
embed_1: list[PoolingRequestOutput],
embed_2: list[PoolingRequestOutput],
) -> list[PoolingRequestOutput]
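The function operates on pooled embedding outputs; the core computation is plain cosine similarity, sketched here on raw float lists rather than `PoolingRequestOutput` objects:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # dot(a, b) / (||a|| * ||b||)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

identical = cosine_similarity([1.0, 0.0], [1.0, 0.0])   # 1.0
orthogonal = cosine_similarity([1.0, 0.0], [0.0, 1.0])  # 0.0
```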
Source code in vllm/entrypoints/score_utils.py
_parse_score_content ¶
_parse_score_content(
data: Union[str, ScoreContentPartParam],
mm_tracker: BaseMultiModalItemTracker,
) -> Optional[_ContentPart]
Source code in vllm/entrypoints/score_utils.py
_validate_score_input_lens ¶
_validate_score_input_lens(
data_1: Union[list[str], list[ScoreContentPartParam]],
data_2: Union[list[str], list[ScoreContentPartParam]],
)
Source code in vllm/entrypoints/score_utils.py
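A hypothetical re-implementation of the length check, assuming scoring pairs inputs either 1-to-N (a single item broadcast against many) or N-to-N (equal lengths); the exact rule and error messages are assumptions:

```python
def validate_score_input_lens(data_1: list, data_2: list) -> None:
    # Assumed rule: a single item on one side broadcasts against the
    # other; otherwise the two lists must have equal lengths.
    if len(data_1) == 0 or len(data_2) == 0:
        raise ValueError("At least one item is required in each input")
    if len(data_1) > 1 and len(data_1) != len(data_2):
        raise ValueError("Input lengths must be 1:N or N:N")
```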
apply_score_template ¶
apply_score_template(
model_config: ModelConfig, prompt_1: str, prompt_2: str
) -> str
Source code in vllm/entrypoints/score_utils.py
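A rough sketch of what a score template application looks like. The real template comes from `model_config` and is model-specific; the separator used below is purely illustrative:

```python
def apply_score_template(prompt_1: str, prompt_2: str) -> str:
    # Hypothetical template: joins the query and document with a
    # made-up separator. Real models define their own score template.
    return f"{prompt_1} [SEP] {prompt_2}"

combined = apply_score_template("what is vLLM?", "vLLM is an inference engine.")
```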
compress_token_type_ids ¶
Return the position of the first 1, or the length of the list if no 1 is found.
Source code in vllm/entrypoints/score_utils.py
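A sketch of the behavior the docstring describes, assuming the input is a list of token-type ids (0s for the first segment, 1s for the second):

```python
def compress_token_type_ids(token_type_ids: list[int]) -> int:
    # Position of the first 1; the list length if there is no 1.
    for i, token_type in enumerate(token_type_ids):
        if token_type == 1:
            return i
    return len(token_type_ids)
```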
get_score_prompt ¶
get_score_prompt(
model_config: ModelConfig,
tokenizer: AnyTokenizer,
tokenization_kwargs: dict[str, Any],
data_1: Union[str, ScoreContentPartParam],
data_2: Union[str, ScoreContentPartParam],
) -> tuple[str, TokensPrompt]
Source code in vllm/entrypoints/score_utils.py
parse_score_data ¶
parse_score_data(
data_1: Union[str, ScoreContentPartParam],
data_2: Union[str, ScoreContentPartParam],
model_config: ModelConfig,
tokenizer: AnyTokenizer,
) -> tuple[str, str, Optional[MultiModalDataDict]]
Source code in vllm/entrypoints/score_utils.py
post_process_tokens ¶
post_process_tokens(
model_config: ModelConfig, prompt: TokensPrompt
) -> None
Perform architecture-specific manipulations on the input tokens.
Note
This is an in-place operation.
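A sketch of what an in-place token manipulation of this kind looks like: the `TokensPrompt` dict is mutated rather than copied, so the caller's object changes. The separator id and field access below are illustrative assumptions:

```python
def append_sep_in_place(prompt: dict, sep_id: int) -> None:
    # Mutates the prompt's token list directly; nothing is returned,
    # mirroring the in-place contract of post_process_tokens.
    prompt["prompt_token_ids"].append(sep_id)

prompt = {"prompt_token_ids": [101, 7592]}
append_sep_in_place(prompt, 102)  # prompt is modified, not replaced
```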