vllm.model_executor.models.interfaces
MultiModalEmbeddings module-attribute
¶
The output embeddings must be one of the following formats:
- A list or tuple of 2D tensors, where each tensor corresponds to one input multimodal data item (e.g., an image).
- A single 3D tensor, with the batch dimension grouping the 2D tensors.
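A minimal sketch of the two accepted formats (the shapes and values below are purely illustrative):

```python
import torch

# Format 1: a list/tuple of 2D tensors, one per multimodal item.
# Items may contribute different numbers of feature tokens.
embeddings_per_item = [
    torch.zeros(576, 4096),  # e.g. image 1 -> 576 feature tokens
    torch.zeros(144, 4096),  # e.g. image 2 -> 144 feature tokens
]

# Format 2: a single 3D tensor whose batch dimension groups the 2D tensors
# (only possible when every item yields the same number of tokens).
embeddings_batched = torch.zeros(2, 576, 4096)
```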
HasInnerState ¶
Bases: Protocol
The interface required for all models that have inner state.
Source code in vllm/model_executor/models/interfaces.py
HasNoOps ¶
IsAttentionFree ¶
Bases: Protocol
The interface required for all models, like Mamba, that lack attention but have state whose size is constant with respect to the number of tokens.
Source code in vllm/model_executor/models/interfaces.py
IsHybrid ¶
Bases: Protocol
The interface required for all models, like Jamba, that have both attention and Mamba blocks; it also indicates that the model's hf_config has 'layers_block_type'.
Source code in vllm/model_executor/models/interfaces.py
is_hybrid class-attribute
¶
is_hybrid: Literal[True] = True
A flag that indicates this model has both Mamba and attention blocks; it also indicates that the model's hf_config has 'layers_block_type'.
get_mamba_state_shape_from_config classmethod
¶
get_mamba_state_shape_from_config(
vllm_config: VllmConfig, use_v1: bool = True
) -> tuple[tuple[int, int], tuple[int, int, int]]
Calculate shapes for Mamba's convolutional and state caches.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
vllm_config | VllmConfig | vLLM config | required |
use_v1 | bool | Get shapes for V1 (or V0) | True |
Returns:
Type | Description |
---|---|
tuple[int, int] | Shape of the convolutional state cache. |
tuple[int, int, int] | Shape of the temporal (SSM) state cache. |
tuple[tuple[int, int], tuple[int, int, int]] | Tuple containing both shapes. |
Source code in vllm/model_executor/models/interfaces.py
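A brief usage sketch; model_cls and vllm_config below stand in for a model class implementing IsHybrid and an already-constructed VllmConfig:

```python
from typing import Any

def describe_mamba_cache_shapes(model_cls: Any, vllm_config: Any) -> None:
    # Both cache shapes can be queried from the config without
    # instantiating the model.
    conv_state_shape, ssm_state_shape = model_cls.get_mamba_state_shape_from_config(
        vllm_config
    )
    print("conv state cache shape:", conv_state_shape)  # tuple[int, int]
    print("SSM state cache shape:", ssm_state_shape)    # tuple[int, int, int]
```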
MixtureOfExperts ¶
Bases: Protocol
Check if the model is a mixture of experts (MoE) model.
Source code in vllm/model_executor/models/interfaces.py
expert_weights instance-attribute
¶
expert_weights: MutableSequence[Iterable[Tensor]]
Expert weights saved in this rank.
The first dimension indexes the layer, and the second dimension indexes the different parameters within the layer, e.g. the up/down projection weights.
num_expert_groups instance-attribute
¶
num_expert_groups: int
Number of expert groups in this model.
num_local_physical_experts instance-attribute
¶
num_local_physical_experts: int
Number of local physical experts in this model.
num_logical_experts instance-attribute
¶
num_logical_experts: int
Number of logical experts in this model.
num_physical_experts instance-attribute
¶
num_physical_experts: int
Number of physical experts in this model.
num_redundant_experts instance-attribute
¶
num_redundant_experts: int
Number of redundant experts in this model.
num_routed_experts instance-attribute
¶
num_routed_experts: int
Number of routed experts in this model.
num_shared_experts instance-attribute
¶
num_shared_experts: int
Number of shared experts in this model.
set_eplb_state ¶
set_eplb_state(
expert_load_view: Tensor,
logical_to_physical_map: Tensor,
logical_replica_count: Tensor,
) -> None
Register the EPLB state in the MoE model.
Since these are views of the actual EPLB state, any changes made by the EPLB algorithm are automatically reflected in the model's behavior without requiring additional method calls to set new states.
You should also collect the model's expert_weights here instead of in the weight loader, since further processing such as quantization may be applied to the weights after initial weight loading.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
expert_load_view | Tensor | A view of the expert load metrics tensor. | required |
logical_to_physical_map | Tensor | Mapping from logical to physical experts. | required |
logical_replica_count | Tensor | Count of replicas for each logical expert. | required |
Source code in vllm/model_executor/models/interfaces.py
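A minimal sketch of how a model might implement this hook, assuming a hypothetical layer layout (self.layers, layer.mlp.experts) and hypothetical per-layer helpers; none of these names are part of the interface:

```python
import torch
from torch import nn

class MyMoEModel(nn.Module):
    def set_eplb_state(
        self,
        expert_load_view: torch.Tensor,
        logical_to_physical_map: torch.Tensor,
        logical_replica_count: torch.Tensor,
    ) -> None:
        self.expert_weights = []
        for layer_idx, layer in enumerate(self.layers):
            # Hand each MoE layer a view of the shared EPLB state so that
            # updates made by the EPLB algorithm are picked up automatically.
            layer.mlp.experts.set_eplb_state(  # hypothetical per-layer helper
                moe_layer_idx=layer_idx,
                expert_load_view=expert_load_view,
                logical_to_physical_map=logical_to_physical_map,
                logical_replica_count=logical_replica_count,
            )
            # Collect expert weights here, after any post-load processing
            # (e.g. quantization), rather than in the weight loader.
            self.expert_weights.append(
                layer.mlp.experts.get_expert_weights()  # hypothetical helper
            )
```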
SupportsCrossEncoding ¶
Bases: Protocol
The interface required for all models that support cross encoding.
Source code in vllm/model_executor/models/interfaces.py
SupportsEagle3 ¶
Bases: Protocol
The interface required for models that support EAGLE3 speculative decoding.
Source code in vllm/model_executor/models/interfaces.py
supports_eagle3 class-attribute
¶
supports_eagle3: Literal[True] = True
A flag that indicates this model supports EAGLE3 speculative decoding.
Note
There is no need to redefine this flag if this class is in the MRO of your model class.
get_eagle3_aux_hidden_state_layers ¶
SupportsLoRA ¶
Bases: Protocol
The interface required for all models that support LoRA.
Source code in vllm/model_executor/models/interfaces.py
SupportsMultiModal ¶
Bases: Protocol
The interface required for all multi-modal models.
Source code in vllm/model_executor/models/interfaces.py
supports_multimodal class-attribute
¶
supports_multimodal: Literal[True] = True
A flag that indicates this model supports multi-modal inputs.
Note
There is no need to redefine this flag if this class is in the MRO of your model class.
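In practice, a model declares support simply by listing the protocol among its base classes so the flag is inherited through the MRO; a sketch with a placeholder class name and elided method bodies:

```python
from torch import nn
from vllm.model_executor.models.interfaces import SupportsMultiModal

class MyVLModel(nn.Module, SupportsMultiModal):
    # supports_multimodal is inherited from SupportsMultiModal; the required
    # methods (get_multimodal_embeddings, get_input_embeddings, ...) would be
    # implemented here.
    ...
```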
get_input_embeddings ¶
get_input_embeddings(
input_ids: Tensor,
multimodal_embeddings: Optional[
MultiModalEmbeddings
] = None,
attn_metadata: Optional[AttentionMetadata] = None,
) -> Tensor
get_input_embeddings(
input_ids: Tensor,
multimodal_embeddings: Optional[
MultiModalEmbeddings
] = None,
) -> Tensor
get_input_embeddings(
input_ids: Tensor,
multimodal_embeddings: Optional[
MultiModalEmbeddings
] = None,
attn_metadata: Optional[AttentionMetadata] = None,
) -> Tensor
Returns the input embeddings merged from the text embeddings from input_ids and the multimodal embeddings generated from multimodal kwargs.
Source code in vllm/model_executor/models/interfaces.py
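A minimal sketch (not vLLM's actual implementation) of such a merge: text embeddings are computed from input_ids, then positions of a placeholder token are overwritten with the multimodal feature vectors. The IMAGE_TOKEN_ID constant and the helper name are assumptions.

```python
import torch
from torch import nn

IMAGE_TOKEN_ID = 32000  # assumed placeholder token id

def merge_input_embeddings(
    embed_tokens: nn.Embedding,
    input_ids: torch.Tensor,
    multimodal_embeddings: list[torch.Tensor] | None = None,
) -> torch.Tensor:
    # Start from plain text embeddings.
    inputs_embeds = embed_tokens(input_ids)
    if multimodal_embeddings:
        # Concatenate per-item embeddings and scatter them into the
        # placeholder positions, in prompt order.
        mm = torch.cat(multimodal_embeddings, dim=0)
        mask = input_ids == IMAGE_TOKEN_ID
        inputs_embeds[mask] = mm.to(inputs_embeds.dtype)
    return inputs_embeds
```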
get_language_model ¶
get_language_model() -> Module
Returns the underlying language model used for text generation.
This is typically the torch.nn.Module
instance responsible for processing the merged multimodal embeddings and producing hidden states.
Returns:
Type | Description |
---|---|
Module | torch.nn.Module: The core language model component. |
Source code in vllm/model_executor/models/interfaces.py
get_multimodal_embeddings ¶
get_multimodal_embeddings(
**kwargs: object,
) -> MultiModalEmbeddings
Returns multimodal embeddings generated from multimodal kwargs to be merged with text embeddings.
Note
The returned multimodal embeddings must be in the same order as the appearances of their corresponding multimodal data item in the input prompt.
Source code in vllm/model_executor/models/interfaces.py
get_placeholder_str classmethod
¶
Get the placeholder text for the i-th modality item in the prompt.
SupportsMultiModalWithRawInput ¶
Bases: SupportsMultiModal, Protocol
The interface required for all multi-modal models.
Source code in vllm/model_executor/models/interfaces.py
supports_multimodal_raw_input class-attribute
¶
supports_multimodal_raw_input: Literal[True] = True
A flag that indicates this model supports multi-modal inputs and processes them in their raw form rather than as embeddings.
Note
There is no need to redefine this flag if this class is in the MRO of your model class.
SupportsPP ¶
Bases: Protocol
The interface required for all models that support pipeline parallel.
Source code in vllm/model_executor/models/interfaces.py
supports_pp class-attribute
¶
supports_pp: Literal[True] = True
A flag that indicates this model supports pipeline parallel.
Note
There is no need to redefine this flag if this class is in the MRO of your model class.
forward ¶
forward(
*, intermediate_tensors: Optional[IntermediateTensors]
) -> Union[Tensor, IntermediateTensors]
Accept IntermediateTensors when PP rank > 0.
Return IntermediateTensors for all but the last PP rank, which returns the final hidden states instead.
Source code in vllm/model_executor/models/interfaces.py
make_empty_intermediate_tensors ¶
make_empty_intermediate_tensors(
batch_size: int, dtype: dtype, device: device
) -> IntermediateTensors
Called when PP rank > 0 for profiling purposes.
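A sketch of the forward contract above; the attribute names (embed_tokens, layers, norm, is_last_pp_rank) are placeholders for the model's own components and for vLLM's pipeline-parallel utilities:

```python
import torch
from torch import nn
from vllm.sequence import IntermediateTensors

class MyPPModel(nn.Module):
    def forward(
        self,
        input_ids: torch.Tensor,
        intermediate_tensors: IntermediateTensors | None = None,
    ) -> torch.Tensor | IntermediateTensors:
        if intermediate_tensors is not None:
            # PP rank > 0: resume from tensors produced by the previous rank.
            hidden_states = intermediate_tensors["hidden_states"]
        else:
            hidden_states = self.embed_tokens(input_ids)
        hidden_states = self.layers(hidden_states)
        if not self.is_last_pp_rank:
            # Non-last ranks hand IntermediateTensors to the next rank.
            return IntermediateTensors({"hidden_states": hidden_states})
        # The last rank returns the final hidden states.
        return self.norm(hidden_states)
```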
SupportsQuant ¶
The interface required for all models that support quantization.
Source code in vllm/model_executor/models/interfaces.py
packed_modules_mapping class-attribute
¶
__new__ ¶
__new__(*args, **kwargs) -> Self
Source code in vllm/model_executor/models/interfaces.py
_find_quant_config staticmethod
¶
_find_quant_config(
*args, **kwargs
) -> Optional[QuantizationConfig]
Find the quantization config passed through the model constructor args.
Source code in vllm/model_executor/models/interfaces.py
SupportsScoreTemplate ¶
Bases: Protocol
The interface required for all models that support score template.
Source code in vllm/model_executor/models/interfaces.py
supports_score_template class-attribute
¶
supports_score_template: Literal[True] = True
A flag that indicates this model supports score template.
Note
There is no need to redefine this flag if this class is in the MRO of your model class.
get_score_template classmethod
¶
Generate a full prompt by populating the score template with query and document content.
post_process_tokens classmethod
¶
post_process_tokens(prompt: TokensPrompt) -> None
SupportsTranscription ¶
Bases: Protocol
The interface required for all models that support transcription.
Source code in vllm/model_executor/models/interfaces.py
supports_transcription_only class-attribute
¶
supports_transcription_only: bool = False
Transcription models can opt out of text generation by setting this to True.
__init_subclass__ ¶
Source code in vllm/model_executor/models/interfaces.py
get_generation_prompt classmethod
¶
get_generation_prompt(
audio: ndarray,
stt_config: SpeechToTextConfig,
model_config: ModelConfig,
language: Optional[str],
task_type: str,
request_prompt: str,
) -> PromptType
Get the prompt for the ASR model. The model has control over the construction, as long as it returns a valid PromptType.
Source code in vllm/model_executor/models/interfaces.py
get_num_audio_tokens classmethod
¶
get_num_audio_tokens(
audio_duration_s: float,
stt_config: SpeechToTextConfig,
model_config: ModelConfig,
) -> Optional[int]
Map from audio duration to number of audio tokens produced by the ASR model, without running a forward pass. This is used for estimating the amount of processing for this audio.
Source code in vllm/model_executor/models/interfaces.py
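As an illustration of the kind of estimate this hook is meant to provide (the frame rate below is an assumption, not taken from any particular model):

```python
def estimate_audio_tokens(audio_duration_s: float) -> int:
    # Assume the feature extractor emits a fixed number of frames per second
    # and each frame becomes one audio token; 100 frames/s is illustrative.
    frames_per_second = 100
    return int(audio_duration_s * frames_per_second)

# e.g. a 30-second clip would be estimated at 3000 audio tokens.
```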
get_other_languages classmethod
¶
get_speech_to_text_config classmethod
¶
get_speech_to_text_config(
model_config: ModelConfig,
task_type: Literal["transcribe", "translate"],
) -> SpeechToTextConfig
Get the speech to text config for the ASR model.
validate_language classmethod
¶
Ensure the language specified in the transcription request is a valid ISO 639-1 language code. If the request language is valid, but not natively supported by the model, trigger a warning (but not an exception).
Source code in vllm/model_executor/models/interfaces.py
SupportsV0Only ¶
Bases: Protocol
Models with this interface are not compatible with V1 vLLM.
Source code in vllm/model_executor/models/interfaces.py
_SupportsPPType ¶
Bases: Protocol
Source code in vllm/model_executor/models/interfaces.py
forward ¶
forward(
*, intermediate_tensors: Optional[IntermediateTensors]
) -> Union[Tensor, IntermediateTensors]
make_empty_intermediate_tensors ¶
make_empty_intermediate_tensors(
batch_size: int, dtype: dtype, device: device
) -> IntermediateTensors
_supports_cross_encoding ¶
_supports_cross_encoding(
model: Union[type[object], object],
) -> Union[
TypeIs[type[SupportsCrossEncoding]],
TypeIs[SupportsCrossEncoding],
]
_supports_lora ¶
_supports_pp_attributes ¶
_supports_pp_inspect ¶
get_default_pooling_type ¶
has_inner_state ¶
has_inner_state(model: object) -> TypeIs[HasInnerState]
has_inner_state(
model: type[object],
) -> TypeIs[type[HasInnerState]]
has_inner_state(
model: Union[type[object], object],
) -> Union[
TypeIs[type[HasInnerState]], TypeIs[HasInnerState]
]
has_noops ¶
is_attention_free ¶
is_attention_free(model: object) -> TypeIs[IsAttentionFree]
is_attention_free(
model: type[object],
) -> TypeIs[type[IsAttentionFree]]
is_attention_free(
model: Union[type[object], object],
) -> Union[
TypeIs[type[IsAttentionFree]], TypeIs[IsAttentionFree]
]
is_hybrid ¶
is_mixture_of_experts ¶
is_mixture_of_experts(
model: object,
) -> TypeIs[MixtureOfExperts]
supports_cross_encoding ¶
supports_cross_encoding(
model: type[object],
) -> TypeIs[type[SupportsCrossEncoding]]
supports_cross_encoding(
model: object,
) -> TypeIs[SupportsCrossEncoding]
supports_cross_encoding(
model: Union[type[object], object],
) -> Union[
TypeIs[type[SupportsCrossEncoding]],
TypeIs[SupportsCrossEncoding],
]
supports_eagle3 ¶
supports_eagle3(
model: type[object],
) -> TypeIs[type[SupportsEagle3]]
supports_eagle3(model: object) -> TypeIs[SupportsEagle3]
supports_eagle3(
model: Union[type[object], object],
) -> Union[
TypeIs[type[SupportsEagle3]], TypeIs[SupportsEagle3]
]
supports_lora ¶
supports_lora(
model: type[object],
) -> TypeIs[type[SupportsLoRA]]
supports_lora(model: object) -> TypeIs[SupportsLoRA]
supports_lora(
model: Union[type[object], object],
) -> Union[
TypeIs[type[SupportsLoRA]], TypeIs[SupportsLoRA]
]
Source code in vllm/model_executor/models/interfaces.py
supports_multimodal ¶
supports_multimodal(
model: type[object],
) -> TypeIs[type[SupportsMultiModal]]
supports_multimodal(
model: object,
) -> TypeIs[SupportsMultiModal]
supports_multimodal(
model: Union[type[object], object],
) -> Union[
TypeIs[type[SupportsMultiModal]],
TypeIs[SupportsMultiModal],
]
supports_multimodal_raw_input ¶
supports_multimodal_raw_input(
model: object,
) -> TypeIs[SupportsMultiModalWithRawInput]
supports_multimodal_raw_input(
model: type[object],
) -> TypeIs[type[SupportsMultiModalWithRawInput]]
supports_multimodal_raw_input(
model: Union[type[object], object],
) -> Union[
TypeIs[type[SupportsMultiModalWithRawInput]],
TypeIs[SupportsMultiModalWithRawInput],
]
Source code in vllm/model_executor/models/interfaces.py
supports_pp ¶
supports_pp(
model: type[object],
) -> TypeIs[type[SupportsPP]]
supports_pp(model: object) -> TypeIs[SupportsPP]
supports_pp(
model: Union[type[object], object],
) -> Union[
bool, TypeIs[type[SupportsPP]], TypeIs[SupportsPP]
]
Source code in vllm/model_executor/models/interfaces.py
supports_score_template ¶
supports_score_template(
model: type[object],
) -> TypeIs[type[SupportsScoreTemplate]]
supports_score_template(
model: object,
) -> TypeIs[SupportsScoreTemplate]
supports_score_template(
model: Union[type[object], object],
) -> Union[
TypeIs[type[SupportsScoreTemplate]],
TypeIs[SupportsScoreTemplate],
]
supports_transcription ¶
supports_transcription(
model: type[object],
) -> TypeIs[type[SupportsTranscription]]
supports_transcription(
model: object,
) -> TypeIs[SupportsTranscription]
supports_transcription(
model: Union[type[object], object],
) -> Union[
TypeIs[type[SupportsTranscription]],
TypeIs[SupportsTranscription],
]
supports_v0_only ¶
supports_v0_only(
model: type[object],
) -> TypeIs[type[SupportsV0Only]]
supports_v0_only(model: object) -> TypeIs[SupportsV0Only]
supports_v0_only(
model: Union[type[object], object],
) -> Union[
TypeIs[type[SupportsV0Only]], TypeIs[SupportsV0Only]
]
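A brief usage sketch of these guard functions; inspect_model is a made-up helper. Each guard accepts either a model class or an instance and narrows the static type inside the branch:

```python
from vllm.model_executor.models.interfaces import (
    supports_multimodal,
    supports_pp,
)

def inspect_model(model: object) -> None:
    if supports_multimodal(model):
        # Here the type checker treats `model` as SupportsMultiModal.
        print("multi-modal:", model.supports_multimodal)
    if supports_pp(model):
        print("pipeline parallel:", model.supports_pp)
```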