vllm.model_executor.models.nvlm_d
NVLMDummyInputsBuilder ¶
Bases: BaseInternVLDummyInputsBuilder[NVLMProcessingInfo]
get_dummy_mm_data ¶
get_dummy_mm_data(
seq_len: int, mm_counts: Mapping[str, int]
) -> MultiModalDataDict
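During memory profiling, this builder supplies placeholder multimodal data for the requested number of items per modality. A minimal sketch of that behaviour, assuming a 448x448 dummy image size and the hypothetical helper name build_dummy_mm_data; the real method derives the image resolution from the HF config and may also take seq_len into account.

from PIL import Image

def build_dummy_mm_data(seq_len: int, mm_counts: dict[str, int]) -> dict:
    # Hypothetical stand-in for NVLMDummyInputsBuilder.get_dummy_mm_data:
    # produce one blank RGB image per requested image item.
    num_images = mm_counts.get("image", 0)
    width = height = 448  # assumed ViT input size; not taken from the model config
    return {"image": [Image.new("RGB", (width, height)) for _ in range(num_images)]}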
get_dummy_text ¶
get_dummy_text(mm_counts: Mapping[str, int]) -> str
NVLMMultiModalProcessor ¶
Bases: BaseInternVLMultiModalProcessor[NVLMProcessingInfo]
_get_prompt_updates ¶
_get_prompt_updates(
mm_items: MultiModalDataItems,
hf_processor_mm_kwargs: Mapping[str, object],
out_mm_kwargs: MultiModalKwargsItems,
) -> Sequence[PromptUpdate]
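This hook tells the multimodal processor how image placeholders in the prompt are expanded into the model's image token sequence. A simplified sketch of the kind of update it returns, assuming the PromptReplacement helper from vllm.multimodal.processing, a literal "<image>" placeholder, and a flat run of context tokens; the real method reads feature_size and num_patches from out_mm_kwargs and delegates the replacement text to NVLMProcessor.get_image_repl.

from vllm.multimodal.processing import PromptReplacement  # assumed import path

IMG_CONTEXT = "<IMG_CONTEXT>"  # assumed image-context token; the real one comes from the tokenizer

def sketch_prompt_updates(feature_size: int):
    # Hypothetical, simplified sketch: expand each "<image>" placeholder into
    # one <Image>...</Image> span of repeated context tokens.
    return [
        PromptReplacement(
            modality="image",
            target="<image>",
            replacement="<Image>" + IMG_CONTEXT * feature_size + "</Image>",
        )
    ]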
NVLMProcessingInfo ¶
Bases: BaseInternVLProcessingInfo
get_hf_processor ¶
get_hf_processor(**kwargs: object) -> NVLMProcessor
NVLMProcessor ¶
Bases: BaseInternVLProcessor
get_image_repl ¶
get_image_repl(
feature_size: int, num_patches: Optional[int]
) -> PromptUpdateDetails[str]
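NVLM-D differs from the plain InternVL prompt format in that every dynamic-resolution tile is labelled with its own tag inside an <Image> ... </Image> span. A rough sketch of how such a replacement string can be assembled, assuming <tile_k>/<tile_global_thumbnail> tag names, an IMG_CONTEXT placeholder token, and that the global thumbnail comes last; the exact tag set, ordering, and the PromptUpdateDetails bookkeeping should be taken from the source.

IMG_CONTEXT = "<IMG_CONTEXT>"  # assumed image-context token

def make_image_repl(feature_size: int, num_patches: int) -> str:
    # Hypothetical sketch: one tag per dynamic tile plus a global-thumbnail tag,
    # each followed by that tile's share of the image context tokens.
    tile_tags = [f"<tile_{i}>" for i in range(1, num_patches)] + ["<tile_global_thumbnail>"]
    per_tile = feature_size // num_patches
    body = "".join(tag + IMG_CONTEXT * per_tile for tag in tile_tags)
    return "<Image>" + body + "</Image>"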
NVLM_D_Model ¶
Bases: InternVLChatModel
_init_mlp1 ¶
_init_mlp1(config: PretrainedConfig) -> Sequential
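mlp1 is the projector that maps pixel-shuffled vision features into the language model's hidden size. A minimal sketch of the common LayerNorm -> Linear -> GELU -> Linear pattern, assuming plain vit_hidden_size/llm_hidden_size/downsample_ratio parameters; NVLM_D_Model overrides this hook, so the actual layer layout and parameter names should be read from the source.

import torch.nn as nn

def init_mlp1_sketch(vit_hidden_size: int, llm_hidden_size: int, downsample_ratio: float) -> nn.Sequential:
    # Hypothetical sketch of a ViT-to-LLM projector. Pixel shuffle packs
    # (1 / downsample_ratio) ** 2 patches into one token, so the projector's
    # input width grows by that factor before the first Linear.
    scale = int(1 / downsample_ratio) ** 2
    return nn.Sequential(
        nn.LayerNorm(vit_hidden_size * scale),
        nn.Linear(vit_hidden_size * scale, llm_hidden_size),
        nn.GELU(),
        nn.Linear(llm_hidden_size, llm_hidden_size),
    )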
_init_vision_model ¶
_init_vision_model(
config: PretrainedConfig,
quant_config: Optional[QuantizationConfig],
*,
is_mono: bool,
prefix: str,
)
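NVLM_D_Model is not constructed by hand; vLLM instantiates it when an NVLM-D checkpoint is loaded. A usage sketch with the offline LLM API, assuming the nvidia/NVLM-D-72B checkpoint, a local example.jpg, and a bare "<image>" placeholder prompt; tensor_parallel_size and the chat template should be adapted to the actual deployment.

from PIL import Image
from vllm import LLM, SamplingParams

llm = LLM(
    model="nvidia/NVLM-D-72B",   # assumed checkpoint name
    trust_remote_code=True,
    tensor_parallel_size=8,      # sized for a 72B checkpoint; adjust to your hardware
)

image = Image.open("example.jpg")  # placeholder path
outputs = llm.generate(
    {
        "prompt": "<image>\nDescribe this image.",
        "multi_modal_data": {"image": image},
    },
    SamplingParams(max_tokens=128),
)
print(outputs[0].outputs[0].text)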