vllm.model_executor.models.rvl
RVLForConditionalGeneration ¶
Bases: LlavaOnevisionForConditionalGeneration
Source code in vllm/model_executor/models/rvl.py
hf_to_vllm_mapper class-attribute instance-attribute ¶
hf_to_vllm_mapper = WeightsMapper(
orig_to_new_prefix={
"model.language_model.": "language_model.model.",
"model.vision_tower.": "vision_tower.",
"model.multi_modal_projector.": "multi_modal_projector.",
"model.image_newline": "image_newline",
"lm_head.": "language_model.lm_head.",
}
)
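This mapper renames weight prefixes from the Hugging Face checkpoint layout to vLLM's module layout during loading. A minimal sketch of how such prefix remapping behaves (`remap_weight_name` is a hypothetical helper written for illustration, not vLLM's actual `WeightsMapper` implementation):

```python
def remap_weight_name(name: str, orig_to_new_prefix: dict[str, str]) -> str:
    """Rename a checkpoint weight by replacing the first matching prefix."""
    for orig, new in orig_to_new_prefix.items():
        if name.startswith(orig):
            return new + name[len(orig):]
    return name  # no prefix matched: keep the original name


# The prefix table from hf_to_vllm_mapper above.
mapping = {
    "model.language_model.": "language_model.model.",
    "model.vision_tower.": "vision_tower.",
    "model.multi_modal_projector.": "multi_modal_projector.",
    "model.image_newline": "image_newline",
    "lm_head.": "language_model.lm_head.",
}

print(remap_weight_name("model.language_model.layers.0.self_attn.q_proj.weight", mapping))
# language_model.model.layers.0.self_attn.q_proj.weight
print(remap_weight_name("lm_head.weight", mapping))
# language_model.lm_head.weight
```

The net effect is that, for example, language-model weights stored under `model.language_model.` in the checkpoint land under `language_model.model.` in the vLLM module tree.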
__init__ ¶
__init__(
*, vllm_config: VllmConfig, prefix: str = ""
) -> None
RVLDummyInputsBuilder ¶
Bases: LlavaDummyInputsBuilder[RVLProcessingInfo]
Source code in vllm/model_executor/models/rvl.py
get_dummy_mm_data ¶
get_dummy_mm_data(
seq_len: int, mm_counts: Mapping[str, int]
) -> MultiModalDataDict
Source code in vllm/model_executor/models/rvl.py
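`get_dummy_mm_data` produces placeholder multimodal inputs used for memory profiling, sized according to `mm_counts` (how many items of each modality a request may carry). A rough sketch of the contract, assuming zero-filled images as the placeholder content (the function name, default resolution, and plain-list image representation are illustrative, not the actual implementation):

```python
from collections.abc import Mapping


def get_dummy_mm_data_sketch(
    seq_len: int,
    mm_counts: Mapping[str, int],
    image_size: tuple[int, int] = (336, 336),  # assumed placeholder resolution
) -> dict[str, list]:
    """Build one zero-filled dummy image per requested count.

    Each dummy image is a (height, width, channels) nested list of zeros,
    standing in for the real image objects the actual builder creates.
    """
    num_images = mm_counts.get("image", 0)
    h, w = image_size
    dummy_image = [[[0, 0, 0] for _ in range(w)] for _ in range(h)]
    return {"image": [dummy_image for _ in range(num_images)]}


data = get_dummy_mm_data_sketch(seq_len=4096, mm_counts={"image": 2})
```

The point is that the returned dict is keyed by modality ("image") and holds as many dummy items as the profiler asked for, so the engine can measure worst-case multimodal memory use before serving real requests.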
RVLMultiModalProjector ¶
Bases: Module
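A multimodal projector maps vision-encoder output features into the language model's embedding space. A common design in LLaVA-style models is a two-layer MLP with a GELU activation between the layers; the pure-Python sketch below illustrates that pattern on a single feature vector and is not the actual RVL implementation (the layer names, dimensions, and weights here are illustrative):

```python
import math


def gelu(x: float) -> float:
    """Tanh-approximation GELU activation."""
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))


def linear(x: list[float], weight: list[list[float]], bias: list[float]) -> list[float]:
    """y = W @ x + b for a single input vector x."""
    return [sum(xi * wi for xi, wi in zip(x, row)) + b for row, b in zip(weight, bias)]


def project(vision_feature: list[float],
            w1: list[list[float]], b1: list[float],
            w2: list[list[float]], b2: list[float]) -> list[float]:
    """Two-layer MLP projector: linear -> GELU -> linear."""
    hidden = [gelu(v) for v in linear(vision_feature, w1, b1)]
    return linear(hidden, w2, b2)


# Toy dimensions: project a 4-dim vision feature into a 2-dim text space.
w1 = [[0.1, 0.2, 0.3, 0.4], [0.4, 0.3, 0.2, 0.1]]
b1 = [0.0, 0.0]
w2 = [[1.0, 0.0], [0.0, 1.0]]
b2 = [0.0, 0.0]
out = project([1.0, 2.0, 3.0, 4.0], w1, b1, w2, b2)
```

In the real model this runs per image patch token, so the projected vectors can be concatenated with text token embeddings before the language model consumes them.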