vllm.multimodal.inputs
AudioItem module-attribute ¶
Represents a single audio item, which can be passed to a HuggingFace AudioProcessor.
Alternatively, a tuple (audio, sampling_rate), where the sampling rate is different from that expected by the model; these are resampled to the model's sampling rate before being processed by HF.
Alternatively, a 3-D tensor or batch of 2-D tensors, which are treated as audio embeddings; these are passed directly to the model without HF processing.
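A minimal sketch of the three accepted forms (the array sizes and hidden size below are illustrative, not taken from any particular model):

```python
import numpy as np
import torch

# Plain waveform, assumed to already match the model's sampling rate.
raw_audio = np.zeros(16_000, dtype=np.float32)

# (audio, sampling_rate) tuple; the audio is resampled to the model's
# sampling rate before HF processing.
resampled_audio = (np.zeros(8_000, dtype=np.float32), 8_000)

# Precomputed audio embeddings (3-D: batch x sequence x hidden size);
# these skip HF processing and go straight to the model.
audio_embeds = torch.rand(1, 50, 1024)
```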
BatchedTensorInputs module-attribute ¶
BatchedTensorInputs: TypeAlias = Mapping[str, NestedTensors]
A dictionary containing nested tensors which have been batched via MultiModalKwargs.batch.
HfAudioItem module-attribute ¶
Represents a single audio item, which can be passed to a HuggingFace AudioProcessor.
HfImageItem module-attribute ¶
A transformers.image_utils.ImageInput representing a single image item, which can be passed to a HuggingFace ImageProcessor.
HfVideoItem module-attribute ¶
HfVideoItem: TypeAlias = Union[
list["Image"],
ndarray,
"torch.Tensor",
list[ndarray],
list["torch.Tensor"],
]
A transformers.image_utils.VideoInput representing a single video item, which can be passed to a HuggingFace VideoProcessor.
ImageItem module-attribute ¶
ImageItem: TypeAlias = Union[HfImageItem, "torch.Tensor"]
A transformers.image_utils.ImageInput representing a single image item, which can be passed to a HuggingFace ImageProcessor.
Alternatively, a 3-D tensor or batch of 2-D tensors, which are treated as image embeddings; these are directly passed to the model without HF processing.
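A brief sketch of the two forms (the embedding shape is illustrative):

```python
import torch
from PIL import Image

# A regular image item, handled by the HuggingFace ImageProcessor.
pil_image = Image.new("RGB", (336, 336))

# Precomputed image embeddings of shape (num_images, num_patches, hidden_size);
# these bypass HF processing and are passed directly to the model.
image_embeds = torch.rand(1, 576, 1024)
```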
ModalityData module-attribute ¶
Either a single data item, or a list of data items.
The number of data items allowed per modality is restricted by --limit-mm-per-prompt.
MultiModalDataDict module-attribute ¶
MultiModalDataDict: TypeAlias = Mapping[
str, ModalityData[Any]
]
A dictionary containing an entry for each modality type to input.
The built-in modalities are defined by MultiModalDataBuiltins.
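A minimal sketch of such a mapping, assuming the built-in "image" and "audio" modality keys (the values are placeholders):

```python
import numpy as np
from PIL import Image

mm_data = {
    # A list of items; the allowed count is capped by --limit-mm-per-prompt.
    "image": [Image.new("RGB", (336, 336)), Image.new("RGB", (336, 336))],
    # A single item can be passed without wrapping it in a list.
    "audio": (np.zeros(16_000, dtype=np.float32), 16_000),
}
```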
MultiModalPlaceholderDict module-attribute ¶
MultiModalPlaceholderDict: TypeAlias = Mapping[
str, Sequence[PlaceholderRange]
]
A dictionary containing placeholder ranges for each modality.
NestedTensors module-attribute ¶
NestedTensors: TypeAlias = Union[
list["NestedTensors"],
list["torch.Tensor"],
"torch.Tensor",
tuple["torch.Tensor", ...],
]
Uses a list instead of a tensor if the dimensions of each element do not match.
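For example (a sketch):

```python
import torch

# All elements share the same shape, so one stacked tensor suffices.
uniform = torch.zeros(2, 3, 4)

# The elements differ in their first dimension, so they are kept as a list.
ragged = [torch.zeros(3, 4), torch.zeros(5, 4)]
```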
VideoItem module-attribute ¶
VideoItem: TypeAlias = Union[
HfVideoItem,
"torch.Tensor",
tuple[HfVideoItem, dict[str, Any]],
]
A transformers.video_utils.VideoInput representing a single video item. This can be passed to a HuggingFace VideoProcessor with transformers.video_utils.VideoMetadata.
Alternatively, a 3-D tensor or batch of 2-D tensors, which are treated as video embeddings; these are directly passed to the model without HF processing.
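A short sketch of the accepted forms (the metadata keys shown are illustrative, not confirmed against transformers.video_utils.VideoMetadata):

```python
import numpy as np
import torch

# Raw frames as an array of shape (num_frames, height, width, channels).
video = np.zeros((8, 224, 224, 3), dtype=np.uint8)

# The same frames paired with a metadata dict forwarded to HF processing.
video_with_metadata = (video, {"fps": 2.0})

# Precomputed video embeddings; passed to the model without HF processing.
video_embeds = torch.rand(1, 1024, 2048)
```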
BaseMultiModalField dataclass ¶
Bases: ABC
Defines how to interpret tensor data belonging to a keyword argument in MultiModalKwargs for multiple multi-modal items, and vice versa.
Source code in vllm/multimodal/inputs.py
_field_factory ¶
Source code in vllm/multimodal/inputs.py
_reduce_data abstractmethod ¶
_reduce_data(
batch: list[NestedTensors], *, pin_memory: bool
) -> NestedTensors
build_elems abstractmethod ¶
build_elems(
modality: str, key: str, data: NestedTensors
) -> Sequence[MultiModalFieldElem]
Construct MultiModalFieldElem instances to represent the provided data.
This is the inverse of reduce_data.
Source code in vllm/multimodal/inputs.py
reduce_data ¶
reduce_data(
elems: list[MultiModalFieldElem],
*,
pin_memory: bool = False,
) -> NestedTensors
Merge the data from multiple instances of MultiModalFieldElem.
This is the inverse of build_elems.
Source code in vllm/multimodal/inputs.py
MultiModalBatchedField dataclass ¶
Bases: BaseMultiModalField
Source code in vllm/multimodal/inputs.py
_reduce_data ¶
_reduce_data(
batch: list[NestedTensors], *, pin_memory: bool
) -> NestedTensors
Source code in vllm/multimodal/inputs.py
build_elems ¶
build_elems(
modality: str, key: str, data: NestedTensors
) -> Sequence[MultiModalFieldElem]
MultiModalDataBuiltins ¶
Bases: TypedDict
Type annotations for modality types predefined by vLLM.
Source code in vllm/multimodal/inputs.py
MultiModalEncDecInputs ¶
Bases: MultiModalInputs
Represents the outputs of EncDecMultiModalProcessor, ready to be passed to vLLM internals.
Source code in vllm/multimodal/inputs.py
encoder_prompt_token_ids instance-attribute ¶
The processed token IDs of the encoder prompt.
encoder_token_type_ids instance-attribute ¶
encoder_token_type_ids: NotRequired[list[int]]
The token type IDs of the encoder prompt.
MultiModalFieldConfig ¶
Source code in vllm/multimodal/inputs.py
__init__ ¶
__init__(field: BaseMultiModalField, modality: str) -> None
batched staticmethod ¶
batched(modality: str)
Defines a field where an element in the batch is obtained by indexing into the first dimension of the underlying data.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
modality | str | The modality of the multi-modal item that uses this keyword argument. | required |
Example:
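An illustration of the indexing behavior (following the pattern of the other field types' examples):
Input:
Data: [[AAA], [BBB], [CCC]]
Output:
Element 1: [AAA]
Element 2: [BBB]
Element 3: [CCC]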
Source code in vllm/multimodal/inputs.py
build_elems ¶
build_elems(
key: str, batch: NestedTensors
) -> Sequence[MultiModalFieldElem]
flat staticmethod ¶
Defines a field where an element in the batch is obtained by slicing along the first dimension of the underlying data.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
modality | str | The modality of the multi-modal item that uses this keyword argument. | required |
slices | Union[Sequence[slice], Sequence[Sequence[slice]]] | For each multi-modal item, a slice (dim=0) or a tuple of slices (dim>0) that is used to extract the data corresponding to it. | required |
dim | int | The dimension along which to extract the data; defaults to 0. | 0 |
Example:
Given:
slices: [slice(0, 3), slice(3, 7), slice(7, 9)]
Input:
Data: [AAABBBBCC]
Output:
Element 1: [AAA]
Element 2: [BBBB]
Element 3: [CC]
Given:
slices: [
(slice(None), slice(0, 3)),
(slice(None), slice(3, 7)),
(slice(None), slice(7, 9))]
dim: 1
Input:
Data: [[A],[A],[A],[B],[B],[B],[B],[C],[C]]
Output:
Element 1: [[A],[A],[A]]
Element 2: [[B],[B],[B],[B]]
Element 3: [[C],[C]]
Source code in vllm/multimodal/inputs.py
flat_from_sizes staticmethod ¶
Defines a field where an element in the batch is obtained by slicing along the first dimension of the underlying data.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
modality | str | The modality of the multi-modal item that uses this keyword argument. | required |
size_per_item | | For each multi-modal item, the size of the slice that is used to extract the data corresponding to it. | required |
dim | int | The dimension along which to slice; defaults to 0. | 0 |
Example:
Given:
size_per_item: [3, 4, 2]
Input:
Data: [AAABBBBCC]
Output:
Element 1: [AAA]
Element 2: [BBBB]
Element 3: [CC]
Given:
size_per_item: [3, 4, 2]
dim: 1
Input:
Data: [[A],[A],[A],[B],[B],[B],[B],[C],[C]]
Output:
Element 1: [[A],[A],[A]]
Element 2: [[B],[B],[B],[B]]
Element 3: [[C],[C]]
Source code in vllm/multimodal/inputs.py
shared staticmethod ¶
Defines a field where an element in the batch is obtained by taking the entirety of the underlying data.
This means that the data is the same for each element in the batch.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
modality | str | The modality of the multi-modal item that uses this keyword argument. | required |
batch_size | int | The number of multi-modal items which share this data. | required |
Example:
Given:
batch_size: 4
Input:
Data: [XYZ]
Output:
Element 1: [XYZ]
Element 2: [XYZ]
Element 3: [XYZ]
Element 4: [XYZ]
Source code in vllm/multimodal/inputs.py
MultiModalFieldElem dataclass ¶
Represents a keyword argument corresponding to a multi-modal item in MultiModalKwargs.
Source code in vllm/multimodal/inputs.py
data instance-attribute ¶
data: NestedTensors
The tensor data of this field in MultiModalKwargs, i.e. the value of the keyword argument to be passed to the model.
It may be set to None if it is determined that the item is cached in EngineCore.
field instance-attribute ¶
field: BaseMultiModalField
Defines how to combine the tensor data of this field with others in order to batch multi-modal items together for model inference.
key instance-attribute ¶
key: str
The key of this field in MultiModalKwargs, i.e. the name of the keyword argument to be passed to the model.
modality instance-attribute ¶
modality: str
The modality of the corresponding multi-modal item. Each multi-modal item can consist of multiple keyword arguments.
__eq__ ¶
Source code in vllm/multimodal/inputs.py
__init__ ¶
__init__(
modality: str,
key: str,
data: NestedTensors,
field: BaseMultiModalField,
) -> None
MultiModalFlatField dataclass ¶
Bases: BaseMultiModalField
Source code in vllm/multimodal/inputs.py
__init__ ¶
_reduce_data ¶
_reduce_data(
batch: list[NestedTensors], *, pin_memory: bool
) -> NestedTensors
Source code in vllm/multimodal/inputs.py
build_elems ¶
build_elems(
modality: str, key: str, data: NestedTensors
) -> Sequence[MultiModalFieldElem]
Source code in vllm/multimodal/inputs.py
MultiModalInputs ¶
Bases: TypedDict
Represents the outputs of BaseMultiModalProcessor, ready to be passed to vLLM internals.
Source code in vllm/multimodal/inputs.py
cache_salt instance-attribute ¶
cache_salt: NotRequired[str]
Optional cache salt to be used for prefix caching.
mm_kwargs instance-attribute ¶
mm_kwargs: MultiModalKwargsItems
Keyword arguments to be directly passed to the model after batching.
mm_placeholders instance-attribute ¶
mm_placeholders: MultiModalPlaceholderDict
For each modality, information about the placeholder tokens in prompt_token_ids.
prompt_token_ids instance-attribute ¶
The processed token IDs, which include placeholder tokens.
token_type_ids instance-attribute ¶
token_type_ids: NotRequired[list[int]]
The token type IDs of the prompt.
MultiModalKwargs ¶
Bases: UserDict[str, NestedTensors]
A dictionary that represents the keyword arguments to torch.nn.Module.forward.
Source code in vllm/multimodal/inputs.py
__eq__ ¶
_try_stack staticmethod ¶
_try_stack(
nested_tensors: NestedTensors, pin_memory: bool = False
) -> NestedTensors
Stack the inner dimensions that have the same shape in a nested list of tensors.
Thus, a dimension represented by a list means that the inner dimensions are different for each element along that dimension.
Source code in vllm/multimodal/inputs.py
as_kwargs staticmethod ¶
as_kwargs(
batched_inputs: BatchedTensorInputs, *, device: Device
) -> BatchedTensorInputs
Source code in vllm/multimodal/inputs.py
batch staticmethod ¶
batch(
inputs_list: list[MultiModalKwargs],
pin_memory: bool = False,
) -> BatchedTensorInputs
Batch multiple inputs together into a dictionary.
The resulting dictionary has the same keys as the inputs. If the corresponding value from each input is a tensor and they all share the same shape, the output value is a single batched tensor; otherwise, the output value is a list containing the original value from each input.
Source code in vllm/multimodal/inputs.py
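A small sketch of this behavior (the key name and shapes are illustrative):

```python
import torch
from vllm.multimodal.inputs import MultiModalKwargs

a = MultiModalKwargs({"pixel_values": torch.zeros(3, 224, 224)})
b = MultiModalKwargs({"pixel_values": torch.zeros(3, 224, 224)})

batched = MultiModalKwargs.batch([a, b])
# "pixel_values" has the same shape in both inputs, so the batched value is
# a single stacked tensor with a leading batch dimension of 2.
```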
from_hf_inputs staticmethod ¶
from_hf_inputs(
hf_inputs: BatchFeature,
config_by_key: Mapping[str, MultiModalFieldConfig],
)
Source code in vllm/multimodal/inputs.py
from_items staticmethod ¶
from_items(
items: Sequence[MultiModalKwargsItem],
*,
pin_memory: bool = False,
)
Source code in vllm/multimodal/inputs.py
MultiModalKwargsItem ¶
Bases: UserDict[str, MultiModalFieldElem]
A collection of MultiModalFieldElem corresponding to a data item in MultiModalDataItems.
Source code in vllm/multimodal/inputs.py
__init__ ¶
__init__(
data: Mapping[str, MultiModalFieldElem] = {},
) -> None
Source code in vllm/multimodal/inputs.py
dummy staticmethod ¶
dummy(modality: str)
Convenience method for testing.
Source code in vllm/multimodal/inputs.py
from_elems staticmethod ¶
from_elems(elems: Sequence[MultiModalFieldElem])
MultiModalKwargsItems ¶
Bases: UserDict[str, Sequence[MultiModalKwargsItem]]
A dictionary mapping each modality to a sequence of MultiModalKwargsItem instances.
Source code in vllm/multimodal/inputs.py
from_hf_inputs staticmethod ¶
from_hf_inputs(
hf_inputs: BatchFeature,
config_by_key: Mapping[str, MultiModalFieldConfig],
)
Source code in vllm/multimodal/inputs.py
from_seq staticmethod ¶
from_seq(items: Sequence[MultiModalKwargsItem])
get_data ¶
get_data(*, pin_memory: bool = False) -> MultiModalKwargs
Source code in vllm/multimodal/inputs.py
MultiModalSharedField dataclass ¶
Bases: BaseMultiModalField
Source code in vllm/multimodal/inputs.py
_reduce_data ¶
_reduce_data(
batch: list[NestedTensors], *, pin_memory: bool
) -> NestedTensors
build_elems ¶
build_elems(
modality: str, key: str, data: NestedTensors
) -> Sequence[MultiModalFieldElem]
PlaceholderRange dataclass ¶
Placeholder location information for multi-modal data.
Example:
Prompt: AAAA BBBB What is in these images?
Images A and B will have:
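Counting placeholder tokens from position zero in the prompt above, the ranges would presumably be:
A: PlaceholderRange(offset=0, length=4)
B: PlaceholderRange(offset=5, length=4)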
Source code in vllm/multimodal/inputs.py
is_embed class-attribute instance-attribute ¶
A boolean mask of shape (length,) indicating which positions between offset and offset + length to assign embeddings to.
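A hypothetical instance in which only the middle two of four placeholder positions receive embeddings (a sketch, not taken from the vLLM sources):

```python
import torch
from vllm.multimodal.inputs import PlaceholderRange

placeholder = PlaceholderRange(
    offset=5,
    length=4,
    is_embed=torch.tensor([False, True, True, False]),
)
```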
__eq__ ¶
Source code in vllm/multimodal/inputs.py
nested_tensors_equal ¶
nested_tensors_equal(
a: NestedTensors, b: NestedTensors
) -> bool
Equality check between NestedTensors objects.
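A brief usage sketch:

```python
import torch
from vllm.multimodal.inputs import nested_tensors_equal

# Element-wise comparison works across nested lists of tensors.
assert nested_tensors_equal(
    [torch.ones(2), torch.zeros(3)],
    [torch.ones(2), torch.zeros(3)],
)
```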