vllm.multimodal.processing

MultiModalHashes module-attribute

MultiModalHashes = dict[str, list[str]]

A collection of hashes with a structure similar to MultiModalKwargsItems.

MultiModalPromptUpdates module-attribute

MultiModalPromptUpdates = Mapping[
    str, list[Sequence[ResolvedPromptUpdate]]
]

A collection of prompt updates with a structure similar to MultiModalKwargsItems.

MultiModalPromptUpdatesApplyResult module-attribute

MultiModalPromptUpdatesApplyResult = Mapping[
    str, list[Optional[int]]
]

For an item MultiModalPromptUpdates[k][i], MultiModalPromptUpdatesApplyResult[k][i] is the index of the ResolvedPromptUpdate instance that was applied, or None if no ResolvedPromptUpdate instance was applied.
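
For illustration, a minimal sketch of how the two structures line up; the string values are made-up stand-ins for ResolvedPromptUpdate instances:

# Hypothetical example: "upd_a"/"upd_b"/"upd_c" stand in for
# ResolvedPromptUpdate instances grouped per item of mm_items["image"].
mm_prompt_updates = {
    "image": [
        ["upd_a", "upd_b"],  # candidate updates for mm_items["image"][0]
        ["upd_c"],           # candidate updates for mm_items["image"][1]
    ],
}

apply_result = {
    "image": [
        1,     # "upd_b" (index 1) was applied for item 0
        None,  # no candidate update was applied for item 1
    ],
}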

PromptSeq module-attribute

PromptSeq = Union[str, list[int]]

A token sequence (list of token IDs) or text.

PromptUpdateContent module-attribute

PromptUpdateContent = Union[
    Callable[[int], PromptUpdateInfo], PromptUpdateInfo
]

Given the index of the processed item within its modality, output the corresponding token sequence (or text).

For convenience, you can directly pass in the token sequence (or text) instead of a function if it does not depend on the input.
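
A minimal sketch of both forms; the feature sizes and placeholder token ID below are made up:

# Hypothetical per-item feature sizes and placeholder token ID.
feature_sizes = [576, 144]
image_token_id = 32000

# Content as a function of the item index within the modality:
def image_content(item_idx: int) -> list[int]:
    return [image_token_id] * feature_sizes[item_idx]

# Content passed directly when it does not depend on the item:
fixed_content = "<image_placeholder>"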

PromptUpdateInfo module-attribute

PromptUpdateInfo = Union[PromptSeq, PromptUpdateDetails]

The token sequence or text that is part of the update.

If only part of the content corresponds to feature placeholders, you can use PromptUpdateDetails to specify which part.
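
As a hedged sketch, assuming the PromptUpdateDetails.select_token_id helper is available, an update whose content wraps the feature placeholders in extra tokens can mark only the placeholder tokens as features:

from vllm.multimodal.processing import PromptUpdateDetails

# Made-up token IDs: 1 and 2 frame the placeholders, 32000 is the
# feature placeholder token.
image_token_id = 32000
full_seq = [1, *([image_token_id] * 576), 2]

# Only the positions holding `image_token_id` are treated as
# feature placeholders.
details = PromptUpdateDetails.select_token_id(full_seq, image_token_id)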

PromptUpdateTarget module-attribute

PromptUpdateTarget = Union[
    Callable[[int], UpdateTarget], UpdateTarget
]

Given the index of the processed item within its modality, output the corresponding target (token sequence, text, or index) to update.

For convenience, you can directly pass in the target instead of a function if it does not depend on the input.

UpdateTarget module-attribute

UpdateTarget = Union[PromptSeq, PromptIndex]

The token sequence or text to update.

_CacheItemOrHash module-attribute

_CacheItemOrHash = Union[MultiModalKwargsItem, str]

_I module-attribute

_I = TypeVar('_I', bound=BaseProcessingInfo)

_M module-attribute

_MatchToApply module-attribute

_MatchToApply = tuple[
    tuple[str, int], tuple[PromptTargetMatch, int]
]

_S module-attribute

_S = TypeVar('_S', str, list[int])

logger module-attribute

logger = init_logger(__name__)

BaseMultiModalProcessor

Bases: ABC, Generic[_I]

Abstract base class to process multi-modal inputs to be used in vLLM.

Not to be confused with transformers.ProcessorMixin.
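
To make the extension points concrete, here is a hedged sketch of a subclass; the model name, field names, token ID, and feature size are illustrative, not taken from any real model:

from collections.abc import Sequence

from vllm.multimodal.inputs import MultiModalFieldConfig
from vllm.multimodal.processing import (BaseMultiModalProcessor,
                                        PromptReplacement, PromptUpdate)


class MyModelMultiModalProcessor(BaseMultiModalProcessor):
    """Illustrative subclass; only the two abstract methods are required."""

    def _get_mm_fields_config(self, hf_inputs, hf_processor_mm_kwargs):
        # Each image item contributes one entry to `pixel_values`.
        return dict(pixel_values=MultiModalFieldConfig.batched("image"))

    def _get_prompt_updates(self, mm_items, hf_processor_mm_kwargs,
                            out_mm_kwargs) -> Sequence[PromptUpdate]:
        image_token_id = 32000  # made-up placeholder token ID
        num_features = 576      # made-up per-image feature size

        # Replace each "<image>" in the prompt with the feature placeholders.
        return [
            PromptReplacement(
                modality="image",
                target="<image>",
                replacement=[image_token_id] * num_features,
            )
        ]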

Source code in vllm/multimodal/processing.py
class BaseMultiModalProcessor(ABC, Generic[_I]):
    """
    Abstract base class to process multi-modal inputs to be used in vLLM.

    Not to be confused with `transformers.ProcessorMixin`.
    """

    def __init__(self,
                 info: _I,
                 dummy_inputs: "BaseDummyInputsBuilder[_I]",
                 *,
                 cache: Optional[ProcessingCache] = None) -> None:
        super().__init__()

        self.info = info
        self.dummy_inputs = dummy_inputs
        self.cache = cache

        self.data_parser = self._get_data_parser()

        # Avoid unnecessary recomputation
        self._supported_mm_limits = self.info.get_supported_mm_limits()
        self._allowed_mm_limits = self.info.get_allowed_mm_limits()

    @property
    def supported_mm_limits(self):
        return self._supported_mm_limits

    @property
    def allowed_mm_limits(self):
        return self._allowed_mm_limits

    def __call__(
        self,
        prompt: str,
        mm_data: MultiModalDataDict,
        hf_processor_mm_kwargs: Mapping[str, object],
    ) -> MultiModalInputs:
        return self.apply(prompt, mm_data, hf_processor_mm_kwargs)

    def _get_data_parser(self) -> MultiModalDataParser:
        """
        Construct a parser to preprocess multi-modal data items
        before passing them to
        [`_get_hf_mm_data`][vllm.multimodal.processing.BaseMultiModalProcessor._get_hf_mm_data].

        You can support additional modalities by creating a subclass
        of [`MultiModalDataParser`][vllm.multimodal.parse.MultiModalDataParser]
        that has additional subparsers.
        """
        return MultiModalDataParser()

    def validate_num_items(
        self,
        modality: str,
        num_items: int,
    ) -> None:
        supported_limit = self.supported_mm_limits.get(modality, 0)
        allowed_limit = self.allowed_mm_limits.get(modality, 0)

        if supported_limit is None:
            supported_limit = allowed_limit

        limit = min(supported_limit, allowed_limit)

        if num_items > limit:
            msg = (f"At most {limit} {modality}(s) may be provided in "
                   "one prompt.")

            if num_items <= supported_limit:
                msg += " Set `--limit-mm-per-prompt` to increase this limit."

            raise ValueError(msg)

    def _to_mm_items(
        self,
        mm_data: MultiModalDataDict,
    ) -> MultiModalDataItems:
        """
        Normalize
        [`MultiModalDataDict`][vllm.multimodal.inputs.MultiModalDataDict]
        to [`MultiModalDataItems`][vllm.multimodal.parse.MultiModalDataItems]
        before passing them to
        [`_get_hf_mm_data`][vllm.multimodal.processing.BaseMultiModalProcessor._get_hf_mm_data].
        """
        mm_items = self.data_parser.parse_mm_data(mm_data)

        for modality, items in mm_items.items():
            self.validate_num_items(modality, len(items))

        return mm_items

    @abstractmethod
    def _get_mm_fields_config(
        self,
        hf_inputs: "BatchFeature",
        hf_processor_mm_kwargs: Mapping[str, object],
    ) -> Mapping[str, MultiModalFieldConfig]:
        """Given the HF-processed data, output the metadata of each field."""
        raise NotImplementedError

    @abstractmethod
    def _get_prompt_updates(
        self,
        mm_items: MultiModalDataItems,
        hf_processor_mm_kwargs: Mapping[str, object],
        out_mm_kwargs: MultiModalKwargsItems,
    ) -> Sequence[PromptUpdate]:
        """
        Given the original multi-modal items for this modality
        and HF-processed data, output the updates to perform.

        The information returned by this method is used to update token inputs
        which bypass the HF processor. It is also used to update the output of
        the HF processor if the HF processor does not apply prompt updates to
        text inputs.

        Moreover, this information is critical to determine the token positions
        in order to construct
        [`PlaceholderRange`][vllm.multimodal.inputs.PlaceholderRange]
        for each multi-modal item.
        """
        raise NotImplementedError

    def _bind_and_group_updates(
        self,
        prompt_updates: Sequence[PromptUpdate],
        mm_item_counts: Mapping[str, int],
    ) -> MultiModalPromptUpdates:
        return {
            modality: [[update.resolve(item_idx) for update in updates]
                       for item_idx in range(mm_item_counts.get(modality, 0))]
            for modality, updates in full_groupby_modality(prompt_updates)
        }

    def _get_mm_prompt_updates(
        self,
        mm_items: MultiModalDataItems,
        hf_processor_mm_kwargs: Mapping[str, object],
        out_mm_kwargs: MultiModalKwargsItems,
    ) -> MultiModalPromptUpdates:
        unbound_prompt_updates = self._get_prompt_updates(
            mm_items=mm_items,
            hf_processor_mm_kwargs=hf_processor_mm_kwargs,
            out_mm_kwargs=out_mm_kwargs,
        )

        mm_prompt_updates = self._bind_and_group_updates(
            unbound_prompt_updates,
            mm_items.get_all_counts(),
        )

        for modality, prompt_updates in mm_prompt_updates.items():
            for item_idx, item_prompt_updates in enumerate(prompt_updates):
                if len(item_prompt_updates) > 1:
                    logger.warning_once(
                        "Detected %d prompt updates for `mm_items[%r][%s]`. "
                        "Multiple prompt updates per item is now "
                        "deprecated and may be removed in v0.13. "
                        "Instead, please specify dynamic update targets "
                        "in the same prompt update definition by passing "
                        "a function to `PromptUpdate.target`.",
                        len(prompt_updates),
                        modality,
                        item_idx,
                    )

        return mm_prompt_updates

    def _find_mm_placeholders(
        self,
        new_token_ids: list[int],
        mm_prompt_updates: MultiModalPromptUpdates,
    ) -> Mapping[str, list[PlaceholderFeaturesInfo]]:
        tokenizer = self.info.get_tokenizer()

        return find_mm_placeholders(new_token_ids, mm_prompt_updates,
                                    tokenizer)

    def _get_hf_mm_data(
        self,
        mm_items: MultiModalDataItems,
    ) -> tuple[Mapping[str, object], Mapping[str, object]]:
        processor_data = dict[str, object]()
        passthrough_data = dict[str, object]()

        for items in mm_items.values():
            processor_data.update(items.get_processor_data())
            passthrough_data.update(items.get_passthrough_data())

        return processor_data, passthrough_data

    def _call_hf_processor(
        self,
        prompt: str,
        # Not to be confused with `mm_data` in `self.apply`.
        # This refers to the data to be passed to HF processor.
        mm_data: Mapping[str, object],
        mm_kwargs: Mapping[str, object],
        tok_kwargs: Mapping[str, object],
    ) -> "BatchFeature":
        """
        Call the HF processor on the prompt text and
        associated multi-modal data.
        """
        return self.info.ctx.call_hf_processor(
            self.info.get_hf_processor(**mm_kwargs),
            dict(text=prompt, **mm_data),
            dict(**mm_kwargs, **tok_kwargs),
        )

    def _hf_processor_applies_updates(
        self,
        prompt_text: str,
        mm_items: MultiModalDataItems,
        hf_processor_mm_kwargs: Mapping[str, object],
        tokenization_kwargs: Mapping[str, object],
    ) -> bool:
        """
        Return whether the HF processor applies prompt updates.

        For most HF processors, this should be `True` when multi-modal
        data items are passed, but `False` when multi-modal embeddings
        are passed.
        """
        return not any(
            isinstance(items, (EmbeddingItems, DictEmbeddingItems))
            for items in mm_items.values())

    def _apply_hf_processor_text_mm(
        self,
        prompt_text: str,
        mm_items: MultiModalDataItems,
        hf_processor_mm_kwargs: Mapping[str, object],
        tokenization_kwargs: Mapping[str, object],
    ) -> tuple[list[int], "BatchFeature", bool]:
        """
        Apply the HF processor on the prompt text and multi-modal data
        together.

        In addition, return whether prompt updates have been applied.
        """
        processor_data, passthrough_data = self._get_hf_mm_data(mm_items)

        processed_data = self._call_hf_processor(
            prompt=prompt_text,
            mm_data=processor_data,
            mm_kwargs=hf_processor_mm_kwargs,
            tok_kwargs=tokenization_kwargs,
        )
        processed_data.update(passthrough_data)

        prompt_ids, = processed_data.pop("input_ids").tolist()

        is_update_applied = self._hf_processor_applies_updates(
            prompt_text=prompt_text,
            mm_items=mm_items,
            hf_processor_mm_kwargs=hf_processor_mm_kwargs,
            tokenization_kwargs=tokenization_kwargs,
        )

        return prompt_ids, processed_data, is_update_applied

    def _apply_hf_processor_text_only(
        self,
        prompt_text: str,
        tokenization_kwargs: Mapping[str, object],
    ) -> list[int]:
        """
        Apply the HF processor on the prompt text only.

        Since HF processor requires that text and multi-modal items
        correspond to each other, we create dummy multi-modal items
        to go along with the text.
        """
        prompt_ids, _, _ = self._apply_hf_processor_text_mm(
            prompt_text=prompt_text,
            mm_items=MultiModalDataItems({}),
            hf_processor_mm_kwargs={},
            tokenization_kwargs=tokenization_kwargs,
        )

        return prompt_ids

    def _apply_hf_processor_tokens_only(
        self,
        prompt_tokens: list[int],
    ) -> list[int]:
        """
        Apply the HF processor on the prompt tokens only.

        Most HF processors accept prompt text but not prompt tokens.
        If the HF processor adds or removes tokens that are not related to
        multi-modal data, you should override this method so it is consistent
        with the output of
        [`_apply_hf_processor_text_only`][vllm.multimodal.processing.BaseMultiModalProcessor._apply_hf_processor_text_only]
        on the
        corresponding text.
        """
        return prompt_tokens

    def _apply_hf_processor_mm_only(
        self,
        mm_items: MultiModalDataItems,
        hf_processor_mm_kwargs: Mapping[str, object],
        tokenization_kwargs: Mapping[str, object],
    ) -> "BatchFeature":
        """
        Apply the HF processor on the multi-modal data only.

        Since HF processor requires that text and multi-modal items
        correspond to each other, we generate dummy text using
        [`DummyInputsBuilder`][vllm.multimodal.profiling.BaseDummyInputsBuilder]
        to go along with the multi-modal data.
        """
        mm_counts = mm_items.get_all_counts()

        _, mm_processed_data, _ = self._apply_hf_processor_text_mm(
            prompt_text=self.dummy_inputs.get_dummy_text(mm_counts),
            mm_items=mm_items,
            hf_processor_mm_kwargs=hf_processor_mm_kwargs,
            tokenization_kwargs=tokenization_kwargs,
        )

        return mm_processed_data

    def _apply_hf_processor_main(
        self,
        prompt: Union[str, list[int]],
        mm_items: MultiModalDataItems,
        hf_processor_mm_kwargs: Mapping[str, object],
        tokenization_kwargs: Mapping[str, object],
        *,
        enable_hf_prompt_update: bool,
    ) -> tuple[list[int], "BatchFeature", bool]:
        """
        Apply the HF processor on the prompt text and multi-modal data.

        In addition, return whether prompt updates have been applied
        (for most HF processors, this should be `True`).

        Note:
            If `enable_hf_prompt_update=True`, we use the HF processor
            to perform prompt updates if available; the HF processor requires
            that the prompt corresponds to the multi-modal items.
        """
        if isinstance(prompt, str):
            if enable_hf_prompt_update:
                return self._apply_hf_processor_text_mm(
                    prompt_text=prompt,
                    mm_items=mm_items,
                    hf_processor_mm_kwargs=hf_processor_mm_kwargs,
                    tokenization_kwargs=tokenization_kwargs,
                )

            prompt_ids = self._apply_hf_processor_text_only(
                prompt, tokenization_kwargs)
        else:
            prompt_ids = self._apply_hf_processor_tokens_only(prompt)

        mm_processed_data = self._apply_hf_processor_mm_only(
            mm_items=mm_items,
            hf_processor_mm_kwargs=hf_processor_mm_kwargs,
            tokenization_kwargs=tokenization_kwargs,
        )

        return prompt_ids, mm_processed_data, False

    def _get_cache_missing_items(
        self,
        cache: ProcessingCache,
        mm_data_items: MultiModalDataItems,
        mm_hashes: MultiModalHashes,
    ) -> tuple[dict[str, list[_CacheItemOrHash]], MultiModalDataItems]:
        mm_cache_items_or_hashes: dict[str, list[_CacheItemOrHash]] = {
            modality: [(h if (v := cache.get(h)) is None else v)
                       for h in hashes]
            for modality, hashes in mm_hashes.items()
        }

        mm_missing_idxs = {
            modality: [
                idx for idx, item_or_hash in enumerate(items_or_hashes)
                if isinstance(item_or_hash, str)
            ]
            for modality, items_or_hashes in mm_cache_items_or_hashes.items()
        }
        mm_missing_data = {
            modality: [mm_data_items[modality][idx] for idx in idxs]
            for modality, idxs in mm_missing_idxs.items()
        }

        return mm_cache_items_or_hashes, self._to_mm_items(mm_missing_data)

    def _hash_mm_items(
        self,
        mm_items: MultiModalDataItems,
        hf_processor_mm_kwargs: Mapping[str, object],
        tokenization_kwargs: Mapping[str, object],
    ) -> MultiModalHashes:
        """Create MM hashes to be returned (only used in V1)."""
        model_id = self.info.model_id

        return {
            modality: [
                MultiModalHasher.hash_kwargs(model_id=model_id,
                                             **{modality: item},
                                             **hf_processor_mm_kwargs,
                                             **tokenization_kwargs)
                for item in items
            ]
            for modality, items in mm_items.items()
        }

    def _merge_mm_kwargs(
        self,
        cache: ProcessingCache,
        mm_cache_items_or_hashes: dict[str, list[_CacheItemOrHash]],
        mm_missing_kwargs: MultiModalKwargsItems,
    ) -> MultiModalKwargsItems:
        mm_missing_next_idx = defaultdict[str, int](lambda: 0)

        merged_items = defaultdict[str, list[MultiModalKwargsItem]](list)
        for modality, items_or_hashes in mm_cache_items_or_hashes.items():
            for item_or_hash in items_or_hashes:
                if isinstance(item_or_hash, str):
                    kw_item = mm_missing_kwargs[modality][
                        mm_missing_next_idx[modality]]
                    cache.put(item_or_hash, kw_item)
                    mm_missing_next_idx[modality] += 1
                else:
                    kw_item = item_or_hash

                merged_items[modality].append(kw_item)

        return MultiModalKwargsItems(merged_items)

    def _apply_hf_processor(
        self,
        prompt: Union[str, list[int]],
        mm_data_items: MultiModalDataItems,
        hf_processor_mm_kwargs: Mapping[str, object],
        tokenization_kwargs: Mapping[str, object],
    ) -> tuple[list[int], MultiModalProcessingInfo, bool]:
        (
            prompt_ids,
            mm_processed_data,
            is_update_applied,
        ) = self._apply_hf_processor_main(
            prompt=prompt,
            mm_items=mm_data_items,
            hf_processor_mm_kwargs=hf_processor_mm_kwargs,
            tokenization_kwargs=tokenization_kwargs,
            enable_hf_prompt_update=True,
        )

        mm_kwargs = MultiModalKwargsItems.from_hf_inputs(
            mm_processed_data,
            self._get_mm_fields_config(mm_processed_data,
                                       hf_processor_mm_kwargs),
        )

        mm_hashes = self._hash_mm_items(mm_data_items, hf_processor_mm_kwargs,
                                        tokenization_kwargs)

        mm_prompt_updates = self._get_mm_prompt_updates(
            mm_data_items,
            hf_processor_mm_kwargs,
            mm_kwargs,
        )

        mm_info = MultiModalProcessingInfo(
            kwargs=mm_kwargs,
            hashes=mm_hashes,
            prompt_updates=mm_prompt_updates,
        )

        return prompt_ids, mm_info, is_update_applied

    def _cached_apply_hf_processor(
        self,
        prompt: Union[str, list[int]],
        mm_data_items: MultiModalDataItems,
        hf_processor_mm_kwargs: Mapping[str, object],
        tokenization_kwargs: Mapping[str, object],
    ) -> tuple[list[int], MultiModalProcessingInfo, bool]:
        """
        Apply the HF processor on the full prompt text,
        caching the results and reusing cached results.
        """
        cache = self.cache

        _, passthrough_data = self._get_hf_mm_data(mm_data_items)
        if cache is None or passthrough_data:
            return self._apply_hf_processor(
                prompt=prompt,
                mm_data_items=mm_data_items,
                hf_processor_mm_kwargs=hf_processor_mm_kwargs,
                tokenization_kwargs=tokenization_kwargs,
            )

        mm_hashes = self._hash_mm_items(mm_data_items, hf_processor_mm_kwargs,
                                        tokenization_kwargs)
        (
            mm_cache_items_or_hashes,
            mm_missing_data_items,
        ) = self._get_cache_missing_items(
            cache=cache,
            mm_data_items=mm_data_items,
            mm_hashes=mm_hashes,
        )

        # NOTE: `prompt` does not correspond to `mm_missing_data_items`,
        # so we can't apply prompt updates until the new multimodal
        # items are combined with the cached multimodal items
        (
            prompt_ids,
            mm_missing_processed_data,
            is_update_applied,
        ) = self._apply_hf_processor_main(
            prompt=prompt,
            mm_items=mm_missing_data_items,
            hf_processor_mm_kwargs=hf_processor_mm_kwargs,
            tokenization_kwargs=tokenization_kwargs,
            enable_hf_prompt_update=False,
        )

        mm_missing_kwargs = MultiModalKwargsItems.from_hf_inputs(
            mm_missing_processed_data,
            self._get_mm_fields_config(mm_missing_processed_data,
                                       hf_processor_mm_kwargs),
        )

        mm_kwargs = self._merge_mm_kwargs(
            cache,
            mm_cache_items_or_hashes=mm_cache_items_or_hashes,
            mm_missing_kwargs=mm_missing_kwargs,
        )

        mm_prompt_updates = self._get_mm_prompt_updates(
            mm_data_items,
            hf_processor_mm_kwargs,
            mm_kwargs,
        )

        mm_info = MultiModalProcessingInfo(
            kwargs=mm_kwargs,
            hashes=mm_hashes,
            prompt_updates=mm_prompt_updates,
        )

        return prompt_ids, mm_info, is_update_applied

    def _apply_token_matches(
        self,
        prompt: list[int],
        mm_prompt_updates: MultiModalPromptUpdates,
    ) -> tuple[list[int], MultiModalPromptUpdatesApplyResult]:
        tokenizer = self.info.get_tokenizer()
        return apply_token_matches(prompt, mm_prompt_updates, tokenizer)

    def _apply_text_matches(
        self,
        prompt: str,
        mm_prompt_updates: MultiModalPromptUpdates,
    ) -> tuple[str, MultiModalPromptUpdatesApplyResult]:
        tokenizer = self.info.get_tokenizer()
        return apply_text_matches(prompt, mm_prompt_updates, tokenizer)

    def _apply_prompt_updates(
        self,
        token_ids: list[int],
        mm_prompt_updates: MultiModalPromptUpdates,
    ) -> tuple[list[int], str, Mapping[str, list[PlaceholderFeaturesInfo]]]:
        tokenizer = self.info.get_tokenizer()

        new_token_ids, match_result = self._apply_token_matches(
            token_ids,
            mm_prompt_updates,
        )

        # If the search text does not represent a special token,
        # it may have different token IDs in the prompt, because
        # the tokens may go across the boundaries of the search text.
        # ----
        # e.g. when searching for "foo" in "food", if "food" itself makes
        # up a token, then the token ID of "foo" will not appear at all
        # ----
        # Since it is inefficient to search for all possible tokenizations
        # of the search text in the prompt, we instead perform string-based
        # updates on the decoded token IDs, then encode them back.
        if all(
                all(update_idx is not None for update_idx in update_idxs)
                for update_idxs in match_result.values()):
            new_text = decode_tokens(tokenizer, new_token_ids)
        else:
            new_text, match_result = self._apply_text_matches(
                decode_tokens(tokenizer, token_ids),
                mm_prompt_updates,
            )

            new_token_ids = encode_tokens(
                tokenizer,
                new_text,
                add_special_tokens=False,
            )

        matched_updates = defaultdict[
            str, list[Sequence[ResolvedPromptUpdate]]](list)
        for modality, update_idxs in match_result.items():
            for item_idx, update_idx in enumerate(update_idxs):
                assert update_idx is not None, (
                    "Failed to apply prompt replacement for "
                    f"mm_items[{modality!r}][{item_idx}]")

                matched_updates[modality].append(
                    [mm_prompt_updates[modality][item_idx][update_idx]])

        placeholders = self._find_mm_placeholders(
            new_token_ids,
            dict(matched_updates),
        )

        return new_token_ids, new_text, placeholders

    def _validate_mm_kwargs(
        self,
        mm_kwargs: MultiModalKwargsItems,
        mm_item_counts: Mapping[str, int],
    ) -> None:
        for modality, item_count in mm_item_counts.items():
            items = mm_kwargs.get(modality, [])

            if len(items) != item_count:
                raise RuntimeError(
                    f"Expected there to be {item_count} {modality} items in "
                    f"keyword arguments corresponding to {item_count} "
                    f"{modality} data items, but only found {len(items)}! "
                    "There is likely a problem with your "
                    "implementation of merged multi-modal processor for this "
                    "model (usually arising from an inconsistency between "
                    "`_call_hf_processor` and `_get_mm_fields_config`).")

    def _validate_mm_placeholders(
        self,
        mm_placeholders: Mapping[str, list[PlaceholderFeaturesInfo]],
        mm_item_counts: Mapping[str, int],
    ) -> None:
        for modality, item_count in mm_item_counts.items():
            placeholders = mm_placeholders.get(modality, [])

            if len(placeholders) != item_count:
                # NOTE: If you are a model developer, this can also arise from
                # an inconsistency between `_call_hf_processor` and
                # `_get_mm_fields_config` implementations
                raise RuntimeError(
                    f"Expected there to be {item_count} prompt updates "
                    f"corresponding to {item_count} {modality} items, but "
                    f"instead found {len(placeholders)} prompt updates! "
                    "This is likely because you forgot to include input "
                    "placeholder tokens (e.g., `<image>`, `<|image_pad|>`) "
                    "in the prompt. If the model has a chat template, make "
                    "sure you have applied it before calling `LLM.generate`.")

    def _maybe_apply_prompt_updates(
        self,
        mm_items: MultiModalDataItems,
        prompt_ids: list[int],
        mm_kwargs: MultiModalKwargsItems,
        mm_prompt_updates: MultiModalPromptUpdates,
        is_update_applied: bool,
    ) -> tuple[list[int], str, Mapping[str, list[PlaceholderFeaturesInfo]]]:
        mm_item_counts = mm_items.get_all_counts()
        self._validate_mm_kwargs(mm_kwargs, mm_item_counts)

        if is_update_applied:
            mm_placeholders = self._find_mm_placeholders(
                prompt_ids,
                mm_prompt_updates,
            )
            self._validate_mm_placeholders(mm_placeholders, mm_item_counts)

            tokenizer = self.info.get_tokenizer()
            prompt = decode_tokens(tokenizer, prompt_ids)
        else:
            (
                prompt_ids,
                prompt,
                mm_placeholders,
            ) = self._apply_prompt_updates(
                prompt_ids,
                mm_prompt_updates,
            )
            self._validate_mm_placeholders(mm_placeholders, mm_item_counts)

        return prompt_ids, prompt, mm_placeholders

    def apply(
        self,
        prompt: Union[str, list[int]],
        mm_data: MultiModalDataDict,
        hf_processor_mm_kwargs: Mapping[str, object],
        tokenization_kwargs: Optional[Mapping[str, object]] = None,
    ) -> MultiModalInputs:
        """
        Process multi-modal inputs to be used in vLLM.

        The main steps are:

        1. Apply HF Processor on prompt text and multi-modal data together,
           outputting token IDs and processed tensors.
        2. Find and update sequences in the token IDs with placeholder tokens.
           The number of placeholder tokens equals the feature size of the
           multi-modal data outputted by the multi-modal encoder.
        3. Extract information about the placeholder tokens from the
           processed token IDs.
        """
        mm_items = self._to_mm_items(mm_data)

        if tokenization_kwargs is None:
            tokenization_kwargs = {}

        (
            prompt_ids,
            mm_info,
            is_update_applied,
        ) = self._cached_apply_hf_processor(
            prompt,
            mm_items,
            hf_processor_mm_kwargs,
            tokenization_kwargs=tokenization_kwargs,
        )

        # NOTE: tokenization_kwargs are not required to init processor
        prompt_ids, prompt, mm_placeholders = self._maybe_apply_prompt_updates(
            mm_items=mm_items,
            prompt_ids=prompt_ids,
            mm_kwargs=mm_info.kwargs,
            mm_prompt_updates=mm_info.prompt_updates,
            is_update_applied=is_update_applied,
        )

        mm_placeholder_ranges = {
            modality: [item.to_range() for item in placeholders]
            for modality, placeholders in mm_placeholders.items()
        }

        return MultiModalInputs(
            type="multimodal",
            prompt=prompt,
            prompt_token_ids=prompt_ids,
            mm_kwargs=mm_info.kwargs,
            mm_hashes=mm_info.hashes,
            mm_placeholders=mm_placeholder_ranges,
        )

_allowed_mm_limits instance-attribute

_allowed_mm_limits = get_allowed_mm_limits()

_supported_mm_limits instance-attribute

_supported_mm_limits = get_supported_mm_limits()

allowed_mm_limits property

allowed_mm_limits

cache instance-attribute

cache = cache

data_parser instance-attribute

data_parser = _get_data_parser()

dummy_inputs instance-attribute

dummy_inputs = dummy_inputs

info instance-attribute

info = info

supported_mm_limits property

supported_mm_limits

__call__

__call__(
    prompt: str,
    mm_data: MultiModalDataDict,
    hf_processor_mm_kwargs: Mapping[str, object],
) -> MultiModalInputs
Source code in vllm/multimodal/processing.py
def __call__(
    self,
    prompt: str,
    mm_data: MultiModalDataDict,
    hf_processor_mm_kwargs: Mapping[str, object],
) -> MultiModalInputs:
    return self.apply(prompt, mm_data, hf_processor_mm_kwargs)

__init__

__init__(
    info: _I,
    dummy_inputs: BaseDummyInputsBuilder[_I],
    *,
    cache: Optional[ProcessingCache] = None,
) -> None
Source code in vllm/multimodal/processing.py
def __init__(self,
             info: _I,
             dummy_inputs: "BaseDummyInputsBuilder[_I]",
             *,
             cache: Optional[ProcessingCache] = None) -> None:
    super().__init__()

    self.info = info
    self.dummy_inputs = dummy_inputs
    self.cache = cache

    self.data_parser = self._get_data_parser()

    # Avoid unnecessary recomputation
    self._supported_mm_limits = self.info.get_supported_mm_limits()
    self._allowed_mm_limits = self.info.get_allowed_mm_limits()

_apply_hf_processor

_apply_hf_processor(
    prompt: Union[str, list[int]],
    mm_data_items: MultiModalDataItems,
    hf_processor_mm_kwargs: Mapping[str, object],
    tokenization_kwargs: Mapping[str, object],
) -> tuple[list[int], MultiModalProcessingInfo, bool]
Source code in vllm/multimodal/processing.py
def _apply_hf_processor(
    self,
    prompt: Union[str, list[int]],
    mm_data_items: MultiModalDataItems,
    hf_processor_mm_kwargs: Mapping[str, object],
    tokenization_kwargs: Mapping[str, object],
) -> tuple[list[int], MultiModalProcessingInfo, bool]:
    (
        prompt_ids,
        mm_processed_data,
        is_update_applied,
    ) = self._apply_hf_processor_main(
        prompt=prompt,
        mm_items=mm_data_items,
        hf_processor_mm_kwargs=hf_processor_mm_kwargs,
        tokenization_kwargs=tokenization_kwargs,
        enable_hf_prompt_update=True,
    )

    mm_kwargs = MultiModalKwargsItems.from_hf_inputs(
        mm_processed_data,
        self._get_mm_fields_config(mm_processed_data,
                                   hf_processor_mm_kwargs),
    )

    mm_hashes = self._hash_mm_items(mm_data_items, hf_processor_mm_kwargs,
                                    tokenization_kwargs)

    mm_prompt_updates = self._get_mm_prompt_updates(
        mm_data_items,
        hf_processor_mm_kwargs,
        mm_kwargs,
    )

    mm_info = MultiModalProcessingInfo(
        kwargs=mm_kwargs,
        hashes=mm_hashes,
        prompt_updates=mm_prompt_updates,
    )

    return prompt_ids, mm_info, is_update_applied

_apply_hf_processor_main

_apply_hf_processor_main(
    prompt: Union[str, list[int]],
    mm_items: MultiModalDataItems,
    hf_processor_mm_kwargs: Mapping[str, object],
    tokenization_kwargs: Mapping[str, object],
    *,
    enable_hf_prompt_update: bool,
) -> tuple[list[int], BatchFeature, bool]

Apply the HF processor on the prompt text and multi-modal data.

In addition, return whether prompt updates have been applied (for most HF processors, this should be True).

Note

If enable_hf_prompt_update=True, we use the HF processor to perform prompt updates if available; the HF processor requires that the prompt corresponds to the multi-modal items.

Source code in vllm/multimodal/processing.py
def _apply_hf_processor_main(
    self,
    prompt: Union[str, list[int]],
    mm_items: MultiModalDataItems,
    hf_processor_mm_kwargs: Mapping[str, object],
    tokenization_kwargs: Mapping[str, object],
    *,
    enable_hf_prompt_update: bool,
) -> tuple[list[int], "BatchFeature", bool]:
    """
    Apply the HF processor on the prompt text and multi-modal data.

    In addition, return whether prompt updates have been applied
    (for most HF processors, this should be `True`).

    Note:
        If `enable_hf_prompt_update=True`, we use the HF processor
        to perform prompt updates if available; the HF processor requires
        that the prompt corresponds to the multi-modal items.
    """
    if isinstance(prompt, str):
        if enable_hf_prompt_update:
            return self._apply_hf_processor_text_mm(
                prompt_text=prompt,
                mm_items=mm_items,
                hf_processor_mm_kwargs=hf_processor_mm_kwargs,
                tokenization_kwargs=tokenization_kwargs,
            )

        prompt_ids = self._apply_hf_processor_text_only(
            prompt, tokenization_kwargs)
    else:
        prompt_ids = self._apply_hf_processor_tokens_only(prompt)

    mm_processed_data = self._apply_hf_processor_mm_only(
        mm_items=mm_items,
        hf_processor_mm_kwargs=hf_processor_mm_kwargs,
        tokenization_kwargs=tokenization_kwargs,
    )

    return prompt_ids, mm_processed_data, False

_apply_hf_processor_mm_only

_apply_hf_processor_mm_only(
    mm_items: MultiModalDataItems,
    hf_processor_mm_kwargs: Mapping[str, object],
    tokenization_kwargs: Mapping[str, object],
) -> BatchFeature

Apply the HF processor on the multi-modal data only.

Since the HF processor requires that text and multi-modal items correspond to each other, we generate dummy text using DummyInputsBuilder to go along with the multi-modal data.

Source code in vllm/multimodal/processing.py
def _apply_hf_processor_mm_only(
    self,
    mm_items: MultiModalDataItems,
    hf_processor_mm_kwargs: Mapping[str, object],
    tokenization_kwargs: Mapping[str, object],
) -> "BatchFeature":
    """
    Apply the HF processor on the multi-modal data only.

    Since HF processor requires that text and multi-modal items
    correspond to each other, we generate dummy text using
    [`DummyInputsBuilder`][vllm.multimodal.profiling.BaseDummyInputsBuilder]
    to go along with the multi-modal data.
    """
    mm_counts = mm_items.get_all_counts()

    _, mm_processed_data, _ = self._apply_hf_processor_text_mm(
        prompt_text=self.dummy_inputs.get_dummy_text(mm_counts),
        mm_items=mm_items,
        hf_processor_mm_kwargs=hf_processor_mm_kwargs,
        tokenization_kwargs=tokenization_kwargs,
    )

    return mm_processed_data

_apply_hf_processor_text_mm

_apply_hf_processor_text_mm(
    prompt_text: str,
    mm_items: MultiModalDataItems,
    hf_processor_mm_kwargs: Mapping[str, object],
    tokenization_kwargs: Mapping[str, object],
) -> tuple[list[int], BatchFeature, bool]

Apply the HF processor on the prompt text and multi-modal data together.

In addition, return whether prompt updates have been applied.

Source code in vllm/multimodal/processing.py
def _apply_hf_processor_text_mm(
    self,
    prompt_text: str,
    mm_items: MultiModalDataItems,
    hf_processor_mm_kwargs: Mapping[str, object],
    tokenization_kwargs: Mapping[str, object],
) -> tuple[list[int], "BatchFeature", bool]:
    """
    Apply the HF processor on the prompt text and multi-modal data
    together.

    In addition, return whether prompt updates have been applied.
    """
    processor_data, passthrough_data = self._get_hf_mm_data(mm_items)

    processed_data = self._call_hf_processor(
        prompt=prompt_text,
        mm_data=processor_data,
        mm_kwargs=hf_processor_mm_kwargs,
        tok_kwargs=tokenization_kwargs,
    )
    processed_data.update(passthrough_data)

    prompt_ids, = processed_data.pop("input_ids").tolist()

    is_update_applied = self._hf_processor_applies_updates(
        prompt_text=prompt_text,
        mm_items=mm_items,
        hf_processor_mm_kwargs=hf_processor_mm_kwargs,
        tokenization_kwargs=tokenization_kwargs,
    )

    return prompt_ids, processed_data, is_update_applied

_apply_hf_processor_text_only

_apply_hf_processor_text_only(
    prompt_text: str,
    tokenization_kwargs: Mapping[str, object],
) -> list[int]

Apply the HF processor on the prompt text only.

Since the HF processor requires that text and multi-modal items correspond to each other, we create dummy multi-modal items to go along with the text.

Source code in vllm/multimodal/processing.py
def _apply_hf_processor_text_only(
    self,
    prompt_text: str,
    tokenization_kwargs: Mapping[str, object],
) -> list[int]:
    """
    Apply the HF processor on the prompt text only.

    Since HF processor requires that text and multi-modal items
    correspond to each other, we create dummy multi-modal items
    to go along with the text.
    """
    prompt_ids, _, _ = self._apply_hf_processor_text_mm(
        prompt_text=prompt_text,
        mm_items=MultiModalDataItems({}),
        hf_processor_mm_kwargs={},
        tokenization_kwargs=tokenization_kwargs,
    )

    return prompt_ids

_apply_hf_processor_tokens_only

_apply_hf_processor_tokens_only(
    prompt_tokens: list[int],
) -> list[int]

Apply the HF processor on the prompt tokens only.

Most HF processors accept prompt text but not prompt tokens. If the HF processor adds or removes tokens that are not related to multi-modal data, you should override this method so it is consistent with the output of _apply_hf_processor_text_only on the corresponding text.
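
For example, here is a hedged sketch of an override for a hypothetical model whose HF processor always prepends the BOS token to text prompts; the token-prompt path mirrors that behavior:

def _apply_hf_processor_tokens_only(
    self,
    prompt_tokens: list[int],
) -> list[int]:
    # Mirror the text path, which (in this hypothetical model) always
    # prepends the BOS token.
    tokenizer = self.info.get_tokenizer()
    bos_id = tokenizer.bos_token_id

    if bos_id is not None and (not prompt_tokens
                               or prompt_tokens[0] != bos_id):
        return [bos_id, *prompt_tokens]

    return prompt_tokens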

Source code in vllm/multimodal/processing.py
def _apply_hf_processor_tokens_only(
    self,
    prompt_tokens: list[int],
) -> list[int]:
    """
    Apply the HF processor on the prompt tokens only.

    Most HF processors accept prompt text but not prompt tokens.
    If the HF processor adds or removes tokens that are not related to
    multi-modal data, you should override this method so it is consistent
    with the output of
    [`_apply_hf_processor_text_only`][vllm.multimodal.processing.BaseMultiModalProcessor._apply_hf_processor_text_only]
    on the
    corresponding text.
    """
    return prompt_tokens

_apply_prompt_updates

_apply_prompt_updates(
    token_ids: list[int],
    mm_prompt_updates: MultiModalPromptUpdates,
) -> tuple[
    list[int],
    str,
    Mapping[str, list[PlaceholderFeaturesInfo]],
]
Source code in vllm/multimodal/processing.py
def _apply_prompt_updates(
    self,
    token_ids: list[int],
    mm_prompt_updates: MultiModalPromptUpdates,
) -> tuple[list[int], str, Mapping[str, list[PlaceholderFeaturesInfo]]]:
    tokenizer = self.info.get_tokenizer()

    new_token_ids, match_result = self._apply_token_matches(
        token_ids,
        mm_prompt_updates,
    )

    # If the search text does not represent a special token,
    # it may have different token IDs in the prompt, because
    # the tokens may go across the boundaries of the search text.
    # ----
    # e.g. when searching for "foo" in "food", if "food" itself makes
    # up a token, then the token ID of "foo" will not appear at all
    # ----
    # Since it is inefficient to search for all possible tokenizations
    # of the search text in the prompt, we instead perform string-based
    # updates on the decoded token IDs, then encode them back.
    if all(
            all(update_idx is not None for update_idx in update_idxs)
            for update_idxs in match_result.values()):
        new_text = decode_tokens(tokenizer, new_token_ids)
    else:
        new_text, match_result = self._apply_text_matches(
            decode_tokens(tokenizer, token_ids),
            mm_prompt_updates,
        )

        new_token_ids = encode_tokens(
            tokenizer,
            new_text,
            add_special_tokens=False,
        )

    matched_updates = defaultdict[
        str, list[Sequence[ResolvedPromptUpdate]]](list)
    for modality, update_idxs in match_result.items():
        for item_idx, update_idx in enumerate(update_idxs):
            assert update_idx is not None, (
                "Failed to apply prompt replacement for "
                f"mm_items[{modality!r}][{item_idx}]")

            matched_updates[modality].append(
                [mm_prompt_updates[modality][item_idx][update_idx]])

    placeholders = self._find_mm_placeholders(
        new_token_ids,
        dict(matched_updates),
    )

    return new_token_ids, new_text, placeholders

_apply_text_matches

_apply_text_matches(
    prompt: str, mm_prompt_updates: MultiModalPromptUpdates
) -> tuple[str, MultiModalPromptUpdatesApplyResult]
Source code in vllm/multimodal/processing.py
def _apply_text_matches(
    self,
    prompt: str,
    mm_prompt_updates: MultiModalPromptUpdates,
) -> tuple[str, MultiModalPromptUpdatesApplyResult]:
    tokenizer = self.info.get_tokenizer()
    return apply_text_matches(prompt, mm_prompt_updates, tokenizer)

_apply_token_matches

_apply_token_matches(
    prompt: list[int],
    mm_prompt_updates: MultiModalPromptUpdates,
) -> tuple[list[int], MultiModalPromptUpdatesApplyResult]
Source code in vllm/multimodal/processing.py
def _apply_token_matches(
    self,
    prompt: list[int],
    mm_prompt_updates: MultiModalPromptUpdates,
) -> tuple[list[int], MultiModalPromptUpdatesApplyResult]:
    tokenizer = self.info.get_tokenizer()
    return apply_token_matches(prompt, mm_prompt_updates, tokenizer)

_bind_and_group_updates

_bind_and_group_updates(
    prompt_updates: Sequence[PromptUpdate],
    mm_item_counts: Mapping[str, int],
) -> MultiModalPromptUpdates
Source code in vllm/multimodal/processing.py
def _bind_and_group_updates(
    self,
    prompt_updates: Sequence[PromptUpdate],
    mm_item_counts: Mapping[str, int],
) -> MultiModalPromptUpdates:
    return {
        modality: [[update.resolve(item_idx) for update in updates]
                   for item_idx in range(mm_item_counts.get(modality, 0))]
        for modality, updates in full_groupby_modality(prompt_updates)
    }

_cached_apply_hf_processor

_cached_apply_hf_processor(
    prompt: Union[str, list[int]],
    mm_data_items: MultiModalDataItems,
    hf_processor_mm_kwargs: Mapping[str, object],
    tokenization_kwargs: Mapping[str, object],
) -> tuple[list[int], MultiModalProcessingInfo, bool]

Apply the HF processor on the full prompt text, caching the results and reusing cached results.

Source code in vllm/multimodal/processing.py
def _cached_apply_hf_processor(
    self,
    prompt: Union[str, list[int]],
    mm_data_items: MultiModalDataItems,
    hf_processor_mm_kwargs: Mapping[str, object],
    tokenization_kwargs: Mapping[str, object],
) -> tuple[list[int], MultiModalProcessingInfo, bool]:
    """
    Apply the HF processor on the full prompt text,
    caching the results and reusing cached results.
    """
    cache = self.cache

    _, passthrough_data = self._get_hf_mm_data(mm_data_items)
    if cache is None or passthrough_data:
        return self._apply_hf_processor(
            prompt=prompt,
            mm_data_items=mm_data_items,
            hf_processor_mm_kwargs=hf_processor_mm_kwargs,
            tokenization_kwargs=tokenization_kwargs,
        )

    mm_hashes = self._hash_mm_items(mm_data_items, hf_processor_mm_kwargs,
                                    tokenization_kwargs)
    (
        mm_cache_items_or_hashes,
        mm_missing_data_items,
    ) = self._get_cache_missing_items(
        cache=cache,
        mm_data_items=mm_data_items,
        mm_hashes=mm_hashes,
    )

    # NOTE: `prompt` does not correspond to `mm_missing_data_items`,
    # so we can't apply prompt updates until the new multimodal
    # items are combined with the cached multimodal items
    (
        prompt_ids,
        mm_missing_processed_data,
        is_update_applied,
    ) = self._apply_hf_processor_main(
        prompt=prompt,
        mm_items=mm_missing_data_items,
        hf_processor_mm_kwargs=hf_processor_mm_kwargs,
        tokenization_kwargs=tokenization_kwargs,
        enable_hf_prompt_update=False,
    )

    mm_missing_kwargs = MultiModalKwargsItems.from_hf_inputs(
        mm_missing_processed_data,
        self._get_mm_fields_config(mm_missing_processed_data,
                                   hf_processor_mm_kwargs),
    )

    mm_kwargs = self._merge_mm_kwargs(
        cache,
        mm_cache_items_or_hashes=mm_cache_items_or_hashes,
        mm_missing_kwargs=mm_missing_kwargs,
    )

    mm_prompt_updates = self._get_mm_prompt_updates(
        mm_data_items,
        hf_processor_mm_kwargs,
        mm_kwargs,
    )

    mm_info = MultiModalProcessingInfo(
        kwargs=mm_kwargs,
        hashes=mm_hashes,
        prompt_updates=mm_prompt_updates,
    )

    return prompt_ids, mm_info, is_update_applied

_call_hf_processor

_call_hf_processor(
    prompt: str,
    mm_data: Mapping[str, object],
    mm_kwargs: Mapping[str, object],
    tok_kwargs: Mapping[str, object],
) -> BatchFeature

Call the HF processor on the prompt text and associated multi-modal data.
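
A subclass may override this hook to post-process the HF output. A hedged sketch of such an override follows; the renamed key is hypothetical and the call to the base implementation is the only real API used:

def _call_hf_processor(
    self,
    prompt: str,
    mm_data: Mapping[str, object],
    mm_kwargs: Mapping[str, object],
    tok_kwargs: Mapping[str, object],
) -> "BatchFeature":
    processed_outputs = super()._call_hf_processor(prompt, mm_data,
                                                   mm_kwargs, tok_kwargs)

    # Hypothetical fix-up: expose the tensor under the name expected by
    # `_get_mm_fields_config`.
    if "image_patches" in processed_outputs:
        processed_outputs["pixel_values"] = processed_outputs.pop(
            "image_patches")

    return processed_outputs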

Source code in vllm/multimodal/processing.py
def _call_hf_processor(
    self,
    prompt: str,
    # Not to be confused with `mm_data` in `self.apply`.
    # This refers to the data to be passed to HF processor.
    mm_data: Mapping[str, object],
    mm_kwargs: Mapping[str, object],
    tok_kwargs: Mapping[str, object],
) -> "BatchFeature":
    """
    Call the HF processor on the prompt text and
    associated multi-modal data.
    """
    return self.info.ctx.call_hf_processor(
        self.info.get_hf_processor(**mm_kwargs),
        dict(text=prompt, **mm_data),
        dict(**mm_kwargs, **tok_kwargs),
    )

_find_mm_placeholders

_find_mm_placeholders(
    new_token_ids: list[int],
    mm_prompt_updates: MultiModalPromptUpdates,
) -> Mapping[str, list[PlaceholderFeaturesInfo]]
Source code in vllm/multimodal/processing.py
def _find_mm_placeholders(
    self,
    new_token_ids: list[int],
    mm_prompt_updates: MultiModalPromptUpdates,
) -> Mapping[str, list[PlaceholderFeaturesInfo]]:
    tokenizer = self.info.get_tokenizer()

    return find_mm_placeholders(new_token_ids, mm_prompt_updates,
                                tokenizer)

_get_cache_missing_items

_get_cache_missing_items(
    cache: ProcessingCache,
    mm_data_items: MultiModalDataItems,
    mm_hashes: MultiModalHashes,
) -> tuple[
    dict[str, list[_CacheItemOrHash]], MultiModalDataItems
]
Source code in vllm/multimodal/processing.py
def _get_cache_missing_items(
    self,
    cache: ProcessingCache,
    mm_data_items: MultiModalDataItems,
    mm_hashes: MultiModalHashes,
) -> tuple[dict[str, list[_CacheItemOrHash]], MultiModalDataItems]:
    mm_cache_items_or_hashes: dict[str, list[_CacheItemOrHash]] = {
        modality: [(h if (v := cache.get(h)) is None else v)
                   for h in hashes]
        for modality, hashes in mm_hashes.items()
    }

    mm_missing_idxs = {
        modality: [
            idx for idx, item_or_hash in enumerate(items_or_hashes)
            if isinstance(item_or_hash, str)
        ]
        for modality, items_or_hashes in mm_cache_items_or_hashes.items()
    }
    mm_missing_data = {
        modality: [mm_data_items[modality][idx] for idx in idxs]
        for modality, idxs in mm_missing_idxs.items()
    }

    return mm_cache_items_or_hashes, self._to_mm_items(mm_missing_data)

_get_data_parser

_get_data_parser() -> MultiModalDataParser

Construct a parser to preprocess multi-modal data items before passing them to _get_hf_mm_data.

You can support additional modalities by creating a subclass of MultiModalDataParser that has additional subparsers.
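
For instance, a hedged sketch of an override for an audio model that resamples inputs to the feature extractor's sampling rate; the target_sr option and the feature-extractor attribute lookup are assumptions about the surrounding model code:

def _get_data_parser(self) -> MultiModalDataParser:
    # Resample incoming audio to the rate expected by the HF feature
    # extractor (attribute lookup is illustrative).
    feature_extractor = self.info.get_hf_processor().feature_extractor
    return MultiModalDataParser(target_sr=feature_extractor.sampling_rate)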

Source code in vllm/multimodal/processing.py
def _get_data_parser(self) -> MultiModalDataParser:
    """
    Construct a parser to preprocess multi-modal data items
    before passing them to
    [`_get_hf_mm_data`][vllm.multimodal.processing.BaseMultiModalProcessor._get_hf_mm_data].

    You can support additional modalities by creating a subclass
    of [`MultiModalDataParser`][vllm.multimodal.parse.MultiModalDataParser]
    that has additional subparsers.
    """
    return MultiModalDataParser()

_get_hf_mm_data

_get_hf_mm_data(
    mm_items: MultiModalDataItems,
) -> tuple[Mapping[str, object], Mapping[str, object]]
Source code in vllm/multimodal/processing.py
def _get_hf_mm_data(
    self,
    mm_items: MultiModalDataItems,
) -> tuple[Mapping[str, object], Mapping[str, object]]:
    processor_data = dict[str, object]()
    passthrough_data = dict[str, object]()

    for items in mm_items.values():
        processor_data.update(items.get_processor_data())
        passthrough_data.update(items.get_passthrough_data())

    return processor_data, passthrough_data

_get_mm_fields_config abstractmethod

_get_mm_fields_config(
    hf_inputs: BatchFeature,
    hf_processor_mm_kwargs: Mapping[str, object],
) -> Mapping[str, MultiModalFieldConfig]

Given the HF-processed data, output the metadata of each field.

Source code in vllm/multimodal/processing.py
@abstractmethod
def _get_mm_fields_config(
    self,
    hf_inputs: "BatchFeature",
    hf_processor_mm_kwargs: Mapping[str, object],
) -> Mapping[str, MultiModalFieldConfig]:
    """Given the HF-processed data, output the metadata of each field."""
    raise NotImplementedError

_get_mm_prompt_updates

_get_mm_prompt_updates(
    mm_items: MultiModalDataItems,
    hf_processor_mm_kwargs: Mapping[str, object],
    out_mm_kwargs: MultiModalKwargsItems,
) -> MultiModalPromptUpdates
Source code in vllm/multimodal/processing.py
def _get_mm_prompt_updates(
    self,
    mm_items: MultiModalDataItems,
    hf_processor_mm_kwargs: Mapping[str, object],
    out_mm_kwargs: MultiModalKwargsItems,
) -> MultiModalPromptUpdates:
    unbound_prompt_updates = self._get_prompt_updates(
        mm_items=mm_items,
        hf_processor_mm_kwargs=hf_processor_mm_kwargs,
        out_mm_kwargs=out_mm_kwargs,
    )

    mm_prompt_updates = self._bind_and_group_updates(
        unbound_prompt_updates,
        mm_items.get_all_counts(),
    )

    for modality, prompt_updates in mm_prompt_updates.items():
        for item_idx, item_prompt_updates in enumerate(prompt_updates):
            if len(item_prompt_updates) > 1:
                logger.warning_once(
                    "Detected %d prompt updates for `mm_items[%r][%s]`. "
                    "Multiple prompt updates per item is now "
                    "deprecated and may be removed in v0.13. "
                    "Instead, please specify dynamic update targets "
                    "in the same prompt update definition by passing "
                    "a function to `PromptUpdate.target`.",
                    len(prompt_updates),
                    modality,
                    item_idx,
                )

    return mm_prompt_updates

_get_prompt_updates abstractmethod

_get_prompt_updates(
    mm_items: MultiModalDataItems,
    hf_processor_mm_kwargs: Mapping[str, object],
    out_mm_kwargs: MultiModalKwargsItems,
) -> Sequence[PromptUpdate]

Given the original multi-modal items for this modality and HF-processed data, output the updates to perform.

The information returned by this method is used to update token inputs which bypass the HF processor. It is also used to update the output of the HF processor if the HF processor does not apply prompt updates to text inputs.

Moreover, this information is critical to determine the token positions in order to construct PlaceholderRange for each multi-modal item.

Source code in vllm/multimodal/processing.py
@abstractmethod
def _get_prompt_updates(
    self,
    mm_items: MultiModalDataItems,
    hf_processor_mm_kwargs: Mapping[str, object],
    out_mm_kwargs: MultiModalKwargsItems,
) -> Sequence[PromptUpdate]:
    """
    Given the original multi-modal items for this modality
    and HF-processed data, output the updates to perform.

    The information returned by this method is used to update token inputs
    which bypass the HF processor. It is also used to update the output of
    the HF processor if the HF processor does not apply prompt updates to
    text inputs.

    Moreover, this information is critical to determine the token positions
    in order to construct
    [`PlaceholderRange`][vllm.multimodal.inputs.PlaceholderRange]
    for each multi-modal item.
    """
    raise NotImplementedError
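
A hedged sketch of a typical implementation, reusing the PromptReplacement pattern documented later on this page. The fixed feature size and the `image_token_index` config attribute are assumptions for illustration:

```python
def _get_prompt_updates(
    self,
    mm_items,
    hf_processor_mm_kwargs,
    out_mm_kwargs,
):
    # Assumption: every image expands to a fixed number of feature tokens
    # and the HF config exposes `image_token_index`.
    image_feature_size = 576
    image_token_id = self.info.get_hf_config().image_token_index

    return [
        PromptReplacement(
            modality="image",
            target=[image_token_id],
            replacement=[image_token_id] * image_feature_size,
        )
    ]
```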

_hash_mm_items

_hash_mm_items(
    mm_items: MultiModalDataItems,
    hf_processor_mm_kwargs: Mapping[str, object],
    tokenization_kwargs: Mapping[str, object],
) -> MultiModalHashes

Create MM hashes to be returned (only used in V1).

Source code in vllm/multimodal/processing.py
def _hash_mm_items(
    self,
    mm_items: MultiModalDataItems,
    hf_processor_mm_kwargs: Mapping[str, object],
    tokenization_kwargs: Mapping[str, object],
) -> MultiModalHashes:
    """Create MM hashes to be returned (only used in V1)."""
    model_id = self.info.model_id

    return {
        modality: [
            MultiModalHasher.hash_kwargs(model_id=model_id,
                                         **{modality: item},
                                         **hf_processor_mm_kwargs,
                                         **tokenization_kwargs)
            for item in items
        ]
        for modality, items in mm_items.items()
    }

_hf_processor_applies_updates

_hf_processor_applies_updates(
    prompt_text: str,
    mm_items: MultiModalDataItems,
    hf_processor_mm_kwargs: Mapping[str, object],
    tokenization_kwargs: Mapping[str, object],
) -> bool

Return whether the HF processor applies prompt updates.

For most HF processors, this should be True when multi-modal data items are passed, but False when multi-modal embeddings are passed.

Source code in vllm/multimodal/processing.py
def _hf_processor_applies_updates(
    self,
    prompt_text: str,
    mm_items: MultiModalDataItems,
    hf_processor_mm_kwargs: Mapping[str, object],
    tokenization_kwargs: Mapping[str, object],
) -> bool:
    """
    Return whether the HF processor applies prompt updates.

    For most HF processors, this should be `True` when multi-modal
    data items are passed, but `False` when multi-modal embeddings
    are passed.
    """
    return not any(
        isinstance(items, (EmbeddingItems, DictEmbeddingItems))
        for items in mm_items.values())
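
For a model whose HF processor never expands placeholder tokens in the text, a subclass could hand the update step back to vLLM unconditionally; a minimal sketch:

```python
def _hf_processor_applies_updates(
    self,
    prompt_text,
    mm_items,
    hf_processor_mm_kwargs,
    tokenization_kwargs,
) -> bool:
    # Assumption: the HF processor of this hypothetical model leaves the
    # prompt untouched, so vLLM must apply the prompt updates itself.
    return False
```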

_maybe_apply_prompt_updates

_maybe_apply_prompt_updates(
    mm_items: MultiModalDataItems,
    prompt_ids: list[int],
    mm_kwargs: MultiModalKwargsItems,
    mm_prompt_updates: MultiModalPromptUpdates,
    is_update_applied: bool,
) -> tuple[
    list[int],
    str,
    Mapping[str, list[PlaceholderFeaturesInfo]],
]
Source code in vllm/multimodal/processing.py
def _maybe_apply_prompt_updates(
    self,
    mm_items: MultiModalDataItems,
    prompt_ids: list[int],
    mm_kwargs: MultiModalKwargsItems,
    mm_prompt_updates: MultiModalPromptUpdates,
    is_update_applied: bool,
) -> tuple[list[int], str, Mapping[str, list[PlaceholderFeaturesInfo]]]:
    mm_item_counts = mm_items.get_all_counts()
    self._validate_mm_kwargs(mm_kwargs, mm_item_counts)

    if is_update_applied:
        mm_placeholders = self._find_mm_placeholders(
            prompt_ids,
            mm_prompt_updates,
        )
        self._validate_mm_placeholders(mm_placeholders, mm_item_counts)

        tokenizer = self.info.get_tokenizer()
        prompt = decode_tokens(tokenizer, prompt_ids)
    else:
        (
            prompt_ids,
            prompt,
            mm_placeholders,
        ) = self._apply_prompt_updates(
            prompt_ids,
            mm_prompt_updates,
        )
        self._validate_mm_placeholders(mm_placeholders, mm_item_counts)

    return prompt_ids, prompt, mm_placeholders

_merge_mm_kwargs

_merge_mm_kwargs(
    cache: ProcessingCache,
    mm_cache_items_or_hashes: dict[
        str, list[_CacheItemOrHash]
    ],
    mm_missing_kwargs: MultiModalKwargsItems,
) -> MultiModalKwargsItems
Source code in vllm/multimodal/processing.py
def _merge_mm_kwargs(
    self,
    cache: ProcessingCache,
    mm_cache_items_or_hashes: dict[str, list[_CacheItemOrHash]],
    mm_missing_kwargs: MultiModalKwargsItems,
) -> MultiModalKwargsItems:
    mm_missing_next_idx = defaultdict[str, int](lambda: 0)

    merged_items = defaultdict[str, list[MultiModalKwargsItem]](list)
    for modality, items_or_hashes in mm_cache_items_or_hashes.items():
        for item_or_hash in items_or_hashes:
            if isinstance(item_or_hash, str):
                kw_item = mm_missing_kwargs[modality][
                    mm_missing_next_idx[modality]]
                cache.put(item_or_hash, kw_item)
                mm_missing_next_idx[modality] += 1
            else:
                kw_item = item_or_hash

            merged_items[modality].append(kw_item)

    return MultiModalKwargsItems(merged_items)

_to_mm_items

_to_mm_items(
    mm_data: MultiModalDataDict,
) -> MultiModalDataItems

Normalize MultiModalDataDict to MultiModalDataItems before passing them to _get_hf_mm_data.

Source code in vllm/multimodal/processing.py
def _to_mm_items(
    self,
    mm_data: MultiModalDataDict,
) -> MultiModalDataItems:
    """
    Normalize
    [`MultiModalDataDict`][vllm.multimodal.inputs.MultiModalDataDict]
    to [`MultiModalDataItems`][vllm.multimodal.parse.MultiModalDataItems]
    before passing them to
    [`_get_hf_mm_data`][vllm.multimodal.processing.BaseMultiModalProcessor._get_hf_mm_data].
    """
    mm_items = self.data_parser.parse_mm_data(mm_data)

    for modality, items in mm_items.items():
        self.validate_num_items(modality, len(items))

    return mm_items

_validate_mm_kwargs

_validate_mm_kwargs(
    mm_kwargs: MultiModalKwargsItems,
    mm_item_counts: Mapping[str, int],
) -> None
Source code in vllm/multimodal/processing.py
def _validate_mm_kwargs(
    self,
    mm_kwargs: MultiModalKwargsItems,
    mm_item_counts: Mapping[str, int],
) -> None:
    for modality, item_count in mm_item_counts.items():
        items = mm_kwargs.get(modality, [])

        if len(items) != item_count:
            raise RuntimeError(
                f"Expected there to be {item_count} {modality} items in "
                f"keyword arguments corresponding to {item_count} "
                f"{modality} data items, but only found {len(items)}! "
                "There is likely a problem with your "
                "implementation of merged multi-modal processor for this "
                "model (usually arising from an inconsistency between "
                "`_call_hf_processor` and `_get_mm_fields_config`).")

_validate_mm_placeholders

_validate_mm_placeholders(
    mm_placeholders: Mapping[
        str, list[PlaceholderFeaturesInfo]
    ],
    mm_item_counts: Mapping[str, int],
) -> None
Source code in vllm/multimodal/processing.py
def _validate_mm_placeholders(
    self,
    mm_placeholders: Mapping[str, list[PlaceholderFeaturesInfo]],
    mm_item_counts: Mapping[str, int],
) -> None:
    for modality, item_count in mm_item_counts.items():
        placeholders = mm_placeholders.get(modality, [])

        if len(placeholders) != item_count:
            # NOTE: If you are a model developer, this can also arise from
            # an inconsistency between `_call_hf_processor` and
            # `_get_mm_fields_config` implementations
            raise RuntimeError(
                f"Expected there to be {item_count} prompt updates "
                f"corresponding to {item_count} {modality} items, but "
                f"instead found {len(placeholders)} prompt updates! "
                "This is likely because you forgot to include input "
                "placeholder tokens (e.g., `<image>`, `<|image_pad|>`) "
                "in the prompt. If the model has a chat template, make "
                "sure you have applied it before calling `LLM.generate`.")

apply

apply(
    prompt: Union[str, list[int]],
    mm_data: MultiModalDataDict,
    hf_processor_mm_kwargs: Mapping[str, object],
    tokenization_kwargs: Optional[
        Mapping[str, object]
    ] = None,
) -> MultiModalInputs

Process multi-modal inputs to be used in vLLM.

The main steps are:

  1. Apply HF Processor on prompt text and multi-modal data together, outputting token IDs and processed tensors.
  2. Find and update sequences in the token IDs with placeholder tokens. The number of placeholder tokens equals the feature size of the multi-modal data outputted by the multi-modal encoder.
  3. Extract information about the placeholder tokens from the processed token IDs.
Source code in vllm/multimodal/processing.py
def apply(
    self,
    prompt: Union[str, list[int]],
    mm_data: MultiModalDataDict,
    hf_processor_mm_kwargs: Mapping[str, object],
    tokenization_kwargs: Optional[Mapping[str, object]] = None,
) -> MultiModalInputs:
    """
    Process multi-modal inputs to be used in vLLM.

    The main steps are:

    1. Apply HF Processor on prompt text and multi-modal data together,
       outputting token IDs and processed tensors.
    2. Find and update sequences in the token IDs with placeholder tokens.
       The number of placeholder tokens equals the feature size of the
       multi-modal data outputted by the multi-modal encoder.
    3. Extract information about the placeholder tokens from the
       processed token IDs.
    """
    mm_items = self._to_mm_items(mm_data)

    if tokenization_kwargs is None:
        tokenization_kwargs = {}

    (
        prompt_ids,
        mm_info,
        is_update_applied,
    ) = self._cached_apply_hf_processor(
        prompt,
        mm_items,
        hf_processor_mm_kwargs,
        tokenization_kwargs=tokenization_kwargs,
    )

    # NOTE: tokenization_kwargs are not required to init processor
    prompt_ids, prompt, mm_placeholders = self._maybe_apply_prompt_updates(
        mm_items=mm_items,
        prompt_ids=prompt_ids,
        mm_kwargs=mm_info.kwargs,
        mm_prompt_updates=mm_info.prompt_updates,
        is_update_applied=is_update_applied,
    )

    mm_placeholder_ranges = {
        modality: [item.to_range() for item in placeholders]
        for modality, placeholders in mm_placeholders.items()
    }

    return MultiModalInputs(
        type="multimodal",
        prompt=prompt,
        prompt_token_ids=prompt_ids,
        mm_kwargs=mm_info.kwargs,
        mm_hashes=mm_info.hashes,
        mm_placeholders=mm_placeholder_ranges,
    )
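
A usage sketch, assuming `processor` is an already-constructed subclass instance, `image` is a loaded PIL image, and `<image>` is the model's input placeholder token:

```python
mm_inputs = processor.apply(
    prompt="USER: <image>\nWhat is shown in this image? ASSISTANT:",
    mm_data={"image": image},
    hf_processor_mm_kwargs={},
)

# Placeholder positions for each image item, as PlaceholderRange entries.
print(mm_inputs["mm_placeholders"]["image"])
print(len(mm_inputs["prompt_token_ids"]))
```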

validate_num_items

validate_num_items(modality: str, num_items: int) -> None
Source code in vllm/multimodal/processing.py
def validate_num_items(
    self,
    modality: str,
    num_items: int,
) -> None:
    supported_limit = self.supported_mm_limits.get(modality, 0)
    allowed_limit = self.allowed_mm_limits.get(modality, 0)

    if supported_limit is None:
        supported_limit = allowed_limit

    limit = min(supported_limit, allowed_limit)

    if num_items > limit:
        msg = (f"At most {limit} {modality}(s) may be provided in "
               "one prompt.")

        if num_items <= supported_limit:
            msg += " Set `--limit-mm-per-prompt` to increase this limit."

        raise ValueError(msg)

BaseProcessingInfo

Base class to provide the information necessary for data processing.

Source code in vllm/multimodal/processing.py
class BaseProcessingInfo:
    """Base class to provide the information necessary for data processing."""

    def __init__(self, ctx: InputProcessingContext) -> None:
        super().__init__()

        self.ctx = ctx

    @property
    def model_id(self) -> str:
        return self.ctx.model_config.model

    def get_tokenizer(self) -> AnyTokenizer:
        return self.ctx.tokenizer

    def get_hf_config(self) -> "PretrainedConfig":
        return self.ctx.get_hf_config()

    def get_hf_processor(self, **kwargs: object) -> "ProcessorMixin":
        """
        Subclasses can override this method to handle
        specific kwargs from model config or user inputs.
        """
        return self.ctx.get_hf_processor(**kwargs)

    @abstractmethod
    def get_supported_mm_limits(self) -> Mapping[str, Optional[int]]:
        """
        Return the maximum supported number of items for each modality.

        A value of `None` means unlimited number of items.

        Omitting a modality from the returned dictionary means that
        it is not supported at all.
        """
        raise NotImplementedError

    def get_allowed_mm_limits(self) -> Mapping[str, int]:
        """Return the maximum allowed number of items for each modality."""
        supported_mm_limits = self.get_supported_mm_limits()
        mm_config = self.ctx.get_mm_config()

        allowed_limits = dict[str, int]()
        for modality, supported_limit in supported_mm_limits.items():
            user_limit = mm_config.get_limit_per_prompt(modality)

            allowed_limits[modality] = (user_limit if supported_limit is None
                                        else min(user_limit, supported_limit))

        return allowed_limits

    def get_mm_max_tokens_per_item(
        self,
        seq_len: int,
        mm_counts: Mapping[str, int],
    ) -> Optional[Mapping[str, int]]:
        """
        Return the maximum number of tokens per item for each modality.

        When `None` (the default) is returned, vLLM will generate dummy inputs
        (images/videos) at maximum possible sizes and process them to determine
        the maximum token count per modality.

        This approach works but can be very slow for certain models (e.g.,
        Qwen2.5-VL), leading to very long startup time. For better performance,
        each model can override this method to return pre-computed maximum token
        counts, avoiding the need for dummy input generation and processing.

        Note:
            The maximum number of tokens per item of each modality returned 
            from this function should respect the model's maximum sequence
            length and the maximum number of items of each modality allowed,
            and agree with dummy inputs (images/videos) at maximum possible
            sizes.
        """
        return None

ctx instance-attribute

ctx = ctx

model_id property

model_id: str

__init__

__init__(ctx: InputProcessingContext) -> None
Source code in vllm/multimodal/processing.py
def __init__(self, ctx: InputProcessingContext) -> None:
    super().__init__()

    self.ctx = ctx

get_allowed_mm_limits

get_allowed_mm_limits() -> Mapping[str, int]

Return the maximum allowed number of items for each modality.

Source code in vllm/multimodal/processing.py
def get_allowed_mm_limits(self) -> Mapping[str, int]:
    """Return the maximum allowed number of items for each modality."""
    supported_mm_limits = self.get_supported_mm_limits()
    mm_config = self.ctx.get_mm_config()

    allowed_limits = dict[str, int]()
    for modality, supported_limit in supported_mm_limits.items():
        user_limit = mm_config.get_limit_per_prompt(modality)

        allowed_limits[modality] = (user_limit if supported_limit is None
                                    else min(user_limit, supported_limit))

    return allowed_limits

get_hf_config

get_hf_config() -> PretrainedConfig
Source code in vllm/multimodal/processing.py
def get_hf_config(self) -> "PretrainedConfig":
    return self.ctx.get_hf_config()

get_hf_processor

get_hf_processor(**kwargs: object) -> ProcessorMixin

Subclasses can override this method to handle specific kwargs from model config or user inputs.

Source code in vllm/multimodal/processing.py
def get_hf_processor(self, **kwargs: object) -> "ProcessorMixin":
    """
    Subclasses can override this method to handle
    specific kwargs from model config or user inputs.
    """
    return self.ctx.get_hf_processor(**kwargs)

get_mm_max_tokens_per_item

get_mm_max_tokens_per_item(
    seq_len: int, mm_counts: Mapping[str, int]
) -> Optional[Mapping[str, int]]

Return the maximum number of tokens per item for each modality.

When None (the default) is returned, vLLM will generate dummy inputs (images/videos) at maximum possible sizes and process them to determine the maximum token count per modality.

This approach works but can be very slow for certain models (e.g., Qwen2.5-VL), leading to very long startup time. For better performance, each model can override this method to return pre-computed maximum token counts, avoiding the need for dummy input generation and processing.

Note

The maximum number of tokens per item of each modality returned from this function should respect the model's maximum sequence length and the maximum number of items of each modality allowed, and agree with dummy inputs (images/videos) at maximum possible sizes.

Source code in vllm/multimodal/processing.py
def get_mm_max_tokens_per_item(
    self,
    seq_len: int,
    mm_counts: Mapping[str, int],
) -> Optional[Mapping[str, int]]:
    """
    Return the maximum number of tokens per item for each modality.

    When `None` (the default) is returned, vLLM will generate dummy inputs
    (images/videos) at maximum possible sizes and process them to determine
    the maximum token count per modality.

    This approach works but can be very slow for certain models (e.g.,
    Qwen2.5-VL), leading to very long startup time. For better performance,
    each model can override this method to return pre-computed maximum token
    counts, avoiding the need for dummy input generation and processing.

    Note:
        The maximum number of tokens per item of each modality returned 
        from this function should respect the model's maximum sequence
        length and the maximum number of items of each modality allowed,
        and agree with dummy inputs (images/videos) at maximum possible
        sizes.
    """
    return None
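
A hedged override sketch for a model that already knows its per-item token counts; the numbers below are placeholders, not real model values:

```python
def get_mm_max_tokens_per_item(
    self,
    seq_len,
    mm_counts,
):
    # Skip dummy-input profiling by returning pre-computed maximums.
    return {"image": 576, "video": 1024}
```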

get_supported_mm_limits abstractmethod

get_supported_mm_limits() -> Mapping[str, Optional[int]]

Return the maximum supported number of items for each modality.

A value of None means unlimited number of items.

Omitting a modality from the returned dictionary means that it is not supported at all.

Source code in vllm/multimodal/processing.py
@abstractmethod
def get_supported_mm_limits(self) -> Mapping[str, Optional[int]]:
    """
    Return the maximum supported number of items for each modality.

    A value of `None` means unlimited number of items.

    Omitting a modality from the returned dictionary means that
    it is not supported at all.
    """
    raise NotImplementedError
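
For example, a model that accepts any number of images but at most one video per prompt might implement it as follows (a sketch, not taken from a real model):

```python
def get_supported_mm_limits(self):
    # `None` means the model places no limit on the number of images.
    return {"image": None, "video": 1}
```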

get_tokenizer

get_tokenizer() -> AnyTokenizer
Source code in vllm/multimodal/processing.py
def get_tokenizer(self) -> AnyTokenizer:
    return self.ctx.tokenizer

EncDecMultiModalProcessor

Bases: BaseMultiModalProcessor[_I]

Source code in vllm/multimodal/processing.py
class EncDecMultiModalProcessor(BaseMultiModalProcessor[_I]):

    @abstractmethod
    def create_encoder_prompt(
        self,
        prompt: Union[str, list[int]],
        mm_data: MultiModalDataDict,
    ) -> Union[str, list[int]]:
        """
        Create input prompt for the encoder. HF processor will be applied on
        this prompt during profiling and generation.
        """
        raise NotImplementedError

    @property
    def pad_dummy_encoder_prompt(self) -> bool:
        return False

    def create_decoder_prompt(
        self,
        prompt: Union[str, list[int]],
        mm_data: MultiModalDataDict,
    ) -> Union[str, list[int]]:
        """Create input prompt for the decoder."""
        return prompt

    def _get_enc_dec_inputs(
        self,
        prompt: Union[str, list[int]],
        mm_data: MultiModalDataDict,
        encoder_inputs: MultiModalInputs,
    ):
        tokenizer = self.info.get_tokenizer()
        decoder_prompt = self.create_decoder_prompt(prompt, mm_data)
        if isinstance(decoder_prompt, str):
            decoder_prompt_ids = encode_tokens(tokenizer,
                                               decoder_prompt,
                                               add_special_tokens=False)
        else:
            decoder_prompt_ids = decoder_prompt
            decoder_prompt = decode_tokens(tokenizer, decoder_prompt)

        mm_inputs = MultiModalEncDecInputs(
            encoder_prompt=encoder_inputs["prompt"],
            encoder_prompt_token_ids=encoder_inputs["prompt_token_ids"],
            **encoder_inputs)
        mm_inputs.update({
            "prompt": decoder_prompt,
            "prompt_token_ids": decoder_prompt_ids
        })
        return mm_inputs

    def apply(
        self,
        prompt: Union[str, list[int]],
        mm_data: MultiModalDataDict,
        hf_processor_mm_kwargs: Mapping[str, object],
        tokenization_kwargs: Optional[Mapping[str, object]] = None,
    ) -> MultiModalEncDecInputs:
        """
        Process multi-modal inputs to be used in vLLM.
        The main processing steps are modified to fit encoder-decoder model:
        1. Create encoder prompt from input prompt text.
        2. Apply the HF processor on encoder prompt.
        3. Copy the input prompt text as decoder prompt inputs.
        """
        encoder_prompt = self.create_encoder_prompt(prompt, mm_data)
        encoder_inputs = super().apply(
            encoder_prompt,
            mm_data,
            hf_processor_mm_kwargs,
            tokenization_kwargs,
        )

        return self._get_enc_dec_inputs(
            prompt=prompt,
            mm_data=mm_data,
            encoder_inputs=encoder_inputs,
        )

pad_dummy_encoder_prompt property

pad_dummy_encoder_prompt: bool

_get_enc_dec_inputs

_get_enc_dec_inputs(
    prompt: Union[str, list[int]],
    mm_data: MultiModalDataDict,
    encoder_inputs: MultiModalInputs,
)
Source code in vllm/multimodal/processing.py
def _get_enc_dec_inputs(
    self,
    prompt: Union[str, list[int]],
    mm_data: MultiModalDataDict,
    encoder_inputs: MultiModalInputs,
):
    tokenizer = self.info.get_tokenizer()
    decoder_prompt = self.create_decoder_prompt(prompt, mm_data)
    if isinstance(decoder_prompt, str):
        decoder_prompt_ids = encode_tokens(tokenizer,
                                           decoder_prompt,
                                           add_special_tokens=False)
    else:
        decoder_prompt_ids = decoder_prompt
        decoder_prompt = decode_tokens(tokenizer, decoder_prompt)

    mm_inputs = MultiModalEncDecInputs(
        encoder_prompt=encoder_inputs["prompt"],
        encoder_prompt_token_ids=encoder_inputs["prompt_token_ids"],
        **encoder_inputs)
    mm_inputs.update({
        "prompt": decoder_prompt,
        "prompt_token_ids": decoder_prompt_ids
    })
    return mm_inputs

apply

apply(
    prompt: Union[str, list[int]],
    mm_data: MultiModalDataDict,
    hf_processor_mm_kwargs: Mapping[str, object],
    tokenization_kwargs: Optional[
        Mapping[str, object]
    ] = None,
) -> MultiModalEncDecInputs

Process multi-modal inputs to be used in vLLM. The main processing steps are modified to fit the encoder-decoder model:

  1. Create the encoder prompt from the input prompt text.
  2. Apply the HF processor on the encoder prompt.
  3. Copy the input prompt text as the decoder prompt inputs.

Source code in vllm/multimodal/processing.py
def apply(
    self,
    prompt: Union[str, list[int]],
    mm_data: MultiModalDataDict,
    hf_processor_mm_kwargs: Mapping[str, object],
    tokenization_kwargs: Optional[Mapping[str, object]] = None,
) -> MultiModalEncDecInputs:
    """
    Process multi-modal inputs to be used in vLLM.
    The main processing steps are modified to fit encoder-decoder model:
    1. Create encoder prompt from input prompt text.
    2. Apply the HF processor on encoder prompt.
    3. Copy the input prompt text as decoder prompt inputs.
    """
    encoder_prompt = self.create_encoder_prompt(prompt, mm_data)
    encoder_inputs = super().apply(
        encoder_prompt,
        mm_data,
        hf_processor_mm_kwargs,
        tokenization_kwargs,
    )

    return self._get_enc_dec_inputs(
        prompt=prompt,
        mm_data=mm_data,
        encoder_inputs=encoder_inputs,
    )

create_decoder_prompt

create_decoder_prompt(
    prompt: Union[str, list[int]],
    mm_data: MultiModalDataDict,
) -> Union[str, list[int]]

Create input prompt for the decoder.

Source code in vllm/multimodal/processing.py
def create_decoder_prompt(
    self,
    prompt: Union[str, list[int]],
    mm_data: MultiModalDataDict,
) -> Union[str, list[int]]:
    """Create input prompt for the decoder."""
    return prompt

create_encoder_prompt abstractmethod

create_encoder_prompt(
    prompt: Union[str, list[int]],
    mm_data: MultiModalDataDict,
) -> Union[str, list[int]]

Create the input prompt for the encoder. The HF processor will be applied on this prompt during profiling and generation.

Source code in vllm/multimodal/processing.py
@abstractmethod
def create_encoder_prompt(
    self,
    prompt: Union[str, list[int]],
    mm_data: MultiModalDataDict,
) -> Union[str, list[int]]:
    """
    Create input prompt for the encoder. HF processor will be applied on
    this prompt during profiling and generation.
    """
    raise NotImplementedError
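
A hedged sketch in which the encoder only sees a multi-modal placeholder while the decoder prompt (via the default create_decoder_prompt) stays as the user provided it; `<image>` is an assumed placeholder token:

```python
def create_encoder_prompt(
    self,
    prompt,
    mm_data,
):
    # Assumption: the encoder of this hypothetical model consumes only the
    # image placeholder; the user's text is handled by the decoder.
    return "<image>"
```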

MultiModalProcessingInfo

Bases: NamedTuple

Source code in vllm/multimodal/processing.py
class MultiModalProcessingInfo(NamedTuple):
    kwargs: MultiModalKwargsItems
    hashes: MultiModalHashes
    prompt_updates: MultiModalPromptUpdates

hashes instance-attribute

hashes: MultiModalHashes

kwargs instance-attribute

kwargs: MultiModalKwargsItems

prompt_updates instance-attribute

prompt_updates: MultiModalPromptUpdates

PlaceholderFeaturesInfo dataclass

Source code in vllm/multimodal/processing.py
@dataclass
class PlaceholderFeaturesInfo:
    modality: str
    item_idx: int
    start_idx: int
    tokens: list[int]
    is_embed: Optional[torch.Tensor]

    @property
    def length(self) -> int:
        return len(self.tokens)

    def to_range(self) -> PlaceholderRange:
        # TODO: Is it worth it to optimize this by stripping the
        # leading and ending positions where `is_embed=False`?
        return PlaceholderRange(
            offset=self.start_idx,
            length=self.length,
            is_embed=self.is_embed,
        )

is_embed instance-attribute

is_embed: Optional[Tensor]

item_idx instance-attribute

item_idx: int

length property

length: int

modality instance-attribute

modality: str

start_idx instance-attribute

start_idx: int

tokens instance-attribute

tokens: list[int]

__init__

__init__(
    modality: str,
    item_idx: int,
    start_idx: int,
    tokens: list[int],
    is_embed: Optional[Tensor],
) -> None

to_range

to_range() -> PlaceholderRange
Source code in vllm/multimodal/processing.py
def to_range(self) -> PlaceholderRange:
    # TODO: Is it worth it to optimize this by stripping the
    # leading and ending positions where `is_embed=False`?
    return PlaceholderRange(
        offset=self.start_idx,
        length=self.length,
        is_embed=self.is_embed,
    )

ProcessingCache

Bases: MultiModalCache

Source code in vllm/multimodal/processing.py
class ProcessingCache(MultiModalCache):

    def __init__(self, capacity_gb: float) -> None:
        super().__init__()

        self._cache = self.get_lru_cache(capacity_gb, MultiModalKwargsItem)

        self.get = self._cache.get
        self.put = self._cache.put
        self.reset = self._cache.clear

_cache instance-attribute

_cache = get_lru_cache(capacity_gb, MultiModalKwargsItem)

get instance-attribute

get = get

put instance-attribute

put = put

reset instance-attribute

reset = clear

__init__

__init__(capacity_gb: float) -> None
Source code in vllm/multimodal/processing.py
def __init__(self, capacity_gb: float) -> None:
    super().__init__()

    self._cache = self.get_lru_cache(capacity_gb, MultiModalKwargsItem)

    self.get = self._cache.get
    self.put = self._cache.put
    self.reset = self._cache.clear
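
A usage sketch based on the aliases defined above (`get`, `put`, `reset`); the capacity and the `mm_hash`/`kw_item` names are illustrative, and it is assumed that the underlying LRU cache returns None on a miss:

```python
cache = ProcessingCache(capacity_gb=4.0)

# Store a processed item under its multi-modal hash...
cache.put(mm_hash, kw_item)

# ...and look it up later instead of re-running the HF processor.
cached_item = cache.get(mm_hash)
if cached_item is None:
    ...  # cache miss: process this item again
```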

PromptIndex dataclass

Resolves to an index in the prompt.

Source code in vllm/multimodal/processing.py
@dataclass
class PromptIndex:
    """Resolves to an index in the prompt."""
    get_match_index: _GetMatchIndex

get_match_index instance-attribute

get_match_index: _GetMatchIndex

__init__

__init__(get_match_index: _GetMatchIndex) -> None

PromptIndexTargets

Source code in vllm/multimodal/processing.py
class PromptIndexTargets:

    @staticmethod
    def start() -> PromptIndex:
        """
        Resolves to the start of the prompt (before the first token).

        This results in a match even if the prompt is empty.
        """
        return PromptIndex(lambda tokenizer, prompt, start_idx=0: 0)

    @staticmethod
    def prefix(seq: PromptSeq) -> PromptIndex:
        """
        Resolves to a location in the prompt after the given prefix.
        """

        def get_match_index(
            tokenizer: AnyTokenizer,
            prompt: PromptSeq,
            start_idx: int = 0,
        ) -> Optional[int]:
            if start_idx != 0:
                return None

            prefix = seq

            if isinstance(prompt, str):
                if not isinstance(prefix, str):
                    # Make both `str`
                    prefix = decode_tokens(tokenizer, prefix)
            else:
                if isinstance(prefix, str):
                    # Make both `list[int]`
                    prefix = encode_tokens(tokenizer,
                                           prefix,
                                           add_special_tokens=False)

            match_idx = len(prefix)
            return match_idx if prompt[:match_idx] == prefix else None

        return PromptIndex(get_match_index)

    @staticmethod
    def end() -> PromptIndex:
        """
        Resolves to the end of the prompt (after the last token).

        This results in a match even if the prompt is empty.
        """
        return PromptIndex(lambda tokenizer, prompt, start_idx=0: len(prompt))

end staticmethod

end() -> PromptIndex

Resolves to the end of the prompt (after the last token).

This results in a match even if the prompt is empty.

Source code in vllm/multimodal/processing.py
@staticmethod
def end() -> PromptIndex:
    """
    Resolves to the end of the prompt (after the last token).

    This results in a match even if the prompt is empty.
    """
    return PromptIndex(lambda tokenizer, prompt, start_idx=0: len(prompt))

prefix staticmethod

prefix(seq: PromptSeq) -> PromptIndex

Resolves to a location in the prompt after the given prefix.

Source code in vllm/multimodal/processing.py
@staticmethod
def prefix(seq: PromptSeq) -> PromptIndex:
    """
    Resolves to a location in the prompt after the given prefix.
    """

    def get_match_index(
        tokenizer: AnyTokenizer,
        prompt: PromptSeq,
        start_idx: int = 0,
    ) -> Optional[int]:
        if start_idx != 0:
            return None

        prefix = seq

        if isinstance(prompt, str):
            if not isinstance(prefix, str):
                # Make both `str`
                prefix = decode_tokens(tokenizer, prefix)
        else:
            if isinstance(prefix, str):
                # Make both `list[int]`
                prefix = encode_tokens(tokenizer,
                                       prefix,
                                       add_special_tokens=False)

        match_idx = len(prefix)
        return match_idx if prompt[:match_idx] == prefix else None

    return PromptIndex(get_match_index)

start staticmethod

start() -> PromptIndex

Resolves to the start of the prompt (before the first token).

This results in a match even if the prompt is empty.

Source code in vllm/multimodal/processing.py
@staticmethod
def start() -> PromptIndex:
    """
    Resolves to the start of the prompt (before the first token).

    This results in a match even if the prompt is empty.
    """
    return PromptIndex(lambda tokenizer, prompt, start_idx=0: 0)

PromptInsertion dataclass

Bases: PromptUpdate

Defines how to insert placeholder tokens into a prompt.

Example:

For each image, insert a number of <image> feature placeholders equal to the feature size of the vision encoder after the <s> token:

PromptInsertion(
    modality="image",
    target="<s>",
    insertion="<image>" * image_feature_size,
)

Insert these tokens at the start of the prompt:

PromptInsertion(
    modality="image",
    target=PromptIndexTargets.start(),
    insertion="<image>" * image_feature_size,
)

Insert these tokens after a prefix Images::

PromptInsertion(
    modality="image",
    target=PromptIndexTargets.prefix("Images:"),
    insertion="<image>" * image_feature_size,
)

Insert these tokens at the end of the prompt:

PromptInsertion(
    modality="image",
    target=PromptIndexTargets.end(),
    insertion="<image>" * image_feature_size,
)
Source code in vllm/multimodal/processing.py
@dataclass
class PromptInsertion(PromptUpdate):
    """
    Defines how to insert placeholder tokens into a prompt.

    Example:

    For each image, insert a number of ``<image>`` feature placeholders
    equal to the feature size of the vision encoder after the ``<s>`` token:

    ```python
    PromptInsertion(
        modality="image",
        target="<s>",
        insertion="<image>" * image_feature_size,
    )
    ```

    Insert these tokens at the start of the prompt:

    ```python
    PromptInsertion(
        modality="image",
        target=PromptIndexTargets.start(),
        insertion="<image>" * image_feature_size,
    )
    ```

    Insert these tokens after a prefix ``Images:``:

    ```python
    PromptInsertion(
        modality="image",
        target=PromptIndexTargets.prefix("Images:"),
        insertion="<image>" * image_feature_size,
    )
    ```

    Insert these tokens at the end of the prompt:

    ```python
    PromptInsertion(
        modality="image",
        target=PromptIndexTargets.end(),
        insertion="<image>" * image_feature_size,
    )
    ```
    """

    insertion: PromptUpdateContent = field(repr=False)
    """
    Given the index of the processed item within
    [`modality`][vllm.multimodal.processing.PromptUpdate.modality],
    output the token sequence (or text) to insert right after
    [`target`][vllm.multimodal.processing.PromptUpdate.target].

    For convenience, you can directly pass in the token sequence (or text)
    instead of a function if it does not depend on the input.
    """

    @property
    def content(self) -> PromptUpdateContent:
        return self.insertion

    @property
    def mode(self) -> UpdateMode:
        return UpdateMode.INSERT

content property

content: PromptUpdateContent

insertion class-attribute instance-attribute

insertion: PromptUpdateContent = field(repr=False)

Given the index of the processed item within modality, output the token sequence (or text) to insert right after target.

For convenience, you can directly pass in the token sequence (or text) instead of a function if it does not depend on the input.

mode property

mode: UpdateMode

__init__

__init__(
    modality: str,
    target: PromptUpdateTarget,
    insertion: PromptUpdateContent,
) -> None

PromptReplacement dataclass

Bases: PromptUpdate

Defines how to replace portions of an input prompt with placeholder tokens.

Example:

For each image, replace one <image> input placeholder in the prompt with a number of <image> feature placeholders equal to the feature size of the vision encoder:

PromptReplacement(
    modality="image",
    target="<image>",
    replacement="<image>" * image_feature_size,
)

As above, but further pad the feature placeholders with <image_bos> and <image_eos>, which are not supposed to be passed to the vision encoder:

PromptReplacement(
    modality="image",
    target="<image>",
    replacement=PromptUpdateDetails(
        full="".join([
            "<image_bos>",
            "<image>" * image_feature_size,
            "<image_eos>",
        ]),
        features="<image>" * image_feature_size,
    ),
)

To avoid unnecessary tokenization during prompt replacement, we recommend passing token sequences instead of text:

PromptReplacement(
    modality="image",
    target=[image_token_id],
    replacement=PromptUpdateDetails(
        full=([image_bos_id] + [image_token_id] * image_feature_size
                + [image_eos_id]),
        features=[image_token_id] * image_feature_size,
    ),
)
Source code in vllm/multimodal/processing.py
@dataclass
class PromptReplacement(PromptUpdate):
    """
    Defines how to replace portions of an input prompt with placeholder tokens.

    Example:

    For each image, replace one ``<image>`` input placeholder in the prompt
    with a number of ``<image>`` feature placeholders
    equal to the feature size of the vision encoder:

    ```python
    PromptReplacement(
        modality="image",
        target="<image>",
        replacement="<image>" * image_feature_size,
    )
    ```

    As above, but further pad the feature placeholders with ``<image_bos>``
    and ``<image_eos>``, which are not supposed to be passed to the vision
    encoder:

    ```python
    PromptReplacement(
        modality="image",
        target="<image>",
        replacement=PromptUpdateDetails(
            full="".join([
                "<image_bos>",
                "<image>" * image_feature_size,
                "<image_eos>",
            ]),
            features="<image>" * image_feature_size,
        ),
    )
    ```

    To avoid unnecessary tokenization during prompt replacement,
    we recommend passing token sequences instead of text:

    ```python
    PromptReplacement(
        modality="image",
        target=[image_token_id],
        replacement=PromptUpdateDetails(
            full=([image_bos_id] + [image_token_id] * image_feature_size
                    + [image_eos_id]),
            features=[image_token_id] * image_feature_size,
        ),
    )
    ```
    """

    replacement: PromptUpdateContent = field(repr=False)
    """
    Given the index of the processed item within
    [`modality`][vllm.multimodal.processing.PromptUpdate.modality],
    output the token sequence (or text) to replace
    [`target`][vllm.multimodal.processing.PromptUpdate.target].

    For convenience, you can directly pass in the token sequence (or text)
    instead of a function if it does not depend on the input.
    """

    @property
    def content(self) -> PromptUpdateContent:
        return self.replacement

    @property
    def mode(self) -> UpdateMode:
        return UpdateMode.REPLACE

content property

content: PromptUpdateContent

mode property

mode: UpdateMode

replacement class-attribute instance-attribute

replacement: PromptUpdateContent = field(repr=False)

Given the index of the processed item within modality, output the token sequence (or text) to replace target.

For convenience, you can directly pass in the token sequence (or text) instead of a function if it does not depend on the input.

__init__

__init__(
    modality: str,
    target: PromptUpdateTarget,
    replacement: PromptUpdateContent,
) -> None

PromptTargetMatch

Bases: NamedTuple

Source code in vllm/multimodal/processing.py
class PromptTargetMatch(NamedTuple):
    start_idx: int
    end_idx: int

end_idx instance-attribute

end_idx: int

start_idx instance-attribute

start_idx: int

PromptUpdate dataclass

Bases: ABC

Defines how to update a prompt with placeholder tokens.

Source code in vllm/multimodal/processing.py
@dataclass
class PromptUpdate(ABC):
    """
    Defines how to update a prompt with placeholder tokens.
    """

    modality: str
    """The modality for which the update is made."""

    target: PromptUpdateTarget
    """The token sequence (or text) to update."""

    @property
    @abstractmethod
    def content(self) -> PromptUpdateContent:
        """The placeholder tokens that are part of the update."""
        raise NotImplementedError

    @property
    @abstractmethod
    def mode(self) -> UpdateMode:
        """Defines how to update the prompt."""
        raise NotImplementedError

    def _resolve_target(self, item_idx: int) -> UpdateTarget:
        target = self.target
        if callable(target):
            target = target(item_idx)

        return target

    def _resolve_content(self, item_idx: int) -> PromptUpdateDetails:
        content = self.content
        if callable(content):
            content = content(item_idx)

        if not isinstance(content, PromptUpdateDetails):
            content = PromptUpdateDetails.from_seq(content)

        return content

    def resolve(self, item_idx: int) -> "ResolvedPromptUpdate":
        """
        Given the index of the processed item within
        [`modality`][vllm.multimodal.processing.PromptUpdate.modality],
        output a copy of this object with its lazy attributes resolved.
        """
        return ResolvedPromptUpdate(
            modality=self.modality,
            item_idx=item_idx,
            mode=self.mode,
            target=self._resolve_target(item_idx),
            content=self._resolve_content(item_idx),
        )

content abstractmethod property

content: PromptUpdateContent

The placeholder tokens that are part of the update.

modality instance-attribute

modality: str

The modality for which the update is made.

mode abstractmethod property

mode: UpdateMode

Defines how to update the prompt.

target instance-attribute

target: PromptUpdateTarget

The token sequence (or text) to update.

__init__

__init__(modality: str, target: PromptUpdateTarget) -> None

_resolve_content

_resolve_content(item_idx: int) -> PromptUpdateDetails
Source code in vllm/multimodal/processing.py
def _resolve_content(self, item_idx: int) -> PromptUpdateDetails:
    content = self.content
    if callable(content):
        content = content(item_idx)

    if not isinstance(content, PromptUpdateDetails):
        content = PromptUpdateDetails.from_seq(content)

    return content

_resolve_target

_resolve_target(item_idx: int) -> UpdateTarget
Source code in vllm/multimodal/processing.py
def _resolve_target(self, item_idx: int) -> UpdateTarget:
    target = self.target
    if callable(target):
        target = target(item_idx)

    return target

resolve

resolve(item_idx: int) -> ResolvedPromptUpdate

Given the index of the processed item within modality, output a copy of this object with its lazy attributes resolved.

Source code in vllm/multimodal/processing.py
def resolve(self, item_idx: int) -> "ResolvedPromptUpdate":
    """
    Given the index of the processed item within
    [`modality`][vllm.multimodal.processing.PromptUpdate.modality],
    output a copy of this object with its lazy attributes resolved.
    """
    return ResolvedPromptUpdate(
        modality=self.modality,
        item_idx=item_idx,
        mode=self.mode,
        target=self._resolve_target(item_idx),
        content=self._resolve_content(item_idx),
    )

PromptUpdateDetails dataclass

Bases: Generic[_S]

Details about the token sequence or text that are part of the update.

Source code in vllm/multimodal/processing.py
@dataclass
class PromptUpdateDetails(Generic[_S]):
    """Details about the token sequence or text that are part of the update."""

    full: _S
    """The full content."""

    is_embed: Optional[Callable[[AnyTokenizer, PromptSeq],
                                torch.Tensor]] = None
    """
    Given [`full`][vllm.multimodal.processing.PromptUpdateDetails.full],
    return a boolean mask of shape `(len(full),)` indicating which positions
    of `full` to assign embeddings to.

    `None` (default) means to assign embeddings to all positions of `full`.

    The embeddings are obtained by calling
    [`SupportsMultiModal.get_multimodal_embeddings`][vllm.model_executor.models.interfaces.SupportsMultiModal.get_multimodal_embeddings].
    """

    @staticmethod
    def from_seq(seq: _S) -> "PromptUpdateDetails[_S]":
        return PromptUpdateDetails(full=seq)

    @staticmethod
    def select_text(
        seq: _S,
        embed_text: str,
    ) -> "PromptUpdateDetails[_S]":

        def is_embed(tokenizer: AnyTokenizer, full: PromptSeq) -> torch.Tensor:
            embed_token_ids = encode_tokens(tokenizer, embed_text)
            token_ids = _seq2tokens(tokenizer, full)

            return torch.isin(
                torch.tensor(token_ids),
                torch.tensor(embed_token_ids),
            )

        return PromptUpdateDetails(full=seq, is_embed=is_embed)

    @staticmethod
    def select_token_id(
        seq: _S,
        embed_token_id: int,
    ) -> "PromptUpdateDetails[_S]":

        def is_embed(tokenizer: AnyTokenizer, full: PromptSeq) -> torch.Tensor:
            token_ids = _seq2tokens(tokenizer, full)

            return torch.tensor(token_ids) == embed_token_id

        return PromptUpdateDetails(full=seq, is_embed=is_embed)
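
A sketch showing how the selectors are typically combined with a prompt update; the token IDs and feature size are assumed values:

```python
# Mark only the `<image>` tokens (not the surrounding BOS/EOS markers)
# as positions that receive multi-modal embeddings.
details = PromptUpdateDetails.select_token_id(
    [image_bos_id] + [image_token_id] * image_feature_size + [image_eos_id],
    embed_token_id=image_token_id,
)

PromptReplacement(
    modality="image",
    target=[image_token_id],
    replacement=details,
)
```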

full instance-attribute

full: _S

The full content.

is_embed class-attribute instance-attribute

is_embed: Optional[
    Callable[[AnyTokenizer, PromptSeq], Tensor]
] = None

Given full, return a boolean mask of shape (len(full),) indicating which positions of full to assign embeddings to.

None (default) means to assign embeddings to all positions of full.

The embeddings are obtained by calling SupportsMultiModal.get_multimodal_embeddings.

__init__

__init__(
    full: _S,
    is_embed: Optional[
        Callable[[AnyTokenizer, PromptSeq], Tensor]
    ] = None,
) -> None

from_seq staticmethod

from_seq(seq: _S) -> PromptUpdateDetails[_S]
Source code in vllm/multimodal/processing.py
@staticmethod
def from_seq(seq: _S) -> "PromptUpdateDetails[_S]":
    return PromptUpdateDetails(full=seq)

select_text staticmethod

select_text(
    seq: _S, embed_text: str
) -> PromptUpdateDetails[_S]
Source code in vllm/multimodal/processing.py
@staticmethod
def select_text(
    seq: _S,
    embed_text: str,
) -> "PromptUpdateDetails[_S]":

    def is_embed(tokenizer: AnyTokenizer, full: PromptSeq) -> torch.Tensor:
        embed_token_ids = encode_tokens(tokenizer, embed_text)
        token_ids = _seq2tokens(tokenizer, full)

        return torch.isin(
            torch.tensor(token_ids),
            torch.tensor(embed_token_ids),
        )

    return PromptUpdateDetails(full=seq, is_embed=is_embed)

select_token_id staticmethod

select_token_id(
    seq: _S, embed_token_id: int
) -> PromptUpdateDetails[_S]
Source code in vllm/multimodal/processing.py
@staticmethod
def select_token_id(
    seq: _S,
    embed_token_id: int,
) -> "PromptUpdateDetails[_S]":

    def is_embed(tokenizer: AnyTokenizer, full: PromptSeq) -> torch.Tensor:
        token_ids = _seq2tokens(tokenizer, full)

        return torch.tensor(token_ids) == embed_token_id

    return PromptUpdateDetails(full=seq, is_embed=is_embed)

ResolvedPromptUpdate dataclass

A PromptUpdate with its lazy attributes resolved, apart from those related to tokenization.

Source code in vllm/multimodal/processing.py
@dataclass(frozen=True)
class ResolvedPromptUpdate:
    """
    A [`PromptUpdate`][vllm.multimodal.processing.PromptUpdate] with its
    lazy attributes resolved, apart from those related to tokenization.
    """

    modality: str
    """The modality for which the update is made."""

    item_idx: int
    """The index within `modality` of the item this update pertains to."""

    mode: UpdateMode
    """Defines how to update the prompt."""

    target: UpdateTarget
    """The token sequence (or text) to update."""

    content: PromptUpdateDetails = field(repr=False)
    """The placeholder tokens that are part of the update."""

    def iter_token_matches(
        self,
        prompt: list[int],
        tokenizer: AnyTokenizer,
        *,
        start_idx: int = 0,
    ) -> Generator[PromptTargetMatch]:
        """Yield each instance of `self.target` found in `prompt`."""
        target = self.target

        if isinstance(target, PromptIndex):
            match_idx = target.get_match_index(tokenizer, prompt, start_idx)
            if match_idx is not None:
                yield PromptTargetMatch(match_idx, match_idx)

            return

        target_token_ids = _seq2tokens(tokenizer, target)

        for match in iter_token_matches(prompt,
                                        target_token_ids,
                                        start_idx=start_idx):
            yield PromptTargetMatch(match.start_idx, match.end_idx)

    def iter_text_matches(
        self,
        prompt: str,
        tokenizer: AnyTokenizer,
        *,
        start_idx: int = 0,
    ) -> Generator[PromptTargetMatch]:
        """Yield each instance of `self.target` found in `prompt`."""
        target = self.target

        if isinstance(target, PromptIndex):
            match_idx = target.get_match_index(tokenizer, prompt, start_idx)
            if match_idx is not None:
                yield PromptTargetMatch(match_idx, match_idx)

            return

        target_text = _seq2text(tokenizer, target)

        for match in re.finditer(re.escape(target_text), prompt,
                                 pos=start_idx):
            yield PromptTargetMatch(match.start(), match.end())

    def iter_matches(
        self,
        prompt: Union[list[int], str],
        tokenizer: AnyTokenizer,
        *,
        start_idx: int = 0,
    ) -> Generator[PromptTargetMatch]:
        """Yield each instance of `self.target` found in `prompt`."""
        if isinstance(prompt, str):
            return self.iter_text_matches(prompt,
                                          tokenizer,
                                          start_idx=start_idx)

        return self.iter_token_matches(prompt, tokenizer, start_idx=start_idx)

content class-attribute instance-attribute

content: PromptUpdateDetails = field(repr=False)

The placeholder tokens that are part of the update.

item_idx instance-attribute

item_idx: int

The index within modality of the item this update pertains to.

modality instance-attribute

modality: str

The modality for which the update is made.

mode instance-attribute

mode: UpdateMode

Defines how to update the prompt.

target instance-attribute

target: UpdateTarget

The token sequence (or text) to update.

__init__

__init__(
    modality: str,
    item_idx: int,
    mode: UpdateMode,
    target: UpdateTarget,
    content: PromptUpdateDetails,
) -> None

iter_matches

iter_matches(
    prompt: Union[list[int], str],
    tokenizer: AnyTokenizer,
    *,
    start_idx: int = 0,
) -> Generator[PromptTargetMatch]

Yield each instance of self.target found in prompt.

Source code in vllm/multimodal/processing.py
def iter_matches(
    self,
    prompt: Union[list[int], str],
    tokenizer: AnyTokenizer,
    *,
    start_idx: int = 0,
) -> Generator[PromptTargetMatch]:
    """Yield each instance of `self.target` found in `prompt`."""
    if isinstance(prompt, str):
        return self.iter_text_matches(prompt,
                                      tokenizer,
                                      start_idx=start_idx)

    return self.iter_token_matches(prompt, tokenizer, start_idx=start_idx)
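
A hedged sketch of scanning a tokenized prompt with a resolved update; `tokenizer`, `prompt_token_ids`, and `image_token_id` are assumed to be defined:

```python
update = PromptReplacement(
    modality="image",
    target=[image_token_id],
    replacement=[image_token_id] * 4,
).resolve(item_idx=0)

for match in update.iter_matches(prompt_token_ids, tokenizer):
    # Each match is a PromptTargetMatch with start_idx and end_idx.
    print(match.start_idx, match.end_idx)
```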

iter_text_matches

iter_text_matches(
    prompt: str,
    tokenizer: AnyTokenizer,
    *,
    start_idx: int = 0,
) -> Generator[PromptTargetMatch]

Yield each instance of self.target found in prompt.

Source code in vllm/multimodal/processing.py
def iter_text_matches(
    self,
    prompt: str,
    tokenizer: AnyTokenizer,
    *,
    start_idx: int = 0,
) -> Generator[PromptTargetMatch]:
    """Yield each instance of `self.target` found in `prompt`."""
    target = self.target

    if isinstance(target, PromptIndex):
        match_idx = target.get_match_index(tokenizer, prompt, start_idx)
        if match_idx is not None:
            yield PromptTargetMatch(match_idx, match_idx)

        return

    target_text = _seq2text(tokenizer, target)

    for match in re.finditer(re.escape(target_text), prompt,
                             pos=start_idx):
        yield PromptTargetMatch(match.start(), match.end())

iter_token_matches

iter_token_matches(
    prompt: list[int],
    tokenizer: AnyTokenizer,
    *,
    start_idx: int = 0,
) -> Generator[PromptTargetMatch]

Yield each instance of self.target found in prompt.

Source code in vllm/multimodal/processing.py
def iter_token_matches(
    self,
    prompt: list[int],
    tokenizer: AnyTokenizer,
    *,
    start_idx: int = 0,
) -> Generator[PromptTargetMatch]:
    """Yield each instance of `self.target` found in `prompt`."""
    target = self.target

    if isinstance(target, PromptIndex):
        match_idx = target.get_match_index(tokenizer, prompt, start_idx)
        if match_idx is not None:
            yield PromptTargetMatch(match_idx, match_idx)

        return

    target_token_ids = _seq2tokens(tokenizer, target)

    for match in iter_token_matches(prompt,
                                    target_token_ids,
                                    start_idx=start_idx):
        yield PromptTargetMatch(match.start_idx, match.end_idx)

UpdateMode

Bases: str, Enum

Source code in vllm/multimodal/processing.py
class UpdateMode(str, Enum):
    INSERT = "insert"
    REPLACE = "replace"

INSERT class-attribute instance-attribute

INSERT = 'insert'

REPLACE class-attribute instance-attribute

REPLACE = 'replace'

_GetMatchIndex

Bases: Protocol

Source code in vllm/multimodal/processing.py
class _GetMatchIndex(Protocol):

    def __call__(
        self,
        tokenizer: AnyTokenizer,
        prompt: PromptSeq,
        start_idx: int = 0,
    ) -> Optional[int]:
        ...

__call__

__call__(
    tokenizer: AnyTokenizer,
    prompt: PromptSeq,
    start_idx: int = 0,
) -> Optional[int]
Source code in vllm/multimodal/processing.py
def __call__(
    self,
    tokenizer: AnyTokenizer,
    prompt: PromptSeq,
    start_idx: int = 0,
) -> Optional[int]:
    ...

_HasModalityAttr

Bases: Protocol

Source code in vllm/multimodal/processing.py
class _HasModalityAttr(Protocol):
    modality: str

modality instance-attribute

modality: str

_HasModalityProp

Bases: Protocol

Source code in vllm/multimodal/processing.py
class _HasModalityProp(Protocol):

    @property
    def modality(self) -> str:
        ...

modality property

modality: str

_TokenMatch

Bases: NamedTuple

Source code in vllm/multimodal/processing.py
class _TokenMatch(NamedTuple):
    start_idx: int
    end_idx: int

end_idx instance-attribute

end_idx: int

start_idx instance-attribute

start_idx: int

_apply_matches

_apply_matches(
    prompt: _S,
    mm_prompt_updates: MultiModalPromptUpdates,
    tokenizer: AnyTokenizer,
) -> tuple[list[_S], MultiModalPromptUpdatesApplyResult]
Source code in vllm/multimodal/processing.py
def _apply_matches(
    prompt: _S,
    mm_prompt_updates: "MultiModalPromptUpdates",
    tokenizer: AnyTokenizer,
) -> tuple[list[_S], "MultiModalPromptUpdatesApplyResult"]:
    prompt_len = len(prompt)

    out_seqs = list[Union[str, list[int]]]()
    out_result: MultiModalPromptUpdatesApplyResult = {
        m: [None] * len(items)
        for m, items in mm_prompt_updates.items()
    }

    start_idx = prev_end_idx = 0
    while start_idx < max(prompt_len, 1):  # Allow inserts into empty prompt
        found = False

        mode, matches_to_apply = _find_matches(
            prompt,
            mm_prompt_updates,
            tokenizer,
            prev_end_idx=prev_end_idx,
            current_result=out_result,
        )

        if mode is not None:
            for (modality, item_idx), (match, update_idx) in matches_to_apply:
                found = True

                matched_update = mm_prompt_updates[modality][item_idx][
                    update_idx]
                matched_content = matched_update.content.full

                if mode == UpdateMode.INSERT:
                    end_idx_to_insert = match.end_idx
                elif mode == UpdateMode.REPLACE:
                    end_idx_to_insert = match.start_idx
                else:
                    assert_never(mode)

                out_seqs.append(prompt[prev_end_idx:end_idx_to_insert])
                out_seqs.append(
                    _seq2text(tokenizer, matched_content
                              ) if isinstance(prompt, str) else _seq2tokens(
                                  tokenizer, matched_content))
                out_result[modality][item_idx] = update_idx

                # Exclude overlapping matches
                start_idx = prev_end_idx = match.end_idx

        if not found:
            start_idx += 1

    out_seqs.append(prompt[prev_end_idx:])

    return cast(list[_S], out_seqs), out_result
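
The second return value follows the MultiModalPromptUpdatesApplyResult shape described at the top of this page. A hypothetical result and one way it might be inspected:

result = {
    "image": [0, None],  # update 0 applied to the first image item, none to the second
    "audio": [1],        # update 1 applied to the only audio item
}

unapplied = {
    modality: [i for i, v in enumerate(vals) if v is None]
    for modality, vals in result.items()
}
print(unapplied)  # {'image': [1], 'audio': []}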

_cached_decode cached

_cached_decode(
    tokenizer: AnyTokenizer,
    token_ids: tuple[int, ...],
    *,
    skip_special_tokens: Optional[bool] = None,
) -> str
Source code in vllm/multimodal/processing.py
@lru_cache(maxsize=2048)
def _cached_decode(
    tokenizer: AnyTokenizer,
    token_ids: tuple[int, ...],
    *,
    skip_special_tokens: Optional[bool] = None,
) -> str:
    return decode_tokens(tokenizer,
                         list(token_ids),
                         skip_special_tokens=skip_special_tokens)

_cached_encode cached

_cached_encode(
    tokenizer: AnyTokenizer,
    text: str,
    *,
    add_special_tokens: Optional[bool] = None,
) -> list[int]
Source code in vllm/multimodal/processing.py
@lru_cache(maxsize=2048)
def _cached_encode(
    tokenizer: AnyTokenizer,
    text: str,
    *,
    add_special_tokens: Optional[bool] = None,
) -> list[int]:
    return encode_tokens(tokenizer,
                         text,
                         add_special_tokens=add_special_tokens)
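
Both helpers rely on functools.lru_cache, which requires hashable arguments; that is why the token IDs are passed as a tuple. A minimal standalone sketch of the same pattern (the join is just a stand-in for decoding):

from functools import lru_cache

@lru_cache(maxsize=2048)
def cached_join(token_ids: tuple[int, ...]) -> str:
    return " ".join(map(str, token_ids))  # stand-in for decode_tokens

print(cached_join((101, 2023, 102)))  # computed on the first call
print(cached_join((101, 2023, 102)))  # served from the cache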

_find_matches

_find_matches(
    prompt: _S,
    mm_prompt_updates: MultiModalPromptUpdates,
    tokenizer: AnyTokenizer,
    *,
    prev_end_idx: int = 0,
    current_result: MultiModalPromptUpdatesApplyResult,
) -> tuple[Optional[UpdateMode], list[_MatchToApply]]
Source code in vllm/multimodal/processing.py
def _find_matches(
    prompt: _S,
    mm_prompt_updates: "MultiModalPromptUpdates",
    tokenizer: AnyTokenizer,
    *,
    prev_end_idx: int = 0,
    current_result: "MultiModalPromptUpdatesApplyResult",
) -> tuple[Optional[UpdateMode], list[_MatchToApply]]:
    mode: Optional[UpdateMode] = None
    mm_matches = dict[tuple[str, int], tuple[PromptTargetMatch, int]]()

    for modality, modality_updates in mm_prompt_updates.items():
        for item_idx, item_updates in enumerate(modality_updates):
            if current_result[modality][item_idx] is not None:
                continue  # Updates have already been applied for this item

            for update_idx, update in enumerate(item_updates):
                if (modality, item_idx) in mm_matches:
                    break  # Already found a match for this item

                for match in update.iter_matches(
                        prompt,
                        tokenizer,
                        start_idx=prev_end_idx,
                ):
                    # All matches should share the same mode
                    if mode is None:
                        mode = update.mode
                    elif mode != update.mode:
                        continue

                    mm_matches[(modality, item_idx)] = match, update_idx
                    break  # Get only the first valid match per item

    # Prioritize earlier matches
    matches_to_apply = sorted(mm_matches.items(), key=lambda item: item[1][0])

    # To avoid conflicts, only replace one non-empty item at a time
    if mode == UpdateMode.REPLACE:
        matches_to_apply_ = list[_MatchToApply]()
        has_non_empty_matches = False

        for item in matches_to_apply:
            _, (match, _) = item
            if match.start_idx == match.end_idx:
                matches_to_apply_.append(item)
            elif not has_non_empty_matches:
                has_non_empty_matches = True
                matches_to_apply_.append(item)

        matches_to_apply = matches_to_apply_

    return mode, matches_to_apply
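
A standalone sketch of the REPLACE-mode filtering rule above: empty matches (where start_idx == end_idx) all pass through, but at most one non-empty match is applied per pass; the match tuples are made up.

matches = [(0, 0), (2, 5), (5, 5), (7, 9)]  # hypothetical (start_idx, end_idx) pairs

kept, has_non_empty = [], False
for start, end in matches:
    if start == end:
        kept.append((start, end))   # empty matches are always kept
    elif not has_non_empty:
        has_non_empty = True
        kept.append((start, end))   # only the first non-empty match is kept
print(kept)  # [(0, 0), (2, 5), (5, 5)]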

_iter_placeholders

_iter_placeholders(
    prompt: list[int],
    mm_prompt_updates: MultiModalPromptUpdates,
    tokenizer: AnyTokenizer,
) -> Iterable[PlaceholderFeaturesInfo]

Yield each set of placeholder tokens found in prompt.

Matches are exclusive even when multiple modalities share the same placeholder tokens. In that case, the modality that appears earlier in mm_prompt_updates takes priority.

Note that empty matches are ignored.

Source code in vllm/multimodal/processing.py
def _iter_placeholders(
    prompt: list[int],
    mm_prompt_updates: "MultiModalPromptUpdates",
    tokenizer: AnyTokenizer,
) -> Iterable[PlaceholderFeaturesInfo]:
    """
    Yield each set of placeholder tokens found in `prompt`.

    Matches are exclusive even when multiple modalities share
    the same placeholder tokens. In that case, the modality that
    appears earlier in `mm_prompt_updates` takes priority.

    Note that empty matches are ignored.
    """
    prompt_len = len(prompt)
    mm_item_counts = {m: len(items) for m, items in mm_prompt_updates.items()}

    item_idx_by_modality = defaultdict[str, int](lambda: 0)

    start_idx = 0
    while start_idx < prompt_len:
        found = False

        for modality, modality_updates in mm_prompt_updates.items():
            item_idx = item_idx_by_modality[modality]
            if item_idx >= mm_item_counts.get(modality, 0):
                continue

            for update in modality_updates[item_idx]:
                content = update.content
                content_tokens_full = _seq2tokens(tokenizer, content.full)
                content_len_full = len(content_tokens_full)
                end_idx_full = start_idx + content_len_full

                if content_len_full == 0 or end_idx_full > prompt_len:
                    continue

                if prompt[start_idx:end_idx_full] == content_tokens_full:
                    content_is_embed = content.is_embed
                    if content_is_embed is not None:
                        content_is_embed = content_is_embed(
                            tokenizer, content.full)

                    yield PlaceholderFeaturesInfo(
                        modality=modality,
                        item_idx=item_idx,
                        start_idx=start_idx,
                        tokens=content_tokens_full,
                        is_embed=content_is_embed,
                    )

                    # Exclude overlapping matches
                    start_idx = end_idx_full
                    item_idx_by_modality[modality] += 1
                    found = True
                    break

            if found:
                break  # Go back to the outer while loop

        if not found:
            start_idx += 1

_seq2text

_seq2text(tokenizer: AnyTokenizer, seq: PromptSeq) -> str
Source code in vllm/multimodal/processing.py
def _seq2text(tokenizer: AnyTokenizer, seq: PromptSeq) -> str:
    if isinstance(seq, str):
        return seq

    return _cached_decode(tokenizer, tuple(seq))

_seq2tokens

_seq2tokens(
    tokenizer: AnyTokenizer, seq: PromptSeq
) -> list[int]
Source code in vllm/multimodal/processing.py
def _seq2tokens(tokenizer: AnyTokenizer, seq: PromptSeq) -> list[int]:
    if isinstance(seq, str):
        return _cached_encode(tokenizer, seq, add_special_tokens=False)

    return seq
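
Example usage of the two helpers, assuming a HuggingFace tokenizer (which satisfies AnyTokenizer) and that these private names stay importable; the model name is only an example.

from transformers import AutoTokenizer

from vllm.multimodal.processing import _seq2text, _seq2tokens

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # any HF tokenizer

print(_seq2text(tokenizer, "hello world"))          # text passes through unchanged
token_ids = _seq2tokens(tokenizer, "hello world")   # encoded without special tokens
print(token_ids, _seq2text(tokenizer, token_ids))   # decoded back via the cache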

apply_text_matches

apply_text_matches(
    prompt: str,
    mm_prompt_updates: MultiModalPromptUpdates,
    tokenizer: AnyTokenizer,
) -> tuple[str, MultiModalPromptUpdatesApplyResult]

Apply the updates in mm_prompt_updates to prompt.

Matches are exclusive even when multiple modalities share the same placeholder tokens. In that case, the modality that appears earlier in mm_prompt_updates takes priority.

Source code in vllm/multimodal/processing.py
def apply_text_matches(
    prompt: str,
    mm_prompt_updates: "MultiModalPromptUpdates",
    tokenizer: AnyTokenizer,
) -> tuple[str, "MultiModalPromptUpdatesApplyResult"]:
    """
    Apply the updates in `mm_prompt_updates` to `prompt`.

    Matches are exclusive even when multiple modalities share
    the same placeholder tokens. In that case, the modality that
    appears earlier in `mm_prompt_updates` takes priority.
    """
    texts, result = _apply_matches(prompt, mm_prompt_updates, tokenizer)

    return "".join(texts), result

apply_token_matches

apply_token_matches(
    prompt: list[int],
    mm_prompt_updates: MultiModalPromptUpdates,
    tokenizer: AnyTokenizer,
) -> tuple[list[int], MultiModalPromptUpdatesApplyResult]

Apply the updates in mm_prompt_updates to prompt.

Matches are exclusive even when multiple modalities share the same placeholder tokens. In that case, the modality that appears earlier in mm_prompt_updates takes priority.

Source code in vllm/multimodal/processing.py
def apply_token_matches(
    prompt: list[int],
    mm_prompt_updates: "MultiModalPromptUpdates",
    tokenizer: AnyTokenizer,
) -> tuple[list[int], "MultiModalPromptUpdatesApplyResult"]:
    """
    Apply the updates in `mm_prompt_updates` to `prompt`.

    Matches are exclusive even when multiple modalities share
    the same placeholder tokens. In that case, the modality that
    appears earlier in `mm_prompt_updates` takes priority.
    """
    token_id_seqs, result = _apply_matches(prompt, mm_prompt_updates,
                                           tokenizer)

    return flatten_2d_lists(token_id_seqs), result
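
The token variant returns the matched segments flattened into a single sequence; a minimal sketch of that final step with made-up segments:

segments = [[1, 2], [101, 101, 101], [3, 4]]  # hypothetical pre-match, content, post-match
flat = [tok for seg in segments for tok in seg]
print(flat)  # [1, 2, 101, 101, 101, 3, 4]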

find_mm_placeholders

find_mm_placeholders(
    prompt: list[int],
    mm_prompt_updates: MultiModalPromptUpdates,
    tokenizer: AnyTokenizer,
) -> Mapping[str, list[PlaceholderFeaturesInfo]]
Source code in vllm/multimodal/processing.py
def find_mm_placeholders(
    prompt: list[int],
    mm_prompt_updates: "MultiModalPromptUpdates",
    tokenizer: AnyTokenizer,
) -> Mapping[str, list[PlaceholderFeaturesInfo]]:
    it = _iter_placeholders(prompt, mm_prompt_updates, tokenizer)
    return dict(full_groupby_modality(it))

full_groupby_modality

full_groupby_modality(
    values: Iterable[_M],
) -> ItemsView[str, list[_M]]

Convenience function to apply full_groupby based on modality.

Source code in vllm/multimodal/processing.py
def full_groupby_modality(values: Iterable[_M]) -> ItemsView[str, list[_M]]:
    """Convenience function to apply [`full_groupby`][vllm.utils.full_groupby]
    based on modality."""
    return full_groupby(values, key=lambda x: x.modality)
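
A standalone sketch of the same "full" grouping behaviour: unlike itertools.groupby, the input does not need to be pre-sorted and every key keeps all of its items; the Item class is a hypothetical stand-in for any object with a modality attribute.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Item:  # hypothetical stand-in for PlaceholderFeaturesInfo
    modality: str
    start_idx: int

items = [Item("image", 3), Item("audio", 10), Item("image", 20)]

groups = defaultdict(list)
for item in items:
    groups[item.modality].append(item)

print({m: [i.start_idx for i in v] for m, v in groups.items()})
# {'image': [3, 20], 'audio': [10]}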

iter_token_matches

iter_token_matches(
    token_ids: list[int],
    match_ids: list[int],
    *,
    start_idx: int = 0,
) -> Generator[_TokenMatch]

Yield each occurrence of match_ids in token_ids.

Note that empty matches are ignored.

Source code in vllm/multimodal/processing.py
def iter_token_matches(
    token_ids: list[int],
    match_ids: list[int],
    *,
    start_idx: int = 0,
) -> Generator[_TokenMatch]:
    """
    Yield each occurrence of `match_ids` in `token_ids`.

    Note that empty matches are ignored.
    """
    prompt_len = len(token_ids)
    match_len = len(match_ids)

    if match_len == 0:
        return

    while start_idx < prompt_len - match_len + 1:
        end_idx = start_idx + match_len

        if token_ids[start_idx:end_idx] == match_ids:
            yield _TokenMatch(start_idx=start_idx, end_idx=end_idx)

            # Exclude overlapping matches
            start_idx = end_idx
        else:
            start_idx += 1
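
Example usage, assuming the function is importable from this module; note how the run of three 5s yields only one match because overlapping matches are skipped.

from vllm.multimodal.processing import iter_token_matches

token_ids = [1, 5, 5, 1, 5, 5, 5]
matches = list(iter_token_matches(token_ids, [5, 5]))
print([(m.start_idx, m.end_idx) for m in matches])  # [(1, 3), (4, 6)]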

replace_token_matches

replace_token_matches(
    token_ids: list[int],
    match_ids: list[int],
    new_ids: list[int],
) -> list[int]

Replace each occurrence of match_ids in token_ids with new_ids.

Note that empty matches are ignored.

Source code in vllm/multimodal/processing.py
def replace_token_matches(
    token_ids: list[int],
    match_ids: list[int],
    new_ids: list[int],
) -> list[int]:
    """
    Replace each occurrence of `match_ids` in `token_ids`
    with `new_ids`.

    Note that empty matches are ignored.
    """
    out_seqs = list[list[int]]()
    prev_end_idx = 0

    for match in iter_token_matches(token_ids, match_ids):
        start_idx = match.start_idx
        end_idx = match.end_idx

        out_seqs.append(token_ids[prev_end_idx:start_idx])
        out_seqs.append(new_ids)
        prev_end_idx = end_idx

    out_seqs.append(token_ids[prev_end_idx:])

    return flatten_2d_lists(out_seqs)
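
Example usage, assuming the function is importable from this module; the placeholder token ID and its expansion are made up.

from vllm.multimodal.processing import replace_token_matches

token_ids = [1, 32000, 2, 32000, 3]
new_ids = [32000, 32000]  # hypothetical expanded placeholder
print(replace_token_matches(token_ids, [32000], new_ids))
# [1, 32000, 32000, 2, 32000, 32000, 3]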