vllm.engine.output_processor.interfaces
SequenceGroupOutputProcessor
Bases: ABC
Interface for logic that processes new token ids in sequence groups, managing detokenization, stop checking, and freeing/forking sequences with the scheduler.
This is highly coupled with the LLMEngine and should be seen as an extension of it. The logic is separated to simplify the LLMEngine class and allow separate implementations for single-step decoding (which supports beam search sequence forking) and multi-step decoding (which does not support beam search, but does support speculative decoding).
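The shape of this interface can be sketched as a plain ABC. The stand-in `SequenceGroup` and `SequenceGroupOutput` classes below are placeholders for vLLM's real types (which live elsewhere in the codebase), so this is an illustration of the contract rather than the actual definition:

```python
from abc import ABC, abstractmethod
from typing import List


# Placeholder types for illustration only; vLLM defines the real
# SequenceGroup and SequenceGroupOutput classes in vllm.sequence.
class SequenceGroup: ...
class SequenceGroupOutput: ...


class SequenceGroupOutputProcessor(ABC):
    """Processes new token ids for sequence groups each decoding step."""

    @abstractmethod
    def process_outputs(self, sequence_group: SequenceGroup,
                        outputs: List[SequenceGroupOutput],
                        is_async: bool) -> None:
        """Detokenize, stop-check, and free/fork sequences."""

    @abstractmethod
    def process_prompt_logprob(self, seq_group: SequenceGroup,
                               outputs: List[SequenceGroupOutput]) -> None:
        """Attach prompt logprobs from outputs to the sequence group."""
```

Because both methods are abstract, the engine can swap implementations (e.g. the single-step processor) without changing its own control flow.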
Source code in vllm/engine/output_processor/interfaces.py
create_output_processor staticmethod
create_output_processor(
    scheduler_config: SchedulerConfig,
    detokenizer: Detokenizer,
    scheduler: List[Scheduler],
    seq_counter: Counter,
    get_tokenizer_for_seq: Callable[[Sequence], AnyTokenizer],
    stop_checker: StopChecker,
)
Create an output processor. Multi-step scheduling is no longer supported, so this always returns a single-step output processor.
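A minimal sketch of that factory behavior, using a stand-in `SingleStepOutputProcessor` class (the argument handling here is an assumption for illustration, not vLLM's actual constructor signature):

```python
class SingleStepOutputProcessor:
    """Stand-in for vLLM's single-step output processor implementation."""

    def __init__(self, detokenizer, scheduler, seq_counter, stop_checker):
        self.detokenizer = detokenizer
        self.scheduler = scheduler
        self.seq_counter = seq_counter
        self.stop_checker = stop_checker


def create_output_processor(scheduler_config, detokenizer, scheduler,
                            seq_counter, get_tokenizer_for_seq, stop_checker):
    # Multi-step scheduling was removed, so scheduler_config no longer
    # selects between implementations: the single-step processor is
    # returned unconditionally.
    return SingleStepOutputProcessor(detokenizer, scheduler,
                                     seq_counter, stop_checker)
```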
process_outputs abstractmethod
process_outputs(
    sequence_group: SequenceGroup,
    outputs: List[SequenceGroupOutput],
    is_async: bool,
) -> None
Process new token ids for the sequence group. Handles logic such as detokenization, stop checking, and freeing/forking sequences in the scheduler.
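The per-step work described above (append token, detokenize, stop-check, free) can be sketched with a toy implementation. The dict-based sequences, the vocab-lookup detokenizer, and the `freed` list standing in for the scheduler are all simplifying assumptions, not vLLM's real data structures:

```python
EOS = 2  # stand-in end-of-sequence token id


class SimpleOutputProcessor:
    """Toy single-step processor: append tokens, detokenize, stop-check."""

    def __init__(self, vocab, freed):
        self.vocab = vocab  # token id -> text piece (stand-in detokenizer)
        self.freed = freed  # records freed sequences (stand-in scheduler)

    def process_outputs(self, sequence_group, outputs, is_async=False):
        for seq, token_id in zip(sequence_group, outputs):
            # Append the newly sampled token id to the sequence.
            seq["token_ids"].append(token_id)
            # Incremental detokenization of the new token.
            seq["text"] += self.vocab.get(token_id, "")
            # Stop checking: here only EOS; real stop checking also
            # covers stop strings and max-length limits.
            if token_id == EOS:
                seq["finished"] = True
                # "Free" the finished sequence with the scheduler.
                self.freed.append(seq)
```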
process_prompt_logprob abstractmethod
process_prompt_logprob(
    seq_group: SequenceGroup,
    outputs: List[SequenceGroupOutput],
) -> None
Attach the prompt logprobs received in outputs to seq_group.
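A toy sketch of that update, accumulating prompt logprobs from step outputs onto the group. The dict-based `seq_group` and `outputs` shapes are assumptions for illustration; vLLM uses `SequenceGroup` and `SequenceGroupOutput` objects:

```python
def process_prompt_logprob(seq_group, outputs):
    """Append any prompt logprobs carried by step outputs to the group."""
    for output in outputs:
        # During chunked prefill, some steps may carry no prompt logprobs.
        if output.get("prompt_logprobs") is not None:
            seq_group.setdefault("prompt_logprobs", []).extend(
                output["prompt_logprobs"])
```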