# Multi-Modality
vLLM provides experimental support for multi-modal models through the `vllm.multimodal` package.
Multi-modal inputs can be passed alongside text and token prompts to supported models
via the `multi_modal_data` field in `vllm.inputs.PromptType`.
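For example, an image can be passed together with a text prompt in a single `generate` call. The sketch below assumes a LLaVA-1.5 checkpoint and a local image file, both of which are illustrative; any supported vision-language model and image source would work the same way.

```python
from PIL import Image

from vllm import LLM

# Load a vision-language model supported by vLLM
# (model name is illustrative).
llm = LLM(model="llava-hf/llava-1.5-7b-hf")

# The <image> placeholder marks where the image features are inserted.
prompt = "USER: <image>\nWhat is shown in this image?\nASSISTANT:"
image = Image.open("example.jpg")  # hypothetical local file

# Pass the image alongside the text prompt via multi_modal_data.
outputs = llm.generate({
    "prompt": prompt,
    "multi_modal_data": {"image": image},
})

for output in outputs:
    print(output.outputs[0].text)
```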
Looking to add your own multi-modal model? Please follow the instructions listed here.