Wrappers
Reference information for the language model Wrappers API.
eva.language.models.wrappers.HuggingFaceModel
Bases: LanguageModel
Wrapper class for loading HuggingFace transformers models using pipelines.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_name_or_path` | `str` | The model name or path to load the model from. This can be a local path or a model name from the Hugging Face Hub. | *required* |
| `task` | `Literal['text-generation']` | The pipeline task. Defaults to `"text-generation"`. | `'text-generation'` |
| `model_kwargs` | `Dict[str, Any] \| None` | Additional arguments for configuring the pipeline. | `None` |
| `system_prompt` | `str \| None` | System prompt to use. | `None` |
| `generation_kwargs` | `Dict[str, Any] \| None` | Additional generation parameters (temperature, max_length, etc.). | `None` |
| `chat_mode` | `bool` | Whether the specified model expects chat-style messages. If set to `False`, the model is assumed to be a standard text-completion model and will expect plain text string inputs. | `True` |
Source code in src/eva/language/models/wrappers/huggingface.py
load_model
Loads the model as a Hugging Face pipeline.
format_inputs
Formats inputs for HuggingFace models.
Note: If multiple system messages are present, they will be combined into a single message, given that many models only support a single system prompt.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `batch` | `TextBatch` | A batch of text and image inputs. | *required* |
Returns:

| Type | Description |
|---|---|
| `List[List[Dict[str, Any]]] \| List[str]` | When in chat mode, returns a batch of message series following OpenAI's API format `{"role": "user", "content": "..."}`; for non-chat models, returns a list of plain text strings. |
Source code in src/eva/language/models/wrappers/huggingface.py
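The two input shapes described above, and the merging of multiple system messages noted earlier, can be sketched in plain Python. This is an illustrative sketch, not the library's implementation; the helper names are hypothetical.

```python
# Illustrative sketch (not the library code): the two input shapes that
# format_inputs is documented to produce, plus the merging of multiple
# system messages into a single one, as many models support only one.
from typing import Any, Dict, List


def format_chat_inputs(
    texts: List[str], system_prompts: List[str]
) -> List[List[Dict[str, Any]]]:
    """Chat mode: one OpenAI-style message list per batch item."""
    # Merge all system prompts into a single system message.
    merged_system = "\n".join(p for p in system_prompts if p)
    batch: List[List[Dict[str, Any]]] = []
    for text in texts:
        messages: List[Dict[str, Any]] = []
        if merged_system:
            messages.append({"role": "system", "content": merged_system})
        messages.append({"role": "user", "content": text})
        batch.append(messages)
    return batch


def format_plain_inputs(texts: List[str]) -> List[str]:
    """Non-chat mode: plain text strings are passed through unchanged."""
    return list(texts)


chat_batch = format_chat_inputs(
    ["What is eva?"], ["Be concise.", "Answer in English."]
)
plain_batch = format_plain_inputs(["What is eva?"])
```

With `chat_mode=False` the wrapper skips the message structure entirely, which is why plain completion models receive bare strings.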
model_forward
Generates text using the pipeline.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `prompts` | `List[str]` | The input prompts for the model. | *required* |
Returns:

| Type | Description |
|---|---|
| `ModelOutput` | The generated text as a string. |
Source code in src/eva/language/models/wrappers/huggingface.py
eva.language.models.wrappers.LiteLLMModel
Bases: LanguageModel
Wrapper class for LiteLLM language models.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_name` | `str` | The name of the model to use. | *required* |
| `model_kwargs` | `Dict[str, Any] \| None` | Additional keyword arguments to pass during generation (e.g., …). | `None` |
| `system_prompt` | `str \| None` | The system prompt to use (optional). | `None` |
| `log_level` | `int \| None` | Optional logging level for LiteLLM. | `INFO` |
Source code in src/eva/language/models/wrappers/litellm.py
format_inputs
Formats inputs for LiteLLM.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `batch` | `TextBatch` | A batch of text inputs. | *required* |
Returns:

| Type | Description |
|---|---|
| `List[List[Dict[str, Any]]]` | A list of message lists in the format `[{"role": ..., "content": ...}, ...]`. |
Source code in src/eva/language/models/wrappers/litellm.py
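The return structure above can be sketched directly as Python data: one OpenAI-style message list per batch item. The prompt texts and system prompt below are illustrative assumptions, not values from the library.

```python
# Hypothetical sketch of the structure format_inputs returns: one
# OpenAI-style message list per batch item. Texts and system prompt
# are made up for illustration.
from typing import Any, Dict, List

system_prompt = "You are a helpful assistant."
texts = ["Summarise this report.", "Translate 'hello' to French."]

formatted: List[List[Dict[str, Any]]] = [
    [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": text},
    ]
    for text in texts
]
```

A structure of this shape is what LiteLLM's batch completion consumes, with each inner list handled as one independent conversation.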
model_forward
Generates output text through API calls via LiteLLM's batch completion functionality.
Source code in src/eva/language/models/wrappers/litellm.py
eva.language.models.wrappers.VllmModel
Bases: LanguageModel
Wrapper class for using vLLM for text generation.
This wrapper loads a vLLM model, sets up the tokenizer and sampling parameters, and uses a chat template to convert a plain string prompt into the proper input format for vLLM generation. It then returns the generated text response.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_name_or_path` | `str` | The model identifier (e.g., a Hugging Face repo ID or local path). | *required* |
| `model_kwargs` | `Dict[str, Any] \| None` | Arguments used to initialize the vLLM model; see the vLLM documentation for more information. | `None` |
| `system_prompt` | `str \| None` | System prompt to use. | `None` |
| `generation_kwargs` | `Dict[str, Any] \| None` | Arguments used to generate the output; these must align with the arguments of `vllm.SamplingParams`. | `None` |
Source code in src/eva/language/models/wrappers/vllm.py
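As a sketch of how the two dictionaries might look: the keys below are standard vLLM engine and `vllm.SamplingParams` arguments, but the specific values are assumptions for illustration, not recommendations.

```python
# Illustrative configuration sketch for the VllmModel wrapper.
# model_kwargs keys follow vLLM's engine arguments; generation_kwargs
# keys follow vllm.SamplingParams. Values here are arbitrary examples.
model_kwargs = {
    "dtype": "auto",                # let vLLM pick the weight precision
    "gpu_memory_utilization": 0.9,  # fraction of GPU memory to reserve
}
generation_kwargs = {
    "temperature": 0.0,  # greedy decoding
    "max_tokens": 256,   # cap on the number of generated tokens
}
```

Because `generation_kwargs` is forwarded to `vllm.SamplingParams`, any key that `SamplingParams` does not accept would raise an error at generation time.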
load_model
Create the vLLM engine on first use.
This lazy initialisation keeps the wrapper picklable by Ray / Lightning.
Source code in src/eva/language/models/wrappers/vllm.py
format_inputs
Formats inputs for vLLM models.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `batch` | `TextBatch` | A batch of text and image inputs. | *required* |
Returns:

| Type | Description |
|---|---|
| `List[TokensPrompt]` | List of formatted prompts. |
Source code in src/eva/language/models/wrappers/vllm.py
model_forward
Generates text for the given prompt using the vLLM model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `batch` | `List[TokensPrompt]` | A list of encoded/tokenized messages (`TokensPrompt` objects). | *required* |
Returns:

| Type | Description |
|---|---|
| `List[str]` | The generated text responses. |