Wrappers
Reference information for the language model Wrappers API.
eva.language.models.wrappers.HuggingFaceTextModel
Bases: BaseModel[List[str], List[str]]
Wrapper class for loading HuggingFace transformers
models using pipelines.
Parameters:

Name | Type | Description | Default
---|---|---|---
`model_name_or_path` | `str` | The model name or path to load the model from. This can be a local path or a model name from the Hugging Face Hub. | *required*
`task` | `Literal['text-generation']` | The pipeline task. Defaults to `"text-generation"`. | `'text-generation'`
`model_kwargs` | `Dict[str, Any] \| None` | Additional arguments for configuring the pipeline. | `None`
`generation_kwargs` | `Dict[str, Any] \| None` | Additional generation parameters (temperature, max_length, etc.). | `None`
Source code in src/eva/language/models/wrappers/huggingface.py
load_model
Loads the model as a Hugging Face pipeline.
Source code in src/eva/language/models/wrappers/huggingface.py
model_forward
Generates text using the pipeline.
Parameters:

Name | Type | Description | Default
---|---|---|---
`prompts` | `List[str]` | The input prompts for the model. | *required*

Returns:

Type | Description
---|---
`List[str]` | The generated text responses.
Source code in src/eva/language/models/wrappers/huggingface.py
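The pipeline-backed wrapper above can be sketched as follows. The class and method names mirror the reference, but the Hugging Face pipeline is replaced by a trivial stub so the sketch stays self-contained; treat it as an illustration of the pattern, not the actual implementation.

```python
from typing import Any, Callable, Dict, List, Optional


class PipelineTextWrapper:
    """Sketch of a pipeline-backed text wrapper (stubbed, not the real class)."""

    def __init__(
        self,
        model_name_or_path: str,
        task: str = "text-generation",
        model_kwargs: Optional[Dict[str, Any]] = None,
        generation_kwargs: Optional[Dict[str, Any]] = None,
    ) -> None:
        self.model_name_or_path = model_name_or_path
        self.task = task
        self.model_kwargs = model_kwargs or {}
        self.generation_kwargs = generation_kwargs or {}
        self._pipeline: Optional[Callable[..., Any]] = None

    def load_model(self) -> None:
        # The real wrapper builds a Hugging Face pipeline here, e.g.
        # transformers.pipeline(task=self.task, model=self.model_name_or_path, ...).
        # A trivial echo function stands in for it to keep the sketch runnable.
        def _stub_pipeline(prompt: str, **kwargs: Any) -> List[Dict[str, str]]:
            return [{"generated_text": prompt + " ..."}]

        self._pipeline = _stub_pipeline

    def model_forward(self, prompts: List[str]) -> List[str]:
        """Run each prompt through the pipeline and collect the generated text."""
        if self._pipeline is None:
            self.load_model()
        outputs = [self._pipeline(p, **self.generation_kwargs) for p in prompts]
        return [out[0]["generated_text"] for out in outputs]
```

Usage follows the reference API: construct with a model name, then call `model_forward` with a list of prompts to get a list of generated strings back.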
eva.language.models.wrappers.LiteLLMTextModel
Bases: BaseModel[List[str], List[str]]
Wrapper class for using litellm for chat-based text generation.
This wrapper uses litellm's `completion` function, which accepts a list of message dicts. The forward method converts a string prompt into a chat message with a default "user" role, optionally prepends a system message, and includes an API key if provided.
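The prompt-to-message conversion described above can be sketched in a few lines; the helper name is hypothetical, but the message shape is the standard role/content dict that litellm's `completion` accepts.

```python
from typing import Dict, List, Optional


def to_chat_messages(
    prompt: str, system_message: Optional[str] = None
) -> List[Dict[str, str]]:
    """Wrap a plain prompt as a "user" message, optionally preceded by a system message."""
    messages: List[Dict[str, str]] = []
    if system_message is not None:
        messages.append({"role": "system", "content": system_message})
    messages.append({"role": "user", "content": prompt})
    return messages
```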
Parameters:

Name | Type | Description | Default
---|---|---|---
`model_name_or_path` | `str` | The model identifier (or name) for litellm (e.g., `"openai/gpt-4o"` or `"anthropic/claude-3-sonnet-20240229"`). | *required*
`model_kwargs` | `Dict[str, Any] \| None` | Additional keyword arguments to pass during generation (e.g., temperature, max_tokens). | `None`
Source code in src/eva/language/models/wrappers/litellm.py
load_model
Prepares the litellm model.
Note
litellm doesn't require an explicit loading step; models are called directly during generation. This method exists for API consistency.
Source code in src/eva/language/models/wrappers/litellm.py
model_forward
Generates text using litellm.
Parameters:

Name | Type | Description | Default
---|---|---|---
`prompts` | `List[str]` | A list of prompts, each to be converted into a "user" message. | *required*

Returns:

Type | Description
---|---
`List[str]` | A list of generated text responses. Failed generations will contain error messages instead of generated text.
Source code in src/eva/language/models/wrappers/litellm.py
eva.language.models.wrappers.VLLMTextModel
Bases: BaseModel
Wrapper class for using vLLM for text generation.
This wrapper loads a vLLM model, sets up the tokenizer and sampling parameters, and uses a chat template to convert a plain string prompt into the proper input format for vLLM generation. It then returns the generated text response.
Parameters:

Name | Type | Description | Default
---|---|---|---
`model_name_or_path` | `str` | The model identifier (e.g., a Hugging Face repo ID or local path). | *required*
`model_kwargs` | `Dict[str, Any] \| None` | Arguments required to initialize the vLLM model; see the vLLM documentation for more information. | `None`
`generation_kwargs` | `Dict[str, Any] \| None` | Arguments required to generate the output; these need to align with the arguments of `vllm.SamplingParams`. | `None`
Source code in src/eva/language/models/wrappers/vllm.py
load_model
Create the vLLM engine on first use.
This lazy initialisation keeps the wrapper picklable by Ray / Lightning.
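The lazy-initialization pattern can be sketched with plain Python: the heavy engine is not created in `__init__`, so the wrapper holds only picklable attributes until `load_model` is first called. The class name is hypothetical, and a plain string stands in for the vLLM engine so the sketch stays runnable.

```python
import pickle
from typing import Any, List, Optional


class LazyEngineWrapper:
    """Sketch: defer heavy engine construction so the wrapper stays picklable."""

    def __init__(self, model_name_or_path: str) -> None:
        self.model_name_or_path = model_name_or_path
        self._engine: Optional[Any] = None  # not created until first use

    def load_model(self) -> Any:
        """Create the engine on first use and cache it."""
        if self._engine is None:
            # The real wrapper constructs the vLLM engine here (e.g. vllm.LLM(...)).
            # A plain string stands in for it in this sketch.
            self._engine = f"engine:{self.model_name_or_path}"
        return self._engine

    def generate(self, prompts: List[str]) -> List[str]:
        engine = self.load_model()
        return [f"{engine} -> {p}" for p in prompts]
```

Because `__init__` stores only the model name, instances can be serialized and shipped to workers (as Ray or Lightning do) before any GPU state exists.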
Source code in src/eva/language/models/wrappers/vllm.py
generate
Generates text for the given prompt using the vLLM model.
Parameters:

Name | Type | Description | Default
---|---|---|---
`prompts` | `List[str]` | A list of string prompts for generation. | *required*

Returns:

Type | Description
---|---
`List[str]` | The generated text responses.