MistralGPTSpec

The MistralGPTSpec class configures how the Eidolon framework interacts with Mistral's chat models, defaulting to the Mistral Large model. It allows customization of the model's behavior, including the sampling temperature, JSON output formatting, and token limits.

Spec

Key: model
Type: AnnotatedReference[LLMModel, mistral_large]
Default: mistral_large
Description: Specifies the model to use, defaulting to Mistral Large, known for its advanced capabilities in language understanding and generation.

Key: temperature
Type: float
Default: 0.3
Description: Sets the creativity of the model's responses. A lower value results in more deterministic and predictable responses.

Key: force_json
Type: bool
Default: True
Description: Forces the model to output responses in JSON format, making them easier to parse and integrate with other systems.

Key: max_tokens
Type: Optional[int]
Default: None
Description: Limits the number of tokens in the model's responses, which is useful for controlling response length or computational load.

Key: client_args
Type: dict
Default: {}
Description: Allows additional arguments to be passed to the model client, providing flexibility to customize the model's behavior for specific requirements.
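For reference, a minimal sketch of how these keys might be set in an agent's YAML resource is shown below. The surrounding resource layout (apiVersion, kind, and the apu/llm_unit nesting) is illustrative and may differ in your installation; only the spec keys themselves come from the table above.

```yaml
# Hypothetical agent resource; the MistralGPTSpec keys match the spec above,
# but the enclosing structure is an assumption and may vary by version.
apiVersion: server.eidolonai.com/v1alpha1
kind: Agent
metadata:
  name: mistral-example-agent
spec:
  apu:
    llm_unit:
      implementation: MistralGPT
      temperature: 0.1      # lower than the 0.3 default for more deterministic output
      force_json: true      # default; responses are emitted as JSON
      max_tokens: 2048      # cap response length (default None leaves it unbounded)
      client_args: {}       # extra keyword arguments passed through to the Mistral client
```

Only the keys you want to override need to appear; omitted keys fall back to the defaults listed above.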