The OpenAiGPTSpec class is designed to configure and manage the interaction with OpenAI’s GPT models within the Eidolon framework. This specification allows for customization of the GPT model behavior, including temperature settings, token limits, and JSON formatting preferences.


model
Type: Reference[LLMModel]
Default: Reference[gpt_4]
Description: Specifies the GPT model to use, typically configured to use GPT-4 for its advanced capabilities in language understanding and generation.

temperature
Type: float
Default: 0.3
Description: Controls the randomness of the model's responses. Lower values make responses more deterministic and predictable; higher values make them more varied and creative.

force_json
Type: bool
Default: True
Description: Forces the model to return responses in JSON format, making them easier to parse and integrate programmatically.

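To illustrate why force_json is useful downstream, a consumer can hand the response straight to a JSON parser instead of scraping free-form prose. The response string below is hypothetical, not produced by Eidolon:

```python
import json

# Hypothetical response text from a model running with force_json=True.
raw_response = '{"answer": "Paris", "confidence": 0.97}'

# Because the output is constrained to well-formed JSON, it can be
# parsed directly into a dictionary rather than extracted from prose.
data = json.loads(raw_response)
print(data["answer"])  # prints "Paris"
```

If the model ever emitted malformed JSON despite the setting, `json.loads` would raise `json.JSONDecodeError`, so production code would typically wrap the parse in a try/except.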
max_tokens
Type: Optional[int]
Default: None
Description: Limits the number of tokens in the model's responses, useful for controlling response length or computational load.

connection_handler
Type: Reference[OpenAIConnectionHandler]
Default: Reference[OpenAIConnectionHandler]
Description: Manages the connection to the OpenAI API, ensuring secure and efficient communication with the service.
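In practice, these fields would be set in an Eidolon agent's YAML resource on the LLM unit spec. The sketch below is illustrative only: the resource layout, key names, and the `OpenAIGPT` implementation name are assumptions, not taken from this page, so check the Eidolon reference for the exact schema.

```yaml
apiVersion: eidolon/v1        # assumed resource header
kind: Agent
metadata:
  name: example-agent
spec:
  apu:
    llm_unit:
      implementation: OpenAIGPT   # assumed implementation name
      temperature: 0.3            # lower = more deterministic output
      force_json: true            # constrain responses to JSON
      max_tokens: 1024            # cap response length (None = no cap)
```

Fields left unset (such as connection_handler here) fall back to the defaults listed above.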