How to Build Custom Agent Templates
Eidolon defines several useful built-in AgentTemplates out of the box, but for domain-specific problems you may need to create your own.
Why
Most projects end up needing custom agents with domain-specific logic. It is crucial for an agentic framework to be flexible enough to “hand over the keys” when developers just need to write their own code. Trying to fold that logic into the framework itself just means inventing another programming language: nothing gained, and a lot lost.
How
You can create new agent templates by writing a class and decorating the methods you would like to expose as actions. On each action you also specify the states in which it is allowed, and return the state the agent should transition to once the action completes.
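A minimal sketch of that pattern. The names here (`register_action`, `AgentState`, `CounterAgent`) are stand-ins modeled on Eidolon's SDK, implemented inline rather than imported, so the shape is visible without the framework:

```python
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar

T = TypeVar("T")


@dataclass
class AgentState(Generic[T]):
    """The state to transition to, plus the action's return value."""
    name: str
    data: T


def register_action(*allowed_states: str) -> Callable:
    """Mark a method as an exposed action, callable only in the given states."""
    def decorator(fn: Callable) -> Callable:
        fn.allowed_states = allowed_states  # metadata the framework would read
        return fn
    return decorator


class CounterAgent:
    """A code-only agent template: no LLM involved."""
    def __init__(self):
        self.count = 0

    @register_action("initialized", "counting")
    def increment(self, amount: int = 1) -> AgentState[int]:
        self.count += amount
        # Transition to the "counting" state and return the new total.
        return AgentState(name="counting", data=self.count)
```

The decorator only attaches metadata; the framework is what inspects `allowed_states` to decide whether an action may run in the process's current state.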
To use this agent template, simply refer to it in your agent's YAML file by its fully qualified name.
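For example (the agent name and module path below are hypothetical, and the exact resource fields depend on your Eidolon version):

```yaml
# Hypothetical agent resource; field names may differ between Eidolon versions.
apiVersion: eidolon/v1
kind: Agent
metadata:
  name: counter-agent
spec:
  implementation: my_package.agents.CounterAgent
```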
But wait 💭, that didn’t use an LLM at all… what gives?! This is because there are no fixed patterns in Eidolon that you are forced to use. This also enables you to use Eidolon in tandem with any other LLM frameworks and libraries.
What is the point, then? By defining an agent this way, you get deployment and, more importantly, inter-agent communication for free. So even though this “agent” is not using an LLM, other agents (and external services) can still communicate with it.
LLM Based Agent Template
OK, so now let’s actually create an agent that uses an LLM. You could use LangChain or even raw calls to OpenAI if you want, but to fully leverage Eidolon’s capabilities, you should use an Agent Processing Unit (APU).
The APU is Eidolon’s abstraction around LLM interactions. It provides an LLM-agnostic, multimedia interface and gives developers built-in tooling for memory, inter-agent communication, logic units, metrics, and prompt engineering.
Everything you love about Eidolon is baked into the APU.
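As a sketch of the shape only: the `StubAPU` class and its `complete` method below are stand-ins, not Eidolon's real APU API, so the example runs without the SDK. In a real template the APU would be injected from the agent's spec:

```python
import asyncio
from dataclasses import dataclass


@dataclass
class AgentState:
    name: str
    data: str


class StubAPU:
    """Stand-in for an APU; in Eidolon this is configured in the agent's spec."""
    async def complete(self, prompt: str) -> str:
        # A real APU would route this to the configured LLM,
        # handling memory, tools, and metrics along the way.
        return f"echo: {prompt}"


class ChatAgent:
    def __init__(self, apu: StubAPU):
        self.apu = apu

    async def converse(self, message: str) -> AgentState:
        reply = await self.apu.complete(message)
        # Stay in the "idle" state so the user can keep chatting.
        return AgentState(name="idle", data=reply)
```

Because the agent only depends on the APU interface, swapping the underlying LLM provider is a spec change, not a code change.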
🔎 What is this Specable thing? Eidolon uses Pydantic models to define the spec of different resources. Learn more about how references work.
Streaming Response
LLMs can be slow, and sometimes you want to stream responses back to the user as they are generated. Eidolon supports this by allowing you to yield events from your action: call the APU with stream_request and yield the events it produces.
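A sketch of the streaming shape using an async generator. The `stub_stream_request` function below stands in for the APU's stream_request call (it is not Eidolon's API), so the event-yielding pattern runs on its own:

```python
import asyncio
from typing import AsyncIterator


async def stub_stream_request(prompt: str) -> AsyncIterator[str]:
    """Stand-in for the APU's stream_request: yields chunks as they arrive."""
    for chunk in ["Hello", ", ", "world"]:
        await asyncio.sleep(0)  # simulate latency between chunks
        yield chunk


async def converse(message: str) -> AsyncIterator[str]:
    """A streaming action: yield each event instead of returning one response."""
    async for chunk in stub_stream_request(message):
        yield chunk


async def collect(message: str) -> str:
    """Drain the stream into a single string (a client would render chunks live)."""
    return "".join([c async for c in converse(message)])
```

A caller consumes the action with `async for`, forwarding each event to the user as soon as it arrives instead of waiting for the full response.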