This Recipe shows an example of a multi-LLM, multimedia-enabled chatbot.

Not all LLMs support multimedia, let alone mid-conversation brain-boosts (upgrading the model partway through a chat). This can cause issues when swapping out components.

Eidolon’s AgentProcessingUnit abstracts away those concerns so you can enable multimedia, JSON output, and function calling on even the smallest LLM.
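Because the APU abstracts these capabilities, switching models is typically just a configuration change. Here is a minimal sketch of that idea; the `apiVersion` and exact spec fields are assumptions and may differ across Eidolon versions:

```yaml
apiVersion: eidolon/v1          # may differ in your Eidolon version
kind: Agent
metadata:
  name: conversational_agent
spec:
  implementation: SimpleAgent
  apu: MistralLarge             # swap the brain; multimedia and tools still work
```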

Core Concepts

Customizing the AgentProcessingUnit
Running the UI


Conversational Agent

This uses the SimpleAgent template, but needs some customization to enable file uploads and support multiple LLMs.

You will notice that we enabled file upload on our AgentProcessingUnit’s primary action.

- name: "converse"
  description: "A copilot that engages with the user."
  allow_file_upload: true

We also have a list of available APUs in resources/apus.yaml.

- apu: MistralSmall
  title: Mistral Small
- apu: MistralMedium
  title: Mistral Medium
- apu: MistralLarge
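To offer another model in the picker, append an entry to the same list. The `apu` value must name an APU resource the server knows about; the entry below is illustrative only:

```yaml
- apu: GPT4-turbo        # illustrative name; must match a configured APU resource
  title: GPT-4 Turbo
```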

We did not need any customization to support multimedia within the APU; it is turned on by default 🚀.

Try it out!

First, let’s fork Eidolon’s chatbot repository, clone it to your local machine, and start the server.

gh repo fork eidolon-ai/eidolon-chatbot --clone=true
cd eidolon-chatbot
make serve-dev

Next, let’s run the UI locally.

docker run -e "EIDOLON_SERVER=http://host.docker.internal:8080" -p 3000:3000 eidolonai/webui:latest
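On Linux, `host.docker.internal` may not resolve inside the container by default. With Docker 20.10+ you can map it to the host gateway explicitly via `--add-host`:

```shell
docker run --add-host=host.docker.internal:host-gateway \
  -e "EIDOLON_SERVER=http://host.docker.internal:8080" \
  -p 3000:3000 eidolonai/webui:latest
```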

Now head over to the chatbot UI at http://localhost:3000 in your favorite browser and start chatting with your new agent.