
In this recipe we have created a RAG chatbot powered by documents living in s3.

It dynamically pulls in information via similarity search to answer user queries.

This is useful when you have a body of information that is constantly changing but that you need real-time answers about (e.g., documents in an S3 bucket).

Core Concepts

Multi-agent communication
Sub-component customization
Dynamic embedding management


Conversational Agent

The user-facing copilot. Ask this agent questions, and it will use the LLM to provide answers, reaching out to the S3 Search Agent as needed for relevant documents.

The SimpleAgent template defines a shorthand mechanism to add an agent as a logic unit. Just add the downstream agent to the agent_refs list.

```yaml
agent_refs: [s3_search]
```
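In context, the reference might sit in an agent resource like the sketch below. Only the `agent_refs` line comes from this recipe; the resource wrapper, `apiVersion`, and the `conversational_agent` name are assumptions based on Eidolon's Kubernetes-style resource format, so check the repository for the exact values:

```yaml
apiVersion: server.eidolonai.com/v1alpha1  # assumed; verify against the repo
kind: Agent
metadata:
  name: conversational_agent               # assumed name
spec:
  implementation: SimpleAgent
  description: "User-facing copilot for the S3 document store"
  agent_refs: [s3_search]                  # exposes the S3 Search Agent as a logic unit
```

With the downstream agent listed in `agent_refs`, the LLM can call it like any other tool, without any custom glue code.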


S3 Search Agent

Handles loading, embedding, and re-embedding documents, ensuring they stay up to date.

Translates queries into a vector search query and returns the top results.

You will notice that this agent uses the RetrieverAgent template. By default, this template is defined to use a loader that reads files from disk, but Eidolon has an S3 loader built in that we can use.

```yaml
implementation: S3Loader
bucket: agentic-papers
region_name: us-east-2
aws_access_key_id: ####
aws_secret_access_key: ####
```
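Wired into the retriever agent's spec, the loader override might look like the following sketch. The loader fields are from the snippet above; the resource wrapper, the `s3_search` name, and the exact nesting of the loader field are assumptions, so confirm them against the repository's YAML:

```yaml
apiVersion: server.eidolonai.com/v1alpha1  # assumed; verify against the repo
kind: Agent
metadata:
  name: s3_search                          # matches the agent_refs entry above
spec:
  implementation: RetrieverAgent
  description: "Searches documents stored in the agentic-papers S3 bucket"
  loader:                                  # assumed field name for the loader override
    implementation: S3Loader
    bucket: agentic-papers
    region_name: us-east-2
    aws_access_key_id: ####
    aws_secret_access_key: ####
```

The point of the template system is that only the loader sub-component changes; the embedding and vector-search behavior comes from RetrieverAgent's defaults.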

Try it out!

First clone Eidolon’s chatbot repository and then start your server.

```shell
git clone
cd eidolon-s3-rag
make serve-dev
```

🚨 Make sure you set your tokens before starting the server.
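For example, the credentials could be provided as environment variables before running `make serve-dev`. The variable names below are assumptions based on standard AWS and OpenAI conventions, not taken from this recipe — check the repository's README or Makefile for what your setup actually expects:

```shell
# Placeholder values — substitute your real credentials.
# Variable names are assumptions (standard AWS / OpenAI conventions).
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
export OPENAI_API_KEY="your-openai-api-key"
```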

Now you can interact with your new chatbot via the Eidolon UI or the CLI. For this example, let's launch the UI.

```shell
docker run -e "EIDOLON_SERVER=http://host.docker.internal:8080" -p 3000:3000 eidolonai/webui:latest
```

Now head over to the dev tool UI at http://localhost:3000 in your favorite browser and start chatting with your new agent.