Building Loop: A Slack/Discord Bot That Runs AI Agents in Docker
I recently open-sourced Loop, a Slack/Discord bot that runs Claude Code agents inside Docker containers. You @mention it in a channel, it spins up an isolated container, runs claude --print, and streams the response back. It also supports scheduled tasks, an MCP server that agents use to schedule follow-up work, and a semantic memory system powered by Ollama embeddings.
Why I Built This
I wanted Claude Code to be always available through chat — not a wrapper around the API, but the full CLI with tool use, file access, and session continuity. The obvious approach was running it on the host, but I hit problems fast. Two people messaging the same channel at once meant two Claude processes stomping on the same project directory. No isolation, no cleanup.
Docker solved this cleanly. Each agent run gets its own container with the project directory mounted: one container per channel at a time, automatic cleanup when it's done. The bot manages the whole lifecycle so I don't have to think about it.
How It Fits Together
Slack/Discord → Bot → Orchestrator → DockerRunner → Container (claude --print)
↑
Scheduler (poll loop)
↑
MCP Server (inside container) → API Server → SQLite
The orchestrator coordinates everything. The DockerRunner handles containers. The scheduler polls SQLite for due tasks. And the MCP server runs inside each container, giving Claude tools to call back into the system.
What It Looks Like in Practice
Once the bot is running, you interact with it like any other team member — @mention it in a channel, reply to its messages, DM it, or use !loop as a prefix.
Basic usage:
@LoopBot what's the status of the payments service?
@LoopBot review the last 3 commits and flag anything concerning
!loop summarize today's changes
The bot streams responses back in real time. While it's working, a Stop button appears if you need to cancel.
Threads for longer work:
@LoopBot investigate the failing CI pipeline and work in a thread
Threads inherit the parent channel's project directory but get their own independent session. I use this constantly — kick off an investigation in a thread and come back to it later.
Scheduling:
/loop schedule "0 9 * * 1-5" cron Review open PRs and post a summary
/loop schedule "30m" interval Check API health and alert on errors
/loop schedule "2026-03-01T14:00:00Z" once Run the quarterly DB migration
The three schedule types — cron, interval, one-shot — cover most of what I need. Tasks are managed with /loop tasks, /loop cancel <id>, and /loop toggle <id>.
Reminders:
@LoopBot remind me in 30 minutes to check the deployment
@LoopBot remind me tomorrow morning to update the changelog
The agent uses its schedule_task MCP tool under the hood. I didn't build a reminder feature — it just emerged from giving the agent scheduling capabilities.
Cross-channel communication:
@LoopBot check the #backend channel for recent errors and summarize them here
@LoopBot send a message to #devops asking to rotate the API keys
Agents use search_channels to find the target and send_message to reach out. If the message includes a bot mention, it triggers a new agent run in that channel — so agents can coordinate work across projects.
Parallel ticket workflows:
This is the most powerful pattern. You ask the bot to break a task into tk tickets:
@LoopBot analyze the test files and create tk work tickets to reduce verbosity
in each test file. Tag them with "work". Don't start working on them yet.
A heartbeat task polls for ready tickets. When it finds them, a dispatcher creates a worker thread per ticket — each in its own git worktree. Workers implement in parallel, commit, close their tickets. Merge tickets are chained so branches merge back into main one at a time, in order.
The whole thing runs autonomously. I've used it to refactor test files across an entire codebase — create the tickets, walk away, come back to merged PRs. Setup details are in the Parallel Work with tk Tickets section of the README.
Now I'll go through the parts that were interesting to build.
The Concurrency Bug
Early on, three messages in quick succession would spawn three containers fighting over the same working directory. I needed per-channel serialization without blocking other channels. A one-slot buffered channel per channel ID, used as a semaphore, solved it: simple and correct.
100% Test Coverage
Loop talks to Slack, Discord, Docker, SQLite, and the filesystem. I couldn't test any of this end-to-end, so I put interfaces in front of everything early and enforced 100% coverage via make coverage-check. Sounds extreme, but with this many moving parts interacting, it catches gaps that would otherwise show up in production at 2am.
Parsing Claude's Stream-JSON
Claude Code's --output-format stream-json emits newline-delimited JSON. I needed to stream responses back to chat in real time while also capturing the final result.
The approach: two-phase unmarshal. Extract just the type field first, then decode the full struct only for events I care about. An onTurn callback forwards each assistant turn to chat as it arrives instead of buffering the whole response. This keeps latency low — the user sees text appearing in Slack while Claude is still thinking.
The ugliest part was handling Claude's "Prompt is too long" error. When it happens mid-session, the runner automatically sends /compact and retries. It's a hack, but it means long-running conversations don't just die.
Why a Polling Scheduler
I considered a priority queue with timers, but went with a simple polling loop — query SQLite every 30 seconds for due tasks. The reason: the daemon might restart at any time. A polling loop just picks up where it left off. No lost timers, no in-memory state to reconstruct. SQLite is always the source of truth.
When Agents Started Scheduling Agents
The MCP server is where things got interesting. Each container gets an MCP config that points back to the Loop daemon's API, so Claude inside the container can use tools to schedule follow-up tasks, create threads, send cross-channel messages, and query semantic memory.
Built with the official Go MCP SDK. Each tool is a handler that calls the daemon's REST API.
I didn't design the parallel ticket workflows explicitly. I built scheduling, threads, and MCP tools as independent primitives. The ticket workflow emerged from composing them. That was the moment I realized the system had enough building blocks to support workflows I hadn't anticipated.
Hindsight
The Docker SDK is verbose. Creating a container requires assembling container.Config, host.Config, and network.NetworkingConfig separately. I wrapped it in a simpler struct, but there's still a lot of translation code. If I were starting over, I'd consider shelling out to docker run for simple cases.
HJSON for config was worth it. It's JSON with comments and trailing commas, so config files become living documentation: a // required for Socket Mode comment next to the token field saves a trip to the README.
SQLite over Postgres was the right call. Pure Go driver (modernc.org/sqlite), no CGO, single file. Channels, messages, tasks, run logs, embeddings — all in one place. Backup is copying a file.
Try It
brew install radutopala/tap/loop
loop onboard:global
loop serve
Full source: github.com/radutopala/loop.