Every AI agent tool today faces the same fundamental question: how does the agent know what it knows?
Most agentic platforms — OpenClaw, AutoGPT, CrewAI — answer this with files. Markdown files. YAML configs. JSON blobs in a folder. The agent reads a file, does a thing, writes a file back. It works. Until it doesn't.
We took a different path with Runlo. Our context layer is a relational database — not a filesystem. And that single decision shaped everything else about how Runlo works.
1. The Relational Context Layer: Your Database Is Your Agent's Brain
The Problem with File-Based Context
File-based agent systems hit a wall fast:
- No relationships. A markdown file about "Project Alpha" can't natively link to a contact record for "Sarah Chen" or a task due Thursday. You're left stuffing cross-references into free text and hoping the LLM figures it out.
- No queries. Want all overdue tasks across all projects? You parse every file. Want contacts mentioned in the last 3 meetings? Good luck with grep.
- No atomicity. Two concurrent agent steps writing to the same markdown file? Race condition. File-based systems either serialize everything (slow) or hope for the best (broken).
- No access control. Multi-tenant? You're managing folder permissions. Row-level security? Doesn't exist in a filesystem.
How Runlo Does It
Every piece of context in Runlo lives in Postgres with explicit relationships:
users
├─ contacts (people you work with)
├─ projects (what you're working on)
├─ tasks & commitments (what's due)
├─ user_memory (pinned facts about you)
├─ conversations (per-channel history)
├─ collections (custom structured data)
├─ missions (active goals)
└─ meetings (calendar events + prep/debrief)
When the agent prepares for your Monday meeting, it doesn't parse a folder of notes. It runs a query: "Give me Sarah's contact record, the Acme project status, overdue tasks for that project, and any commitments I made to her in the last 2 weeks." That's a join across 4 tables. It takes milliseconds. And the result is precise — not "I found a file that might mention Sarah."
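A minimal sketch of what that meeting-prep query could look like. The table and column names here are illustrative assumptions, not Runlo's actual schema — the point is that one parameterized join replaces a folder scan:

```typescript
// Hypothetical sketch of the meeting-prep query described above.
// Table and column names are illustrative, not Runlo's actual schema.
interface MeetingPrepQuery {
  sql: string;
  params: unknown[];
}

function buildMeetingPrep(
  userId: string,
  contactName: string,
  projectName: string,
): MeetingPrepQuery {
  // One join across contacts, projects, tasks, and commitments.
  const sql = `
    SELECT c.*, p.status, t.title AS overdue_task, cm.text AS commitment
    FROM contacts c
    JOIN projects p          ON p.user_id = c.user_id AND p.name = $3
    LEFT JOIN tasks t        ON t.project_id = p.id AND t.due_at < now() AND NOT t.done
    LEFT JOIN commitments cm ON cm.contact_id = c.id
                            AND cm.made_at > now() - interval '2 weeks'
    WHERE c.user_id = $1 AND c.name = $2`;
  return { sql, params: [userId, contactName, projectName] };
}
```

Parameterized values keep the query safe to build from user input, and the planner can use indexes on `user_id` for every table touched.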
Why This Matters for Accuracy
LLMs are only as good as the context you give them. File-based systems stuff everything into the prompt and hope the model picks out what's relevant. Our approach is surgical: we query the exact data the agent needs, structured with types and relationships, and inject it into the prompt. The model spends tokens on reasoning, not on parsing messy markdown.
2. Dynamic Prompt Assembly: Prompt-as-Code, Not Prompt-as-File
Most agent frameworks have a system prompt. It's a long string in a file. Maybe there's some variable interpolation. That's it.
Runlo assembles the system prompt per request from 10+ data sources:
- Base identity and personality (user-configurable tone)
- Enabled skill definitions (each skill injects its own instructions)
- User memory — pinned facts first, then semantic search results
- Known contacts (recent, for entity awareness)
- Active commitments (overdue items surface automatically)
- Projects and custom collections
- Active missions and goals
- Background job context (when running autonomously)
This is prompt-as-code — the system prompt is a function of your data, not a static file. When you enable a new skill, its instructions appear in the prompt. When you add a contact, the agent knows about them. When a task goes overdue, it surfaces unprompted.
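In sketch form, prompt-as-code is just a pure function from queried data to a string. The section names and data shapes below are assumptions for illustration, not Runlo's internals:

```typescript
// Minimal sketch of per-request prompt assembly: the system prompt is a
// function of the user's data. Shapes and headings are illustrative.
interface PromptContext {
  identity: string;
  skills: { name: string; instructions: string }[];
  pinnedMemory: string[];
  overdueCommitments: string[];
}

function assembleSystemPrompt(ctx: PromptContext): string {
  const sections = [
    ctx.identity,
    // Each enabled skill injects its own instructions.
    ...ctx.skills.map(s => `## Skill: ${s.name}\n${s.instructions}`),
    ctx.pinnedMemory.length
      ? `## Pinned facts\n${ctx.pinnedMemory.join("\n")}`
      : "",
    // Overdue items surface automatically, unprompted.
    ctx.overdueCommitments.length
      ? `## Overdue\n${ctx.overdueCommitments.join("\n")}`
      : "",
  ];
  return sections.filter(Boolean).join("\n\n");
}
```

Because every section is derived from a query, enabling a skill or adding a contact changes the next prompt with no redeploy.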
The power here is composability. Skills don't override each other — they compose. Your meeting-prep skill can reference data from your book-mastery skill's collection because they share the same relational context layer. No inter-file coordination. No custom glue code. Just SQL.
3. Multi-Tenancy as a First Principle, Not an Afterthought
File-based agent systems are inherently single-user. Adding multi-tenancy means bolting on folder isolation, file-level ACLs, and hoping no path traversal bugs leak data between users.
Runlo is multi-tenant from line one:
- Every table has user_id. There is no shared mutable state between users.
- Row-Level Security (RLS) in Postgres enforces isolation at the database level — even if application code has a bug, the database won't return another user's data.
- Per-user encryption. Each user has a unique salt. OAuth tokens and API keys are encrypted with a key derived from the master key plus the user's salt via HKDF. Compromising one user's credentials requires both the master key AND their specific salt.
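The per-user key derivation can be sketched with Node's built-in HKDF. The info label and key length here are assumptions for illustration; the structure — master key plus per-user salt, neither sufficient alone — is the point:

```typescript
import { hkdfSync } from "node:crypto";

// Sketch of per-user key derivation via HKDF, as described above.
// The "runlo-user-key" label and 32-byte length are illustrative.
function deriveUserKey(masterKey: Buffer, userSalt: Buffer): Buffer {
  // HKDF mixes the master key with the user's unique salt; an attacker
  // needs BOTH inputs to reproduce the derived key.
  return Buffer.from(hkdfSync("sha256", masterKey, userSalt, "runlo-user-key", 32));
}
```

Derivation is deterministic, so the per-user key never needs to be stored — only the salt does.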
- Per-user concurrency control. The scheduler tracks active jobs per user in memory. No single user can monopolize the job queue.
This isn't defense-in-depth for enterprise sales. It's the natural consequence of building on a relational database that was designed for exactly this kind of isolation.
4. Channel-First Identity: One User, Every Platform
Your agent should know you whether you're messaging from Discord, Telegram, WhatsApp, email, or the web app. In Runlo, a single channel_links table maps all your platform identities to one user record.
Your conversations are isolated per channel (your Discord thread is separate from your web chat), but your memory, contacts, projects, and skills are global. Ask the agent about your Tuesday meeting from any platform — same answer, same context.
Try doing this with markdown files. You'd need a sync layer, conflict resolution, and a way to merge context from multiple file stores. With Postgres, it's a foreign key.
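The lookup itself is trivial. Here an in-memory Map stands in for the channel_links table, purely to show the shape of the mapping; names are illustrative:

```typescript
// Sketch of channel-first identity resolution. The Map stands in for
// a channel_links table keyed on (platform, external id) -> user id.
type Platform = "discord" | "telegram" | "whatsapp" | "email" | "web";

const channelLinks = new Map<string, string>();

function linkChannel(platform: Platform, externalId: string, userId: string): void {
  channelLinks.set(`${platform}:${externalId}`, userId);
}

function resolveUser(platform: Platform, externalId: string): string | undefined {
  return channelLinks.get(`${platform}:${externalId}`);
}
```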
5. The Skill System: Extensibility Without the Framework Tax
Most agent frameworks make you learn a framework. Custom tools? Subclass BaseTool. Custom prompts? Override get_system_prompt(). Custom scheduling? Write a plugin.
Runlo skills are data, not code:
- A skill is a database record with a prompt addition (text injected into the system prompt) and optional triggers (cron expressions).
- Users can create custom skills through the API — no code, no deployment, no plugin SDK.
- Skills compose naturally because they share the relational context layer.
- Each skill's instructions are sandboxed and explicitly marked as data, preventing prompt injection from user-created skills.
The 20-skill-per-user limit, 2000-character prompt limit, and 10-config-field limit aren't arbitrary — they're guardrails that keep the system simple and the prompts focused. Constraints are features.
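Those guardrails amount to a few checks at creation time. A minimal sketch, with shapes and error messages that are assumptions rather than Runlo's actual API:

```typescript
// Sketch of the skill guardrails described above: 20 skills per user,
// 2000-character prompt addition, 10 config fields. Shapes illustrative.
interface SkillInput {
  promptAddition: string;
  config: Record<string, unknown>;
}

function validateSkill(input: SkillInput, existingSkillCount: number): string[] {
  const errors: string[] = [];
  if (existingSkillCount >= 20) errors.push("skill limit reached (20 per user)");
  if (input.promptAddition.length > 2000) errors.push("prompt addition exceeds 2000 characters");
  if (Object.keys(input.config).length > 10) errors.push("too many config fields (max 10)");
  return errors;
}
```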
6. MCP Integration: Tiered Trust, Not Blind Execution
Runlo supports the Model Context Protocol (MCP) for tool integration, but with a tiered trust model:
Tier 1 — Platform-managed tools (Gmail, Calendar, Notion, etc.): Direct API wrappers. Fast, reliable, and maintained by us. Multi-tenant with per-user OAuth.
Tier 2 — User-connected remote MCPs: Users paste any MCP server URL. But we don't trust them blindly:
- SSRF prevention (IP denylist for private ranges, cloud metadata endpoints)
- DNS rebinding protection (resolve immediately before each request)
- Response size caps and timeout enforcement
- Result sanitization (prompt injection scanning)
- Lazy connection (connected on first use, not at startup)
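The denylist check for user-supplied MCP URLs can be sketched as a plain IPv4 range test. This covers only a subset of the ranges a production check needs, and a real implementation must also resolve DNS and re-check the literal IP immediately before each request (the rebinding protection above):

```typescript
// Sketch of an SSRF denylist for user-supplied MCP server addresses.
// Covers a subset of ranges; malformed input fails closed.
function isDeniedIPv4(ip: string): boolean {
  const parts = ip.split(".").map(Number);
  if (parts.length !== 4 || parts.some(p => !Number.isInteger(p) || p < 0 || p > 255)) {
    return true; // fail closed on anything that isn't a clean IPv4 literal
  }
  const [a, b] = parts;
  if (a === 10 || a === 127) return true;           // private / loopback
  if (a === 172 && b >= 16 && b <= 31) return true; // 172.16.0.0/12
  if (a === 192 && b === 168) return true;          // 192.168.0.0/16
  if (a === 169 && b === 254) return true;          // link-local, incl. cloud metadata
  return false;
}
```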
Every tool also carries an action type — read, write, side-effect, or dangerous — which drives the approval policy. Your agent can read your calendar without asking, but it'll confirm before sending an email. Trust-by-default for reads, approval-by-default for writes.
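The approval policy reduces to one predicate over the action type. A minimal sketch, with type names that are assumptions for illustration:

```typescript
// Sketch of the tiered approval policy: trust-by-default for reads,
// approval-by-default for everything that mutates or has side effects.
type ActionType = "read" | "write" | "side-effect" | "dangerous";

function requiresApproval(action: ActionType): boolean {
  return action !== "read";
}
```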
7. Background Jobs: Autonomy with Guardrails
Autonomous agents that run unsupervised need hard limits. Runlo's background job system uses pg-boss (a Postgres-backed job queue) with:
- Step tracking: Each autonomous loop has a step counter and a max of 200 steps.
- Hard timeouts: 5-minute wall clock limit per execution.
- Per-user concurrency: Max 3 concurrent jobs per user.
- Atomic credit consumption: Every LLM call deducts credits transactionally — no overdraft possible.
The key insight: autonomy is a resource, not a feature. It should be metered, bounded, and auditable. Not "run until you're done."
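The step cap and wall-clock limit above can be sketched as a bounded loop. The step function here is a placeholder for one LLM/tool iteration; the defaults mirror the stated limits:

```typescript
// Sketch of a bounded autonomous loop: a step cap (200) and a
// wall-clock deadline (5 minutes), whichever trips first.
async function runBounded(
  step: () => Promise<"continue" | "done">,
  maxSteps = 200,
  maxMillis = 5 * 60 * 1000,
): Promise<{ steps: number; reason: "done" | "step-limit" | "timeout" }> {
  const deadline = Date.now() + maxMillis;
  for (let i = 1; i <= maxSteps; i++) {
    if (Date.now() > deadline) return { steps: i - 1, reason: "timeout" };
    if ((await step()) === "done") return { steps: i, reason: "done" };
  }
  return { steps: maxSteps, reason: "step-limit" };
}
```

Returning a reason (rather than throwing) keeps every termination path auditable, which is the point of metered autonomy.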
8. Simplicity Over Abstraction
Runlo's agent loop is roughly 200 lines of TypeScript. It calls the Vercel AI SDK's streamText() with the assembled prompt and discovered tools. That's it.
No custom orchestration framework. No DAG engine. No "agent graph" abstraction. The LLM decides what to do next. The database provides context. The tools execute actions. The prompt assembles the instructions.
When something breaks, you read 200 lines of code, not 20 layers of framework abstraction. When you want to change behavior, you modify a prompt section or a database query, not a plugin interface.
We believe the best agent architecture is the thinnest possible layer between the LLM and your data. Everything else is overhead.
The Underlying Thesis
File-based context systems were built for a world where agents are toys — single-user, single-session, running on your laptop. Relational databases were built for a world where data has structure, relationships matter, multiple users need isolation, and queries need to be fast and precise.
We're building agents for the second world.
Runlo's design philosophy can be summarized in one sentence: Treat agent context with the same rigor you'd treat application data — because that's what it is.
Try Runlo free — your agent's context layer is ready in 5 minutes.
Start Free