Give shape to what your agents know.

Runeform is the semantic reasoning infrastructure beneath agentic AI — a workspace where domain experts design, version, and deploy the ontologies that constrain agent behavior, at query-time speed.

Request early access

What it is

An abstraction layer for AI agents, grounded in formal logic.

Every business putting agentic AI into production eventually runs into the same ceiling: prompts don't scale, fine-tuning is opaque, and hard-coded rules become unmaintainable. Runeform gives you a place to author the ontologies that define your world — the types, properties, and rules your agents reason over — and an API that serves those definitions to every agent, in real time.

When your ontology changes, the change propagates immediately to every agent in your system. Think of it as a hive mind for agents: one source of truth for decision logic, traceable from agent behavior back to the rule that produced it.
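To make the "one source of truth" idea concrete, here is a minimal sketch of the pattern, not Runeform's actual API. Every name in it (`OntologyRegistry`, `Agent`, `publish`) is hypothetical. Agents fetch the current rule definition at query time instead of baking it into prompts, so publishing an update reaches every agent on its next query:

```python
from dataclasses import dataclass

# Hypothetical sketch of the pattern, not Runeform's API:
# a single versioned rule registry that all agents consult at query time.

@dataclass
class Rule:
    name: str
    version: int
    predicate: callable  # the decision logic agents reason with

class OntologyRegistry:
    """One source of truth: agents read rules fresh on every query,
    so a published change propagates immediately."""
    def __init__(self):
        self._rules: dict[str, Rule] = {}

    def publish(self, name, predicate):
        prev = self._rules.get(name)
        version = prev.version + 1 if prev else 1
        self._rules[name] = Rule(name, version, predicate)

    def get(self, name) -> Rule:
        return self._rules[name]

class Agent:
    def __init__(self, registry: OntologyRegistry):
        self.registry = registry

    def decide(self, rule_name, payload):
        rule = self.registry.get(rule_name)  # fetched fresh, never cached
        return rule.version, rule.predicate(payload)

registry = OntologyRegistry()
registry.publish("high_risk", lambda x: x["amount"] > 1000)
a, b = Agent(registry), Agent(registry)

print(a.decide("high_risk", {"amount": 500}))   # (1, False)
registry.publish("high_risk", lambda x: x["amount"] > 100)
print(b.decide("high_risk", {"amount": 500}))   # (2, True): change propagated
```

Because the returned version rides along with every decision, each agent action stays traceable back to the exact rule version that produced it.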

Who it's for

Teams putting agentic AI into production.

Runeform is built for product, AI, and domain teams that have outgrown prompt-stuffing. You'll recognize yourself if you've hit the ceiling described above: prompts that don't scale, fine-tuning you can't inspect, hand-coded rules you can't maintain.

Our story

We built Runeform because we needed it.

Runeform began as an internal tool. We were building Kinection — an expert conversational AI for relationships — and we needed a way to constrain a frontier LLM to reason with the same understanding as the psychology professor on our team. We built controlled vocabularies, formalized the rules, and stuffed them into prompts. Model performance jumped. Hallucinations dropped. Interpretability became possible.

But as our agents multiplied, so did the concepts behind them. Soon we were managing hundreds of definitions — authoring, versioning, tracing changes — and no existing tool fit the job. The ontology editors on the market required months of training and were built for PhD ontologists. We needed something a domain expert could use.

So we built it. Runeform is the tool we wish had existed when we started, and we're opening it up to everyone else building in this space.

The bet

What we're proving

We can make ontology reasoning fast enough to run at agent query time, simple enough that domain experts author ontologies through chat, and flexible enough that versioned properties let an ontology evolve continuously without breaking the agents built on it.
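The versioning piece of that bet can be sketched in a few lines. This is an illustrative toy, not Runeform's implementation; `VersionedProperty`, `publish`, and `resolve` are all hypothetical names. The point is that versions of a property coexist, so an agent validated against v1 keeps resolving v1 while newer agents track the latest definition:

```python
# Hypothetical sketch: versioned properties let an ontology evolve while
# existing agents keep the definition they were validated against.
class VersionedProperty:
    def __init__(self, name):
        self.name = name
        self.versions = {}  # version number -> definition

    def publish(self, definition) -> int:
        """Add a new version; earlier versions remain resolvable."""
        version = len(self.versions) + 1
        self.versions[version] = definition
        return version

    def resolve(self, version="latest"):
        if version == "latest":
            version = max(self.versions)
        return self.versions[version]

severity = VersionedProperty("conflict_severity")
severity.publish({"scale": ["low", "high"]})            # v1
severity.publish({"scale": ["low", "medium", "high"]})  # v2

print(severity.resolve(1))         # pinned agent still sees the two-level scale
print(severity.resolve("latest"))  # tracking agent sees the three-level scale
```

Pinning versus tracking is what makes continuous evolution safe: the ontology can change daily without a breaking migration for every agent at once.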

Mission

Democratize the abstraction layer beneath AI.

We believe the next advances in AI won't come from bigger models alone. They'll come from the parallel systems that constrain and guide them — the conceptual scaffolding that turns raw intelligence into reliable expertise. That's the position Yann LeCun has argued, and it's the one we're building toward.

Structured formal understanding of the world — ontologies — has been the province of specialists for forty years. We want it to be a tool every team can pick up. When shaping meaning is as accessible as shaping a design system, the systems we build on top of AI get dramatically better.

Contact

Get in early.

We're onboarding early partners now. If you're working on agentic systems and the problems above sound familiar, we'd like to talk.

hello@runeform.ai