
Augur: A Text-Based Boss Battle Powered by AI

Tags: augur, ai, gamedev, llm

Augur is a text-based boss battle where the boss is an LLM. The premise is simple: you find yourself in a library, and that's it. There's no tutorial, very little narrative. You learn the game as the game learns you.

You type what you do. It tells you what happens. There are no menus, no health bars, and no predetermined outcomes. It's just you, an AI opponent called the Architect, and a door you need to get through.

Augur is a game I've wanted to write for a long time. It's inspired by text games of my youth, such as Colossal Cave Adventure and Zork, but with a twist. The game is powered by AI: reasoning, inference, and consequences are all emergent. Players are not constrained by the small list of actions the game designer (that's me) chose to support in the UI. Consequently, the world and the narrative are much more open.

What the game feels like

You start in a library. There's a figure standing by a door at the far end of the room. You can look around, move between sections, pick things up. Everything you do is typed in plain language — "I walk toward the north stacks" or "I grab the bronze bookend off the shelf."

The game responds with prose. It describes what you see, what the Architect does, and what happens when you act. The Architect has a personality. It watches you. It comments on your choices — what you picked up, how you're moving, whether you're hesitating. If you do something creative, it might tell you it's impressed. If you do something reckless, it lets you know.

There are weapons in the library, but there's no equipment screen. You find a heavy iron candelabra, a desk lamp with an electrical cord, a crystalline rod tucked between two books. You decide what's useful. A player in an early playtest spilled ink across the floor and set it on fire. Another froze the ground with the crystalline rod and shoved the Architect into the ice. Nobody designed those interactions — the game reasoned through them.

You can fight. You can sneak through the stacks and try to reach the door while the Architect searches for you. You can try to talk your way past it. All five outcomes (three wins and two losses) are genuinely reachable.

Not Your Traditional Rule-Based Engine

Every object in Augur has a physical description: what it's made of, how heavy it is, what happens when it breaks. There's no damage type system, no element table, no spell slots. The LLM reads those descriptions and reasons about what should happen when things interact.

Metal conducts electricity. Glass shatters on impact. Alcohol-based ink is flammable. A frozen floor is slippery. These interactions aren't coded — the LLM works them out from the properties each time.
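To make the idea concrete, here's a minimal sketch of what property-driven objects could look like. This is not Augur's actual code; the `GameObject` structure, the example descriptions, and the prompt wording are all assumptions, but they show the shape of the approach: objects carry free-text physical properties, and the prompt asks the model to derive the outcome rather than look it up.

```python
from dataclasses import dataclass

@dataclass
class GameObject:
    name: str
    description: str  # free-text physical properties the model reasons over

# Hypothetical objects; the real game's descriptions will differ.
candelabra = GameObject(
    "iron candelabra",
    "heavy wrought iron, three lit candles, conducts heat and electricity",
)
ink = GameObject(
    "spilled ink",
    "alcohol-based ink pooled across the floor, flammable",
)

def interaction_prompt(a: GameObject, b: GameObject, action: str) -> str:
    """Ask the model to derive the outcome from properties alone --
    there is no lookup table of element interactions anywhere."""
    return (
        f"Player action: {action}\n"
        f"{a.name}: {a.description}\n"
        f"{b.name}: {b.description}\n"
        "Describe the physically plausible result."
    )

prompt = interaction_prompt(candelabra, ink, "tip the candelabra into the ink")
```

Because the properties are prose rather than enum values, the same two objects can interact differently depending on context: the ink is fuel near a flame, but a slipping hazard under the Architect's feet.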

The same principle applies to injuries. When you get hit, you get a specific consequence: bruised ribs that make deep breathing hurt, a numb hand that can't grip a weapon, blood in your eyes that impairs your vision. Those descriptions accumulate and constrain what you can do next. The Architect's body works the same way. There's no boss HP pool absorbing your attacks; a bat to its knee does exactly what you'd expect.
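A rough sketch of how accumulating injuries might be represented, assuming they live as a list of free-text consequences that gets fed back into every engine call (the field names here are illustrative, not the game's real schema):

```python
player = {"injuries": []}

def take_hit(player: dict, consequence: str) -> None:
    """An injury is a specific free-text consequence, not HP subtraction."""
    player["injuries"].append(consequence)

take_hit(player, "bruised ribs: deep breaths hurt")
take_hit(player, "numb left hand: cannot grip a weapon in it")

# Each turn, the accumulated list goes back into the engine prompt,
# so the model resolves the next action under these constraints.
constraint_block = "\n".join(f"- {i}" for i in player["injuries"])
```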

This is where the interesting stuff happens. When a player spills flammable ink near a candelabra, the game doesn't look up a fire interaction in a table. It reads the properties of ink and flame and figures out the result. The encounter can produce moments nobody anticipated because the reasoning is generative.

How the architecture evolved

The first version of Augur ran on a single LLM. One model handled everything: resolving player actions, tracking game state, controlling the Architect, and writing the narrative. It worked well enough to prove the concept was worth pursuing, but it had a fundamental problem.

The Architect knew everything. It knew where you were, what you were carrying, and what you were planning, because the same context window held all of it. By the very design of the engine, the Architect inherited God Mode by default. I needed to create information asymmetry, to deliberately conceal information from the Architect, so I asked the model to pretend it didn't have access to information it could plainly see. That's prompt engineering on the honor system, and it's fragile. Sometimes the model would "forget" it wasn't supposed to know where you were hiding. Sometimes it would make suspiciously good guesses.

Early playtests confirmed this. The encounter felt rigged in a way that was hard to pin down. Players couldn't articulate why, but the Architect always seemed to know too much. Stealth didn't feel real and ambushes didn't land. Players didn't use the term God Mode, but that's what it was. The information asymmetry that makes the encounter tactically interesting was leaking.

The fix was to split the system into two separate LLM calls per turn. The first is an engine: a mechanical referee that sees the full true state of the encounter. It resolves your actions, determines outcomes, and then decides what the Architect has actually perceived based on line of sight, noise, and attention. The second call is the Architect itself, which receives only that filtered view. It doesn't get the full game state. It gets what it saw, what it heard, and what it can infer.

The Architect can't cheat because it doesn't have the data. When you hide behind a bookshelf, the Architect has to spend its own turn searching for you. It can guess wrong. It can waste an action attacking empty space. The information asymmetry went from a suggestion in a prompt to a structural guarantee in the architecture.
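The filtering step is the crux, so here's a deterministic toy version of it. In the real game both calls are LLM calls and the state is far richer; the function below only demonstrates the structural guarantee under assumed field names: if the referee decides the Architect didn't perceive something, that data is simply never placed in the Architect's context.

```python
def filter_perception(full_state: dict) -> dict:
    """Call 1 (the engine referee) sees full_state, resolves the action,
    and decides what the Architect actually perceived from line of sight,
    noise, and attention. Only this filtered dict ever reaches call 2."""
    perceived = {"sounds": list(full_state["sounds"])}
    if not full_state["player"]["hidden"]:
        perceived["player_location"] = full_state["player"]["location"]
    return perceived

full_state = {
    "player": {"location": "north stacks", "hidden": True},
    "sounds": ["a book hitting the floor, somewhere east"],
}
architect_view = filter_perception(full_state)

# The Architect cannot leak what it was never given:
# the guarantee is structural, not honor-system.
assert "player_location" not in architect_view
```

The same turn with `hidden` set to `False` would include the player's location, which is why breaking cover is a real tactical decision rather than flavor text.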

That split was the moment Augur went from an interesting prototype to something that actually played the way it was supposed to.

Engines and Agents

The dual-LLM loop is the core of the game. There are two calls per turn, the game state mutates, and the encounter advances. But players need to interact with the game outside that loop. "What am I carrying?" is a reasonable question. So is "what does this rod do?" or "where are the exits?" Answering those questions shouldn't cost you a turn, and the Architect shouldn't get to act just because you wanted to check your inventory.

This is where agents come in. An agent is a single-shot LLM call that can read your game state but can't change it. The Ask agent sees the encounter — your position, your injuries, what you're holding, what's around you — and answers your question in plain language. Then it's done. The game state is untouched. The Architect never knows you asked. Think of it as a read-only transaction.

That distinction matters. The engine loop is stateful: every turn changes the world, and both the player and the Architect live with the consequences. Agents are stateless: they observe and respond, with no side effects. Keeping those two things separate means players can ask as many questions as they want without giving the Architect free turns or leaking information across the asymmetry boundary.
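The read-only contract could be sketched like this. The real Ask agent is an LLM call over a much larger state; this stub, with invented field names, only shows the invariant that matters: the agent works from a snapshot, and asking advances nothing.

```python
import copy

def ask_agent(game_state: dict, question: str) -> str:
    """Single-shot and read-only: observe a snapshot, answer, done.
    No mutation, no turn increment, nothing sent to the Architect.
    (The real agent is an LLM call; this stub shows the contract.)"""
    snapshot = copy.deepcopy(game_state)  # live state is never touched
    if "carrying" in question:
        return ", ".join(snapshot["inventory"]) or "nothing"
    return "the agent would answer from the snapshot here"

state = {"turn": 7, "inventory": ["bronze bookend", "crystalline rod"]}
answer = ask_agent(state, "what am I carrying?")

assert state["turn"] == 7  # asking cost no turn
```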

It's a small architectural decision, but it solves a real design tension. The encounter needs to be mechanically rigorous — every action has weight and every turn matters. Player curiosity shouldn't be punished for that rigor.

Try it

Augur is live at theaugur.ai. There's a free trial encounter if you want to see how it plays.

This is a soft launch. The full payment system isn't live yet, so if you're interested in playing beyond the trial, reach out to me directly at contact@conecrows.com and I'll get you set up.

It's early. The core encounter works, but I'm watching for the things that are hard to predict: strategies players invent that I didn't design for, property interactions that surprise me, whether the Architect's personality drifts in interesting directions as more people play.

If you try it, I'd like to hear what happened. The encounters that go sideways are usually the most useful ones.


This is the fourth post on the Cone Crows engineering blog. Subscribe to the RSS feed to follow along.