Ignite Bold Ideas, Faster

We fuse human ingenuity with AI to unleash limitless creative sparks. Are you ready to set yours on fire?

Building A Space For Thinking

Over the past year, AI researchers have become obsessed with a phrase:

World models.

You see it everywhere:

  • Agents navigating Minecraft.
  • Simulated physics environments.
  • Virtual cities where AI learns to reason about space and cause.

Even serious money is flowing into the idea. Yesterday, Yann LeCun’s new company raised $1.03 billion to build world models. That’s a lot of zeros for something that sounds suspiciously like a video game engine for intelligence. But the core idea is actually right:

If agents are going to operate autonomously, they need something more than prompts.

They need a world to reason inside. The problem is that most world-model discussions focus on physical worlds. But much of the work humans actually do isn’t physical; it’s cognitive.

  • Strategy
  • Creativity
  • Product design
  • Transformation
  • Narrative building

These worlds are not made of objects and gravity. Yes, there is a ‘physics’ to these kinds of problems, but it’s made of priorities, constraints, ideas, and meaning. Which leads to a slightly uncomfortable hypothesis.

Context alone is not enough. An agent also needs to understand how its world works.

Context Is Only Half the Game

Most AI systems today operate on a single trick:

Stuff enough context into the prompt and hope the model figures it out.

This works surprisingly well for small tasks. But the moment you move into serious thinking work — strategy papers, concepts, analytical reports — the system collapses into improvisation. Because context answers only one question:

What exists in the world?

But agents also need to know:

  • How the world behaves

  • What rules govern it

  • What entities exist

  • What their role is inside it

In other words:

They need a world model, not a context dump.

This is where things get interesting.

Because if you build an artificial world, you get to define the rules. And that means you can optimize the world for the kind of thinking you want to happen inside it.

So we built one.

We call it the Whitespace.

The Whitespace: A World for Thinking

The Whitespace is not a document, not another project workspace, certainly not a chat thread. It’s an artificial cognitive environment designed for strategy, creativity, and transformation. And it runs on three structural pillars — what we call the Three C’s:

Concept. Context. Constitution.

Together they form a domain-centric world model. Not a physics simulation, but a thinking substrate.

Context: The Fabric of the World

The first layer is the Context Fabric. This is where the world’s raw information lives. But instead of throwing everything into prompts, the Whitespace structures context into meaningful categories:

  • priorities

  • constraints

  • themes

  • domains

  • user context

Each context is processed into a distilled representation before it becomes part of the fabric. Which means our agents don’t read messy documents; they operate on structured meaning. The result is a living map of the environment — a world surface agents can orient themselves on.
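To make that concrete, here is a minimal sketch of what a context fabric could look like in Python. The class names and the distillation step are our illustration for this article, not the actual Whitespace implementation (which, in a real system, would distill with a model rather than a string trim):

```python
from dataclasses import dataclass, field
from enum import Enum


class ContextKind(Enum):
    """The categories the fabric structures raw context into."""
    PRIORITY = "priority"
    CONSTRAINT = "constraint"
    THEME = "theme"
    DOMAIN = "domain"
    USER = "user"


@dataclass
class ContextEntry:
    kind: ContextKind
    raw: str        # the original, messy input
    distilled: str  # the structured meaning agents actually read


@dataclass
class ContextFabric:
    entries: list[ContextEntry] = field(default_factory=list)

    def add(self, kind: ContextKind, raw: str) -> ContextEntry:
        # Stand-in for the real distillation step, which would call a model;
        # here we just normalize and cap the text.
        entry = ContextEntry(kind=kind, raw=raw, distilled=raw.strip()[:200])
        self.entries.append(entry)
        return entry

    def by_kind(self, kind: ContextKind) -> list[ContextEntry]:
        return [e for e in self.entries if e.kind == kind]
```

The point of the structure is the last method: an agent never greps raw documents, it queries the fabric by category.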

Concept: The World Reflects on Itself

But a world that only accumulates information becomes a library. Don’t get us wrong — libraries are useful.

But they don’t think.

That’s why the Whitespace includes the second layer: the Concept. The Concept is a versioned interpretation of what the work actually is. It answers questions like:

  • What are we building?

  • What patterns are emerging?

  • What is the strategic direction?

Unlike context, which stores facts, the Concept stores interpretation.

And it evolves.

Each revision is a new snapshot of understanding. Over time, the world doesn’t just collect knowledge. It develops perspective.
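A versioned interpretation like this can be sketched in a few lines of Python. Again, the names here are illustrative assumptions, not the Whitespace’s real API; the idea is simply that revisions are append-only snapshots, so the history of understanding is never overwritten:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class ConceptRevision:
    version: int
    interpretation: str  # what we currently believe the work is
    created_at: datetime


class Concept:
    """A versioned interpretation: every revision is a new snapshot."""

    def __init__(self) -> None:
        self._revisions: list[ConceptRevision] = []

    def revise(self, interpretation: str) -> ConceptRevision:
        rev = ConceptRevision(
            version=len(self._revisions) + 1,
            interpretation=interpretation,
            created_at=datetime.now(timezone.utc),
        )
        self._revisions.append(rev)
        return rev

    @property
    def current(self) -> ConceptRevision:
        return self._revisions[-1]

    def history(self) -> list[ConceptRevision]:
        return list(self._revisions)
```

Because snapshots are immutable and ordered, "developing perspective" becomes something you can literally diff: compare `history()[0]` against `current` and you see how the interpretation moved.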

Constitution: The Agent Understands Itself

Now we reach the third layer.

And arguably the most important one.

Because having a world model is still not enough; an agent must also understand who it is inside that world.

This is the role of the Constitution. Technically speaking, the Constitution is just a JSON object. Conceptually, it’s the identity layer of the agent.

The Constitution tells the agent:

  • what it is

  • what it can do

  • what tools it can use

  • what entities exist in the environment

We call that last piece the taxonomy — artifacts, ideas, contexts, tools and skills, other agents. The Constitution defines the ecosystem of the Whitespace and the agent’s relationship to it. In other words: The agent doesn’t just know the world, it also knows how it exists within that world.
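Since the Constitution is, technically, just a JSON object, it helps to see one. The field names below are hypothetical, chosen to mirror the description above; the real schema is not public:

```python
import json

# A hypothetical Constitution shape: identity, capabilities, tools,
# and the taxonomy of entities the agent shares its world with.
constitution = {
    "identity": "strategy-agent",
    "capabilities": ["draft", "critique", "synthesize"],
    "tools": ["context_fabric.search", "concept.revise"],
    "taxonomy": {
        "entities": ["artifacts", "ideas", "contexts", "tools", "skills", "agents"],
    },
}

# Because it is plain JSON, it can be serialized and handed
# to an agent at startup as its self-description.
payload = json.dumps(constitution, indent=2)
```

The design choice worth noting: identity lives in data, not in the prompt, so it can be versioned, validated, and swapped without rewriting the agent.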

Why Artificial Worlds Are Actually Easier

There’s a reason world-model research is exploding: Understanding the real world is incredibly hard. Physics. Society. Economics. Culture. It’s messy. But artificial worlds are different. We design the rules. Which means we can create worlds that are optimized for a specific kind of intelligence.

The Whitespace is one of those worlds. A world optimized not for physics.

But for thinking.

From Lego To Code

There is a straight line from the first Lego brick to a codebase.

Not a neat line, obviously. More like a child’s line. Slightly crooked. Unreasonably ambitious. Heading directly toward something structurally unsound and wonderful.

I know this because I’ve been building like that for as long as I can remember.

As a kid, I had thousands of Lego pieces. They came in through the usual channels: birthdays, Easter, Christmas, the occasional parental lapse in judgment. Naturally, every new set was built once according to instruction. Because of course it was. You had to understand the official version first. Respect the system. Learn the intended shape of the thing.

And then, just as naturally, it had to be destroyed.

Not out of disrespect, but out of curiosity. Out of creative necessity. The pristine police station, spaceship, pirate fortress, whatever it was, had served its purpose. It had delivered its parts into the republic. The bricks were no longer loyal to the box art. They had been assimilated into larger, stranger, more important plans being run by a child brain with absolutely no regard for scope management.

That was one of the first true flow states of my life.

Hours disappeared. The world fell away. There was only structure, tension, possibility. A pile of pieces and the intoxicating sense that reality was, at least in some small radius around me, negotiable.

That instinct went beyond Lego.

When I was three, I apparently deconstructed a vacuum cleaner. “Deconstructed” is a generous word for what was, from the vacuum’s perspective, a catastrophic event. My father, an electrician and engineer, had to put it back together. I’m sure this was inconvenient for him. But in retrospect I like to think he recognized the species of problem. Some children play with toys. Some want to know what the toy is hiding.

Or, more precisely: how the machine works, where the seams are, and whether it could be made to do something else.

That urge led me, among many other things, toward engineering. Which felt less like a career choice than a formalization of a preexisting condition.

Engineering, at its best, is organized agency.

It is the refusal to stand in front of a system and treat it as fixed just because somebody else assembled it first. It is the belief that environments can be understood, modified, redesigned. That constraints are real, but not sacred. That the world is, in fact, made of parts. And if it is made of parts, then it can be learned. If it can be learned, it can be shaped. And if it can be shaped, then maybe you are not merely living inside fate. Maybe, to some degree, you get to build it.

That idea never really left me. Only the medium changed.

I learned actual coding with QBasic, then Pascal, then C in the early 90s. In the early 2000s I moved into web development. From there into design, creative direction, strategy. On paper, that can look like a sequence of pivots. From the inside, it feels much simpler: I’ve always been building.

Sometimes with bricks. Sometimes with code. Sometimes with language, systems, teams, narratives, interfaces, brands, and operating models. But always with the same basic instinct: take the thing apart, understand the pieces, imagine a better architecture, build again.

Which is why the current AI moment feels less alien to me than it seems to feel to some people.

A lot of the current discourse around coding agents carries either panic or cosplay. Either software is over, or everyone is suddenly a ten-person product team with a prompt window and a dream. Both are a little silly. The more interesting truth is simpler: the cost of building has collapsed again, and that changes who gets to play.

That matters.

Because agents, at their best, do not eliminate the need for human taste, judgment, or ambition. They amplify them. They give people with agency more surface area. More reach. More iterations. More ways to move from idea to artifact without needing an entire institutional machine just to test whether the idea has legs.

In other words: more people get to play with Legos again.

Just with different bricks.

This weekend, while working on our latest release, I had that thought more than once. There I was, once again in a room, happily immersed for hours, arranging parts into systems: AI agents, code fragments, Python classes, components, prompts, event flows, schemas, states. Same feeling. Same quiet electricity. Same ridiculous optimism that if I keep moving the pieces around long enough, something elegant might emerge.

And maybe that is one of the most beautiful things about this moment.

For all the noise around AI, one of its gifts is that it returns building to people who were previously kept at the edge of the workshop. Not everyone will use that gift well. Many will build nonsense. Some are building haunted demoware held together by vibes and unsecured API keys. That, too, is part of the tradition.

But some people are using these new tools the way children use bricks: seriously, playfully, obsessively, with taste and nerve and unreasonable hope. They will build because they can. Then build because they must. Then wake up one day and realize that the real joy never was the finished object. It was the agency.

The chance to shape your environment a little more deliberately.

The chance to shape yourself with it.

We never really stop being the child on the floor surrounded by parts. The lucky ones just find better workshops.

And better toys.

 

—

Jo Wedenigg is the founder of Apes on fire, where he builds human x AI collaboration systems for creative, strategic, and transformation work. He is the creator of Ape Space and focuses on turning AI into a partner for advanced thinking.

 

—

The Answer Box: The New Homepage Isn’t A Homepage At All, It’s A Question.

If you’ve looked at space.apesonfire.com lately, you’ve already seen the future hiding in plain sight.

It’s not a magic feed. No special nav tree. It’s not a dashboard with seventeen widgets screaming for your attention.

It’s a simple input field that asks: What do we want to create today?

The Ape Space Homepage – A Typical Answer Box

The Answer Box – A UI Choice, And The Core Of A Distribution Thesis

Google did it. Perplexity did it. ChatGPT did it. And even Yahoo (yes, still alive) can’t help itself. Every product that wants to own “where decisions happen” is doing it. The internet’s UI is collapsing into a single shape: the answer box.

The old homepage was a place you visited. The new homepage is where you ask. And where you expect an answer. If you’re building a brand, a product, or a point of view: you need to adapt your content strategy to the new interface.

Three things are happening at the same time:

  1. Search is being re-bundled into answers. People don’t want links. They want the synthesis.
  2. Distribution surfaces are compressing. The UI has less room for the brand and the machinery behind it. Fewer clicks. Less patience. Less context.
  3. Attribution is becoming optional. Not because anyone is evil (though: lol), but because the interface doesn’t show its work the way we were used to. When knowledge and thinking are abundant, sources matter less on the surface.

So the old strategy — “share content, rank on Google, collect clicks” — is no longer the default path to awareness. We need to optimize for a new era, measuring attention in ‘Share of Response’ not ‘Share of Voice’.

The new game is: get your ideas into the response of the ‘model’ – and that includes human minds.

What Wins In The Answer Box Era

Here are five formats that survive (and compound) when the UI collapses:

1) Sharp claims (that can be repeated)

Not hot takes or vibes. Actual claims, defensible cognitive moats.

A claim is a sentence somebody can carry into a meeting without you.

Example: “Attention is a supply chain.”

You see? We said it. If it’s not repeatable, it’s not distributable.

2) Frameworks (that reduce uncertainty)

Frameworks travel because they help people decide.

A good framework makes someone feel smarter in under 30 seconds. Like you, while you are reading this.

3) Original data (even small)

You don’t need a lab. You need something you saw that others didn’t document.

A screenshot. A pattern across 20 customers. A before/after. A list of failure modes.

Originality is the new SEO.

4) Memetic phrasing (earned, not manufactured)

Yes, words matter.

Not because of “branding,” but because the answer box is basically a metaphor for a compression algorithm – meaning, association, affiliation, compressed into verbiage that can be owned. Articulation that becomes habitual.

If your phrasing is sticky, it gets carried forward.

5) Narrative threads (the human layer)

The answer box is efficient. Humans aren’t. Narrative is how people decide what to believe, who to trust, and what to try next.

So you still need story — but story as a delivery vehicle for a claim or framework, not story as decoration.

What To Measure If Clicks Don’t Count

If you keep measuring “traffic” as the KPI, you’ll optimize for a world that’s leaving.

In the answer-box era, you care about:

  • Mentions: are people repeating the phrasing?
  • Citations: are answer engines / newsletters / other writers referencing you?
  • Prompt inclusion: are people asking the system for you? (“What would Apes on Fire say about …?”)
  • Downstream behavior: do the right people DM you, book time, try the product, steal the framework? (Good.)

You can’t win “content” if content is always just a prompt away. Which is why our front page is a question – and the machine you rely on for the answer. The answer box. Everything else is implementation detail (beautiful, intricate implementation detail, but still).

TL;DR

The internet is becoming an answer box.

So your content needs to become:

  • claims people can repeat
  • frameworks people can use
  • references people can return to
  • narratives people can feel

Forge

PUBLIC BETA COMING SOON

Forge is where you take your ideas from spark to impact – giving you all the tools to run interactive, AI-powered brainstorming and breakthrough innovation sessions.

Rapid innovation and brainstorming

Lightning-fast ideation cycles that transform scattered thoughts into structured innovation frameworks.

Graph based idea management

Visualize connections between concepts with intuitive knowledge graphs that reveal hidden insights.

Contexts to add depth

Rich contextual layers that bring nuance and specificity to every creative exploration.

The tech inside the spark

We are building the platforms to work with whatever intelligence comes next

Where Innovation Takes Flight

Discover our big-picture outlook and see how Apes on fire is reshaping creative possibilities.