Ignite Bold Ideas, Faster

We fuse human ingenuity with AI to unleash limitless creative sparks. Are you ready to set yours on fire?

Building A Space For Thinking

Over the past year, AI researchers have become obsessed with a phrase:

World models.

You see it everywhere:

  • Agents navigating Minecraft.
  • Simulated physics environments.
  • Virtual cities where AI learns to reason about space and cause.

Even serious money is flowing into the idea. Yesterday, Yann LeCun’s new company raised $1.03 billion to build world models. That’s a lot of zeros for something that sounds suspiciously like a video game engine for intelligence. But the core idea is actually right:

If agents are going to operate autonomously, they need something more than prompts.

They need a world to reason inside. The problem is that most world-model discussions focus on physical worlds. But much of the work humans actually do is not physical; it’s cognitive.

  • Strategy
  • Creativity
  • Product design
  • Transformation
  • Narrative building

These worlds are not made of objects and gravity. Yes, these kinds of problems have a ‘physics’ of their own. But it is a physics of priorities, constraints, ideas, and meaning. Which leads to a slightly uncomfortable hypothesis.

Context alone is not enough. An agent also needs to understand how its world works.

Context Is Only Half the Game

Most AI systems today operate on a single trick:

Stuff enough context into the prompt and hope the model figures it out.

This works surprisingly well for small tasks. But the moment you move into serious thinking work — strategy papers, concepts, analytical reports — the system collapses into improvisation. Because context answers only one question:

What exists in the world?

But agents also need to know:

  • How the world behaves

  • What rules govern it

  • What entities exist

  • What their role is inside it

In other words:

They need a world model, not a context dump.

This is where things get interesting.

Because if you build an artificial world, you get to define the rules. And that means you can optimize the world for the kind of thinking you want to happen inside it.

So we built one.

We call it the Whitespace.

The Whitespace: A World for Thinking

The Whitespace is not a document, not another project workspace, certainly not a chat thread. It’s an artificial cognitive environment designed for strategy, creativity, and transformation. And it runs on three structural pillars — what we call the Three C’s:

Concept. Context. Constitution.

Together they form a domain-centric world model. Not a physics simulation, but a thinking substrate.

Context: The Fabric of the World

The first layer is the Context Fabric. This is where the world’s raw information lives. But instead of throwing everything into prompts, the Whitespace structures context into meaningful categories:

  • priorities

  • constraints

  • themes

  • domains

  • user context

Each context is processed into a distilled representation before it becomes part of the fabric. Which means our agents don’t read messy documents; they operate on structured meaning. The result is a living map of the environment — a world surface agents can orient themselves on.
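
To make this concrete, here is a minimal sketch of what a distilled context entry might look like. Everything in it (ContextEntry, distill, the type names) is an invented illustration, not the actual Ape Space schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a Context Fabric entry; names and fields are
# illustrative assumptions, not the real Ape Space internals.

CONTEXT_TYPES = {"priority", "constraint", "theme", "domain", "user"}

@dataclass
class ContextEntry:
    context_type: str               # one of CONTEXT_TYPES
    title: str                      # short human-readable label
    distilled: str                  # the condensed representation agents read
    tags: list[str] = field(default_factory=list)

def distill(raw_text: str, context_type: str, title: str) -> ContextEntry:
    """Reduce a messy source document to structured meaning.

    A real system would call a model here; this stub just keeps the
    first two sentences as the 'distilled' form.
    """
    assert context_type in CONTEXT_TYPES, f"unknown type: {context_type}"
    summary = ". ".join(raw_text.split(". ")[:2]).strip()
    return ContextEntry(context_type, title, distilled=summary)

fabric = [
    distill("Ship the beta by Q3. Everything else is secondary.", "priority", "Beta launch"),
    distill("Budget is capped at 50k. No new hires this quarter.", "constraint", "Budget cap"),
]
```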

Concept: The World Reflects on Itself

But a world that only accumulates information becomes a library. Don’t get us wrong — libraries are useful.

But they don’t think.

That’s why the Whitespace includes the second layer: the Concept. The Concept is a versioned interpretation of what the work actually is. It answers questions like:

  • What are we building?

  • What patterns are emerging?

  • What is the strategic direction?

Unlike context, which stores facts, the Concept stores interpretation.

And it evolves.

Each revision is a new snapshot of understanding. Over time, the world doesn’t just collect knowledge. It develops perspective.
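
As a toy illustration of that snapshot mechanic (class and field names are ours for this post, not the production code):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Toy versioned Concept: interpretation is stored as immutable snapshots.
# All names here are illustrative, not the actual Ape Space implementation.

@dataclass(frozen=True)
class ConceptRevision:
    version: int
    summary: str        # "what are we building?" at this moment
    direction: str      # the current strategic read
    created_at: datetime

class Concept:
    def __init__(self) -> None:
        self._revisions: list[ConceptRevision] = []

    def revise(self, summary: str, direction: str) -> ConceptRevision:
        rev = ConceptRevision(
            version=len(self._revisions) + 1,
            summary=summary,
            direction=direction,
            created_at=datetime.now(timezone.utc),
        )
        self._revisions.append(rev)      # old snapshots are never mutated
        return rev

    def current(self) -> ConceptRevision:
        return self._revisions[-1]

concept = Concept()
concept.revise("An AI brainstorming tool", "breadth of ideas")
concept.revise("A co-cognitive thinking environment", "depth and coherence")
print(concept.current().version, concept.current().direction)
```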

Constitution: The Agent Understands Itself

Now we reach the third layer.

And arguably the most important one.

Because having a world model is still not enough; an agent must also understand who it is inside that world.

This is the role of the Constitution. Technically speaking, the Constitution is just a JSON object. Conceptually, it’s the identity layer of the agent.

The Constitution tells the agent:

  • what it is

  • what it can do

  • what tools it can use

  • what entities exist in the environment

We call that last piece the taxonomy — artifacts, ideas, contexts, tools and skills, other agents. The Constitution defines the ecosystem of the Whitespace and the agent’s relationship to it. In other words: The agent doesn’t just know the world, it also knows how it exists within that world.
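
Since the Constitution is technically just JSON, a stripped-down sketch might look like this. Every key and value below is an invented example, not the actual schema:

```python
import json

# Invented Constitution payload, for illustration only.
constitution = {
    "identity": {
        "name": "APEx",
        "role": "cognitive partner inside a whitespace",
    },
    "capabilities": ["read_context", "revise_concept", "generate_artifacts"],
    "tools": ["context_fabric.search", "image.generate", "video.generate"],
    "taxonomy": {
        # the entities that exist in the agent's world
        "entities": ["artifacts", "ideas", "contexts", "tools", "skills", "agents"],
    },
}

print(json.dumps(constitution, indent=2))
```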

Why Artificial Worlds Are Actually Easier

There’s a reason world-model research is exploding: Understanding the real world is incredibly hard. Physics. Society. Economics. Culture. It’s messy. But artificial worlds are different. We design the rules. Which means we can create worlds that are optimized for a specific kind of intelligence.

The Whitespace is one of those worlds. A world optimized not for physics.

But for thinking.

Ape Space Just Leveled Up: APEx, Concept, Context Fabric Updates, Creative Tools, And More

When we launched Ape Space at the end of December, the bet was simple: Maybe AI should do more than just spit out answers, write code, or cosplay as an intern with a bash loop.

What we set out to create is a new kind of thinking environment. A system that helps people hold complexity, shape ideas, move through ambiguity, and build better things with more coherence, more momentum, and less mental spaghetti.

That was the thesis. And then we launched.

Now, about two and a half months later, Ape Space just made a big leap forward.

Meet APEx: The Intelligence Of The Whitespace

The biggest new character in this release is APEx.

APEx is not “another chatbot.” The world has enough of those already. APEx is our new cognitive partner: the agent designed to drive the actual intelligence of the whitespace and to help you crack your toughest cognitive problems.

It is built on our own blend of OODA + Ralph. In plain English: it knows how to observe, orient, decide, and act — but it also knows how to stay in motion, keep context alive, and keep work progressing without collapsing into either chaos or sterile over-analysis.
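
Here is the bare skeleton of the OODA half of that loop, as a sketch. The function bodies and the stopping rule are invented stand-ins; the real APEx blend, including the Ralph side, is considerably more involved:

```python
# Bare observe-orient-decide-act skeleton; every function is a stand-in.

def observe(state: dict) -> str:
    return f"cycle {len(state['history'])}: scanned the whitespace"

def orient(state: dict, observation: str) -> dict:
    state["last_observation"] = observation   # re-frame what matters now
    return state

def decide(state: dict):
    # A real agent judges progress; this stub stops after three moves.
    return None if len(state["history"]) >= 3 else "take the next step"

def act(decision: str) -> str:
    return f"did: {decision}"

def run_agent_loop(goal: str, max_cycles: int = 10) -> dict:
    state = {"goal": goal, "history": []}
    for _ in range(max_cycles):
        state = orient(state, observe(state))
        decision = decide(state)
        if decision is None:     # neither chaos nor sterile over-analysis
            break
        state["history"].append((decision, act(decision)))
    return state

print(run_agent_loop("draft the strategy memo"))
```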

We optimized APEx for the kind of work most AI systems are still oddly bad at once things get messy: strategy, transformation, product development, creative writing, direction-setting, reframing, synthesis, and the long middle of complex thinking.

Not just “answer this prompt.” More like: help me understand what game I am in, what matters, what changed, what is stuck, what is missing, and what to do next. That’s a very different job.

Context Is No Longer A Pile. It’s A Fabric.

We have also expanded the Context Fabric with new Domain, Theme, and User context types.

Most AI systems treat context like a bucket. You throw stuff in. The model forgets half of it. Then you repeat yourself. Then you resent technology.

We do not think that is good enough.

In Ape Space, context is a dynamic fabric of information that agents can search, read, write, share, and build on together. That means coherence is no longer accidental. It becomes architectural.
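
In code terms, think of something like the following toy interface. The method names (write, read, search) are our illustration of the idea, not the actual API:

```python
# Toy agent-facing fabric: shared, searchable, writable by any agent.

class ContextFabric:
    def __init__(self) -> None:
        self._entries: dict[str, str] = {}     # key -> distilled context

    def write(self, key: str, distilled: str) -> None:
        self._entries[key] = distilled         # agents build on each other's work

    def read(self, key: str):
        return self._entries.get(key)

    def search(self, term: str) -> list[str]:
        # naive substring match; a real fabric would use semantic retrieval
        return [k for k, v in self._entries.items() if term.lower() in v.lower()]

fabric = ContextFabric()
fabric.write("priority:launch", "Ship the public beta before the conference.")
fabric.write("user:style", "Prefers short, direct memos over slide decks.")
print(fabric.search("beta"))
```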

And yes: this now includes User context. Meaning, the whitespace can remember details you share about yourself and use them to become more helpful over time. Your preferences, your goals, your patterns, your style, your operating reality. Not in a creepy way. In a useful way. Like software finally discovering that continuity might matter.

This is part of our larger conviction: context engineering is not a side feature. It is the work.

We’re Giving The Agents A World-Model

This may be our favorite part of the release: the Concept is how we stopped letting agents roam around the whitespace like caffeinated raccoons.

Each whitespace now carries a codified, dynamic model of what it represents: intent, strategy spine, priorities, problem frame, stakeholder map, and the deeper structure of the work itself. It is persistent. It is interactive. And it is accessible to the agents.

That means APEx and other agents are no longer operating on vibes and whatever happened to fit inside the latest prompt window. They can reason about the whitespace as a living system. They can inspect the Concept, suggest updates, and extend it with new information. APEx is designed to work with the Concept, challenge weak assumptions, and strengthen the frame as the project evolves.
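
A toy version of that interaction, to make it tangible. The fields and the update rule are invented for this post:

```python
# An agent inspects the concept model and proposes a revision that
# challenges a weak assumption. Fields and logic are illustrative only.

concept = {
    "version": 3,
    "intent": "Help teams think through messy strategy work",
    "priorities": ["coherence", "momentum"],
    "assumptions": ["users prefer chat interfaces"],
}

def propose_update(concept: dict, new_evidence: str) -> dict:
    proposal = dict(concept)                       # new snapshot, old one untouched
    proposal["version"] = concept["version"] + 1
    if "chat threads abandoned" in new_evidence:   # evidence contradicts an assumption
        proposal["assumptions"] = ["users want persistent structure, not chat threads"]
    return proposal

updated = propose_update(concept, "usage data: chat threads abandoned after day 2")
print(updated["version"], updated["assumptions"])
```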

As your ambition grows, the world model grows with it.

Ape Space Can Now Make Things, Not Just Think And Write About Them

Ape Space now supports full creative tools, including image and video generation.

So yes, the whitespace can write. But it can now also turn ideas into visual and moving outputs.

That matters because creative and strategic work does not live in one medium. Sometimes the fastest route to clarity is a paragraph. Sometimes it is a frame. Sometimes it is a visual reference. Sometimes it is a motion sketch that makes the idea suddenly obvious.

We want Ape Space to support actual creative throughput, not just eloquent text production. We currently support image and video models from Google (Gemini, Veo & Nano Banana) and OpenAI (GPT Image & Sora 2), with support for more generative AI coming soon. 

The Foundation

All of this builds on the foundations we already laid. Ape Space is not one agent, but the first multi-agentic co-cognitive system we know of that uses a multitude of agents inside a deterministic harness to help people think better.

Under the hood, Ape Space now runs on the fully updated APE2 framework. What it means in practice is this: agents inside Ape Space now behave like they belong to the same species. They share a unified agent experience. They work with the same prompting and reference capabilities. They can use skills and tools in a more consistent way. They support multi-model execution. And they come with full transcript observability and historic transcript lookup.

Ape Space can now look back through prior transcripts not just as chat messages, but as full cognitive traces: tool calls, reasoning steps, model responses, decisions, and execution paths. The full agent trail. Not just what happened, but how it happened.
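
The shape of such a trace might look roughly like this; the event kinds and field names are our sketch, not the actual APE2 transcript format:

```python
from dataclasses import dataclass

# Sketch of a cognitive trace entry; the format is illustrative only.

@dataclass
class TraceEvent:
    kind: str       # e.g. "tool_call", "reasoning", "model_response", "decision"
    agent: str
    payload: str

transcript = [
    TraceEvent("reasoning", "APEx", "Goal is ambiguous; inspect the Concept first."),
    TraceEvent("tool_call", "APEx", "context_fabric.search('launch priorities')"),
    TraceEvent("decision", "APEx", "Draft the memo before generating visuals."),
]

# Historic lookup: not just what happened, but how it happened.
print([e.payload for e in transcript if e.kind == "decision"])
```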

We are building an exoskeleton for thinking.

And APEx is just getting started.

From Lego To Code

There is a straight line from the first Lego brick to a codebase.

Not a neat line, obviously. More like a child’s line. Slightly crooked. Unreasonably ambitious. Heading directly toward something structurally unsound and wonderful.

I know this because I’ve been building like that for as long as I can remember.

As a kid, I had thousands of Lego pieces. They came in through the usual channels: birthdays, Easter, Christmas, the occasional parental lapse in judgment. Naturally, every new set was built once according to instruction. Because of course it was. You had to understand the official version first. Respect the system. Learn the intended shape of the thing.

And then, just as naturally, it had to be destroyed.

Not out of disrespect, but out of curiosity. Out of creative necessity. The pristine police station, spaceship, pirate fortress, whatever it was, had served its purpose. It had delivered its parts into the republic. The bricks were no longer loyal to the box art. They had been assimilated into larger, stranger, more important plans being run by a child brain with absolutely no regard for scope management.

That was one of the first true flow states of my life.

Hours disappeared. The world fell away. There was only structure, tension, possibility. A pile of pieces and the intoxicating sense that reality was, at least in some small radius around me, negotiable.

That instinct went beyond Lego.

When I was three, I apparently deconstructed a vacuum cleaner. “Deconstructed” is a generous word for what was, from the vacuum’s perspective, a catastrophic event. My father, an electrician and engineer, had to put it back together. I’m sure this was inconvenient for him. But in retrospect I like to think he recognized the species of problem. Some children play with toys. Some want to know what the toy is hiding.

Or, more precisely: how the machine works, where the seams are, and whether it could be made to do something else.

That urge led me, among many other things, toward engineering. Which felt less like a career choice than a formalization of a preexisting condition.

Engineering, at its best, is organized agency.

It is the refusal to stand in front of a system and treat it as fixed just because somebody else assembled it first. It is the belief that environments can be understood, modified, redesigned. That constraints are real, but not sacred. That the world is, in fact, made of parts. And if it is made of parts, then it can be learned. If it can be learned, it can be shaped. And if it can be shaped, then maybe you are not merely living inside fate. Maybe, to some degree, you get to build it.

That idea never really left me. Only the medium changed.

I learned actual coding with QBasic, then Pascal, then C in the early 90s. In the early 2000s I moved into web development. From there into design, creative direction, strategy. On paper, that can look like a sequence of pivots. From the inside, it feels much simpler: I’ve always been building.

Sometimes with bricks. Sometimes with code. Sometimes with language, systems, teams, narratives, interfaces, brands, and operating models. But always with the same basic instinct: take the thing apart, understand the pieces, imagine a better architecture, build again.

Which is why the current AI moment feels less alien to me than it seems to feel to some people.

A lot of the current discourse around coding agents carries either panic or cosplay. Either software is over, or everyone is suddenly a ten-person product team with a prompt window and a dream. Both are a little silly. The more interesting truth is simpler: the cost of building has collapsed again, and that changes who gets to play.

That matters.

Because agents, at their best, do not eliminate the need for human taste, judgment, or ambition. They amplify them. They give people with agency more surface area. More reach. More iterations. More ways to move from idea to artifact without needing an entire institutional machine just to test whether the idea has legs.

In other words: more people get to play with Legos again.

Just with different bricks.

This weekend, while working on our latest release, I had that thought more than once. There I was, once again in a room, happily immersed for hours, arranging parts into systems: AI agents, code fragments, Python classes, components, prompts, event flows, schemas, states. Same feeling. Same quiet electricity. Same ridiculous optimism that if I keep moving the pieces around long enough, something elegant might emerge.

And maybe that is one of the most beautiful things about this moment.

For all the noise around AI, one of its gifts is that it returns building to people who were previously kept at the edge of the workshop. Not everyone will use that gift well. Many will build nonsense. Some are building haunted demoware held together by vibes and unsecured API keys. That, too, is part of the tradition.

But some people are using these new tools the way children use bricks: seriously, playfully, obsessively, with taste and nerve and unreasonable hope. They will build because they can. Then build because they must. Then wake up one day and realize that the real joy never was the finished object. It was the agency.

The chance to shape your environment a little more deliberately.

The chance to shape yourself with it.

We never really stop being the child on the floor surrounded by parts. The lucky ones just find better workshops.

And better toys.

—

Jo Wedenigg is the founder of Apes on fire, where he builds human x AI collaboration systems for creative, strategic, and transformation work. He is the creator of Ape Space and focuses on turning AI into a partner for advanced thinking.

—

What I Learned About The Value Of Human Work, After Months of Working With AI Coding Agents

I’ll start with a confession:

I was wrong.

Not about AI being powerful. It is.

Also, not about AI changing software work. It already has.

I was wrong about what kind of thing AI is. I assumed, at first, that AI might simply be “more intelligent” than humans in the way a crane is stronger than a person: bigger machine, faster output, same category.

After ~14 months of building with coding agents — shipping prototypes, breaking systems, rebuilding them, and moving from a locally run CLI toy into a real platform — I don’t think that anymore.

What I see now is this: AI is not a better human mind; it’s a different cognitive architecture altogether. If you miss that, you will misread both AI and human work. That tiny lapse in reasoning sits underneath a lot of the current AI discourse. It’s also why “software is dead” hot takes sound clever on social media and then die the moment you need auth, billing, persistence, observability, or a system that still works on Tuesday.

Thesis 1: Clarity is kindness

The first thing coding agents taught me about human work: Clarity is not bureaucracy. Clarity is kindness.

Kindness to your team. Kindness to your future self.

Kindness to the machine you just asked to produce 5,000 lines of code before lunch.

LLM-based agents are wildly capable. But at their cognitive core, the LLM doing all the “thinking” still operates in bursts of token throughput: tokens in, inference, tokens out. Let me be clear, in case there is any doubt: this is NOT how human brains work. Humans live in something else entirely: a continuous cognitive stream. We keep context alive across time (within the boundaries of our long-term and short-term memory). We carry intent. We revisit assumptions. We ask, nonstop:

  • Is this still the right direction?

  • What problem are we actually solving?

  • What are the non-goals?

  • Which constraint is real, and which one is just noise?

That loop is not overhead — we call it ‘inner monologue’, ‘strategic thinking’, ‘executive functions’. And whatever you call it: that loop is the work.

In long development sessions with coding agents, we’ve seen this pattern clearly reflected: what we are doing is often not “coding” per se. Coding agents have shifted almost all of our developer time toward directional labor:

  • defining scope

  • setting goals

  • defining non-goals

  • writing specs

  • aligning requirements

  • sequencing constraints

  • sharpening product intent

Yes, the AI can generate pieces of that. But it doesn’t have your intent. It doesn’t know your taste. It doesn’t know which compromise is acceptable and which one would quietly wreck the product six weeks from now.

This is not just an anecdotal founder rant. Anthropic’s 2025 internal study (132 engineers/researchers, 53 interviews, internal Claude Code usage data) found strong AI use for debugging and code understanding, with big self-reported productivity shifts — but also explicit concern about losing deep technical competence, weakening collaboration, and needing new approaches to learning and mentorship. They describe this as an early signal of broader societal transformation. 

That tracks exactly with what we’ve seen:

The agent can move fast.

It cannot care.

It’s the equivalent of a self-driving chainsaw. Human judgment is the only thing between your code and its teeth.

Thesis 2: Vibe architecture is no architecture

The funniest and most dangerous lie in AI right now is the idea that, because “vibe coding” can produce software, architecture no longer matters.

It matters more.

Coding agents can produce impressive-looking output fast, and it can still be the wrong move.

Our early version was a local CLI MVP. Great. Fast. Useful. Then we moved toward a real platform and the grown-up questions arrived immediately:

  • user identity

  • authentication

  • storage/persistence

  • billing

  • deployment strategy

  • infrastructure

  • observability

  • failure modes

That’s where many people discover: “generate app” is not the same ask as “design a system.”

It’s not that AI can’t help with these kinds of problems. It absolutely can. It can accelerate implementation and explore options quickly. But the truth is that modern software development is a series of deliberate choices. If you don’t know the landscape, if you don’t understand the option space, a coding agent will happily assist you as you “vibe code” yourself into a dead end you never meant to build in the first place.

I’ve done it. Several times.

And that is not an AI failure. It’s a leadership failure. A product failure. An architecture failure.

The benchmarks are quietly saying the same thing. OpenAI’s SWE-Lancer benchmark used 1,400+ real freelance software tasks (including managerial decision tasks), and OpenAI explicitly reports that frontier models were still unable to solve the majority of tasks. METR’s randomized trial with experienced open-source developers on their own repos found that, in that setting, AI tool use made them 19% slower on average—even though the developers expected speedups. METR also stresses not to overgeneralize, but the result is a useful antidote to benchmark fantasy. 

That doesn’t mean AI is bad. It just means reality is large.

So yes, vibe coding is real. It’s useful, and it can be magical. But it is also often a speedrun into hidden complexity.

Vibe architecture is no architecture.

Thesis 3: Creativity does not come from abundance

The third thing coding agents taught me surprised me the most.

AI makes cognition feel abundant:

Need 20 implementation paths? Done.

Need 10 names? Done.

Need 4 refactor strategies? Done.

But creativity does not thrive in abundance. Innovation is born from scarcity. And creativity is innovation + relevance, optimized under utility constraints.

That last part matters: utility constraints.

A coding agent can be inventive. It can absolutely produce novel moves. But novelty is not creativity by itself. Creativity starts when someone makes a judgment:

  • this is the direction

  • these options are out

  • this tradeoff is worth it

  • this is elegant enough

  • this is useful enough

  • this is aligned

In other words: creativity is not just generation.

Creativity is selection under constraints.

And selection is painful. It means cutting away options, saying no. It means carrying the weight of taste, context, and accountability.

Machines are very good at generating options. Humans are still doing most of the meaningful reduction.

This is where the broader evidence is nuanced. The OECD’s 2025 review of experimental evidence summarizes real productivity gains (often 5% to 25%+ in the right tasks), especially when task fit is good — but also emphasizes that benefits depend on user skill, output evaluation, and proper use. They also flag a real risk: over-reliance can reduce independent thinking if people stop critically engaging with outputs. 

AI doesn’t eliminate the need for human judgment. It dramatically raises the cost of not having any.

This is not a software story, but a civilization story

If machines become abundant generators, then human value shifts upstream and downstream:

  • upstream: framing, intent, constraint design, ethics, taste

  • downstream: judgment, integration, accountability, consequences

You can see this in the current public discourse around coding roles: even people building agent tools are saying the center of gravity is moving from typing code to writing specs, defining intent, and talking to users. Boris Cherny, creator of Claude Code, said he expects major role shifts and more emphasis on spec work.  Stanford HAI’s expert predictions similarly point toward collaborative agent systems with humans providing high-level guidance — and note the growing pressure to prove real-world value, not just demos. 

And globally, the labor signal is neither utopian nor apocalyptic. The ILO’s 2025 update says one in four workers is in an occupation with some degree of GenAI exposure, but also emphasizes that most jobs are more likely to be transformed than eliminated, because human input remains necessary.  Meanwhile, the World Economic Forum’s 2025 digest says 39% of workers’ skills are expected to be transformed by 2030, with AI skills rising alongside creative thinking, resilience, leadership, and lifelong learning. 

That combination is the signal: humanity is being re-specified, not replaced. Humanity is going to get itself one giant promotion, from working to leading: leading armies of AI agents doing the work.

The danger is not (only) job loss. It’s skill atrophy, shallow thinking, and handing over too much judgment because the machine sounds fluent.

The opportunity is the opposite: teach people critical thinking, taste, rigor, ethics, architecture, and the discipline to choose. And the result will be a world where more people can build and thrive.

AI is changing what “being useful” means.

AI accelerates cognitive work. It does not make it any less tedious. If you want the upside without the chaos, you still need the “boring” things:

  • architecture

  • product thinking

  • systems design

  • constraints

  • taste

  • deliberate choice

Not sequentially. In parallel. All the time.

That’s the real lesson from 14 months of building with agents: the machine can do more of the work than I expected, and it has made human thinking more critical than ever.

Inconvenient for people who expected a shortcut.

Excellent news if you are in it to build.

—

Jo Wedenigg is the founder of Apes on fire, where he builds human x AI collaboration systems for creative, strategic, and transformation work. He is the creator of Ape Space and focuses on turning AI into a partner for advanced thinking.


A Better Writer – For Every Brief

Most AI writing tools try to impress you. They promise speed. Volume. Infinite drafts. They spray words onto the page and call it creativity.

We didn’t build that.

The ‘Writer’ agent in Ape Space is a disciplined expressive writing engine. Nothing more. Nothing less.

It exists for one simple reason: to help you say exactly what you mean — with clarity, intention, and style — without losing the thread of what you’re actually trying to build.

Not louder writing, not more writing. Better writing.

Writing is not typing

Here’s a quiet truth most tools ignore: Writing is thinking under constraint. Good writing doesn’t start with words. It starts with context, intention, and tension. That’s why the Writer in Ape Space doesn’t behave like a chat prompt with autocomplete. It behaves like a system — one that respects how real writers actually work.

Under the hood, Writer is an agent system: a small, disciplined ensemble of sub-agents, each with a clear job, designed to stay deterministic, inspectable, and steerable. No vibes, no black boxes. No “hope this prompt works.”

Here’s how it works:


1. Any prompt. Any format. No drama.

You start with:

  • A prompt (rough, sharp, or half-formed)

  • A desired output format — essay, memo, poem, manifesto, viral post, strategy doc, screenplay fragment

That’s it. No magic incantations. No prompt gymnastics.

Writer doesn’t assume you know how to ask. It assumes you know what you’re trying to express, even if it’s still fuzzy.

2. The ideal writer persona (built fresh, every time)

Before a single sentence is written, Writer creates an ideal writer persona, purpose-built for this task, this whitespace, this moment. Not a generic “great author.”

Instead, the system asks:

  • What is being built here?

  • Who is this for?

  • What tone serves the intention?

  • What should be avoided?

  • What kind of writer would actually succeed at this?

The result is a writer optimized for your context, not our defaults. Different whitespace → distinct writer. For every prompt.
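
Conceptually, that step looks something like the sketch below: the questions map to fields on a persona spec. Names and values are invented for illustration:

```python
from dataclasses import dataclass

# Toy persona builder; WriterPersona and build_persona are illustrative.

@dataclass
class WriterPersona:
    subject: str        # what is being built here?
    audience: str       # who is this for?
    tone: str           # what tone serves the intention?
    avoid: list[str]    # what should be avoided?

def build_persona(whitespace_summary: str, output_format: str) -> WriterPersona:
    # A real system derives these from the whitespace's context fabric;
    # here we hard-code a plausible result for a memo.
    tone = "direct, concrete, no filler" if output_format == "memo" else "evocative"
    return WriterPersona(
        subject=whitespace_summary,
        audience="leadership team deciding next quarter's bets",
        tone=tone,
        avoid=["jargon", "hedging", "generic openers"],
    )

persona = build_persona("repositioning the product for mid-market", "memo")
print(persona.tone)
```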

3. The writer plans before it writes

Real writers don’t just type. They plan — even if subconsciously.

So does Writer.

Before drafting, the writer persona:

  • Outlines an approach

  • Identifies structural moves

  • Decides where to build tension and where to release it

  • Chooses a pacing strategy

This plan isn’t hidden. It’s explicit and intentional. Writing without a plan is how you get word salad.

We’re not into that.

4. Iterative writing with built-in self-critique

Now the writing begins — but not in one big dump.

Writer works iteratively:

  • Drafting a section

  • Critiquing it against the original intent

  • Improving clarity, precision, and rhythm

  • Checking for drift, fluff, or contradiction

Each pass tightens the work. This isn’t one giant “regenerate until it sounds good” loop. It’s a controlled refinement approach.

The writer is allowed — encouraged even — to disagree with itself. The difference is a huge uptick in writing fluency: the model constantly looks at its own output and critiques it against a stable set of priorities. That’s where quality comes from.
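
As a sketch, the refinement loop reduces to something like this; generate, critique, and revise stand in for model calls and are invented names:

```python
# Draft -> critique -> improve, capped at a few passes. Stand-in functions.

def generate(brief: str) -> str:
    return f"First draft covering: {brief}"

def critique(draft: str, intent: str) -> list[str]:
    # A real critic checks drift, fluff, and contradiction against intent.
    return ["tighten the opening"] if "First draft" in draft else []

def revise(draft: str, notes: list[str]) -> str:
    return draft.replace("First draft", "Tightened draft") if notes else draft

def write_section(brief: str, intent: str, max_passes: int = 3) -> str:
    draft = generate(brief)
    for _ in range(max_passes):
        notes = critique(draft, intent)   # critique against the original intent
        if not notes:                     # stop when nothing is left to fix
            break
        draft = revise(draft, notes)      # each pass tightens the work
    return draft

print(write_section("why context beats prompts", "persuade skeptical engineers"))
```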

5. You stay in the loop

This matters more than people admit. That’s why we built human gates into several points along the agent flow. At any point, you can:

  • Comment

  • Approve

  • Push back

  • Redirect

  • Say, “yes — but not like that”

Writer treats feedback as a signal, not an interruption. You’re not fighting the system. You’re co-directing it.

6. Final polish, guided by human intent

Once you approve the direction, Writer enters its final phase:

  • Tightening language

  • Aligning voice

  • Removing excess

  • Sharpening edges

The goal isn’t perfection. As with anything you do in a Whitespace, the goal is to create output that is faithful to what you want.

Good writing feels inevitable. Like it couldn’t have been written any other way. That’s the bar we set out to meet.

Agent Systems

Technically, Writer is what we call an agent system. Not because “agents” are trendy, but because separation of concerns is how you keep things controllable:

  • One component reasons about intent

  • One constructs the writer persona

  • One plans

  • One writes

  • One ensures coherence

  • One integrates feedback

Each step is explicit. Each transition is observable. That’s how you get reliability without killing creativity.
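
In spirit, the pipeline behaves like the toy below: a fixed sequence of small steps, each observable, with a human gate in the chain. All function names are invented:

```python
from typing import Callable

# Toy pipeline of sub-agents; each step is explicit and observable.

def intent_reasoner(state: dict) -> dict:
    state["intent"] = f"express: {state['prompt']}"
    return state

def persona_builder(state: dict) -> dict:
    state["persona"] = "direct, concrete"
    return state

def planner(state: dict) -> dict:
    state["plan"] = ["hook", "argument", "close"]
    return state

def drafter(state: dict) -> dict:
    state["draft"] = " / ".join(state["plan"])
    return state

def human_gate(state: dict) -> dict:
    # In the product this pauses for comment/approve/redirect; we auto-approve.
    state["approved"] = True
    return state

PIPELINE: list[Callable[[dict], dict]] = [
    intent_reasoner, persona_builder, planner, drafter, human_gate,
]

state = {"prompt": "a memo on killing brainstorming"}
for step in PIPELINE:
    state = step(state)          # every transition is observable
    print(f"{step.__name__}: ok")
print(state["draft"])
```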

This isn’t about productivity

We didn’t build Writer to help you “ship more content.”

We built it for:

  • Expression

  • Precision

  • Voice

  • Imagination

For poems that don’t embarrass you later, or essays that actually say something. For memos that cut through noise and for posts that don’t feel hollow. For writing that means it.

If you care about language, and you want a machine that thinks with you, not over you, while you think up new poetry, write manifestos, or chase the next viral hit: try it now, in Ape Space.

A blank page never felt so good.

The Current AI Stack Is Anthropomorphic Garbage — Let’s Rebase It!

There is a comforting fiction spreading through AI discourse: that AI systems learn and that they remember. You see it everywhere — in agent frameworks, in product decks, in breathless posts about “long-term memory” and “self-improving agents.” It sounds intuitive. It feels human. And it is quietly sabotaging how we design software.

Let’s Kill The Brainstorming

Let’s get this out of the way: brainstorming is broken.

It eats time, drains mental energy, and reliably produces mediocre ideas wrapped in the illusion of progress.

And yet we keep doing it.

We gather smart people in rooms, cover walls with sticky notes, congratulate ourselves on “divergent thinking” — and then quietly go back to doing what we were going to do anyway.

If creativity is your most valuable asset, brainstorming is one of the most expensive ways to misuse it.

So we decided to kill it.

Why Brainstorming Fails (and Not Just in Practice)

This isn’t just a vibe. It’s science.

Decades of research show that traditional group brainstorming:

  • Produces fewer ideas than individuals working independently
  • Produces lower-quality ideas on average
  • Suffers from production blocking (only one person can talk at a time)
  • Encourages groupthink and safe, obvious answers
  • Penalizes weird, risky, or unfinished ideas

Even worse: brainstorming feels productive. Which makes it dangerously convincing.

People leave sessions energized, surrounded by artifacts — post-its, clusters, canvases — that look like output. But output isn’t insight. And artifacts aren’t ideas.

The real cost isn’t just the meeting time.

It’s the opportunity cost: creative energy spent performing creativity instead of solving hard problems.

Time that designers, strategists, founders, and writers could spend thinking deeply is burned on coordination rituals.

We think creative people deserve better tools.

A Better Way to Generate Ideas

Instead of asking humans to simulate an algorithm badly, we’ve re-built the algorithm.

Meet Spark

Screenshot of Spark in Ape Space

Spark is an Explore agent designed to do one thing extremely well: generate aligned, relevant, non-obvious ideas — without meetings.

One prompt.

Your problem frame.

Your existing context.

And Spark goes idea hunting.

What Spark Actually Does

Spark doesn’t brainstorm. It explores. It navigates your Whitespace’s Context Fabric — all the constraints, priorities, assumptions, and signals that usually get lost in a room full of people talking over each other. Spark respects your problem frame. It stays aligned with your intent. It doesn’t get distracted by the loudest voice. And it doesn’t stop at one angle.

Ideation Strategies Built In

Not all ideas should come from the same direction. So Spark lets you choose how to think:

  • Oppose – attack your assumptions head-on
  • Build On – systematically improve what already exists
  • Wild Reframe – twist the problem until it reveals something new
  • Cross-Domain – steal structures from unexpected places

You’re not just getting “more ideas”, you’re getting ideas shaped by intent.

This is ideation as a tool, not as a ritual.
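
Under the hood you can picture the strategies as distinct reframing moves, roughly like this toy mapping (the prompt text is invented, not Spark's actual internals):

```python
# Toy mapping of ideation strategies to reframing moves; illustrative only.

STRATEGIES = {
    "oppose": "Assume the opposite of each stated assumption. What follows?",
    "build_on": "Take the strongest existing idea and extend it one step.",
    "wild_reframe": "Restate the problem from an absurd vantage point.",
    "cross_domain": "Borrow a structure from an unrelated field and map it over.",
}

def ideate(problem_frame: str, strategy: str) -> str:
    return f"{STRATEGIES[strategy]}\nProblem: {problem_frame}"

print(ideate("onboarding drop-off after day 2", "cross_domain"))
```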

Now Add Experts. Any Experts.

Sometimes you don’t just want ideas, you want perspective.

That’s where Expertspark comes in. With Expertspark, you can ask any expert — real or imaginary, historical or fictional — to weigh in on your problem:

A scientist.

A philosopher.

A strategist who doesn’t exist yet.

A version of yourself five years in the future.

No scheduling.

No gatekeeping.

No “can you spare 30 minutes?”

Just insight, on demand.

Expertspark isn’t about replacing human expertise. It’s about making perspective cheap enough to use early, before you’ve already committed to the wrong direction.

Why This Changes Everything

Brainstorming was never really about ideas. It was about coordination.

Spark and Expertspark remove that coordination tax.

They let individuals think deeply, explore widely, and then come together with substance, not sticky notes.

This doesn’t kill collaboration. It makes collaboration worth the time.

Creative work is hard.

Good ideas are rare.

Time is too precious to waste a tree on post-its.

Spark and Expertspark exist so you can stop performing creativity — and start doing the work that actually moves things forward.

Let’s kill the brainstorming. Once and for all.

We are live!

Ape Space is now live and in public beta.

We’ve been building this quietly for a while — nights, weekends, whiteboards full of crossed-out ideas — and today we’re opening the doors. Ape Space is a co-cognitive system for creative and strategic thinking in the AI age. It doesn’t think for you. It thinks with you.

We’re creatives who became coders, and coders who couldn’t let go of creativity. Somewhere along the way, we realized that most AI tools optimize for speed and output — but ignore the hardest part: thinking well. Ape Space exists to change that. We’re engineering creativity with intention: structuring context, running multiple thinking strategies in parallel, and creating a dynamic workspace — the Whitespace — where ideas can be explored, framed, and sharpened without collapsing into generic slop.

This is not an autopilot. It’s not a prompt vending machine. It’s a system designed to accelerate human thinking to machine speed — while keeping taste, judgment, and direction firmly in human hands.

This is a public beta. Things will break. Edges are rough. And that’s exactly the point.

If you think for a living — as a designer, writer, strategist, founder, or builder — we’d love for you to try Ape Space. Use it for something that actually matters to you. Push it. Bend it. Tell us where it surprises you — and where it doesn’t.

There’s much more coming, and our backlog is not very patient. But today, we’re live — and we’re excited to start building this with you.

Welcome to Ape Space.

Why “Fully Autonomous” AI Agents Are a Fool’s Errand — And What We Build Instead

You keep hearing it: autonomous agents will take over tasks, free humans from drudgery, run entire businesses without supervision. It’s a seductive narrative. But in reality, full autonomy is a mirage — one often sold by marketers, not engineers. In this post, we argue that chasing full autonomy is not only impractical, it’s dangerous. The smarter bet is co-cognition: tightly controlled, collaborative AI systems that sit alongside human reasoning instead of trying to replace it.

Forge

PUBLIC BETA COMING SOON

Forge is where you take your ideas from spark to impact – providing all the tools to drive interactive, AI-powered brainstorming and breakthrough innovation sessions.

Rapid innovation and brainstorming

Lightning-fast ideation cycles that transform scattered thoughts into structured innovation frameworks.

Graph based idea management

Visualize connections between concepts with intuitive knowledge graphs that reveal hidden insights.

Contexts to add depth

Rich contextual layers that bring nuance and specificity to every creative exploration.

The tech inside the spark

We are building the platforms to work with whatever intelligence comes next

Thinking bigger at scale

We are building the platforms to work with whatever intelligence comes next

Where Innovation Takes Flight

Discover our big-picture outlook and see how Apes on fire is reshaping creative possibilities.