Ignite Bold Ideas, Faster

We fuse human ingenuity with AI to unleash limitless creative sparks. Are you ready to set yours on fire?

From Lego To Code

There is a straight line from the first Lego brick to a codebase.

Not a neat line, obviously. More like a child’s line. Slightly crooked. Unreasonably ambitious. Heading directly toward something structurally unsound and wonderful.

I know this because I’ve been building like that for as long as I can remember.

As a kid, I had thousands of Lego pieces. They came in through the usual channels: birthdays, Easter, Christmas, the occasional parental lapse in judgment. Naturally, every new set was built once according to instruction. Because of course it was. You had to understand the official version first. Respect the system. Learn the intended shape of the thing.

And then, just as naturally, it had to be destroyed.

Not out of disrespect, but out of curiosity. Out of creative necessity. The pristine police station, spaceship, pirate fortress, whatever it was, had served its purpose. It had delivered its parts into the republic. The bricks were no longer loyal to the box art. They had been assimilated into larger, stranger, more important plans being run by a child brain with absolutely no regard for scope management.

That was one of the first true flow states of my life.

Hours disappeared. The world fell away. There was only structure, tension, possibility. A pile of pieces and the intoxicating sense that reality was, at least in some small radius around me, negotiable.

That instinct went beyond Lego.

When I was three, I apparently deconstructed a vacuum cleaner. “Deconstructed” is a generous word for what was, from the vacuum’s perspective, a catastrophic event. My father, an electrician and engineer, had to put it back together. I’m sure this was inconvenient for him. But in retrospect I like to think he recognized the species of problem. Some children play with toys. Some want to know what the toy is hiding.

Or, more precisely: how the machine works, where the seams are, and whether it could be made to do something else.

That urge led me, among many other things, toward engineering. Which felt less like a career choice than a formalization of a preexisting condition.

Engineering, at its best, is organized agency.

It is the refusal to stand in front of a system and treat it as fixed just because somebody else assembled it first. It is the belief that environments can be understood, modified, redesigned. That constraints are real, but not sacred. That the world is, in fact, made of parts. And if it is made of parts, then it can be learned. If it can be learned, it can be shaped. And if it can be shaped, then maybe you are not merely living inside fate. Maybe, to some degree, you get to build it.

That idea never really left me. Only the medium changed.

I learned actual coding with QBasic, then Pascal, then C in the early 90s. In the early 2000s I moved into web development. From there into design, creative direction, strategy. On paper, that can look like a sequence of pivots. From the inside, it feels much simpler: I’ve always been building.

Sometimes with bricks. Sometimes with code. Sometimes with language, systems, teams, narratives, interfaces, brands, and operating models. But always with the same basic instinct: take the thing apart, understand the pieces, imagine a better architecture, build again.

Which is why the current AI moment feels less alien to me than it seems to feel to some people.

A lot of the current discourse around coding agents carries either panic or cosplay. Either software is over, or everyone is suddenly a ten-person product team with a prompt window and a dream. Both are a little silly. The more interesting truth is simpler: the cost of building has collapsed again, and that changes who gets to play.

That matters.

Because agents, at their best, do not eliminate the need for human taste, judgment, or ambition. They amplify them. They give people with agency more surface area. More reach. More iterations. More ways to move from idea to artifact without needing an entire institutional machine just to test whether the idea has legs.

In other words: more people get to play with Legos again.

Just with different bricks.

This weekend, while working on our latest release, I had that thought more than once. There I was, once again in a room, happily immersed for hours, arranging parts into systems: AI agents, code fragments, Python classes, components, prompts, event flows, schemas, states. Same feeling. Same quiet electricity. Same ridiculous optimism that if I keep moving the pieces around long enough, something elegant might emerge.

And maybe that is one of the most beautiful things about this moment.

For all the noise around AI, one of its gifts is that it returns building to people who were previously kept at the edge of the workshop. Not everyone will use that gift well. Many will build nonsense. Some are building haunted demoware held together by vibes and unsecured API keys. That, too, is part of the tradition.

But some people are using these new tools the way children use bricks: seriously, playfully, obsessively, with taste and nerve and unreasonable hope. They will build because they can. Then build because they must. Then wake up one day and realize that the real joy never was the finished object. It was the agency.

The chance to shape your environment a little more deliberately.

The chance to shape yourself with it.

We never really stop being the child on the floor surrounded by parts. The lucky ones just find better workshops.

And better toys.


—

Jo Wedenigg is the founder of Apes on fire, where he builds human x AI collaboration systems for creative, strategic, and transformation work. He is the creator of Ape Space and focuses on turning AI into a partner for advanced thinking.


—

Your Agent Is Just a Cron Job With a God Complex

2026 has already been dubbed the “Year of the Agent” — but not just by LinkedIn airball posts and X threads. A viral tool called OpenClaw (previously Moltbot/Clawdbot) has been making headlines for autonomously managing digital lives and spawning a full-on AI-only social network called Moltbook, where bots post, debate, and mimic social behavior without humans directly involved. And now, you can even follow the first AI Journalists on their own Substack.

Meanwhile Anthropic’s Claude Code rolled out longer-running session tasks that can coordinate multi-step workflows across time.  And in cybersecurity circles, researchers have been dissecting Moltbook’s rapid rise and even a major security flaw that exposed agent credentials — raising fresh questions about what “autonomy” really means in practice. 

Agents Are Software (And Why “Human” Is a Terrible Default)

Here’s the truth nobody’s selling you: agents are software. Period.

They run code. They follow control flow. They execute policies, read and write state, call tools, emit outputs. There is nothing mystical happening here. But somewhere along the way, we started lying to ourselves.

We stopped saying “software” and started saying “agent.”
We stopped saying “program” and started saying “coworker.”
We stopped saying “automation” and started saying “autonomy.”

And with that shift, we quietly imported a dangerous assumption:

If it acts like a human, it must be better.

Let’s pause right there.

Humans are incredible.
Humans are creative.
Humans are adaptable.

Humans are also:

  • inconsistent
  • emotional
  • biased
  • forgetful
  • reactive
  • non-deterministic
  • sometimes just having a bad day

If we genuinely want agents to “act like humans,” then we don’t just get empathy and creativity — we also inherit bad vibes, erratic behavior, partial understanding, and mistakes.

Not because the software is bad. But because “human” is not an optimization target.

It’s a compromise.

The Hard Problems Are Human

Your “AI agent” is fundamentally a cron job with opinions — a while-loop that can hallucinate. Your agent doesn’t “decide” to do anything meaningful. It follows a probability distribution shaped by training data, system prompts, and temperature settings. When it succeeds, it’s because a human somewhere made good choices about what to optimize for. When it fails, it’s usually because those choices were implicit, unexamined, or wrong.
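
To make that concrete, here is a minimal sketch of the shape of an agent. Every name in it (call_llm, run_tool) is a hypothetical stand-in, not any real framework’s API:

    import random

    def call_llm(state):
        # Stand-in for a model call: in reality, a sampled completion
        # shaped by training data, system prompt, and temperature.
        if len(state["history"]) >= 2 or random.random() < 0.3:
            return {"type": "finish", "answer": f"done: {state['goal']}"}
        return {"type": "tool", "name": "echo", "args": {"text": state["goal"]}}

    def run_tool(tools, action):
        # An ordinary function call. Nothing mystical.
        return tools[action["name"]](**action["args"])

    def run_agent(goal, tools, max_steps=10):
        state = {"goal": goal, "history": []}
        for _ in range(max_steps):                      # a bounded loop, not a mind
            action = call_llm(state)                    # distribution in, action out
            if action["type"] == "finish":
                return action["answer"]
            result = run_tool(tools, action)
            state["history"].append((action, result))  # read/write state, like any program
        return "step budget exhausted; escalate to a human"

    print(run_agent("summarize the logs", {"echo": lambda text: text}))

Strip away the branding, and that loop is the whole organism.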

When we build agent systems, the industry loves to obsess over the easy stuff. Which LLM? What vector database? How many tools should it have access to? Should we use LangChain or roll our own framework?

This is intellectual theater. The hard problems aren’t technical — they’re human:

  • Deciding what actually matters
  • Judging quality when there’s no ground truth
  • Choosing between legitimate trade-offs
  • Setting direction when the path isn’t clear

Here’s the uncomfortable truth we discovered by actually running an always-on agent 24/7:

  • You don’t use it.
  • You manage it.
  • You onboard it.
  • You train it.
  • You correct it.
  • You set expectations.
  • You accept blind spots.

That’s not a tool relationship. That’s leadership. And leadership is cognitively expensive.

People already manage:

  • coworkers
  • managers
  • Slack threads
  • Jira tickets
  • family dynamics
  • their own internal chaos

The last thing they want is another quasi-human entity that needs supervision.

The industry calls this progress.

Most users call this work.

Autonomy Sounds Great – Until You Ask ‘For Whom’?

Let’s be precise about autonomy because the word has become meaningless through overuse.

Real autonomy is delegated execution within bounded constraints. It’s your agent retrying a failed job without waking you up at 3 AM. It’s polling a data source, summarizing logs, or surfacing anomalies for human review. The human set the goal. The human defined the boundaries. The software executed within those guardrails.
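
In code, real autonomy is almost embarrassingly plain. A minimal sketch, assuming a flaky job and a hypothetical notify_human escalation path:

    import time

    MAX_RETRIES = 3
    BACKOFF_SECONDS = [1, 5, 30]                 # human-defined boundaries

    def fetch_report():
        raise TimeoutError("upstream flaked")    # stand-in for a flaky job

    def notify_human(message):
        print(f"[NEEDS REVIEW] {message}")       # boundary reached: escalate

    def run_with_guardrails():
        for attempt in range(MAX_RETRIES):
            try:
                return fetch_report()
            except TimeoutError:
                time.sleep(BACKOFF_SECONDS[attempt])   # retry without waking anyone up
        notify_human(f"fetch_report failed after {MAX_RETRIES} attempts")

    run_with_guardrails()

No intelligence required. Just boundaries that were actually written down.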

Fake autonomy is the absence of human intent dressed up as intelligence. It’s when your system makes choices nobody asked for, optimizes metrics nobody validated, or “decides” based on reasoning nobody can inspect. Fake autonomy isn’t agentic behavior — it’s organizational negligence.

On paper, autonomy sounds incredible:

  • General problem solving
  • Self-directed behavior
  • Minimal human involvement
  • Agents acting “on your behalf”

In practice, the most “autonomous” demos we keep seeing are revealing:

  • “It can sort through 10,000 emails!”
  • “We put 1,000 agents into a social network and watched what happened!”

Really?

That’s the bar?

We already failed at email.
We already failed at social networks.
We already built systems that amplify bias, conflict, and misinformation — with humans in the loop.

So here’s the question nobody wants to answer:

Why would software built in our likeness — with our biases and blind spots — perform better in those same systems?

If anything, it will fail faster.
Autonomy without judgment is just acceleration.
General problem solving without values is just noise.

The Real Black Box 

Here’s where things get subtle: Non-determinism isn’t actually the scary part. Humans are non-deterministic too. The real problem is role ambiguity.

Is this thing:

  • a tool?
  • a coworker?
  • a service?
  • a witness?
  • something that remembers me?
  • something that judges me?

Humans are excellent at social calibration when roles are clear. We’re terrible when they aren’t. That uncanny valley people feel with agents isn’t technical; it’s relational. We didn’t solve human unpredictability with explainability.

We solved it with:

  • social contracts
  • relationship scopes
  • interpersonal rituals
  • bounded responsibility
  • forgiveness

Trust isn’t built by saying “look how smart this is.”

Trust is built by knowing what it will not do.

Stop Worshipping Your Code

We name our agents. We give them personas. We say “the agent thinks” or “the agent wants” or “the agent decided.” This isn’t harmless fun — it’s a cognitive trap.

We are so eager to recreate ourselves in software — before we’ve even agreed that we’re a good reference design.

Maybe the future isn’t:

  • more autonomous agents
  • more generalized problem solvers
  • more human-like behavior

Maybe it’s something quieter, sharper, and more disciplined. Software that:

  • is explicit about its limits
  • is boring in the right ways
  • makes human judgment clearer, not optional
  • optimizes for intent, not imitation

Agents aren’t creatures. They’re tools with loops. Forgetting that is how you worship your own code instead of using it. It’s how you abdicate responsibility for decisions that should have human oversight. It’s how you end up with systems that “surprise” you in production in ways that aren’t surprising at all — they’re just unexamined.

The Boring Future We Need

2026 won’t be the year of the agent. It’ll be the year we finally stop pretending software is sentient and start building systems we can actually understand.

The best “agentic” systems won’t feel agentic at all. They’ll feel obvious. They’ll feel boring — in all the best ways. They’ll feel like what they are: well-designed software that does exactly what it was asked to do, shows its work, and knows when to ask for help.

Everything else is just a cron job with delusions of grandeur.

Writer2 is live in Ape Space

When we introduced the original Writer two weeks ago, our claim was simple — and deliberately provocative:

There is no such thing as the best writer.

There is only the best writer for the brief.

The Writer agent proved that premise: by generating a purpose-built writer persona for each task, it already outperformed generic “write me an article” prompts. For many teams, that alone was a meaningful shift. And the data from the past two weeks gave us real insight into how people are using the Writer agent and how it’s being prompted and directed.

What we learned: great writing isn’t just about voice. It’s about thinking, planning, iteration, and polish—the parts most AI systems still pretend to do, but don’t actually model.

So we built Writer2. Not an upgrade – a completely new architecture.

Introducing: Writer2

Writer2 isn’t a faster Writer. In fact, it deliberately takes more time, fully leveraging the deep reasoning capabilities of current flagship models from Anthropic, Google, and OpenAI. It’s a system designed to behave less like a text generator and more like a disciplined human writer with time, structure, and judgment.

That distinction matters. And here’s how we enhanced the new Writer:

1. Writer Personas That Actually Hold Up Under Pressure

Writer1 generated personas; Writer2 constructs them. Each Writer2 run creates (or accepts) a deep, role-accurate writer persona with:

  • Real domain expertise (not vibes)

  • A clear editorial POV

  • Audience awareness

  • Structural preferences

  • Explicit tradeoffs (what this writer won’t do)

This matters because most AI writing fails before the first sentence: if the writer’s mental model is shallow, everything downstream is noise—no matter how fluent the prose looks. For each run Writer2 asks: “Who would responsibly write this—and how would they think while doing it?”

That shift alone eliminates a huge class of AI slop.

2. A Real Writing Loop (Instead of a Single, Optimistic Pass)

Most AI writing tools follow the same tragic pattern: Prompt → Generate → Hope

Writer2 doesn’t hope; it writes through a deterministic, multi-step writing loop:

  • The content is planned in advance

  • Sections are grouped into logical editing/writing steps

  • Each step writes 1–3 sections at a time

  • Progress is tracked explicitly

  • Context is loaded fresh for each step, so the model can’t forget what it’s writing about; every pass gets a new infusion of domain context

  • The agent always knows what’s done — and what’s next

This is how humans write when they care about quality. We don’t claim to have solved writing, but we have introduced controlled, intentional forward motion that will help sharpen Writer2’s skills with each new version.
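
In sketch form, that loop looks something like this. This is a simplification with hypothetical names, not Writer2’s actual code:

    def write_document(brief, plan_sections, load_context, write_step, group_size=3):
        pending = list(plan_sections(brief))            # content planned in advance
        done, draft = [], {}
        while pending:
            step = pending[:group_size]                 # 1-3 sections per pass
            pending = pending[group_size:]
            context = load_context(brief, step, done)   # fresh domain context each pass
            for section in step:
                draft[section] = write_step(section, context, draft)
            done.extend(step)                           # explicit progress tracking
        return draft

The loop itself is deterministic; only the prose inside each write_step is probabilistic.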

3. A Separate, Serious Polishing Loop

While the original Writer already had a polishing step, Writer2 separates creation from polish—on purpose. Once the draft is complete, a second deterministic loop kicks in, focused purely on:

  • Tightening language

  • Removing repetition

  • Eliminating AI tells

  • Improving rhythm

  • Sharpening positions

  • Clarifying structure

This loop works section by section, with the original draft always available for comparison. The goal here isn’t more words, but fewer, better ones.
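
A simplified sketch of the idea, again with hypothetical names rather than Writer2’s real internals:

    def polish_document(draft, polish_step, accept):
        polished = {}
        for section, original in draft.items():
            candidate = polish_step(section, original)   # tighten, cut, sharpen
            # Keep the edit only if it beats the original: fewer, better words.
            polished[section] = candidate if accept(original, candidate) else original
        return polished

Because the original is always in hand, polish can only improve the draft, never quietly degrade it.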

Polish is not creativity. It’s judgment and taste.

4. Cognitive Planning & Thinking Tools (Not Memory Theater)

Writer2 thinks in artifacts. Under the hood, it uses explicit cognitive tools to:

  • Infer intent from underspecified briefs

  • Derive a style guide automatically

  • Build a concrete writing plan

  • Track execution across iterations

  • Maintain continuity across long runs

This is why Writer2 can handle long-form content without collapsing into repetition or filler: it doesn’t rely on memory hacks, but on explicit planning and fresh context injection for each prompt.

5. Anti-Slop Is Enforced, Not Politely Suggested

Writer2 enforces a strict set of quality rules during both writing and polish, including:

  • No repetitive phrasing

  • No vague abstractions

  • No empty openings

  • No hedging where a position is required

  • No decorative formatting

  • No fake conclusions

If a sentence doesn’t earn its place, it doesn’t survive. This is how you get writing that feels intentional — because it is.
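
Conceptually, “enforced” means the rules run as hard checks, not as polite prompt suggestions. A toy sketch, with illustrative patterns that are not Writer2’s real rule set:

    import re

    BANNED_OPENINGS = ("In today's world", "In conclusion", "As an AI")
    VAGUE = re.compile(r"\b(game.?changer|delve|synergies)\b", re.IGNORECASE)

    def passes_quality_gate(text):
        if text.startswith(BANNED_OPENINGS):   # no empty openings, no fake conclusions
            return False
        if VAGUE.search(text):                 # no vague abstractions
            return False
        sentences = [s.strip() for s in text.split(".") if s.strip()]
        return len(set(sentences)) == len(sentences)   # no verbatim repetition

    print(passes_quality_gate("In conclusion, this is a game-changer."))   # False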

6. Runs on All Flagship Models

It took us about two weeks to get from Writer to Writer2, and most of that time went into making the system work reliably across all major AI providers: Google, Anthropic, and OpenAI. Writer2 runs on all major flagship models by design.

Why? Because LLMs are rapidly becoming a commodity layer. The real leverage is no longer which model you pick, but what harness you wrap around it. Different models bring different strengths. Writer2 brings structure, discipline, and taste. By testing Writer2 across models, we give that choice back to the user. Do you want to:

  • Pick your preferred model?

  • Optimize for speed vs depth?

  • Run the same article on three models in parallel — and keep only the best draft?

Ape Space lets you do exactly that.
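
Under the hood, that pattern is simple. A sketch with hypothetical model callables and a toy scoring function, not Ape Space’s actual orchestration:

    from concurrent.futures import ThreadPoolExecutor

    def best_of(brief, models, score):
        # Run the same brief on several models in parallel, keep the best draft.
        with ThreadPoolExecutor() as pool:
            drafts = list(pool.map(lambda model: model(brief), models))
        return max(drafts, key=score)

    models = [lambda b, name=n: f"{name} draft of: {b}" for n in ("A", "B", "C")]
    print(best_of("agent essay", models, score=len))   # toy scoring: longest draft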

Why We Didn’t Build “Another General Purpose Agent”

We could have built another all-purpose creative agent. But we didn’t — intentionally. Optimizing for one creative task — writing — dramatically reduces the problem space. That reduction allows for far deeper solutions:

  • Better personas

  • Better planning

  • Better iteration

  • Better polish

  • Better outcomes

This is what we mean by domain-specific utilligence. Not a hallucinating, all-knowing general agent, but engineered creativity, purpose-built for real work.

AI agents don’t need more creativity; they need better constraints.

Try Writer2 Today

If you’ve ever thought:

  • “This sounds fine but says nothing.”

  • “Why does every AI article feel the same?”

  • “I want help thinking — not just typing.”

Writer2 was built for you. Welcome to the next generation of writing in Ape Space 🔥

More Human or More Useful?

The agent discourse is starting to sound like a gym-bro conversation.

“Bro, your loop is too small.”

“Bro, your context window isn’t stacked enough.”

“Bro, add memory. No —  m o r e  memory.”

“Bro, agent rules don’t matter.”

“Bro, recursive language models.”

And sure—some of that is real engineering. Miessler’s “the loop is too small” is a fair provocation: shallow tool-call loops do cap what an agent can do. Recursive Language Models are also legitimately interesting — an inference-time pattern for handling inputs far beyond a model’s native context window by treating the prompt as an “environment” you can inspect and process recursively.

But here’s the problem: a growing chunk of the discourse is no longer about solving problems. It’s about reenacting our folk theories of “thinking” in public—and calling it progress.

If you squint, you can already see the likely destination: not AGI. AHI – Artificial Humanoid Intelligence: the mediocre mess multiplied. A swarm of synthetic coworkers reproducing our worst habits at scale—overconfident, under-specified, distractible, endlessly “reflecting” instead of shipping. Not because the models are evil. Because we keep using human-like cognition as the spec, rather than outcomes.

And to be clear: “more human” is not the same as “more useful.” A forklift doesn’t get better by developing feelings about pallets.

The obsession with “agent-ness” is becoming a hobby

Memory. Context. Loop size. Rules. Reflection. Recursion.

These are not products. They’re ingredients. And we’ve fallen in love with the ingredients because they’re measurable, discussable, and tweetable.

They also create an infinite runway for bike-shedding. If the agent fails, the diagnosis is always the same: “needs more context,” “needs better memory,” “needs a bigger loop.”

Convenient — because it turns every failure into an invitation to build a bigger “mind,” instead of asking the humiliating question:

What problem are we actually solving?

A lot of agent builders are inventing solutions independent of problems: designing elaborate cognitive scaffolds for tasks that were never constrained, never modeled, never decomposed, and never given domain primitives.

It’s like trying to build a universal robot hand to butter toast.

Our working hypothesis: Utilligence beats AGI

At Apes on fire, we’re not allergic to big ideas. We’re just allergic to confusing vibes with value.

Our bet is Utilitarian Intelligence — Utilligence — the unsexy kind of “smart” that actually works: systems that reliably transform inputs into outcomes inside a constrained problem space. (Yes, we’re aware that naming things is half the job.)

If you want “real agents,” start where software has always started:

Classic systems design. State design. Architecture. Domain-centric applications.

Not “Claude Coworker for Everything.” More like: “The Excel for this.” “The Photoshop for that.” “The Figma for this workflow.”

The future isn’t one mega-agent that roleplays your executive assistant. It’s a fleet of problem-shaped tools that feel inevitable once you use them — because their primitives match the domain they are operating in.

Stop asking the model to be an operating system

LLMs are incredible at what they’re good at: stochastic synthesis, pattern completion, recombination, compression, ideation, drafting, translation across representations.

They are not inherently good at being your cognitive scaffolding. Models are much closer to a processor in the modern technology stack than to an operating system.

So instead of building artificial people, we’re building an exoskeleton for human thinking: a structured environment where the human stays the decider and the model stays the probabilistic engine. The scaffolding lives in the system — state machines, constraints, domain objects, evaluation gates, deterministic renderers, auditability.

In other words: let the model do the fuzzy parts. Let the product do the responsible parts.
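
As a minimal sketch of that division of labor (hypothetical names, not Ape Space’s actual internals):

    def generate_candidates(prompt, model, n=3):
        return [model(prompt) for _ in range(n)]         # stochastic synthesis: the fuzzy part

    def evaluation_gate(candidates, schema_ok, human_pick):
        valid = [c for c in candidates if schema_ok(c)]  # deterministic constraint check
        return human_pick(valid)                         # the human stays the decider

    draft = evaluation_gate(
        generate_candidates("three taglines", model=lambda p: f"idea for: {p}"),
        schema_ok=lambda c: len(c) < 80,
        human_pick=lambda options: options[0],           # stand-in for a real choice UI
    )
    print(draft)

The model proposes; the system disposes.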

If we must learn from humans, let’s learn properly

Here’s the irony: the same crowd racing to build “human-like” agent cognition often has the loosest understanding of human cognition.

Before we try to manufacture artificial selves, maybe we should reread the observers of the human condition. Kahneman’s Thinking, Fast and Slow is still a brutal reminder that “how we think” is not a very flattering blueprint. We are bias engines with a narrative generator strapped on top. Is that what we want an artificial “problem solver” to mimic?

Maybe not. Maybe the move is not: “let’s copy humans harder.” Maybe the move is: define the problem first, then build the machine that solves it. 

Because “more of us” isn’t automatically the solution. Sometimes it’s just more of the problem. So instead of Artificial Humanoid Intelligence, let’s work on Utilligence: intelligence with a job description.

We are live!

Ape Space is now live and in public beta.

We’ve been building this quietly for a while — nights, weekends, whiteboards full of crossed-out ideas — and today we’re opening the doors. Ape Space is a co-cognitive system for creative and strategic thinking in the AI age. It doesn’t think for you. It thinks with you.

We’re creatives who became coders, and coders who couldn’t let go of creativity. Somewhere along the way, we realized that most AI tools optimize for speed and output — but ignore the hardest part: thinking well. Ape Space exists to change that. We’re engineering creativity with intention: structuring context, running multiple thinking strategies in parallel, and creating a dynamic workspace — the Whitespace — where ideas can be explored, framed, and sharpened without collapsing into generic slop.

This is not an autopilot. It’s not a prompt vending machine. It’s a system designed to accelerate human thinking to machine speed — while keeping taste, judgment, and direction firmly in human hands.

This is a public beta. Things will break. Edges are rough. And that’s exactly the point.

If you think for a living — as a designer, writer, strategist, founder, or builder — we’d love for you to try Ape Space. Use it for something that actually matters to you. Push it. Bend it. Tell us where it surprises you — and where it doesn’t.

There’s much more coming, and our backlog is not very patient. But today, we’re live — and we’re excited to start building this with you.

Welcome to Ape Space.

Forge

PUBLIC BETA COMING SOON

Forge is where you take your ideas from spark to impact – providing all the tools you need to run interactive, AI-powered brainstorming and breakthrough innovation sessions.

Rapid innovation and brainstorming

Lightning-fast ideation cycles that transform scattered thoughts into structured innovation frameworks.

Graph based idea management

Visualize connections between concepts with intuitive knowledge graphs that reveal hidden insights.

Contexts to add depth

Rich contextual layers that bring nuance and specificity to every creative exploration.

The tech inside the spark

We are building the platforms to work with whatever intelligence comes next


Where Innovation Takes Flight

Discover our big-picture outlook and see how Apes on fire is reshaping creative possibilities.