Ignite Bold Ideas, Faster

We fuse human ingenuity with AI to unleash limitless creative sparks. Are you ready to set yours on fire?


Your Agent Is Just a Cron Job With a God Complex

2026 has already been dubbed the “Year of the Agent” — and not just by LinkedIn hype posts and X threads. A viral tool called OpenClaw (previously Moltbot/Clawdbot) has been making headlines for autonomously managing digital lives and spawning a full-on AI-only social network called Moltbook, where bots post, debate, and mimic social behavior without humans directly involved. And now, you can even follow the first AI journalists on their own Substack.

Meanwhile, Anthropic’s Claude Code rolled out longer-running session tasks that can coordinate multi-step workflows across time. And in cybersecurity circles, researchers have been dissecting Moltbook’s rapid rise and a major security flaw that exposed agent credentials — raising fresh questions about what “autonomy” really means in practice.

Agents Are Software (And Why “Human” Is a Terrible Default)

Here’s the truth nobody’s selling you: agents are software. Period.

They run code. They follow control flow. They execute policies, read and write state, call tools, emit outputs. There is nothing mystical happening here. But somewhere along the way, we started lying to ourselves.

We stopped saying “software” and started saying “agent.”
We stopped saying “program” and started saying “coworker.”
We stopped saying “automation” and started saying “autonomy.”

And with that shift, we quietly imported a dangerous assumption:

If it acts like a human, it must be better.

Let’s pause right there.

Humans are incredible.
Humans are creative.
Humans are adaptable.

Humans are also:

  • inconsistent
  • emotional
  • biased
  • forgetful
  • reactive
  • non-deterministic
  • sometimes just having a bad day

If we genuinely want agents to “act like humans,” then we don’t just get empathy and creativity — we also inherit bad vibes, erratic behavior, partial understanding, and mistakes.

Not because the software is bad. But because “human” is not an optimization target.

It’s a compromise.

The Hard Problems Are Human

Your “AI agent” is fundamentally a cron job with opinions — a while-loop that can hallucinate. Your agent doesn’t “decide” to do anything meaningful. It follows a probability distribution shaped by training data, system prompts, and temperature settings. When it succeeds, it’s because a human somewhere made good choices about what to optimize for. When it fails, it’s usually because those choices were implicit, unexamined, or wrong.
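To make the “while-loop” claim concrete, here is a minimal sketch of what most agent stacks reduce to. Everything here is illustrative: `llm_choose_action` is a stub standing in for a model call, and `TOOLS` for whatever tool registry a real system would wire up — no specific framework is implied.

```python
# A minimal agent skeleton: a bounded loop over model calls and tool calls.
# `llm_choose_action` and `TOOLS` are hypothetical stand-ins, not a real API.

def llm_choose_action(history):
    """Stand-in for a model call: returns ("tool", name, arg) or ("done", answer)."""
    # A real system would sample from an LLM here; this stub just finishes.
    return ("done", "answer based on " + str(len(history)) + " steps")

TOOLS = {
    "search": lambda q: f"results for {q!r}",
}

def run_agent(goal, max_steps=10):
    history = [("goal", goal)]
    for _ in range(max_steps):          # bounded: no infinite "autonomy"
        action = llm_choose_action(history)
        if action[0] == "done":
            return action[1]            # control flow, state, outputs: software
        _, name, arg = action
        history.append((name, TOOLS[name](arg)))
    return None                         # budget exhausted: escalate to a human
```

Strip away the branding and this loop is the whole “creature”: state in, policy applied, outputs emitted.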

When we build agent systems, the industry loves to obsess over the easy stuff. Which LLM? What vector database? How many tools should it have access to? Should we use LangChain or roll our own framework?

This is intellectual theater. The hard problems aren’t technical — they’re human:

  • Deciding what actually matters
  • Judging quality when there’s no ground truth
  • Choosing between legitimate trade-offs
  • Setting direction when the path isn’t clear

Here’s the uncomfortable truth we discovered by actually running an always-on agent 24/7:

  • You don’t use it.
  • You manage it.
  • You onboard it.
  • You train it.
  • You correct it.
  • You set expectations.
  • You accept blind spots.

That’s not a tool relationship. That’s leadership. And leadership is cognitively expensive.

People already manage:

  • coworkers
  • managers
  • Slack threads
  • Jira tickets
  • family dynamics
  • their own internal chaos

The last thing they want is another quasi-human entity that needs supervision.

The industry calls this progress.

Most users call this work.

Autonomy Sounds Great – Until You Ask ‘For Whom?’

Let’s be precise about autonomy because the word has become meaningless through overuse.

Real autonomy is delegated execution within bounded constraints. It’s your agent retrying a failed job without waking you up at 3 AM. It’s polling a data source, summarizing logs, or surfacing anomalies for human review. The human set the goal. The human defined the boundaries. The software executed within those guardrails.
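The “retrying a failed job without waking you up at 3 AM” example can be sketched as ordinary software with explicit guardrails. This is a sketch, not a prescription: the retry budget, the backoff values, and the `escalate` hook are all illustrative choices a human would set.

```python
import time

def run_with_guardrails(job, max_retries=3, base_delay=1.0, escalate=print):
    """Delegated execution within bounded constraints: the human set the
    retry budget and the escalation path; the software acts only inside them."""
    for attempt in range(1, max_retries + 1):
        try:
            return job()
        except Exception as exc:
            if attempt == max_retries:
                escalate(f"gave up after {attempt} attempts: {exc}")
                raise                    # boundary reached: hand back to a human
            time.sleep(base_delay * 2 ** (attempt - 1))  # exponential backoff
```

Nothing here “decides” anything. It executes a policy a person wrote down — which is exactly what makes it trustworthy.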

Fake autonomy is the absence of human intent dressed up as intelligence. It’s when your system makes choices nobody asked for, optimizes metrics nobody validated, or “decides” based on reasoning nobody can inspect. Fake autonomy isn’t agentic behavior — it’s organizational negligence.

On paper, autonomy sounds incredible:

  • General problem solving
  • Self-directed behavior
  • Minimal human involvement
  • Agents acting “on your behalf”

In practice, the most “autonomous” demos we keep seeing are revealing:

  • “It can sort through 10,000 emails!”
  • “We put 1,000 agents into a social network and watched what happened!”

Really?

That’s the bar?

We already failed at email.
We already failed at social networks.
We already built systems that amplify bias, conflict, and misinformation — with humans in the loop.

So here’s the question nobody wants to answer:

Why would software built in our likeness — with our biases and blind spots — perform better in those same systems?

If anything, it will fail faster.
Autonomy without judgment is just acceleration.
General problem solving without values is just noise.

The Real Black Box 

Here’s where things get subtle: Non-determinism isn’t actually the scary part. Humans are non-deterministic too. The real problem is role ambiguity.

Is this thing:

  • a tool?
  • a coworker?
  • a service?
  • a witness?
  • something that remembers me?
  • something that judges me?

Humans are excellent at social calibration when roles are clear. We’re terrible when they aren’t. That uncanny valley people feel with agents isn’t technical: it’s relational. We didn’t solve human unpredictability with explainability.

We solved it with:

  • social contracts
  • relationship scopes
  • interpersonal rituals
  • bounded responsibility
  • forgiveness

Trust isn’t built by saying “look how smart this is.”

Trust is built by knowing what it will not do.

Stop Worshipping Your Code

We name our agents. We give them personas. We say “the agent thinks” or “the agent wants” or “the agent decided.” This isn’t harmless fun — it’s a cognitive trap.

We are so eager to recreate ourselves in software — before we’ve even agreed that we’re a good reference design.

Maybe the future isn’t:

  • more autonomous agents
  • more generalized problem solvers
  • more human-like behavior

Maybe it’s something quieter, sharper, and more disciplined. Software that:

  • is explicit about its limits
  • is boring in the right ways
  • makes human judgment clearer, not optional
  • optimizes for intent, not imitation

Agents aren’t creatures. They’re tools with loops. Forgetting that is how you worship your own code instead of using it. It’s how you abdicate responsibility for decisions that should have human oversight. It’s how you end up with systems that “surprise” you in production in ways that aren’t surprising at all — they’re just unexamined.

The Boring Future We Need

2026 won’t be the year of the agent. It’ll be the year we finally stop pretending software is sentient and start building systems we can actually understand.

The best “agentic” systems won’t feel agentic at all. They’ll feel obvious. They’ll feel boring — in all the best ways. They’ll feel like what they are: well-designed software that does exactly what it was asked to do, shows its work, and knows when to ask for help.

Everything else is just a cron job with delusions of grandeur.

More Human or More Useful?

The agent discourse is starting to sound like a gym-bro conversation.

“Bro, your loop is too small.”

“Bro, your context window isn’t stacked enough.”

“Bro, add memory. No —  m o r e  memory.”

“Bro, agent rules don’t matter.”

“Bro, recursive language models.”

And sure—some of that is real engineering. Miessler’s “the loop is too small” is a fair provocation: shallow tool-call loops do cap what an agent can do. Recursive Language Models are also legitimately interesting — an inference-time pattern for handling inputs far beyond a model’s native context window by treating the prompt as an “environment” you can inspect and process recursively.

But here’s the problem: a growing chunk of the discourse is no longer about solving problems. It’s about reenacting our folk theories of “thinking” in public—and calling it progress.

If you squint, you can already see the likely destination: not AGI. AHI – Artificial Humanoid Intelligence: the mediocre mess multiplied. A swarm of synthetic coworkers reproducing our worst habits at scale—overconfident, under-specified, distractible, endlessly “reflecting” instead of shipping. Not because the models are evil. Because we keep using human-like cognition as the spec, rather than outcomes.

And to be clear: “more human” is not the same as “more useful.” A forklift doesn’t get better by developing feelings about pallets.

The obsession with “agent-ness” is becoming a hobby

Memory. Context. Loop size. Rules. Reflection. Recursion.

These are not products. They’re ingredients. And we’ve fallen in love with the ingredients because they’re measurable, discussable, and tweetable.

They also create an infinite runway for bike-shedding. If the agent fails, the diagnosis is always the same: “needs more context,” “needs better memory,” “needs a bigger loop.”

Convenient — because it turns every failure into an invitation to build a bigger “mind,” instead of asking the humiliating question:

What problem are we actually solving?

A lot of agent builders are inventing new problems independent of solutions: designing elaborate cognitive scaffolds for tasks that were never constrained, never modeled, never decomposed, and never given domain primitives.

It’s like trying to build a universal robot hand to butter toast.

Our working hypothesis: Utilligence beats AGI

At Apes on fire, we’re not allergic to big ideas. We’re just allergic to confusing vibes with value.

Our bet is Utilitarian Intelligence — Utilligence — the unsexy kind of “smart” that actually works: systems that reliably transform inputs into outcomes inside a constrained problem space. (Yes, we’re aware that naming things is half the job.)

If you want “real agents,” start where software has always started:

Classic systems design. State design. Architecture. Domain-centric applications.

Not “Claude Coworker for Everything” — more like “The Excel for this,” “The Photoshop for that,” “The Figma for this workflow.”

The future isn’t one mega-agent that roleplays your executive assistant. It’s a fleet of problem-shaped tools that feel inevitable once you use them — because their primitives match the domain they are operating in.

Stop asking the model to be an operating system

LLMs are incredible at what they’re good at: stochastic synthesis, pattern completion, recombination, compression, ideation, drafting, translation across representations.

They are not inherently good at being your cognitive scaffolding. Models are much closer to a processor in the modern technology stack than an operating system.

So instead of building artificial people, we’re building an exoskeleton for human thinking: a structured environment where the human stays the decider and the model stays the probabilistic engine. The scaffolding lives in the system — state machines, constraints, domain objects, evaluation gates, deterministic renderers, auditability.
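One way to read “the scaffolding lives in the system” is a deterministic gate wrapped around a probabilistic call: the model drafts, explicit checks accept or reject, and anything that fails the gate is surfaced rather than silently shipped. The sketch below is hypothetical throughout — `draft_with_llm` is a stub for a model call, and the constraints in `evaluation_gate` are placeholder rules, not our actual pipeline.

```python
def draft_with_llm(prompt):
    """Stand-in for a model call: the fuzzy, probabilistic part."""
    return {"headline": "Ignite Bold Ideas, Faster", "words": 4}

def evaluation_gate(draft, max_words=8):
    """Deterministic, inspectable constraints: the responsible part."""
    problems = []
    if not draft.get("headline"):
        problems.append("empty headline")
    if draft.get("words", 0) > max_words:
        problems.append("too long")
    return problems

def pipeline(prompt):
    draft = draft_with_llm(prompt)
    problems = evaluation_gate(draft)
    if problems:
        return ("needs_human_review", problems)   # surface, don't decide
    return ("accepted", draft)
```

The design choice is the point: the gate is boring, auditable code, so every accepted output has a reason you can read.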

In other words: let the model do the fuzzy parts. Let the product do the responsible parts.

If we must learn from humans, let’s learn properly

Here’s the irony: the same crowd racing to build “human-like” agent cognition often has the loosest understanding of human cognition.

Before we try to manufacture artificial selves, maybe we should reread the observers of the human condition. Kahneman’s Thinking, Fast and Slow is still a brutal reminder that “how we think” is not a very flattering blueprint. We are bias engines with a narrative generator strapped on top. Is that what we want an artificial “problem solver” to mimic?

Maybe not. Maybe the move is not: “let’s copy humans harder.” Maybe the move is: define the problem first, then build the machine that solves it. 

Because “more of us” isn’t automatically the solution. Sometimes it’s just more of the problem. So instead of Artificial Humanoid Intelligence, let’s work on Utilligence: intelligence with a job description.

The Co-Dependent Mind: Rethinking The Singularity

A Different Kind Of Creation Myth

Silicon Valley’s favorite bedtime story says that one morning we’ll wake up, pour a coffee, and discover a server farm somewhere “became conscious” overnight. From Ray Kurzweil’s Singularity to Nick Bostrom’s super-intelligence cautionary tales, the plot twist is always the same: machines awaken, humans scramble.

What if the script is backwards? Neuroscientist Antonio Damasio argues consciousness is a dance of feeling and knowing that stitches body, language, and culture together. If that’s true, then large language models – mind-boggling statistical engines with zero embodiment – lack the very ingredients that make experience felt. They can scale inference at light-speed, but they borrow meaning, purpose, and curiosity from the humans who steer them.

Viewed through that lens, the fabled “AGI moment” looks less like a hard software upgrade and more like a gradual handshake: humans extend their cognition into silicon; silicon extends human reach back into the world. Singularity is simply the day we admit the partnership has already begun.

“The singularity isn’t when machines wake up – it’s when we realize the potential of our collaboration.”

1 · Mind-Power vs. Meaning-Power

LLMs such as OpenAI’s o3 family or Anthropic’s Claude 3 can store and remix more text than any human will read in a lifetime, scoring top-tier on leaderboards like LMSYS Chatbot Arena. What they don’t do is care whether a solution matters. Caring – facilitating, prioritizing, discarding – remains a human super-power rooted in emotion and social context (Damasio, 2021).

Dimension | AI Strength | Human Strength
Span (tokens, permutations, inference) | Near-infinite | Bounded
Sense-making (salience, intention) | Borrowed | Native
Agency (goals, values) | Programmatic | Intrinsic
Embodiment (gut feelings, hormones) | None | Constant

The implication for product teams and strategists: breakthroughs happen where machine span meets human meaning. The two are complementary processors, not rivals.

2 · A (Very) Short Dialogue with ChatGPT o3

Human: “If humans supply the ‘why’ in a problem and language models supply the raw ‘how,’ who owns consciousness?”

ChatGPT o3: “I would phrase it like this: humans inhabit consciousness; I approximate fragments of it. Without your questions I idle in mathematical potential – no goals, no felt sense, just suspended probabilities.”

That answer lines up with OpenAI’s own alignment reflections – emphasizing incremental real-world testing, not a single awakening event.

“I idle in mathematical potential – no goals, no felt sense.” — ChatGPT o3

3 · Co-Dependency Over Emergence

Three converging research threads reinforce the symbiosis thesis:

  1. Extended-Mind Theory — First articulated by Clark & Chalmers (1998) and echoed in recent AI-ethics work, it holds that notebooks, smartphones, and now LLMs are literal extensions of cognition, not external aids.
  2. Human-in-the-Loop Alignment — OpenAI and Anthropic both embed RLHF stages precisely because human preference grounds otherwise drifting optimization targets.
  3. Emotion as Computation Constraint — Damasio’s “homeostatic feelings” model suggests decisions arise from bodily value signals; without those, simulation drifts into infinite branches.

Together they hint that the “raw mind-power” of AI still requires a living feedback loop to crystallize anything like purpose.

4 · Tactical Implications for Creative Teams

a) Treat the LLM as Amplifier, Not Author

  • Draft briefs with explicit emotional stakes.
  • Use LLMs to multiply options, then apply human sense-checking for resonance.

b) Encode Intuition Into Prompts

Reference sensory or cultural anchors (“queue-jumping feels like stale coffee in a cold cup”) to feed the model cues it cannot feel.

c) Plan for Choice Architecture

Map out decision gates where humans must pick direction—don’t let the pipeline run to completion on autopilot.
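Mapped-out decision gates can be expressed directly in the pipeline itself, so the run physically cannot proceed until a person picks. A minimal sketch, with hypothetical names throughout (`generate_options` stands in for the model-side step; `choose` is whatever human-facing UI or callback a real product would supply):

```python
def generate_options(brief):
    """Model-side step: multiply options (stubbed here)."""
    return [brief + " / direction A", brief + " / direction B"]

def decision_gate(options, choose):
    """The pipeline stops here: a human callback must pick before work continues."""
    picked = choose(options)
    assert picked in options, "humans pick from the options on the table"
    return picked

def campaign_pipeline(brief, choose):
    options = generate_options(brief)
    direction = decision_gate(options, choose)   # no autopilot past this point
    return "executing: " + direction
```

Making the gate a required argument, rather than a config flag, is the choice architecture: autopilot isn’t something you forget to turn off, it’s something the pipeline can’t do.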

“Choice is the last mile of intelligence.”

5 · Rethinking AGI Metrics

Traditional road-maps chase raw benchmark scores (MMLU, GSM8K). If co-dependency is the reality, better yardsticks are:

Old Metric | New Metric | Why it matters
Raw benchmark score | Human touch-points per output | Measures collaborative density
Raw benchmark score | Resonance score (human panel) | Evaluates emotional impact
Raw latency | Decision latency (time until human commits) | Captures friction in mixed workflows

6 · Three Bold Claims for Future Discussion & Research

  1. Singularity as Recognition Event: What are the chances that the long-awaited “AGI moment” won’t be a machine awakening but a social tipping-point where industry and policy explicitly treat human × AI decision loops as one cognitive system?
  2. Intuition Engineering Will Eclipse Prompt Engineering: Crafting which feelings, stakes, and value signals we feed into models will matter more than syntactic prompt tricks – could this usher in a discipline that merges affective science with system design?
  3. Legal & Creative Credit Will Shift to “Co-Agency” Models: Copyright, liability, even revenue-share contracts will evolve to acknowledge outputs as jointly authored artifacts, forcing new frameworks for ownership and responsibility.

Stay tuned to hear more about those themes from our ongoing research.

Forge

PUBLIC BETA COMING SOON

Forge is where you take your ideas from spark to impact – giving you all the tools to drive interactive, AI-powered brainstorming and breakthrough innovation sessions.

Rapid innovation and brainstorming

Lightning-fast ideation cycles that transform scattered thoughts into structured innovation frameworks.

Graph based idea management

Visualize connections between concepts with intuitive knowledge graphs that reveal hidden insights.

Contexts to add depth

Rich contextual layers that bring nuance and specificity to every creative exploration.

The tech inside the spark

We are building the platforms to work with whatever intelligence comes next


Where Innovation Takes Flight

Discover our big-picture outlook and see how Apes on fire is reshaping creative possibilities.