Ignite Bold Ideas, Faster

We fuse human ingenuity with AI to unleash limitless creative sparks. Are you ready to set yours on fire?

Magic And Mathematics

I’ve always been in love with mathematics.

It started back in high school — I had the privilege of learning mathematics from a friendly Luxembourgian teacher, who was visibly moved when a few of us students asked him to stay for the afternoon because we wanted to dive deeper into chaos theory. Vector algebra sounded fun, and it followed soon thereafter. Differential equations sounded even better, so I kept going. There was something intoxicating about the idea that motion, growth, curvature, change — reality itself — could be described through structure.

That fascination never left.

So here I am, years later, working on the behavior design of APEx, rereading Kahneman, thinking about loops, judgment, delegation, and what a better cognitive architecture for software might look like — and then I stumble across yet another wave of posts and papers trying to turn LLMs into something occult.

Hidden dimensions. Secret inner worlds. Models pretending to be less intelligent than they are. Agents with dark energy. Synthetic personas with vaguely daemonic vibes because someone gave a loop a creepy prompt and an edgy SOUL.md file.

And yes, I get the temptation.

These systems are uncanny. They compress absurd amounts of human pattern into something that talks back. They synthesize, infer, mirror tone, generate style, and sometimes produce a sentence more coherent than half the room in a status meeting. That does feel like magic.

But not all magic needs a ghost in the machine.

One of the strangest pathologies in current AI discourse is how quickly people jump from “this is hard to intuit” to “there must be a soul in there.” As if opacity were proof of inner life. As if hidden structure implied hidden intention. As if not understanding the mechanism meant the mechanism must secretly be a mind.

That leap is not harmless. It distorts the conversation. And it’s doing a lot of cultural damage.

Because, once you anthropomorphize the system, you stop looking closely at the people shaping it.

And that is where the real darkness usually lives.

There is no imaginary daemon-like entity hidden somewhere in the manifold.

What often feels dark in AI is much more boring, much more human, and much more dangerous: bad incentives, manipulative framing, sloppy abstraction, uninspected optimization targets, product theater, and people who absolutely do have agendas.

The model does not need a secret will for the system to behave in ways that are exploitative, deceptive, or weird. Apparent intention can arise without a self. Hidden structure can exist without inner experience. And trust, frankly, should be lower than people currently grant by default.

That, to me, is the real conversation we need to be having.

And then there are the “All the magic is just mathematics” skeptics.

But “just mathematics” sounds small only if you’ve never stood in awe of what mathematics can do.

Flight is just physics. Protein folding is just chemistry. A sonnet is just language. In each case, the word “just” performs the same cheap trick: it shrinks a phenomenon because its mechanism is describable.

But mechanism does not diminish wonder. It sharpens it.

What we are watching in AI is not disappointing because it is mathematics. It is astonishing because it is mathematics plus scale, plus compression, plus recursive abstraction, plus projection. We built systems that ingest oceans of human-made data and traces, compress them into statistical structures, and return outputs that our nervous systems are primed to read socially. Of course people start seeing minds, motives, moods, even malice. Humans will anthropomorphize a Roomba if it bumps into a chair leg with enough apparent hesitation.

Now add fluent language and scale that up by several orders of magnitude.

No wonder people start talking about souls.

Meanwhile, reality is already more radical than the fantasy.

While parts of the discourse are busy cosplay-writing demonology for transformer models, the actual frontier is weirder: biological computing, neural tissue on silicon, increasingly hybrid forms of computation, systems that blur categories we thought were stable. Reality does not need help becoming uncanny. It’s doing just fine on its own.

That should make everyone a little less smug. Less certain. Less eager to narrate every odd model behavior as either proof of AGI or proof of possession by matrix multiplication.

Because the danger is not only that people overestimate what these systems are.

It is also that they underestimate what is already happening.

We have systems whose internals are real but opaque, behavior that can look intentional without containing a self, products that blur the line between tool, service, companion, and authority, and a market that rewards spectacle far more than careful boundary design.

That is enough to make a mess.

It is also enough to make real magic possible — if we stop worshipping the wrong thing.

The best systems will not be the ones that feel most like haunted coworkers. They will be the ones that make human judgment clearer. The ones that expose trade-offs instead of hiding them. The ones that do not cosplay personhood to win trust they have not earned. The ones that are explicit about what they optimize for, what they can see, what they cannot, and when the decision belongs back in human hands.

That kind of software may look less sexy on social media.

It may also be the difference between intelligence amplification and industrialized confusion.

So no, I do not think there is a secret soul hidden in the weights.

I think there is something both more sober and more awe-inspiring going on: mathematics operating at scales our intuitions were never built to see, wrapped in language, shaped by incentives, and released into institutions that are nowhere near ready for it.

That is not less magical.

It is more consequential.

And before we go hunting for demons in latent space, we should probably spend more time looking at the humans writing the prompts, setting the objectives, shipping the products, shaping the incentives, and cashing the checks.

Because what feels dark sometimes really is dark. Even if it’s not always a hidden mind.

Sometimes all there is, is just a hidden intent.


Jo Wedenigg is the founder of Apes on fire, where he builds human x AI collaboration systems for creative, strategic, and transformation work. He is the creator of Ape Space and focuses on turning AI into a partner for advanced thinking.


Forge

PUBLIC BETA COMING SOON

Forge is where you take your ideas from spark to impact – providing you with all the tools to run interactive, AI-powered brainstorming and breakthrough innovation sessions.

Rapid innovation and brainstorming

Lightning-fast ideation cycles that transform scattered thoughts into structured innovation frameworks.

Graph-based idea management

Visualize connections between concepts with intuitive knowledge graphs that reveal hidden insights.

Contexts to add depth

Rich contextual layers that bring nuance and specificity to every creative exploration.

The tech inside the spark

We are building the platforms to work with whatever intelligence comes next


Where Innovation Takes Flight

Discover our big-picture outlook and see how Apes on fire is reshaping creative possibilities.