The Offloaded Mind

· 9 min read ·
Engineering

A few months ago I was reading Mustafa Suleyman’s The Coming Wave. He talks about what he calls ACI, artificial capable intelligence, and I’ve been thinking a lot about that lately. As AI advances, and as we keep offloading more and more tasks to these capable systems, I wonder what humanity looks like five or ten years from now.

I am not mainly talking about jobs, or the market, or whether some roles disappear and others appear. Humanity is dynamic. We usually find a way to correct and adapt. What worries me more is something else: I do not think we have ever in history, at least not to this degree, given ourselves this much mental freedom by offloading so many cognitive tasks.

I am dictating this piece through Wispr Flow. I sit at the PC, talk into a microphone, and the text appears. It is useful. It means I do not have to worry about every single word or character while I am still thinking. I can think more freely and just talk. That is great. It is also part of what worries me.

Learning like an interpreter

[Illustration: a person reading code line by line, finger tracing down a page]

For a long time, no matter whether we were juniors or seniors, there was a path we had to take to understand the things we were doing. As we understood them, new paths opened. If you hit a roadblock, you usually had two options. Either a more senior colleague helped unblock you, even if you did not fully understand yet, or you had to stop, learn something, and then move on.

In that sense, we almost had to work like an interpreter. You move forward step by step. If the interpreter cannot understand a line, it breaks. And for a long time our growth worked a bit like that too. When you did not understand something deeply enough, the work pushed back.

That did not mean everyone learned perfectly. Of course not. People have always worked around gaps. But the gaps were harder to hide. You had to come into contact with the thing itself.
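The interpreter analogy can be made literal with a small sketch. This is purely illustrative, not anything from the original essay: a toy program is executed line by line, and the first line that references something undefined stops everything after it, just like a gap in understanding used to stop us.

```python
# Illustrative sketch of the interpreter analogy: execution proceeds
# line by line and halts at the first thing it cannot resolve.
program = [
    "x = 1",
    "y = x + 1",
    "z = y + undefined_name",  # the gap in understanding
    "w = z * 2",               # never reached
]

namespace = {}
for line in program:
    try:
        exec(line, namespace)
    except NameError as err:
        print(f"stopped at {line!r}: {err}")
        break
```

After the loop, `x` and `y` exist in `namespace`, but `z` and `w` never do: the work pushed back at the exact line it could not make sense of.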

What changed on my side

[Illustration: two hands hovering above a keyboard without touching it]

I am a programmer. I am a tech lead. I have been in this industry for a long time. And I have not written a single meaningful line of code by hand in months. Maybe longer. Maybe close to a year.

Yes, I still work. I write specifications. I think about the problem. I think about the architecture. I think about the solution and how it lands in a system that already has history. Then I offload a lot of the implementation to these tools.

Of course that is possible only because of the experience I gained over ten or fifteen years or more. I have done this work many times before. I know what to look for. I know which questions to ask. But still, it feels a little scary.

There is something strange about realizing that a lot of the hard cognitive work I used to do myself is now being handed to a machine. All of that is great. All of that is useful. But it still leaves me with the question: what does that do to us over time?

Reading, writing, and surface-level thinking

[Illustration: a person skimming the surface of water with their fingertips]

People now automate writing emails. People automate writing articles. People summarize books. People summarize papers. Maybe people will summarize this article too because they will not have the patience to read it fully.

So then the question becomes: what happens to our writing abilities? What happens to our reading abilities? What happens to patience? What happens to the ability to stay with something instead of reducing it immediately to the shortest possible version?

Of course, a lot of this can also be useful. If a tool gives you quick access to the one point you care about, that can help. It can save time. It can make learning more accessible in some cases. I am not denying that.

But I do not know. A lot of this feels like a vicious circle. If we stop writing, if we stop reading fully, if we keep training ourselves on summaries and surface-level information, do we also start thinking that way? Do we train ourselves, as a species, to stay at the surface?

Humanity already likes absolute thinking. We already like simple binaries. Good and bad. Left and right. Enemy and ally. Everything is simplified, everything is flattened, everything wants closure. If capable systems reinforce that by making shallow understanding even easier, where does that end?

Sometimes I think about Idiocracy, not as a literal prediction, but as a question. What happens when the shallow path becomes the easiest path, the fastest path, and eventually the rewarded path?

What happens when nothing new gets made

[Illustration: a bucket lowered into a dry well]

Another thing I keep wondering about is the systems themselves.

If these models need fresh data to improve, what happens if fewer people keep producing new work? What happens if people stop writing open source software in public and start closing more of it because the bargain has changed? What happens if more and more of the internet becomes recycled knowledge, synthetic output, and repeated summaries of old material?

Then the question becomes: who builds better systems? Who pushes things forward? Who actually adds something new instead of only recombining what already exists?

I am not claiming I know the answer. I am saying it feels like a question worth sitting with.

Curiosity is not the main problem

[Illustration: a wide-open eye looking through a magnifying glass]

I do not actually think the main problem is human curiosity. I think plenty of people still want to learn. I do. I still want to understand things on a deeper level. I read philosophy. I read Stoicism. I like going deep. I do not think the human desire to learn has disappeared.

The issue, to me, is pressure from the other direction.

The pressure from the other direction

[Illustration: a person being squeezed between two large clock hands]

If we train each other that writing code or solving complex problems is now just a prompt or two, maybe sent to ten different models at the same time, and if organizations start expecting an order of magnitude more output per day than before, then where does the time for understanding come from?

How do people ship so much stuff and still remain knowledgeable about what they are doing? How do you generate so much code, or product, or output, and still have time to understand the consequences of it?

There is also an ownership problem here. People on the internet like to ask who is responsible when generated code fails. To me the answer is simple. The person who ships it is responsible. If you use AI, you are still responsible for both the good and the bad. That makes deep understanding even more important, not less important.

Because if AI becomes the norm, and if that speed becomes the expectation, then you end up with massive pressure on programmers and other professionals. You have to produce more, but you also have to understand more. And the people setting those expectations will often be the same people who do not understand how long it takes to really understand a complex system in the first place.

Juniors, axioms, and the hard part

There is a huge discussion right now about junior employees. The market is rough for them, obviously. But I think the deeper issue is older than that.

A junior person in any field still has to learn the axioms of the profession. Programming, law, accounting, math, whatever it is. Of course a junior programmer today will have a very different path than we did in the 1990s or 2000s. That is not the problem.

The real question is whether they will really understand the consequences of their code, or of their product, and how much they will be able to reason about it without constantly falling back to AI.

There is nothing wrong with AI. AI can be very accurate. But if you do not have someone in the room who really understands the underlying thing, who is there to confirm what is true and what is false? If nobody understands the axioms, and everything becomes prediction on top of prediction, then what is the difference between a person with a formal title and anyone else who can prompt well?

That is one of the hardest questions in this whole thing for me.

The open edge

[Illustration: a figure standing at an open doorway looking out into empty space]

I do not know what humanity looks like in five or ten years. I do not think anybody really knows. That is what makes all of this both exciting and intimidating.

What I worry about is not only whether we lose jobs or whether the market changes. I worry about what happens if we normalize a way of working where output stays high, but understanding gets thinner. I worry about what happens if companies keep taking the speed but do not create the time for people to actually understand what they are building.

Maybe some kind of self-correction happens. Maybe expectations and reality collide hard enough that a new balance gets forced on us. Maybe companies push too far, ask for too much output, put understanding on the bottom shelf, and then reality pushes back through failures, bad decisions, lost clients, broken trust, and errors that are simply too expensive to keep making.

In simple words, maybe over time we discover that moving faster and faster eventually becomes too fast. Not because people suddenly become wiser, but because systems break, customers leave, and businesses learn the hard way that output without understanding has a cost.

To me that is the open edge of the problem. The tools are not going away. The question is whether we build workplaces, expectations, and habits that still leave room for depth, or whether depth becomes something you are expected to pursue after hours, on your own, like a hobby.

I also think we owe something to ourselves here. As leaders, as individuals, no matter where we are, we owe ourselves the discipline to stay pragmatic when it comes to knowledge. We still need to read books. We need to resist the urge to always have things summarized and handed to us. We still need to think, to argue, to discuss, and to evolve our thinking beyond the surface.

In simple words, almost like a junkie has to refuse another shot, we sometimes have to refuse the temptation to offload everything just because we can. These tools are amazing. That is exactly why the temptation is so strong.

No matter what the future brings, I still believe the people who matter most will be the ones who stay curious. The ones who are still willing to run the marathon, finish the book, and learn how the things they build actually work.

I hope we choose the first path, the one that leaves room for depth. I really do. I am just not sure we will.
