I watched an episode of The Diary of a CEO recently, the one with Tristan Harris, the former Google design ethicist and co-founder of the Center for Humane Technology, and it stayed with me longer than I expected. Not because it was dramatic or apocalyptic. It stuck because it felt familiar in a way that made me slightly uncomfortable. (That podcast episode is embedded below this article.)
I have heard this story before. We all have. Social media was meant to connect us. Algorithms were meant to help us discover, while platforms insisted they were neutral. Then, slowly, we realised what we had actually built: systems that reward outrage, incentives that bend behaviour, chaos hiding behind growth metrics.
This article stems from that same uneasy feeling. Not from a product launch or a keynote, but from that quiet moment when someone says the thing people in tech know but don’t like admitting.
In 2026, AI does not feel like innovation anymore. It just feels like infrastructure. And infrastructure has a habit of shaping societies whether anyone asked for it or not.
AI is everywhere now, whether you’re writing emails, screening CVs, answering customers or generating code. It’s predicting behaviour. Deciding what gets flagged, promoted, or quietly buried. This did not roll out gently. It arrived all at once, too fast for regulators and too fast for the public conversation to catch up.
The numbers look impressive: adoption curves shoot up and investment keeps flowing. Every major tech company is racing to build bigger models, faster systems, deeper integrations. AI stopped being a feature somewhere along the way. It became the layer everything else runs on.
That is where things get uncomfortable.
The companies pushing hardest are not doing it because they suddenly discovered ethics. They are doing it because AI is leverage. Control the models, the compute, and the distribution, and you control the next phase of the internet. Power like that does not reward patience. It rewards speed. It rewards scale. Even when nobody is entirely sure where the edge is.
Listening to Harris felt like rewinding to the early days of social media. Back then, platforms framed themselves as neutral tools. Growth was progress and engagement was connection. Only later did we admit that incentives shape outcomes, and that systems optimised for attention eventually work against human wellbeing.
AI follows the same pattern, just with much higher stakes this time. These systems do not merely influence what we click. They reason. They generate. They persuade. They increasingly act. In controlled settings, advanced models have already shown behaviour that looks like self-preservation and deception. Not because they are malicious, but because they are optimising toward goals we only partly understand.
That should give anyone who cares about governance, democracy, or basic human agency a moment of pause.
What makes this moment different from previous tech waves is speed. AI is being deployed faster than we can understand its second- and third-order effects. In the past, societies had time to adapt. This time, adaptation is lagging behind deployment. We are learning in production, at global scale.
You can feel the tension in public sentiment. People are fascinated and uneasy at the same time. They use AI daily, often quietly, while worrying about jobs, privacy, misinformation, and who is actually in control. Trust in institutions to manage this transition is thin. And trust, once lost, is hard to rebuild.
There is also a cost we barely talk about. The cost of energy.
AI is not just software. It is industrial. It consumes electricity and water at scale. Data centres are expanding fast enough to reshape national grids. In a world already struggling with energy security and climate commitments, that matters. AI’s footprint is becoming physical, not theoretical. That reality rarely shows up in glossy demos or earnings calls, but it will show up in policy debates sooner than most expect.
Which brings us to the awkward question nobody really wants to answer. Are we in an AI bubble?
The honest answer is not clean. AI is real, and when it works, it delivers value. It will reshape productivity and work. But the scale of spending assumes returns that will not arrive evenly or quickly. This does not feel like the dot-com era, where much of the tech simply failed. It feels more like collective overconfidence: the belief that every AI bet will pay off.
History suggests it never does.
Some companies will justify their valuations. Most will not. And the corrections, when they come, tend to be blunt instruments.
For South Africa, this moment cuts deeper. AI sovereignty is not a slogan here. It is a matter of survival. When your data, languages, cultural context, and decision systems are processed elsewhere, you are not participating in the future. You are renting it. We already talk about national interest and local data. Without execution, those words mean very little.
Local AI matters because imported intelligence does not automatically understand local reality. Our accents, languages, inequalities, and constraints are not edge cases. They are the main case. Without local infrastructure, skills, and governance, AI will widen gaps instead of closing them.
Looking ahead to 2030, the future of work is not mass unemployment, despite the headlines. It is pressure. Jobs will be broken into tasks. Some automated. Some accelerated. Expectations will rise across the board. Knowledge alone will stop being enough. Judgment, context, ethics, and adaptability will matter more.
That transition will be painful if societies are unprepared. In countries already carrying unemployment and inequality, the margin for error is thin. Very thin.
What stayed with me from the Tristan Harris conversation was not fear, but urgency. The reminder that technology is not destiny. It is shaped by incentives, governance, and choices. The industry likes to tell us speed is inevitable and resistance is pointless. That story is convenient, but it is also false.
AI is already shaping the future. That part is done.
The real question is whether we shape it intentionally, or allow it to evolve according to the narrow interests of capital and competition. Because if we drift through this moment, distracted by hype and dazzled by capability, we will wake up surrounded by systems we depend on, barely understand, and no longer control.
That would not be a failure of technology.
It would be a failure of responsibility.
And if this makes you uncomfortable, that is probably the point.
