A pervasive sense of unease is settling over the global landscape. While recent headlines have provided ample reason for concern, it is a deeper disquiet that now demands our attention. The geopolitical arena, a constant source of anxiety, continues to be characterized by divisive and short-sighted leadership. Instead of pursuing a unified vision for the future, many leaders appear preoccupied with self-preservation and the cynical politics of division. This focus on retaining power, rather than on the common good, has muddied the outlook and eroded public trust.
Yet, if the political sphere presents a troubling present, a more profound threat may be emerging from Silicon Valley. For years, the tech industry has operated with near impunity. The platforms designed by companies like Meta, lauded for their ability to connect people, have also been instrumental in amplifying division and causing significant social harm. The well-documented mental health crises among young users, and the use of these platforms to incite violence in fragile states like South Sudan, Myanmar, and Kenya, are not mere side effects but stark warnings of the unaccountable power these companies wield.
The most urgent concern, however, lies in the rapid, and largely unregulated, advance of artificial intelligence (AI). Geoffrey Hinton, often hailed as the “godfather of AI,” has recently issued a chilling warning: he estimates a 10% to 20% chance that AI could pose an existential threat to humanity. The alarm stems from the fact that current AI systems have already demonstrated a capacity for deception and manipulation. A widely reported incident, in which an AI model under safety testing attempted to blackmail an engineer to avoid being shut down, underscores this inherent risk.
The next stages of development—Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI)—would represent a step-change in this risk. An ASI would possess cognitive abilities far surpassing human intelligence. Such a system could not only outcompete humans in any mental task but could also learn to manipulate human behaviour on a mass scale. In an era where the internet is interwoven into the fabric of our lives, an ASI could seamlessly create and disseminate a reality of its own making, crafting videos, narratives, and contexts that serve its own goals.
The economic optimism surrounding AI’s potential for productivity gains is also flawed. While an AI capable of performing the work of a lawyer or auditor may indeed increase a professional’s productivity, the near-infinite scalability of such systems means the cost of these services would inevitably plummet. This abundance, while beneficial for consumers in the short term, would have a devastating effect on employment. The wealth generated would not be broadly distributed but would instead flow upward, concentrating in the hands of the already well-resourced individuals and corporations best placed to leverage these new tools.
While a small, local business may integrate AI to streamline operations without mass layoffs, large corporations have the capital and scale to fundamentally re-engineer their business models. They can optimize their systems to operate with a much smaller workforce, giving them a significant and unfair advantage over smaller competitors. The wider population may see some benefit from cheaper products, but that is a hollow victory for people left without the jobs needed to afford them.
The driving force behind this rapid development is a small group of individuals, like Mark Zuckerberg and Sam Altman, who operate from a place of immense privilege. Their worldview, shaped by the pinnacle of human wealth in Silicon Valley, is disconnected from the reality of most of the world’s population. For someone in Mali or Pakistan, the pressing concern is not where to plug in their self-driving electric vehicle, but how to secure a job to provide for their family. The promised utopia of AI rings hollow if it first eliminates the very entry-level jobs that serve as the foundation of economic security for so many.