Authored by Marcus König, with AI as a helpful editor in the process.
Published: Mar 12, 2025
Technology has always been a force of change. The internet, personal computing and artificial intelligence were built on promises of empowerment, freedom and progress. Yet today, these tools are concentrated in the hands of a few powerful companies and states, raising an urgent question: Who gets to decide the trajectory of technological progress, and whose interests does it serve?
From Utopian Ideals to the Reality of Digital Power
The early vision of the tech industry was built on radical optimism. In its early days, computing was seen as a means of personal empowerment, decentralization and liberation from bureaucratic control. Innovators sought to disrupt rigid institutions, create open systems and give individuals more control over information and creativity.
But something changed. What started as a movement for individual empowerment turned into an industry of platform monopolies, data extraction and algorithmic control. The ideology of disruption - once framed as an attack on entrenched power structures - has instead led to a system where a handful of corporations control the flow of information, the digital economy and even the infrastructure of public discourse. The libertarian ideals of the tech world have morphed into a justification for unchecked corporate power.
This shift matters because technology is no longer just a tool - it is the architecture of modern life. And those who control its design, implementation and rules effectively shape the future. If left unchecked, the unintended consequence of tech’s utopian origins is a world where a select few define what is possible, permissible and profitable.
The Race for AI Leadership: Innovation vs. Ethics
Nowhere is this dynamic clearer than in the race to define the future of artificial intelligence. AI is poised to become the most transformative technology of the 21st century, reshaping industries, labor markets, governance and even creative expression. But who sets the rules for AI? Who ensures it serves society rather than exploits it?
The answer is complicated. Many AI researchers see regulation as a necessary safeguard - protecting data privacy, mitigating bias and preventing uncontrolled weaponization of the technology. However, among the tech-savvy public in Europe, there's a widespread belief that regulation inherently stifles innovation and that Europe is already falling behind the U.S. and China in AI leadership. This assumption - that regulation and technological progress are at odds - is misleading.
The real power in AI will belong to those who lead in research, investment and deployment. Those who build the most advanced AI systems will define the standards, influence regulation and set global norms. Ethical AI will not come from passively observing or reacting to developments elsewhere - it will come from being at the forefront of innovation and shaping the direction of AI while embedding ethical principles into its foundation.
If Europe, for example, resigns itself to a regulatory role rather than a leadership role in AI, it risks becoming a consumer of technology rather than a creator of it. Ethical leadership requires both technological capability and the influence to dictate how that technology is used. Power in AI governance won’t just come from writing policy - it will come from driving breakthroughs, attracting talent and setting the economic incentives for ethical AI development.
The Fragility of Progress
History has shown that even the most advanced civilizations can collapse when they become too reliant on fragile systems. The Late Bronze Age, for example, was an era of sophisticated trade networks, cultural exchange and economic interdependence among the great civilizations of the Mediterranean. But when these systems became overstretched - facing environmental pressures, invasions and economic instability - they collapsed in rapid succession. Societies that once seemed unshakable suddenly found themselves unable to adapt.
This historical moment is eerily relevant today. The modern digital economy is more interconnected than any civilization before it, yet it is also increasingly vulnerable. Over-reliance on a handful of technology platforms, economic concentration in a few dominant players, and geopolitical tensions in semiconductor and AI research create an ecosystem that is both powerful and fragile. If we do not consciously shape the rules of technological progress, we risk allowing these structures to dictate terms in ways that could be catastrophic when external shocks - economic downturns, cyber threats or AI failures - inevitably occur.
Who Will Define Technology?
The future of technology will not be determined by fate. It will be shaped by the decisions we make today: who controls AI, who sets the ethical guardrails, and who dictates the economic incentives for innovation.
If we want AI to serve humanity rather than control it, we must lead - not just in governance, but in research, talent development and economic power. Ethical AI leadership is not about slowing progress; it is about ensuring that progress aligns with long-term human values.
This is not a European challenge, an American challenge, or a Chinese challenge - it is a human challenge. The question is no longer whether AI will define the future, but rather, who will define AI? The answer to that question will shape the next century.