If Data is the New Oil, Who Benefits from its Extraction?

1 week ago | 7 min read
  • The Sovereignty-Internationalism Paradox
  • AI in Migration Management and State Control
  • Digital Colonialism and Algorithmic Bias
  • Big Tech as a new Actor in International Relations
  • Geopolitical Competition and Competing Governance Models
  • Way Forward

As Artificial Intelligence becomes intertwined with contemporary governance, studying its impact on political decision-making becomes indispensable. While the adoption of AI can be seen as an economic benefit, there exists another side to the same coin. What happens when complex lives are condensed into narrow checkboxes? The contemporary use of AI in migration governance poses a serious challenge by reducing political subjects to data clusters. While this belittling of migrants' lives and experiences raises questions of ethics and morality in itself, the established fact that algorithmic systems are biased in nature only adds fuel to the fire. Moreover, the proliferation of AI is fundamentally reshaping the dynamics of global governance. States today must strike a balance between asserting sovereign control over algorithmic systems and engaging in transnational collaboration to regulate these borderless technologies. Reinforcing historical power imbalances, such systems are now described as "digital colonialists," with data extraction and biased algorithms perpetuating the dominance of the Global North and pushing vulnerable groups deeper down the rabbit hole. Concurrently, the rise of Big Tech giants like Google, Amazon and Facebook reflects new forms of private authority: states depend on Big Tech's infrastructure for critical functions while also seeking to regulate its power through antitrust and privacy legislation. Summits like the AI Impact Summit and the Digital Inclusion Summit therefore provide crucial space for discussing these questions. Algorithmic systems are not objective; they are built on the subjective designs of those who code them and on the datasets they are trained on. Because machines are only as intelligent as the humans who code them, they amplify societal biases and vulnerabilities.
This creates significant human rights risks, as biased algorithmic systems gain the power to regulate and make decisions about the lives of marginalised groups through opaque and automated decision-making processes. It is therefore critical to realize that the need has long moved beyond having a "human in the loop" to having a "human in command."

The Sovereignty-Internationalism Paradox

Algorithmic systems have challenged the deeply ingrained Westphalian principle of territorial sovereignty. States today face the challenge of establishing dominance and strengthening their position in the international arena by asserting sovereign control over this newfound domain of power. At the same time, they must collaborate across national boundaries, because unlike earlier domains of power organized around territory, AI systems operate beyond the confines of traditional jurisdictions, exposing gaps in accountability and legitimacy. In fact, AI can become a catalyst for internationalism by necessitating shared cooperation to manage the risks posed by opacity in decision-making, monopolistic control over data by a few tech giants, and the need for ethical AI standards. AI was thus born with the torch of multilateral governance in its hand, an extension of Keohane and Nye's conception of "complex interdependence."

AI in Migration Management and State Control

Algorithmic systems have given rise to a new form of "politics of invisibility." Such systems are increasingly used to render populations visible for the purposes of political discipline, predictive policing, control and governance. Following a Foucauldian framework, visibility has been weaponized as a resource for governance: it is one of the most powerful tools in the hands of the state to monitor, categorize and control migrant populations at unprecedented scale and with unprecedented efficiency. This is well documented in the experience of Uyghur Muslims in Xinjiang, China, where AI-driven facial recognition and surveillance systems are deployed to monitor marginalized communities. However, the adoption of such technologies is often driven by institutional pressures. Drawing on New Institutional Theory, states adopt similar algorithmic systems for largely three reasons:

  • Coercive pressure from a stronger party (powerful states or international organizations) to abide by certain global standards of regulation.

  • Mimetic pressure to imitate already successful models in order to gain legitimacy and a stronger footing in the international arena.

  • Normative pressure, accepting expert recommendations as professional standards.

Digital Colonialism and Algorithmic Bias

"AI can be seen as a tool of imperialism derived from colonial philosophies." Most AI systems today are products of the Global North, a dynamic that continues to perpetuate structures of power, subjugation and weakness. A key phenomenon operating in parallel is data colonialism, in which data, now analogous to 'oil', becomes a new form of resource extraction from the Global South. Tech giants concentrated in the North extract this 'new oil' without consent or adequate compensation, training algorithms and generating profits that remain walled within Silicon Valley. This Western-centric bias is visible in the prevalent GPT models, which are trained predominantly on English-language and Western-centric sources. A glaring example of algorithmic prejudice is the misrecognition of darker-skinned individuals by facial recognition systems, a spectrum on which dark-skinned Asian women fall at the most disadvantaged end, or instances where recruitment algorithms have downgraded applications from women before they ever reach a human being. Ruha Benjamin identifies this as the "New Jim Code," where developers "encode judgments into technical systems but claim that the racist results of their designs are entirely exterior to the encoding process."
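The disparity described above is exactly what algorithmic audits quantify: error rates broken down by demographic group rather than averaged away. A minimal sketch of such an audit, using entirely made-up toy records (the group names and numbers below are illustrative assumptions, not real study data):

```python
# Toy illustration (invented data): auditing a classifier's error rate
# per demographic group, in the spirit of facial-recognition audits.
from collections import defaultdict

# Hypothetical audit records: (group, true_label, predicted_label)
records = [
    ("lighter-skinned men", 1, 1), ("lighter-skinned men", 0, 0),
    ("lighter-skinned men", 1, 1), ("lighter-skinned men", 0, 0),
    ("darker-skinned women", 1, 0), ("darker-skinned women", 0, 1),
    ("darker-skinned women", 1, 1), ("darker-skinned women", 1, 0),
]

def error_rates(records):
    """Return the misclassification rate for each group."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        if truth != prediction:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

rates = error_rates(records)
# On this toy data the disparity is stark:
# lighter-skinned men -> 0.0, darker-skinned women -> 0.75
```

An aggregate accuracy figure would hide this gap entirely; disaggregating by group is what makes the bias visible, which is why audits of this shape underpin findings like those cited above.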

Big Tech as a new Actor in International Relations

Today, Big Tech corporations have emerged as new forms of private authority. Their control over public data and information directly implicates them in international politics. Their relationship with the state can be described through three dynamics:

  • Interdependence: States rely on corporate algorithms to delegate governance. This manifests in US intelligence agencies sharing data with Google and Facebook, and in Amazon partnering with over 2,000 US law enforcement agencies.

  • Circumvention: Tech giants, however, do not blindly work for the state; they actively push back, committing to ideas around human rights and corporate ethics. Prime examples include Apple's refusal to unlock the San Bernardino shooter's iPhone for the FBI, Facebook's indefinite suspension of a sitting US President, Donald Trump, and Microsoft's call for a "Digital Geneva Convention" positioning tech firms as a neutral "Digital Switzerland."

  • Curtailment: States also try to regulate these new power holders through measures like the EU's General Data Protection Regulation (GDPR), the US Department of Justice's antitrust suits against Google and Facebook, and Germany's Network Enforcement Act (NetzDG), which makes companies liable for illegal speech.

Shoshana Zuboff describes this dynamic as a "fast-growing abyss between what we know and what is known about us." Indeed, the unprecedented power in the hands of Big Tech raises critical questions of legitimacy, because these firms today govern billions of users who have neither consented to their rule nor grasped the extent of their influence.

Geopolitical Competition and Competing Governance Models

Grounding the above theoretical discussion in contemporary practice, three cases stand out in particular: the EU's AI Act, China's Social Credit System, and US legislation such as Section 230 of the Communications Decency Act. The EU's AI Act prioritizes human rights and ethical considerations, following a risk-based approach that represents graduated digital sovereignty. China's Social Credit System, by contrast, seeks to consolidate state control through surveillance and behavioural monitoring. US legislation, finally, takes a sectoral, market-driven approach, granting internet platforms immunity from liability for user-generated content. This multifaceted landscape makes one thing clear: states today must choose which standards to align with, and in a complex international environment these decisions are often shaped by economic ties and political ideologies.

Way Forward

As the discussion comes to a close, the need of the hour remains to decolonize AI, amplifying voices from the Global South not just in training datasets but in building technology that is ethical and inclusive. UNESCO's Recommendation on the Ethics of AI and the Toronto Declaration on non-discrimination in machine learning are frameworks the international community must heed if we wish to envision a world where AI acts for the benefit of all, and not just for those who code and create.


© 2025 All Rights Reserved.
