Cosmopolitan Civil Societies: An Interdisciplinary Journal

Vol. 17, No. 1
2025


ARTICLE (REFEREED)

Trading Efficiency for Control: the AI Conundrum in Migration Management

Gianluca Iazzolino

University of Manchester

Corresponding author: Gianluca Iazzolino, Arthur Lewis Building, The University of Manchester, Oxford Road, Manchester M13 9PL, UK. Gianluca.iazzolino@manchester.ac.uk

DOI: https://doi.org/10.5130/ccs.v17.i1.9423

Article History: Received 30/10/2024; Revised 22/12/2024; Accepted 14/01/2025; Published 31/03/2025

Citation: Iazzolino, G. 2025. Trading Efficiency for Control: the AI Conundrum in Migration Management. Cosmopolitan Civil Societies: An Interdisciplinary Journal, 17:1, 35–46. https://doi.org/10.5130/ccs.v17.i1.9423

Abstract

This paper contributes to the discussions on AI initiatives applied to migration management by drawing attention to critical issues in the AI systems field. It suggests a research agenda to investigate how AI-generated insights inform policies and how ideologies are reflected in policies and shape AI deployments. Specifically, this paper leverages the data justice and algorithmic accountability debates to examine two applications of AI systems. The first, based on predictive AI, aims at supporting governments and humanitarian organisations in estimating the timing, destination and size of refugee inflows. The second application refers to Natural Language Processing (NLP) and to the integration of voice and speech recognition within a broader repertoire of techniques to automate immigration systems. The paper finally suggests that, to better harness the analytical power of AI, AI systems must be recognised as inherently political, in the sense that they enshrine a specific view of power and relations of subordination.

Keywords

AI; Migration Management; Asylum Processing; Prediction; Policymaking

Introduction

The recent surge of interest in Artificial Intelligence (AI) is seeping into policymaking. Generative AI, popularised by Large Language Models (LLM) such as ChatGPT or Google Gemini and aimed at generating content on the basis of users’ prompts, and Predictive AI, in which forecasts are produced by algorithms trained to recognise patterns and identify trajectories based on historical records, have spilled from data science laboratories into the public debate. National governments are rushing to produce AI strategies as part of their broader agendas focusing on digital transformation – or a radical improvement of a public or private entity’s performance ‘through combinations of information, computing, communication, and connectivity technologies’ (Vial 2019, p. 118) – and digital sovereignty – an increasingly politically relevant concept that posits a central role of the national state in the global governance of data and digital infrastructures (Pohle & Thiel 2020). Moreover, the legal and technological standards regulating the construction and management of AI and data infrastructures also reflect the infrastructural geopolitics of influential national actors, such as the US and China, and supranational bodies, such as the European Union (De Goede & Westermeier 2022; Liebig et al. 2024; Schiff et al. 2021). These AI strategies are then translated into policies to enable the implementation of AI systems in the public sector (Sun & Medaglia 2019; Desouza 2018).

Inspired by often politically motivated logics of efficacy and cost-efficiency (Henriksen & Blond 2023), these deployments entail organisational restructuring, new skills and expertise, and governance models. However, they also raise unprecedented issues. The translation of algorithms’ results into decisions, and the way public managers shape the design and implementation of AI systems, are fraught with loopholes and ambiguities (Filgueiras 2023). These ethical quandaries build on top of, and exacerbate, anxieties about privacy and function creep, in which data collected by welfare or healthcare organisations are then repurposed for security or commercial goals. The scholarly and policy debate on data justice (Taylor 2017; Masiero & Buddha 2021), which focuses on how data can help advance distributive justice by giving visibility to marginalised individuals and communities while addressing dataveillance and data extraction (Taylor 2017), has increasingly incorporated concerns of ‘algorithmic accountability’, which refers to the assessment of a set of automated instructions (that is, the algorithm) ‘to discern and track bias, presuppositions, and prejudices built into, or resulting from algorithms’ (Wieringa 2020, p. 2).

Migration is a particularly problematic field of application of AI to policymaking. AI models enable civil servants to rapidly process large amounts of data, identify patterns and provide insights to support policymakers’ decisions. This approach, though, marks a shift from established evidence-based policymaking. Given the opacity of the datasets used to train the models and of the computing procedures that yield the final outputs, scrutinising the source of the insights is particularly challenging. As this contribution suggests, the overall influence of AI in the epistemic infrastructure that informs migration policies, and its ethical implications, are not fully recognised by policymakers.

To be clear, AI does offer opportunities for gleaning granular knowledge about humanitarian crises and designing more suitable strategies to reduce human suffering and increase the responsiveness of aid actors. In recent years, for instance, the Danish Refugee Council (DRC) has used predictive AI to track and anticipate displacement patterns in areas hit by conflict or hydro-meteorological disasters (Nair et al. 2023). Moreover, humanitarian and advocacy organisations have started experimenting with AI-based initiatives to support migrants. As I shall explain below, the Mozilla Foundation’s Common Voice program, for instance, is an open-access initiative based on the construction of African language datasets to address the issue of ‘low-resourced’ languages, or languages which are underrepresented online or for which datasets to train AI models are limited or lacking. The ideal application of these LLM would be the development of chatbots to assist migrants looking for information on asylum procedures. However, most funding and policy attention are currently devoted to the implementation of AI systems for migration management. This approach is imbued with ‘neophilia’, in which an ‘optimistic faith in the possibilities of technology’ is combined ‘with a commitment to the power of markets’ (Scott-Smith 2016, p. 2229). Facing tight budgets against a background of polycrisis – or ‘the entanglement of multiple types of shocks each amplifying the other’ (Hillmann et al. 2024, p. 4) – policymakers view AI as a tool to streamline migration management and cut costs. In pursuit of greater efficiency, border agencies are increasingly using predictive AI, in which historical records are used to train models aimed at producing forecasts and thus strengthening the preparedness of states and humanitarian actors, and LLM, to identify the place of origin of asylum seekers. Recent years have thus seen a proliferation of initiatives leveraging big data and data analytics to predict displacement trajectories and migration routes. Projects funded by policy entities at national and supranational level, and by philanthropic foundations, and often developed by academic institutions, tech firms and NGOs, have drawn attention to the potential of predictive AI to shape knowledge and decision-making around migration issues. Other initiatives, such as voice recognition systems deployed in migrant screening processes, have highlighted not only the risk of embedding misconceptions and biases in these sorting systems, but also the growing influence of tech firms in border control and asylum processing.

This paper contributes to the discussions on technological innovations in migration management by drawing attention to critical issues in AI systems for migration management. In doing so, it leverages the data justice and algorithmic accountability debates to examine two applications of AI systems.

The first, based on predictive AI, aims at supporting governments and humanitarian organisations in estimating the timing, destination and size of refugee inflows. The case considered here is ITFLOWS, a project financially supported by the European Union and developed by a consortium of universities and a multidisciplinary research team (Suleimenova et al. 2017). Using an agent-based modelling (ABM) approach, and training on data provided, among others, by Oxfam and the Italian branch of the Red Cross, the system became operational in 2020. Since then, it has attracted scrutiny from investigative journalism organisations based on leaked reports from the project’s ethical board, warning of the risk that ITFLOWS ‘may pose several risks if misused for stigmatising, discriminating, harassing, or intimidating individuals, especially those that are in vulnerable situations such as migrants, refugees and asylum seekers’ (Xanthaki et al. 2021, p. 171).

The second application of AI discussed in this paper refers to Natural Language Processing (NLP), the broader field to which LLM pertain, and to the integration of voice and speech recognition within a broader repertoire of techniques to automate immigration systems. The case examined here refers to the attempts by the German Federal Office for Migration and Refugees (BAMF) to use speech analysis to help determine asylum seekers’ countries of origin, first in the late 1990s and then, since 2017, through automated speech recognition techniques. The discussion will highlight the unreliability of the system, owing to the very limited language training datasets available for machine learning and to the reliance on datasets created and curated by humanitarian agencies.

In discussing these case studies, this contribution to the special issue of Cosmopolitan Civil Societies suggests a research agenda to investigate how AI-generated insights inform policies and how ideologies are reflected in policies and shape AI deployments.

The paper is structured as follows. The first part discusses the theoretical debates on AI and migration management, namely data justice and algorithmic accountability, stressing the growing significance of AI systems for producing the evidence on which policies are based, and arguing that there is a mutual shaping of policies and AI deployments. It then delves into the cases of predictive models for migration flows, such as ITFLOWS, and of speech recognition for asylum processing. It concludes by tracing a research agenda on AI and migration policies.

AI Systems in Policymaking

Recent years have witnessed a proliferation of AI initiatives for migration management in the Global North, with applications ranging from the use of predictive models to anticipate the size and trajectory of migration flows to verification systems to speed up asylum processing (Memon et al. 2024).

The rationale behind these deployments is twofold: operational and political. Operationally, policymakers have emphasised the critical role of digital tools in addressing bureaucratic congestion (Andrews et al. 2017) caused by understaffing and excessive workloads. Politically, the rise of nativist sentiment across the Global North is reshaping migration governance. This has led to a focus on the production of metrics to demonstrate the achievement of specific ‘targets’ related to migrant populations, which can then be used for electoral campaigns.

During the same time span, innovations in the field of AI – broadly referring to ‘machines that mimic human intelligence’ (IBM 2019) – have spread across multiple sectors, expanding processes of datafication, a concept encompassing ‘both the masses of digital traces left by people and technologies in online spaces and the proliferation of advanced tools for the integration, analysis, and visualization of data patterns’ (Flyverbom et al. 2019, p. 6; Mayer-Schönberger & Cukier 2013). Data justice has emerged as a critical framework for examining the societal implications of datafication, focusing on issues of equity, fairness, and power dynamics (Dencik et al. 2019; Heeks & Renken 2018). This literature identifies three main perspectives on data justice: instrumental, procedural, and distributive/rights-based (Heeks & Renken 2018). However, other scholars have argued for a broader, more inclusive approach that considers structural factors and global contexts (Leslie et al. 2022; Heeks & Renken 2018). This discussion emphasises the need to address data injustices among vulnerable populations, the importance of sustainability and capability approaches, and the recognition of diverse historical and geographical perspectives on data justice practices (Heeks & Renken 2018; Leslie et al. 2022; Taylor 2017). Scholars embracing this perspective have argued for the integration of social justice principles with critiques of power imbalances in data-driven societies to provide both critical insights and constructive resources for transformative data justice practices (Leslie et al. 2022). However, the data justice framework reveals its limitations when confronted with the technology and the political economy of AI, both of which have implications for the application of AI to migration management.

At the technical level, it is important to remember that the recent AI boom followed major breakthroughs in speech pattern recognition (Hinton et al. 2012) and image recognition (Krizhevsky et al. 2012; Lee 2018), which led to the dominance of the neural network approach. As explained by Lee, ‘[i]nstead of trying to teach the computer the rules that had been mastered by a human brain, these practitioners tried to reconstruct the human brain itself’ (2018, p. 23). Also known as ‘narrow AI’, the neural networks approach ‘takes data from one specific domain and applies it to optimizing one specific outcome’ and is mostly used in fields like insurance and making loans to ‘make specific predictions […] based on quantifying probability’ (Pasquale 2021, p. 55). This is why ‘[d]ata scientists sometimes joke that (narrow) AI is simply a better-marketed form of statistics’ (Pasquale 2021, p. 1922). The development of Machine Learning (ML), a new generation of algorithms built from data, and advancements in a specific subfield of machine learning, deep learning, boosted AI performance (Strusani & Houngbonon 2019). Neural networks took advantage of the rapid increase in computing power and data availability. As Lee explains, ‘the data “trains” the program to recognize patterns by giving it many examples, and the computing power lets the program parse those examples at high speeds’ (2018, p. 27). Algorithms, broadly defined as a ‘systematic method composed of different steps’ (Jaton 2021), started ‘engaging experimentally with the world’ to learn ‘by inductively generating outputs that are contingent on their input data’ (Amoore 2020, p. 12).
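To make concrete the point that narrow AI amounts to probabilistic prediction within a single domain, the following minimal sketch (my own illustration in Python, using entirely synthetic data and a small neural network classifier) shows the workflow described above: many labelled examples ‘train’ the model, and its output is nothing more than a quantified probability for one outcome.

    # Minimal sketch of 'narrow AI' as probabilistic prediction in one domain.
    # All data are synthetic; the example only illustrates the train-then-predict workflow.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))                      # three hypothetical input features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # synthetic historical outcomes

    # 'Training' = letting the network recognise patterns across many examples.
    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X, y)
    print(model.predict_proba(X[:1]))                  # output: a probability, nothing more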

Data scientists anticipated that the surge in datasets, sourced from the internet, smartphones, home devices and other points of data capture, coupled with affordable computing power, would facilitate increasingly effective training of neural networks (Amoore 2020, p. 12). However, critical media and data scholars pointed out that these dynamics of data extraction mirror colonial patterns of dispossession and exploitation. Couldry and Mejias, among others, describe data colonialism as ‘an emerging order for the appropriation of human life so that data can be continuously extracted from it for profit […] Through data relations, human life is not only annexed to capitalism but also becomes subject to continuous monitoring and surveillance’ (2019, p. xiii). Data extractive infrastructures lay the foundation for AI systems by providing the algorithmic fodder that enables data scientists to train predictive models.

Access to large datasets, computing power, and regulatory influence is crucial for advancing AI systems. As a result, dominant private tech actors – commonly referred to as Big Tech – wield ‘significant influence over both the direction of AI development and the academic institutions wishing to research it’ (Whittaker 2021, p. 51). This underscores why, while much emphasis is placed on the technology itself, understanding the political economy of AI is essential for anticipating its limitations and criticalities.

Leveraging their financial resources, tech giants are shaping the trajectory of AI research (Ahmed et al. 2023), its geopolitical implications (Kak & Myers West 2023), and its ethics agenda (Gerdes 2022) to advance their core business of creating ‘prediction products’ (Zuboff 2019, p. 129). They also work to integrate themselves into national policy spaces to further their interests. The ‘corporate socio-technical imaginary’ (Hockenhull & Cohn 2021; see also Jasanoff & Kim 2009), infused with quasi-magical notions (Giuliano 2020) and bolstered by substantial investments and lobbying efforts, significantly influences policy discourses, particularly through the diffusion of automated decision-making systems. In particular, the corporate actors shaping and advancing this imaginary have been able to produce evidence for policymaking to an extent that academics have only partially matched.

In response to the growing risk of epistemic displacement (Pozzi & Durán 2024), in which the knowledge generated by AI systems marginalises the voices of the experts or of the individuals affected by these deployments, an interdisciplinary conversation has emerged around the concept of algorithmic accountability. This discourse emphasises the need to make algorithmic outcomes explainable, identify the datasets used for training, and assign responsibility (Criado & Guevara-Gómez 2024; Hunt & McKelvey 2019; Wieringa 2020). Scholarly and activist debates on algorithmic accountability have highlighted both the technical complexities of achieving these goals and the ways these complexities obscure and perpetuate asymmetric power relations among the actors involved in these systems. This is relevant for policy as it addresses both the negative impacts of AI and the roles of multi-stakeholder collaboration, public regulation and citizen participation (Criado & Guevara-Gómez 2024). Some scholars argue that algorithmic accountability should be grounded in democratic ideals of public reason, where decision-makers have to justify their decisions in terms of broader principles that all can agree on (Binns 2018). This approach aims to clarify the purpose of algorithmic accountability and measure progress towards it. Algorithmic accountability in AI for migration management is a growing issue as governments are increasingly using automated decision-making tools for identity checks, border security and visa application analysis (Beduschi 2020), but the opacity of AI algorithms makes it hard to establish clear accountability frameworks (Forti 2021). Concerns include migrants being portrayed as security threats during algorithm design and testing, and a lack of migrant and researcher involvement in data usage decisions (Bircan & Korkmaz 2021).

With reference to the case studies considered here, the data justice and algorithmic accountability literatures highlight two main implications of the opacity of AI systems.

The first refers to function creep, ‘whereby techniques that were initially adopted for specific uses and purposes have been gradually spreading into much wider spheres and practices of governance’ (Ajana 2013, p. 576). This concern has been raised by civil society organisations with regard to cases where sensitive information on people affected by humanitarian crises could be used for security or policing purposes. An often-mentioned case is the partnership between the United Nations (UN) World Food Programme (WFP) and Palantir, an American data analytics firm (WFP 2019a) with strong ties to the US intelligence community (Easterday 2019).

The second implication is what Thylstrup et al. (2022) call the ‘entanglement’ between problematic data for machine learning and prediction. Their argument challenges the possibility of drawing a clear boundary between ‘data and algorithms’ and ‘good’ and ‘bad’ in machine learning regimes (2022, p. 5) by emphasising that the parameters of machine learning models bear the legacy of previous iterations with other training datasets. This entails that, for both the predictive and generative AI systems applied to migration management that I will discuss in the next section, there is a risk of reproducing bias through datasets previously used to improve the model.

Predictive analysis for migration flows

The first case analysed here involves the EU-funded ITFLOWS project, launched in 2020 with a €5 million budget to develop EUMigraTool (EMT), a predictive tool intended to assist EU authorities in managing migration flows. EMT uses agent-based modelling along with data from news media, social media, and conflict databases to simulate migration patterns, forecast migration trends, analyse public sentiment, and identify potential sources of tension between migrants and local communities across EU countries.
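As a purely illustrative sketch of the agent-based logic underpinning tools of this kind, the Python fragment below simulates displacement from a conflict zone towards two destinations. All location names, probabilities and agent counts are my own invented assumptions and are not drawn from EMT or ITFLOWS; real models calibrate such parameters against conflict, media and registration data.

    # Toy agent-based model of displacement (invented parameters, for illustration only).
    import random

    destinations = {"camp_A": 0.6, "camp_B": 0.4}      # hypothetical 'attractiveness' weights
    agents = ["conflict_zone"] * 1000                  # each agent starts in the conflict zone

    def step(agents):
        # One time step: some agents leave, choosing a destination according to the weights.
        out = []
        for loc in agents:
            if loc == "conflict_zone" and random.random() < 0.3:   # 30% chance of moving
                loc = random.choices(list(destinations), weights=destinations.values())[0]
            out.append(loc)
        return out

    for _ in range(10):                                # simulate ten time steps
        agents = step(agents)

    print({loc: agents.count(loc) for loc in ["conflict_zone", "camp_A", "camp_B"]})

Forecasts from a model of this kind are only as good as the assumptions encoded in its rules and weights, which is precisely where the concerns discussed below arise.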

However, EMT’s development has raised concerns among civil society groups and researchers over its potential for misuse. Critics argue that EMT could be repurposed as a tool for surveillance and control, potentially discriminating against migrants instead of supporting them. According to an investigative report by Disclose (Campbell & D’Agostino 2022), internal memos from ITFLOWS’ ethical board highlight the risk of EMT being used for profiling, which could lead to the stigmatisation or targeting of migrants based on factors such as ethnicity or immigration status. The board also cautioned that EU states might misuse EMT data to segregate migrant communities or create restrictive conditions based on religious, sexual, or national backgrounds.

To develop EMT, ITFLOWS partnered with NGOs such as Oxfam and the Red Cross, which provided interview data from migrants coming from countries including Nigeria, Mali, Eritrea, and Sudan. For example, Oxfam and the Italian Red Cross conducted interviews with migrants arriving in Italy to inform authorities about migration routes and motivations. While this data aimed to support policy in high-traffic migration areas, the ITFLOWS ethical board worried that it might instead reinforce discriminatory practices. Information about migrants’ cultural or ethnic backgrounds could be used to justify restrictive policies or fuel anti-immigration sentiment, potentially leading to division in regions with significant migrant populations.

A major ethical concern surrounding ITFLOWS is its potential connection with Frontex, the EU’s border agency, which has faced criticism for alleged illegal pushbacks. Internal reports reveal that Frontex closely monitored ITFLOWS and even supplied data for the project. This association alarmed ITFLOWS’ ethical board, as they feared EMT’s predictive analytics could enable Frontex to enforce stricter border controls and harsher measures against migrants.

Despite attempts to mitigate these ethical risks, efforts within ITFLOWS appear to have encountered challenges. In 2021, the ITFLOWS ethical board expressed disappointment, noting that recommendations to safeguard migrant rights were largely overlooked, with technical goals seemingly prioritised over human rights. Concerns were also raised about the project’s commitment to ethical transparency and responsible data use.

Oxfam and the Italian Red Cross have voiced confidence in ITFLOWS, downplaying fears of potential misuse. In response to Disclose, they emphasised their limited role in providing data and maintaining neutrality. However, critics remain concerned about the ethical implications of EMT, particularly the risk that it could shift EU migration policy toward control and restriction, rather than compassion and integration.

NLP for asylum processing

The second case study involves pilot projects led by Germany’s Federal Office for Migration and Refugees (BAMF), which aim to use Natural Language Processing (NLP) and voice recognition technologies to improve the efficiency of asylum processing. Since 2017, BAMF has employed a name transliteration tool to convert names in non-Latin alphabets, like Arabic, into standardised Latin characters. This tool, developed with tech firms SVA and IBM, helps create consistent records across European databases by minimising variations in name spelling, which can often arise due to different phonetic transliteration practices used in each country (Memon et al. 2024). Without standardisation, individuals may end up with multiple versions of their name in various databases, complicating case tracking and potentially leading to errors.

The transliteration tool is particularly useful for asylum seekers without official documentation. During the registration process, the applicant’s name is entered in its original script by the individual or an interpreter, and the tool then converts it into Latin characters through phonetic analysis. Drawing on a comprehensive global database of names, the tool accounts for regional differences in spelling to ensure greater accuracy. Initially developed for Arabic names, the tool may eventually be expanded to handle languages like Persian, Russian, and Georgian, supporting more consistent data management across European migration systems.
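A highly simplified sketch of the underlying idea is given below: a rule-based mapping from Arabic to Latin characters. The character table and the example name are my own assumptions; the actual BAMF tool, developed with SVA and IBM, is proprietary and far more sophisticated, matching candidates against a global name database and regional spelling variants.

    # Hypothetical rule-based transliteration sketch (Arabic script -> Latin characters).
    MAPPING = {"م": "m", "ح": "h", "د": "d", "ا": "a", "ع": "a", "ل": "l", "ي": "y"}

    def transliterate(name: str) -> str:
        # Characters without a rule are dropped here; a production tool would flag them.
        return "".join(MAPPING.get(ch, "") for ch in name)

    print(transliterate("محمد"))   # -> 'mhmd'; a later normalisation step would yield 'Muhammad'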

In addition to standardising names, BAMF leverages this technology to support country-of-origin identification, suggesting that specific name spellings might provide clues about an individual’s regional background. For example, a particular name may be more prevalent in Libya than Syria, which can assist in plausibility checks for applicants claiming to come from conflict-affected regions like Syria, where asylum prospects are generally more favourable.

Complementing the transliteration system, BAMF also uses automated dialect recognition to help verify an asylum seeker’s origin. Initially a manual process, dialect analysis has been automated through voice recognition technology that identifies speech patterns indicative of specific dialects. Differences in pronunciation, vocabulary, and rhythm often reflect particular regions, especially within diverse languages like Arabic, which vary significantly across the Middle East and North Africa. This feature allows the software to suggest probable origins based on regional speech characteristics.
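The sketch below illustrates, in very general terms, how dialect classification from speech can work: acoustic features (here, averaged MFCCs) are extracted from labelled clips and fed to a standard classifier. The file names, dialect labels and feature choice are assumptions for illustration only; BAMF’s actual pipeline has not been made public.

    # Hypothetical dialect-classification sketch from acoustic features (illustrative only).
    import numpy as np
    import librosa
    from sklearn.svm import SVC

    def clip_features(path: str) -> np.ndarray:
        audio, sr = librosa.load(path, sr=16000)                  # load the speech clip
        mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)    # spectral features over time
        return mfcc.mean(axis=1)                                  # one feature vector per clip

    # Placeholder labelled clips; a real system would need large, balanced corpora.
    training_clips = [("levantine_01.wav", "levantine"), ("maghrebi_01.wav", "maghrebi")]
    X = np.stack([clip_features(path) for path, _ in training_clips])
    y = [label for _, label in training_clips]

    clf = SVC().fit(X, y)
    print(clf.predict(clip_features("applicant.wav").reshape(1, -1)))   # suggested dialect label

The dependence on the size and representativeness of the labelled corpora is exactly what makes ‘low-resourced’ languages, discussed below, so problematic for such systems.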

Voice recognition technology in these pilot projects relies on creating a ‘voiceprint’ – a unique biometric signature capturing features like pitch, tone, and rhythm that reflect individual vocal tract characteristics. These voiceprints not only support origin verification but can also help identify repeat applicants attempting to re-enter the asylum process under new identities. Additionally, the biometrics can reveal biological details, such as gender and approximate age range, adding layers of information for evaluating cases more comprehensively.

However, the use of NLP and voice technology in migration management raises ethical concerns. Echoing concerns raised with respect to the use of remote-sensing and UAV technologies by border agencies (McLeman 2019), critics argue that these tools risk crossing into surveillance territory, blurring the line between humanitarian support and intelligence gathering. As in the case of ITFLOWS, human rights advocates warn that, while the application of NLP to migrant screening processes may streamline them, these technologies could eventually be used to profile individuals based on ethnicity or socio-political status, violating privacy rights and exacerbating discrimination. Moreover, the limited availability of datasets for ML hinders the precision of voice recognition systems, leading to a high number of false negatives, in which asylum seekers’ applications are rejected because some languages are not recognised or because the model embeds the assumption that the applicant would use specific terms if originating from a given locality (ignoring, for instance, the complex dynamics shaping refugees’ and migrants’ journeys and experiences).

Moreover, the line between public purpose and commercial use of language datasets is increasingly blurred. The global voice recognition market is rapidly expanding, projected to grow from USD 12.62 billion in 2023 to USD 59.62 billion by 2030, and refugee processing is increasingly integrated into this growth. There are also expectations that voice recognition could be used to support the sustainability agenda. For instance, Common Voice, a large-scale, crowdsourced multilingual dataset for Automatic Speech Recognition (ASR) developed by the Mozilla Foundation in partnership with the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ), Germany’s development agency, provides a model for developing more open and ethically sourced data for voice technology. Unlike proprietary systems, such open-source projects invite public scrutiny, fostering more transparent and ethical AI development.
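For readers interested in inspecting such open datasets, the fragment below sketches one common way of accessing Common Voice programmatically through the Hugging Face datasets library. The dataset identifier, version and the Kinyarwanda language code (‘rw’) are assumptions on my part; access is gated behind Mozilla’s terms of use, and the corpus can also be downloaded directly from the Common Voice website.

    # Sketch: streaming a Common Voice subset via the `datasets` library (identifiers assumed).
    from datasets import load_dataset

    cv = load_dataset("mozilla-foundation/common_voice_11_0", "rw",
                      split="train", streaming=True)   # stream to avoid a full download

    sample = next(iter(cv))
    print(sample["sentence"])                          # transcript paired with an audio clip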

Tools like BAMF’s transliteration and dialect recognition software are part of this trend, and they highlight both the promise and the peril of NLP and voice recognition in asylum processing, raising complex questions about transparency and accountability in AI applications. While they can streamline operations and improve consistency, the risks of misuse underscore the need for rigorous ethical frameworks and safeguards. To address these concerns, there is a call for explainable AI that clearly outlines the logic behind its outputs, especially in sensitive areas like asylum processing where decisions can drastically impact lives.

Conclusions and Further Research

The integration of AI into migration policy and management highlights complex ethical, operational, and accountability challenges that have only begun to be addressed by current scholarship. My paper underscores the need for deeper examination of the ways in which AI systems, particularly predictive and generative AI, affect migration policy development and implementation. While AI presents possibilities for efficiency gains and enhanced decision-making capabilities, it simultaneously raises critical questions about privacy, potential biases, and accountability, especially within ethically charged domains like migration. At the same time, AI deployments, particularly in the fields of LLM and NLP, may also serve the interests of marginalised groups by facilitating their access to information. For instance, the Mozilla Foundation, a not-for-profit organisation financed, among others, by Google, has rolled out, in partnership with GIZ, the German development agency, a crowdsourced initiative, Common Voice, aimed at creating an open-source dataset of diverse voice recordings. The project was designed to improve speech recognition systems by providing developers, researchers, and organisations with freely available voice data that reflects the diversity of global languages and accents. The overall aim of Common Voice was to democratise access to high-quality speech data, encouraging inclusivity and fairness in AI and machine learning technologies. Common Voice has focused specifically on under-represented African languages, and one of the applications of this initiative was the project Mbaza (‘ask me’ in Kinyarwanda), a chatbot developed by a Rwandan tech firm, Digital Umuganda, to provide information on the COVID-19 pandemic in vernacular languages. Similar initiatives are currently being rolled out in other African languages, highlighting the potential of AI to address epistemic injustice by creating and curating datasets of low-resourced languages.

However, to better harness the analytical power of AI, AI systems must be recognised as inherently political, in the sense that they enshrine a specific view of power and relations of subordination. The power relationships underpinning AI systems are particularly evident in the way humanitarian crises and displacement are often turned into ‘living labs’ in which to test predictive or generative models. Thus, Amoore’s (2020) argument that algorithms have found their way of learning by engaging with the world can be adapted here to suggest that the world AI systems are targeting is that of what Michel Agier (2011) calls ‘the undesirables’.

Future research, grounded in the frameworks of data justice and algorithmic accountability, can offer valuable insights to avoid making AI just another tool to manage the undesirables and, instead, help create ethical, transparent, and just AI applications to address the ethical and operational gaps identified in the implementation of AI in migration policy. These recommendations should be viewed as part of a broader effort to bridge academic research – particularly qualitative studies – with policymaking. This involves fostering stronger partnerships with civil society actors and governments to co-produce knowledge that is both relevant and actionable for policy development (Hillmann et al. 2024).

First, more empirical research is required to understand how policymakers and public agencies interpret and deploy AI-generated insights in real-world decision-making. Since AI models can introduce biases or reinforce stereotypes – especially in migration management, where ethical stakes are high – it is crucial to systematically assess how AI-driven insights affect policy decisions and whether these decisions align with principles of data justice and algorithmic accountability.

Second, there is a need for longitudinal studies examining the socio-political impacts of AI systems in migration. Since AI policy tools evolve over time, research should investigate their long-term consequences on migrant communities and broader society, particularly regarding how predictive systems may shape public perceptions of migration and influence policy rhetoric. For example, as projects like ITFLOWS gain prominence, examining the potential consequences of their use on public opinion and border policy may reveal both intended and unintended impacts on migrant rights and state practices.

Furthermore, future research could explore the role of public-private partnerships and the growing involvement of tech firms in migration management. Tech companies play a defining role in developing and supplying AI solutions, yet their influence on border control practices and asylum processes has largely remained outside public scrutiny. This area of inquiry is essential for understanding power dynamics between private and public sectors in migration management, particularly when tech solutions may embed commercial motivations or proprietary interests that conflict with human rights.

Lastly, to advance transparency and accountability, scholars should advocate for and contribute to developing rigorous ethical and operational standards for AI in migration policy. Collaborative frameworks among policymakers, academic researchers, civil society organisations, and technology developers could help design AI systems that prioritise fairness, transparency, and accountability, creating ethical guardrails for AI use in migration. In particular, research should support the establishment of monitoring mechanisms, such as ethical review boards or third-party audits, to oversee AI deployments in migration management and mitigate the risks of discrimination, function creep, and data misuse.

References

Agier, M. 2011, Managing the Undesirables, Polity Press, Cambridge.

Ahmed, N., Wahed, M., & Thompson, N. C. 2023, ‘The growing influence of industry in AI research’, Science, vol. 379, no. 6635, pp. 884–886. https://doi.org/10.1126/science.ade2420

Ajana, B. 2013, ‘Asylum, identity management and biometric control’, Journal of Refugee Studies, vol. 26, no. 4, pp. 576–595. https://doi.org/10.1093/jrs/fet030

Amoore, L. 2020, Cloud Ethics: Algorithms and the Attributes of Ourselves and Others, Duke University Press, Durham, NC. https://doi.org/10.1215/9781478009276

Andrews, R., Boyne, G. A., & Mostafa, A. M. S. 2017, ‘When bureaucracy matters for organizational performance: Exploring the benefits of administrative intensity in big and complex organizations’, Public Administration, vol. 95, no. 1, pp. 115–139. https://doi.org/10.1111/padm.12305

Beduschi, A. 2020, ‘International migration management in the age of artificial intelligence migration studies’, Migration Studies, vol. 9, no. 3, pp. 576–596. https://doi.org/10.1093/migration/mnaa003

Binns, R. 2018, ‘Algorithmic accountability and public reason’, Philosophy and Technology, vol. 31, no. 4, pp. 543–556. https://doi.org/10.1007/s13347-017-0263-5

Bircan, T. & Korkmaz, E.E. 2021, ‘Big data for whose sake? Governing migration through artificial intelligence’, Humanities and Social Sciences Communications, vol. 8, no. 241, pp. 1–5. https://doi.org/10.1057/s41599-021-00910-x

Campbell, Z. & D’Agostino, L. 2022, ‘Predicting Migration Flows with Artificial Intelligence - The European Union’s Risky Gamble’, Disclose, 26 July 2022. https://disclose.ngo/en/article/predicting-migration-flows-with-artificial-intelligence-the-european-unions-risky-gamble

Criado, J.I., & Guevara-Gómez, A. 2024, ‘Who evaluates the algorithms? An overview of the algorithmic accountability ecosystem,’ Proceedings of the 25th Annual International Conference on Digital Government Research, 11-14 June, National Taiwan University. https://doi.org/10.1145/3657054.3657247

Couldry, N., & Mejias, U. A. 2019, The Costs of Connection: How Data is Colonizing Human Life and Appropriating it for Capitalism, Stanford University Press, Stanford, CA. https://doi.org/10.1515/9781503609754

De Goede, M. & Westermeier, C. 2022, ‘Infrastructural geopolitics’, International Studies Quarterly, vol. 66, no. 3. https://doi.org/10.1093/isq/sqac033

Dencik, L., Hintz, A., Redden, J. & Treré, E. 2019, ‘Exploring data justice: Conceptions, applications and directions’, Information, Communication and Society, vol. 22, no. 7, pp. 873–881. https://doi.org/10.1080/1369118X.2019.1606268

Desouza, K. C. 2018, Delivering Artificial Intelligence in Government: Challenges and Opportunities, IBM Center for The Business of Government, Washington, D.C. https://www.businessofgovernment.org/sites/default/files/Delivering%20Artificial%20Intelligence%20in%20Government.pdf

Easterday, J. 2019, ‘Open letter to WFP re: Palantir agreement,’ Responsible Data, 8 February. https://responsibledata.io/2019/02/08/open-letter-to-wfp-re-palantir-agreement/

Filgueiras, F. 2023, ‘Artificial intelligence and education governance’, Education, Citizenship and Social Justice, vol. 19, no. 3, pp. 349–361. https://doi.org/10.1177/17461979231160674

Flyverbom, M., Deibert, R. & Matten, D. 2019, ‘The governance of digital technology, big data, and the Internet: New roles and responsibilities for business’, Business and Society, vol. 58, no. 1, pp. 3–19. https://doi.org/10.1177/0007650317727540

Forti, M. 2021, ‘AI-driven migration management procedures: Fundamental rights issues and regulatory answers’, BioLaw Journal, vol. 2, pp. 433–451.

Gerdes, A. 2022, ‘The tech industry hijacking of the AI ethics research agenda and why we should reclaim it,’ Discover Artificial Intelligence, vol. 2, no. 25. https://doi.org/10.1007/s44163-022-00043-3

Giuliano, R.M. 2020, ‘Echoes of myth and magic in the language of artificial intelligence’, AI & Society, vol. 35, no. 4, pp. 1009–1024. https://doi.org/10.1007/s00146-020-00966-4

Heeks, R. & Renken, J. 2018, ‘Data justice for development: What would it mean?’, Information Development, vol. 34, no. 1, pp. 90–102. https://doi.org/10.1177/0266666916678282

Henriksen, A. & Blond, L. 2023, ‘Executive-centered AI? Designing predictive systems for the public sector’, Social Studies of Science, vol. 53, no. 5, pp. 738–760. https://doi.org/10.1177/03063127231163756

Hillmann, F., Handayani, W., Iazzolino, G., McLeman, R., Nkansah, P., Veronis, L., Zickgraf, C., Ziegler, A. & Guller Frey, A. 2024, Migration related challenges, connections, and solutions in times of polycrisis: Recommendations for improving academia and migration governance interactions: a thinkpiece, nups Working Paper no. 1, Technische Universität Berlin. https://www.static.tu.berlin/fileadmin/www/40000126/Paradigmenwechsel_weiterdenken/1_nups-working_paper_Nr._1_final_.pdf

Hinton, G., Deng, L., Yu, D., Dahl, G., Mohamed, A., Jaitly, N., Senior, A., Vanhoucke, V., Nguyen, P., Sainath, T. & Kingsbury, B. 2012, ‘Deep neural networks for acoustic modeling in speech recognition’, IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82–97. https://doi.org/10.1109/MSP.2012.2205597

Hockenhull, M. & Cohn, M. L. 2021, ‘Hot air and corporate sociotechnical imaginaries: Performing and translating digital futures in the Danish tech scene’, New Media & Society, vol. 23, no. 2, pp. 302–321. https://doi.org/10.1177/1461444820929319

Hunt, R.A. & McKelvey, F.R. 2019, ‘Algorithmic regulation in media and cultural policy: A framework to evaluate barriers to accountability’, Journal of Information Policy, vol. 9, no. 1, pp. 307–335. https://doi.org/10.5325/jinfopoli.9.2019.0307

IBM 2019, What is Machine Learning? https://www.ibm.com/think/topics/ai-vs-machine-learning-vs-deep-learning-vs-neural-networks

Jasanoff, S. & Kim, S.H. 2009, ‘Containing the atom: Sociotechnical imaginaries and nuclear power in the United States and South Korea’, Minerva, vol. 47, pp. 119–146. https://doi.org/10.1007/s11024-009-9124-4

Jaton, F. 2021, The Constitution of Algorithms: Ground-truthing, Programming, Formulating, MIT Press, Cambridge, MA. https://doi.org/10.7551/mitpress/12517.001.0001

Kak, A. & West, S. M. 2023, ‘Make no mistake: AI is owned by Big Tech’, MIT Technology Review, 5 December. https://www.technologyreview.com/2023/12/05/1084393/make-no-mistake-ai-is-owned-by-big-tech/

Krizhevsky, A., Sutskever, I. & Hinton, G. 2012, ‘ImageNet classification with deep convolutional neural networks’, Communications of the ACM, vol. 60, no. 6, pp. 84–90. https://doi.org/10.1145/3065386

Lee, K.-F. 2018, AI Superpowers: China, Silicon Valley, and the New World Order, Houghton Mifflin Harcourt, Boston.

Leslie, D., Katell, M., Aitken, M., Singh, J.J., Briggs, M., Powell, R., Rincón, C., Chengeta, T., Birhane, A., Perini, A., Jayadeva, S. & Mazumder, A. 2022, ‘Advancing data justice research and practice: An integrated literature review’, arXiv, 2204.03090. https://doi.org/10.2139/ssrn.4073376

Liebig, L., Güttel, L., Jobin, A. & Katzenbach, C. 2024, ‘Subnational AI policy: Shaping AI in a multi-level governance system’, AI & Society, vol. 39, pp. 1477–1490. https://doi.org/10.1007/s00146-022-01561-5

Masiero, S. & Buddha, C. 2021, ‘Data Justice in digital social welfare: A study of the Rythu Bharosa Scheme’, Proceedings of the 1st Virtual Conference on Implications of Information and Digital Technologies for Development.

Mayer-Schönberger, V. & Cukier, K. 2013, Big Data: A Revolution that will Transform How We Live, Work and Think, Murray, London.

McLeman, R. 2019, ‘International migration and climate adaptation in an era of hardening borders’, Nature Climate Change, vol. 9, pp. 911–918. https://doi.org/10.1038/s41558-019-0634-2

Memon, A., Given-Wilson, Z., Ozkul, D., Richmond, K. M. G., Muraszkiewicz, J., Weldon, E. & Katona, C. 2024, ‘Artificial Intelligence (AI) in the asylum system’, Medicine, Science and the Law, vol. 64, no. 2, pp. 87–90. https://doi.org/10.1177/00258024241227721

Nair, R., Madsen, B. & Kjærum, A. 2023, ‘An explainable forecasting system for humanitarian needs assessment’, Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, no. 13, pp. 15569–15575. https://doi.org/10.1609/aaai.v37i13.26846

Pasquale, F. 2021, ‘Humans judged by machines: The rise of artificial intelligence in finance, insurance, and real estate’, Robotics, AI, and Humanity: Science, Ethics, and Policy, Springer, pp. 119–128. https://doi.org/10.1007/978-3-030-54173-6_10

Pohle, J. & Thiel, T. 2020, ‘Digital sovereignty’, Internet Policy Review, vol. 9, no. 4. https://doi.org/10.14763/2020.4.1532

Pozzi, G. & Durán, J. M. 2024, ‘From ethics to epistemology and back again: Informativeness and epistemic injustice in explanatory medical machine learning’, AI & Society. https://doi.org/10.1007/s00146-024-01875-6

Schiff, D. S., Schiff, K. J. & Pierson, P. 2021, ‘Assessing public value failure in government adoption of artificial intelligence’, Public Administration, vol. 100, no. 3, pp. 653–673. https://doi.org/10.1111/padm.12742

Scott-Smith, T. 2016, ‘Humanitarian neophilia: the ‘innovation turn’ and its implications’, Third World Quarterly, vol. 37, no. 12, pp. 2229–2251. https://doi.org/10.1080/01436597.2016.1176856

Strusani, D. & Houngbonon, G.V. 2019, ‘The role of artificial intelligence in supporting development in emerging markets’, EMCompass Notes, no. 69, International Finance Corporation, Washington, DC. https://doi.org/10.1596/32365

Suleimenova, D., Bell, D. & Groen, D. 2017, ‘A generalized simulation development approach for predicting refugee destinations’, Scientific Reports, vol. 7, article no. 13377. https://doi.org/10.1038/s41598-017-13828-9

Sun, T. Q. & Medaglia, R. 2019, ‘Mapping the challenges of Artificial Intelligence in the public sector: Evidence from public healthcare’, Government Information Quarterly, vol. 36, no. 2, pp. 368–383. https://doi.org/10.1016/j.giq.2018.09.008

Taylor, L. 2017, ‘What is data justice? The case for connecting digital rights and freedoms globally’, Big Data & Society, vol. 4, no. 2. https://doi.org/10.1177/2053951717736335

Vial, G. 2019, ‘Understanding digital transformation: A review and a research agenda’, The Journal of Strategic Information Systems, vol. 28, no. 2, pp. 118–144. https://doi.org/10.1016/j.jsis.2019.01.003

Whittaker, M. 2021, ‘The steep cost of capture’, Interactions, vol. 28, no. 6, pp. 50–55. https://doi.org/10.1145/3488666

Wieringa, M. 2020, ‘What to account for when accounting for algorithms: a systematic literature review on algorithmic accountability’, Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 1–18. https://doi.org/10.1145/3351095.3372833

Xanthaki, A., Hansen, K.B., Moraru, M.-B., Pichierri, F., Gottschalk, T., Teodoro, E., Guillén, A., Macías, M., Jiménez, C., Pi, M., Güell, S., Tschalaer, M. & Xanthopoulou, E. 2021, ITFLOWS: D2.1: Report on the ITFLOWS Legal and Ethical Framework. https://ddd.uab.cat/record/263143

Zuboff, S. 2019, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, Public Affairs, New York.