Artificial Intelligence (AI) is rapidly transforming how governments manage migration and control borders. From predictive data analysis and identity verification to border surveillance and asylum decision-making, AI technologies are increasingly embedded in the infrastructure of migration control in Europe and globally. States seeking to manage migration argue that new technologies such as AI can make systems more efficient, secure, and cost-effective. Yet the current development of digital technologies also poses serious risks to human rights, privacy, and the principles of asylum. This article reflects on years of research from various borders around the world and provides an overview of how AI is used in migration management in Europe and globally, highlighting key debates about its implications for law, ethics, and society.
Where and How Is AI Used?
Around the world and across the European Union (EU), AI is now used across multiple stages of migration management, including:
Border Surveillance: AI-powered surveillance systems are used to monitor borders. For example, the European Union’s Eurosur platform integrates satellite imagery, drones, and other sensor data analyzed with AI to monitor external borders, particularly in the Mediterranean Sea. Facial recognition and object detection algorithms are deployed at land borders and airports to identify individuals deemed irregular or high-risk. In the United States, companies like Anduril provide autonomous surveillance towers along the southern border, using AI to detect and track movement and to help intercept people crossing the Sonoran Desert in the southwest of the United States.
Identity and Biometric Systems: AI can also help process biometric data such as fingerprints, iris scans, and facial images for identity verification. Eurodac, the EU’s fingerprint database for asylum seekers, employs AI for more efficient and accurate matching. In Germany, the Federal Office for Migration and Refugees (BAMF) has tested voice and dialect recognition software to determine the origin of asylum seekers, raising debates about accuracy and fairness. The German Federal Police uses biometric and facial recognition technologies at major transport hubs and borders, including predictive elements, while automated document verification systems assist in identifying forged travel documents.
Asylum and Visa Processing: AI tools are increasingly used to assess visa and asylum applications. Some countries, like Canada, use visa-triaging algorithms, while the United States uses risk-scoring algorithms that analyze applicants' data to predict potential fraud or security risks, including data scraped from social media. While these systems promise efficiency, they raise concerns about algorithmic bias, due process, and the right to individualized assessment. In Germany, BAMF has employed machine translation tools to support the interpretation of asylum interviews and automated document verification, though questions remain about their accuracy and potential impact on asylum decisions.
Migration Forecasting: Governments and international agencies such as Frontex, the EU’s border agency, use AI to analyze large datasets, including social media and climate data, to predict migration patterns. These predictive analytics are intended to support early warning systems and resource planning. However, these predictive models are often opaque and may lead to pre-emptive border closures or enforcement actions, as well as activities such as ‘forced interdictions’ or ‘pushbacks’, which are contrary to international law.
Remote Monitoring and "Alternatives" to Detention: Electronic ankle monitors, GPS-enabled smartphones and tagging, and AI-driven check-in systems are marketed as alternatives to immigration detention but often involve constant surveillance. These technologies are used in the US, UK, and other countries, and are sometimes outsourced to private companies.
Who Develops and Promotes These Systems?
The development of AI in migration management involves a mix of public and private actors. Major technology companies, defense contractors, and AI startups develop and market tools to governments. In Europe, the EU’s border agency Frontex supports technological innovation, while funding comes from regional research programs like Horizon 2020 and the Internal Security Fund. Germany in particular has invested in biometric verification, voice analysis, and automated document analysis systems through BAMF and the Federal Police. Several pilot projects funded by the EU have also involved German partners, including universities and research institutes.
Maritime surveillance drone at the DEFEA conference in Athens, an international trade fair for defense and security. (© Petra Molnar)
Some states, like the United States, Canada, and Australia, are also leaders in border AI, often working with firms such as Palantir, Anduril, and Israel’s Elbit Systems. This growing private-sector involvement is part of a lucrative, multibillion-euro border industrial complex, in which private profit and public security agendas align.
International organizations, including the International Organization for Migration (IOM) and the United Nations High Commissioner for Refugees (UNHCR), are also exploring the use of AI-type tools for registration, aid distribution, and monitoring, sometimes with private sector partners. While framed as humanitarian innovation, these tools often collect sensitive data under unequal power dynamics between the global North and South, as well as between aid providers and people in need.
Opportunities and Promises
Supporters of AI in migration point to several potential benefits. States employing AI often cite efficiency as a key driver, arguing that automating routine tasks can reduce backlogs in asylum and visa processing, improve data analysis, and enhance coordination across agencies. Another key driver is the promise of accuracy: AI may improve biometric matching and identity verification, reducing fraud and administrative errors. In the security paradigm that currently animates policymaking in migration management, AI-driven surveillance and risk assessment aim to detect irregular crossings or fraudulent documents. Lastly, the predictive capability of AI is also seductive: AI-driven forecasting is presented as a way to prepare for large-scale migration events, support humanitarian responses, and better allocate resources.
With such advantages in mind, states and the private sector justify investment in AI, with the goal of so-called "smart borders" that are both secure and technologically advanced. In Germany, such innovations are also seen as part of a broader digitalization of migration governance, aligning with EU strategies for digital transformation.
Challenges and Criticisms
Despite these promises, civil society, researchers, and affected communities argue that the use of AI in migration management introduces serious ethical and legal risks. For example, AI systems can exacerbate discrimination and systemic racism, as they often reflect biases in the data they are trained on. For instance, facial recognition is less accurate for people with darker skin tones, raising the risk of misidentification, and risk-scoring systems may disproportionately target individuals from certain nationalities or ethnic groups. There is also a profound lack of transparency around the development and deployment of new technologies. Many AI systems are inscrutable, with little public knowledge about how decisions are made. This opacity undermines accountability and makes it difficult for affected communities to challenge harmful outcomes when mistakes are made.
High-tech refugee camp on the Greek island of Kos. (© Petra Molnar)
Privacy rights and data protection are also impacted: collecting, storing, and analyzing biometric and personal data pose significant risks to privacy, especially the highly sensitive data of an already marginalized group like people on the move. Unauthorized data sharing and hacking are also real threats. In addition, people on the move who find themselves in precarious situations are often unable to provide meaningful consent to the collection and use of their data.
From a legal perspective, AI and surveillance technologies that prevent people from reaching safety also erode the right to seek asylum. Automated decisions may result in disproportionate denials of asylum or increased detention based on flawed risk assessments. Legal risks of this kind conflict with current international law, which requires individualized assessment and non-refoulement (the international legal principle that a person may not be returned to a place where they may face persecution). It is also often unclear who is responsible for damage caused by AI and who can be held liable: the state, a contractor, the software developer, or the immigration officer using the tool? Legal frameworks for redress are often weak in this new area, especially in border zones, which are already characterized by opaque and discretionary decision-making.
Legal Frameworks and Oversight
Currently, global governance of AI remains weak, with innovation taking precedence over rights-based approaches to digital border technologies. However, several international and regional legal instruments apply to AI in migration. For example, the International Covenant on Civil and Political Rights (ICCPR) protects the right to privacy (Article 17) and the right to liberty and security (Article 9). These rights must be upheld in migration contexts when technologies are applied. The 1951 Geneva Refugee Convention also requires states to provide access to asylum and prohibits refoulement (Article 33). Automated decisions that block access may violate these protections. Under EU law, the General Data Protection Regulation (GDPR) governs data processing, including biometric data, and the EU Charter of Fundamental Rights guarantees rights to privacy, data protection, and non-discrimination. Most recently, the EU adopted the Artificial Intelligence Act (AI Act). However, the act’s focus is not on safeguarding the rights of people on the move.
Germany, as an EU member state, is subject to these legal frameworks but also develops its own implementation rules. Civil society organizations advocating for the rights of refugees and migrants have called for greater transparency about AI use in German migration governance, including public disclosure of contracts and impact assessments. In practice, however, oversight is limited. Border zones often operate under emergency or exceptional rules, and private companies may shield technologies through trade secrets. Yet independent, systematic audits, impact assessments, and control mechanisms are necessary for any rights-respecting development and deployment of technologies used at the border and to manage migration.
Future Outlooks
In a world driven by technological innovation, AI is poised to play a growing role in migration management. For example, the EU’s upcoming Entry/Exit System (EES) and European Travel Information and Authorization System (ETIAS) will use AI for risk profiling and automated border checks. Pilot projects have also tested emotion recognition, AI lie detectors, and predictive policing, though these have been widely criticized.
The former refugee camp on Samos, Greece. (© Petra Molnar)
Scholars and human rights organizations call for a precautionary approach: rigorous testing to ensure human rights compliance, transparency and oversight mechanisms, and robust public debate before any technologies are implemented. Some, including the UN Office of the High Commissioner for Human Rights (OHCHR), propose moratoriums on certain migration technologies, especially in high-risk contexts. Most importantly, the future development of AI in migration and at the border will depend not only on legal safeguards and democratic accountability but also on recognizing the human cost of these technologies. AI systems are not abstract tools. They impact real people, often in vulnerable and precarious situations. It is crucial to center the experiences and rights of people on the move in any discussion of AI use.
AI tools for migration management raise profound legal, ethical, and social questions. When governments adopt new technologies, it is essential to ensure that efficiency and security remain compatible with human rights, dignity, and justice. At the heart of these debates are people, not just data points. Transparent rules, robust oversight, and a commitment to human-centered design that also recognizes the expertise of affected individuals and communities must guide any future use of AI in migration.