
Regulatory sandboxes in law enforcement: A pathway towards innovative and constitutionally sound AI systems in police authorities

Published online by Cambridge University Press:  10 December 2025

Michael Kolain*
Affiliation:
Research Fellow for Digital Law and Tech Policy, Institute for Ethics in Technology, Technical University Hamburg (TUHH), Hamburg, Germany; Robotics and AI Law Society (RAILS), Member of the Board, Berlin, Germany; Head of Policy, Centre for Digital Rights and Democracy, Berlin, Germany

Abstract

The article analyzes whether regulatory sandboxes are a legally intended and useful instrument to create pathways for law enforcement agencies to use cutting-edge AI systems in compliance with fundamental rights. It takes a deeper look at the provisions of the EU AI Act on regulatory sandboxes and testing under real-world conditions in contexts relevant to law enforcement. Using a legislative process in Germany as a case study, the article shows the inherent tension surrounding innovative AI systems, which can serve as modern investigation tools in the digital age but also open potential pathways towards disproportionate surveillance of citizens. The author suggests going the extra mile through regulatory sandboxes instead of crafting legal foundations quickly under current political pressure: in the long run, this can save tax money, avoid dependencies on tech companies and pave a European way towards AI systems that appropriately respect digital human rights.

Information

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press.

1. Introduction: artificial intelligence in law enforcement

1.1. Scope and research questions

Artificial intelligence (AI) seems to be disrupting most fields of life in one way or another. For the field of law enforcement, as in many other sectors, it is nothing new to harness the potential of electronic computation, data analysis and – more generally – to integrate scientific advancements into police investigations (Fontes & Perrone, 2021; Navarro, 2023). From data scraping on the internet and lie detectors to bulk telecommunication surveillance, biometric identification or instant translation (Donohue, 2014; Edri, 2020; Khder, 2021): what is possible with AI is the next step in a long-running technological development in computer science rather than the quantum leap, or even “AI revolution” (Awwad, 2024; Chataut et al., 2024; Makridakis, 2017), that stakeholders in industry and media seem to suggest at times (Cebral-Loureda et al., 2023; Chowdhury & Sadek, 2012). Yet, many new AI tools are currently flooding the market, and some have the potential to modernize and improve the work of police and other security forces, while others allow surveillance in a dimension that may conflict with digital human rights and the rule of law (Berk, 2021; Casaburo & Marsh, 2024; Cataleta, 2021; Muller, 2020; Završnik, 2020).

This article takes a closer look at the instrument of regulatory sandboxing in AI regulation (Charisi and Dignum, 2024; Ruschemeier, 2025) – the central measure in support of innovation laid down in Chapter VI of the AIA. The legal instrument aims to “ensure a legal framework that promotes innovation, is future-proof and is resilient to disruption” by allowing the “development and testing of innovative AI systems under strict regulatory oversight before the systems are placed on the market” (Recital 138 AIA). The focus of this article lies on the potential of regulatory sandboxes to develop legally compliant, technologically sovereign and democratically controllable AI systems for law enforcement agencies. Recitals 138 and 139 AIA give the first impression that regulatory sandboxes are mainly drafted as a tool to test innovative products of SMEs and start-ups at an early stage (Boura, 2024; Yordanova, 2022) and to reduce their administrative and regulatory burden. Yet, the legislator has foreseen the potential of this “limited regulatory space” (DeMeola, 2021) as a protected environment – a “balanced solution” (Christoph, 2023) – for law enforcement agencies to co-create technical tools, potentially alongside precise legal provisions, including necessary safeguards, technical and organizational measures of data protection (Ballestrem et al., 2020; Helminger, 2022; Jandt, 2017; Müller-Quade & Houdeau, 2023) and supervisory structures. By analyzing the provisions of the AIA, I want to find out whether the instrument of regulatory sandboxing – as Christoph (2023) has put it in the context of US law – can contribute to bringing “coherence and discipline to the use of new technology in law enforcement and criminal proceedings” where the status quo of procuring private sector solutions “has led to uncertainty, inconsistency, and the danger of manifest injustice within local, state and federal justice systems” (Christoph, 2023). This article analyzes the hypothesis that regulatory sandboxing can be an effective way out of the conflicting paths of (quickly) modernizing policing by allowing the use of AI systems on the one hand and drafting legal provisions and implementing AI systems that comply with high fundamental rights standards on the other. It takes the discussion around biometric internet scans, introduced as part of a federal “security package” in the Federal Republic of Germany (section 1.3 of this article), as a case study to outline the advantages of the instrument of regulatory sandboxing in the field of law enforcement compared to hasty legislative processes.

1.2. AI systems in law enforcement under rule of law and the AIA

In the future, those who provide (Art. 3[3] AIA) or deploy (Art. 3[4] AIA) AI systems in the EU will have to comply with the product safety provisions of the AI Act. This “uniform legal framework (…) for the development, the placing on the market, the putting into service and the use of artificial intelligence systems” ultimately aims at “ensuring a high level of protection of (…) safety, fundamental rights (…), including democracy, the rule of law” (Recital 1[1] AIA). That the use of technology needs to be supported by a legal basis is nothing new in the field of public administration. Already under the rule of law, law enforcement agencies are not free to integrate technology into their investigative toolbox (Greenstein, 2022; Hesse, 1999; Kommers and Miller, 2012; Weber, 2008). The AI Act follows this basic understanding of fundamental rights doctrine.

The AIA follows a risk-based approach and densely regulates the category of “high-risk AI systems” (Art. 6 AIA). According to Recital 46 AIA, such systems should only be placed on the Union market, put into service or used if they comply with certain mandatory requirements in order to ensure that they “do not pose unacceptable risks to important Union public interests.” Certain AI systems used by law enforcement agencies are considered high-risk AI systems (Art. 6[2], Annex III [6][a-e] AIA) due to their potential impact on the fundamental rights of citizens. The field of “migration, asylum and border control management” also widely falls into the high-risk category (Art. 6[2], Annex III[7] AIA). The lists in Annex III, Nr. 6 and 7 show different forms of AI systems that are considered high-risk: from assessing “the risk of a natural person becoming the victim of criminal offences” (Nr. 6[a]) and polygraphs (Nr. 6[b]) to systems that “assess personality traits and characteristics or past criminal behaviour” (Nr. 6[d]) and AI systems in migration meant for “detecting, recognising or identifying natural persons” (Nr. 7[d]).

Apart from whether an AI system is banned or considered high-risk according to the AIA, the limits of the national constitution must be respected (Almada and Petit, 2025; Masala, 2024; Micklitz et al., 2021), especially if the use of technology has an impact on the fundamental rights of citizens. Every encroachment on fundamental rights must be based on a legally certain and proportionate provision in national law. The AIA acknowledges the necessity of a legal basis by clarifying in Annex III(6) that the use of such systems must additionally be “permitted under relevant Union or national law.” Against this background, it becomes apparent that the use of AI systems by law enforcement agencies will be strictly regulated under the AIA based on a complex interplay of national and EU legislation in the fields of AI, data protection and national security law.

1.3. The German “security package” as an example of political and legal debates around the legality and constitutionality of high-risk AI systems in the realm of law enforcement

As an example, I will introduce a case from Germany that has attracted significant public attention. For decades, police forces were not able to track down the alleged leftist terrorist Daniela Klette, who had gone underground in the 1990s and is suspected of having committed a series of robberies afterwards (The Times, 2023). At the end of 2023, a team of journalists found new evidence about Klette’s whereabouts in the course of their research for a podcast: they fed the official mugshot displayed at airports and train stations across the country into the AI tool PimEyesFootnote 1 and found her face in pictures of a Capoeira group in Berlin on Facebook (Bovermann et al., 2024; Großekemper et al., 2024). Police forces followed the lead of those journalists and were able to arrest Klette a few weeks later. It turned out she had lived and moved freely in the heart of Berlin for years without ever being recognized. Security agencies were embarrassed that the journalists had succeeded where they themselves had failed. As a result, a public debate was sparked: How can it be possible that a small group of journalists can track down an alleged terrorist gone underground with a freely available software application, while the police have been unable – and not allowed – to do so?

And: what are the limits for security agencies in the EU when using AI systems that are available on global markets? Legal questions surrounding the use of AI software, especially from some non-EU companies, have arisen in manifold ways. In the case of Palantir, a series of judgments of the German constitutional court (Bundesverfassungsgericht) have circled around the question of whether and under which circumstances law enforcement authorities in the states of Hessen, Bayern and Hamburg (Bäuerle, 2024) are allowed to automatically analyse large databases of state and federal police authorities; Palantir software is also used in Nordrhein-Westfalen. The law enforcement authorities had procured a software application from Palantir and tried to adjust it to the procurement conditions. So far, however, no constitutionally sound way of using it has been found, and the Bundesverfassungsgericht asked for adjustments (see Judgment of the First Senate of 16 February 2023, file numbers: 1 BvR 1547/19, 1 BvR 2634/20). Similarly, in the case of Daniela Klette, law enforcement authorities must have felt inclined to procure Clearview AI or PimEyes. However, Clearview AI had already been heavily fined by Italian and Dutch data protection authorities for serious infringements of data protection law (Der Hamburgische Beauftragte für Datenschutz und Informationsfreiheit, 2020; Martini & Kemper, 2023a; Martini & Kemper, 2023b; Rezende, 2020; Jung and Kwon, 2024).

Working with selected SMEs or larger companies in the protected experimental environment of a regulatory sandbox would allow for a path that follows the political goal of “digital sovereignty”, now often used as a shorthand for an ordered, value-driven, regulated and therefore reasonable and secure digital sphere (Bellanova et al., 2022; European Commission, 2023; European Parliamentary Research Service (EPRS), 2020; Floridi, 2020; Heidebrecht, 2024; Pohle and Thiel, 2020). The market surveillance authorities (Art. 70 AIA), the authorities for the protection of fundamental rights (Art. 77 AIA) and the law enforcement authorities would all gain insights into the technical design, configuration and main components of the AI systems used – instead of having to deal with proprietary systems of large tech companies developed in a different regulatory environment. For a regulatory sandbox for AI systems in the field of law enforcement, the federal and state levels could encourage certain companies – especially SMEs and European-based companies – to figuratively throw their hat into the ring, with the additional (implicit) prospect of later holding a front-runner position in procurement procedures for suitable AI systems for the police.

Interestingly, the arrest of Klette coincided with the final negotiations of the European Union’s landmark AI Act. In the final trilogue, Members of the European Parliament had pushed for a ban on biometric internet scans by PimEyes, Clearview AI and the like, implemented in Art. 5 lit. e AIA (accompanied by Recital 43), which forbids the “placing on the market, the putting into service for this specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.” Nonetheless, the German Federal Government – under the impression of a terrorist attack in the city of Solingen (Euractiv, 2024; Kurmayer, 2024) – pushed for a “security package” (Sicherheitspaket) to react to threats of supposed Islamist terrorism. The ministerial draft included the legal competence of migration authorities to biometrically compare a photograph taken of foreign nationals upon entry with publicly accessible personal data from the internet by means of an automated data processing application in case there are no other means of identification.Footnote 2 Similar provisions were included for the federal police authorities (Bundeskriminalamt and Bundespolizei) to use the same technology under certain additional requirements – extending its scope to cover not only the face but also the voice of a person.Footnote 3

The draft law sparked a public debate, including an outcry from civil society organizations that had been campaigning against all forms of biometric identification in public space for years (Reclaim Your Face, n.d.; Reuter, 2024). Under the pressure of the federal government, which wanted to show a swift reaction to the terrorist attack, and after intense negotiations in parliament that added additional safeguards, the Bundestag finally approved the two laws of the Sicherheitspaket.Footnote 4 An expert hearing on the first draft had previously led to media coverage, after several experts held the law to be in breach of German constitutional law, the GDPR and also the AIA. The Federal Commissioner for Data Protection and Freedom of Information, Prof. Dr. Louisa Specht-Riemenschneider, criticized that the legal provisions did not even cursorily sketch the technical functionality and implementation of biometric internet scans, thereby failing to set up specific safeguards and limitations and leaving law enforcement a scope so wide that it could lead to disproportionate infringements of the right to informational self-determination (Bundesbeauftragte für den Datenschutz und die Informationsfreiheit, 2024). While the competence of migration authorities to carry out biometric scans of asylum seekers – particularly in relation to internet data – came into force (with substantial changes during the parliamentary process), the legal provisions concerning the federal police were ultimately rejected by the second chamber of federal legislation, the Bundesrat. The opposing conservative party declared that, in its opinion, the Sicherheitspaket had been “softened and watered down” by the legislative changes in the Bundestag (Welt, 2024).

These developments in recent German legislation show: if law enforcement agencies want to implement novel AI tools in areas sensitive to digital human rights, it is vital not to treat the drafting of legal provisions and their technical implementation as separate matters. A provision intended to permit the use of an AI system that is not precise enough to provide legal clarity, but instead leaves the details of implementation largely to the law enforcement agencies themselves, will necessarily come into conflict with the principle of statutory reservation.

The new provision of Section 15b Asylgesetz – with the competence of migration authorities to conduct biometric scans with internet data – will most likely end up in front of the Federal Constitutional Court (Bundesverfassungsgericht) or even the European Court of Justice (on the infringement of the ban in Art. 5 lit. e AIA, see section 4 of this article for more details). In the worst-case scenario (from the perspective of law enforcement agencies), they will by then have acquired an (expensive) AI system that infringes upon digital human rights and will need to start again from scratch.

The case study raises the question: could the instrument of a regulatory sandbox prevent the implementation of digital technologies in the field of law enforcement that bear the risk of falling short of the requirements laid down in the AIA and fundamental rights doctrine?

2. Regulatory sandboxes in the AIA and their potential in the field of law enforcement

Creating AI systems in the field of law enforcement that are in accordance with fundamental rights is admittedly not the main field of application that the European legislator had in mind whilst drafting “Chapter VI: Measures in Support of Innovation.” Rather, a different intention and target group was the driving force (as Art. 57[9] AIA clearly points out): regulatory sandboxes are mainly designed for private sector organizations and shall serve both as a fast lane towards a successful conformity assessment, and thus legal compliance, for certain products and as an instrument to support SMEs and start-ups (“removing barriers for SMEs, including start-ups,” Recital 139 S. 1 AIA) in dealing with the regulatory burden of the AIA.Footnote 5 Since high-risk AI systems (Art. 6 AIA) need to comply with the complex requirements of Chapter III – among them data governance (Art. 10 AIA), technical documentation (Art. 11 AIA) and human oversight (Art. 14 AIA) – regulatory sandboxes can reduce the regulatory burden specifically for their providers and deployers. Many of the fields of application in Annex III ultimately focus on public sector deployment of high-risk AI systems. The field of law enforcement is thus certainly not a “blind spot” in the provisions for regulatory sandboxes, but rather one partly foreseen sector of potential deployment of AI systems that have gone through the process of a regulatory sandbox.

2.1. Joint regulatory sandboxes in law enforcement and the overall intention of chapter VI

As far as regulatory sandboxes are supposed to open a controlled field of experimentation and testing under the supervision of competent authorities in the “pre-marketing phase” (Recital 139 S. 1 AIA), they can also be interpreted as a vehicle that widens the options for public administration to procure market-ready products designed in line with European values and principles as laid out in the European Charter of Fundamental Rights.Footnote 6 Against this background, regulatory sandboxes can open a sweet spot from the perspective of both digital sovereignty and digital human rights: instead of being inclined to procure “quick fixes” for the intended investigation methods from other regulatory environments (e.g., biometric scans of suspected persons with internet data via Clearview AI or PimEyes – or cross-database analysis and visualization with Palantir products), a regulatory sandbox could bring law enforcement authorities and European SMEs together to develop a tailored solution.

A secondary goal of regulatory sandboxes has a rather introspective angle for institutions of AI governance and legislation: with the instrument, a gap in the policy cycle can be closed, as the close ties between providers and supervisory authorities can “facilitate regulatory learning for authorities and undertakings, including with a view to future adaptations of the legal framework” (Recital 139 AIA). This intention can be made further fruitful in the context of AI systems for law enforcement agencies. There is a chance to combine the instrument of regulatory sandboxing with funding schemes for research and development projects and with the process of drafting new legislation in police law. Recital 142 AIA points out the “principle of interdisciplinary cooperation between AI developers, experts on inequality and nondiscrimination (…) and digital rights, as well as academics.” This regulatory idea can be realized in the process of shaping AI tools for police agencies that will comply with constitutional law. If the federal and state levels decide to set up a joint regulatory sandbox in the field of law enforcement, they can potentially also include ministries and parliamentarians who draft and decide on the legal provisions allowing such AI systems for police forces – regulatory learning could take place on the spot. The legal provision, a suitable AI system fulfilling the intended purpose under the sole control of public authorities, and its factual implementation in police IT infrastructure and investigative methods could be tailored altogether – reducing the risk of lawsuits, disproportionate infringements of digital human rights and legal uncertainty.

2.2. Further processing of personal data acquired by law enforcement authorities (Art. 59[2] AIA)

According to Article 57(1) of the AI Act, each Member State must establish at least one regulatory sandbox by 2 August 2026. This may involve setting up a new sandbox or significantly adapting an existing one to support the development, testing and validation of AI systems. Yet, the regulation is open to all kinds of sectoral sandboxes (Moraes, 2023) and particularly to more specific regulatory sandboxes at different state levels (Art. 57[2] AIA). Especially in federal states such as Germany, where law enforcement and policing are largely regulated and carried out at the state rather than the federal level, the option of “AI regulatory sandboxes at regional or local level, or established jointly with the competent authorities of other Member States” might open room for specialized and joint regulatory sandboxes in the field of law enforcement. By setting up such regulatory sandboxes, the goal of “evidence-based regulatory learning” (Art. 57[9] lit. d) could be expanded to comparative legal insights as well.

Regulatory sandboxes can be more than a support mechanism for SMEs and a fast lane to successful conformity assessments. This is underlined by a closer look at Art. 59 AIA. Even though the provision raises manifold legal questions on the further processing of personal data in regulatory sandboxes, this article focuses only on those parts that are relevant for the field of law enforcement. The provision shows that the EU legislator had law enforcement in mind when drafting Chapter VI – even if only as a subordinate goal. It allows for the further processing of “personal data collected for other purposes for developing certain AI systems in the public interest” (Recital 140[1] AIA). What at first sight comes across solely as an authorization for further processing of personal data under data protection law (Art. 6[4] and Art. 5[1][b] GDPR) opens a door in Art. 59(2) AIA for the further processing, in regulatory sandboxes, of personal data that law enforcement authorities have acquired for “the prevention, investigation, detection or prosecution of criminal offenses or the execution of criminal penalties, including safeguarding against and preventing threats to public security.” The provision thus points to law enforcement agencies using their data pools to develop AI systems together with potential providers – from the private or public sector – to enhance their digital capacities and ultimately enable them as deployers. Recital 140(4) AIA underlines that (prospective) providers “should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting expeditiously and in good faith to adequately mitigate any identified significant risks to safety, health and fundamental rights.”

Interestingly, though, the corresponding recitals do not specify the scope and effects of Art. 59(2) AIA in the field of law enforcement. Recital 142 rather revolves around other fields of application of Art. 59(1) AIA and explicitly points out “socially and environmentally beneficial outcomes” (Recital 142 p. 1) as use-cases. There is no recital that can help specify the normative scope of Art. 59(2) AIA. A deeper look at legislative history shows that the whole provision was mentioned neither in the European Commission’s draft of the AIA nor in the European Parliament’s mandate, but was only part of the EU Council’s position. The provision – which most likely goes back to the ministries of the interior of national governments – seems to have been added unchanged to the final text of the AIA in the historically long (Bertuzzi, 2023; Waem and Demircan, 2023) political trilogue, without any further explanation in the recitals (den Heijer et al., 2019). However, the clear structure of the provision makes it possible to interpret it without any further explanation of legislative intent.

Overall, the provision sets a high threshold: the processing of personal data must (1) stay “under the control and responsibility of law enforcement authorities,” (2) “be based on a specific Union or national law” and (3) be “subject to the same cumulative conditions as referred to in paragraph 1,” which adds a total of ten conditions that must be fulfilled and are aimed at ensuring alignment with legitimate public interests (see Textbox 1 below for a full overview).

2.3. Can public (law enforcement) authorities play an active part in regulatory sandboxes?

A combined look at Art. 57(2) and Art. 59(2) AIA shows that the EU legislator has foreseen and intended the possibility of setting up joint regulatory sandboxes that involve different state levels (e.g., the federal and state level in Germany) in which data sets from law enforcement authorities can also be further processed. This opens a wide regulatory space for federal collaboration – to develop, modify or update AI systems in the field of policing, from threat prevention through law enforcement to the execution of criminal penalties.

It is clarified in Art. 58(2)(b) AIA that “providers may also submit applications in partnerships with deployers and other relevant third parties.” In practice, law enforcement agencies can play an active role as (future) deployers of a high-risk AI system (Art. 3[4] AIA). They can also participate as a “relevant third party” in cases where the only or most likely deployer of a specific AI system is a police agency. It can be in the interest of the provider of a high-risk AI system to file an application together with a law enforcement agency, even without it being clear whether a future deployment will take place. Furthermore, Art. 59(1) clearly shows that “a public authority” can also be an active participant in a regulatory sandbox where personal data are processed to “safeguard substantial public interest.” This is underlined by Art. 59(3) AIA, which states that the personal data must stay “under the control and responsibility of law enforcement authorities” – how would this be possible without them being involved in the experimental setting and the oversight mechanism of a regulatory sandbox?

Recital 139 S. 5 underlines that “where appropriate,” competent authorities “should cooperate with other relevant authorities, including those supervising the protection of fundamental rights, and could allow for the involvement of other actors within the AI ecosystem,” thus allowing broad participation of actors who are not directly involved in the product design and marketing of an AI system. However, law enforcement authorities cannot get involved in regulatory sandboxes as authorities supervising the protection of human rights. As deployers of AI-driven surveillance software, they are the object of supervisory activities – and do not meet the requirement to “exercise their duties independently” (Art. 70[1] AIA). According to Art. 74(8) AIA, it is the data protection authorities, “or any other authority designated pursuant to the same conditions laid down in Articles 41 to 44 of Directive (EU) 2016/680,” who carry out the task of market surveillance. A police authority cannot be appointed as a national competent authority; rather, it is supervised by them while using high-risk AI systems in the field of law enforcement.

According to their respective roles, data protection authorities will focus on fundamental rights questions, suggesting privacy-preserving technologies and legal safeguards, while law enforcement agencies will focus on the usability and effectiveness of AI systems for carrying out their duties. In an ideal scenario, these opposing views can be reconciled within a regulatory sandbox through dialogue, technical and organizational means and legal safeguards – and lead to AI systems that suit the needs and technical infrastructure of police authorities while respecting fundamental rights.

3. Interplay between regulatory sandboxes and testing in real world conditions

According to Art. 57(5) S. 2 AIA, regulatory sandboxes can “include testing in real-world conditions supervised therein.” The instrument framed by Art. 60 AIA differs from the experimental space of a regulatory sandbox by allowing providers to work with “real” data outside the confines of a mere experimental scenario. Testing in the real world is not limited to nonpersonal data, but allows for the inclusion of information about real people under certain conditions, such as “informed consent by the participants” (Art. 61 AIA; Buocz et al., 2023). The instrument was proposed by the EU Council and – against the position of the EP – adopted in the final version of the AIA in the political trilogue.

For law enforcement authorities already engaged in a regulatory sandbox, the processing of sensitive data from police databases according to Art. 59(2) AIA might open an interesting extension of their preparations to incorporate state-of-the-art AI systems into their investigative portfolio: testing in real-world conditions can generate new personal data from people interacting with AI systems in an experimental, yet real-life setting and feed it into the process of their development and implementation. Especially in the field of criminal investigations, the “real-world test” of digital tools can turn out to be valuable for several reasons. What works in laboratory conditions might fail “out on the streets,” especially in the field of crime prevention (Gerchick & Cagle, 2024; Schreiner, 2020). Testing under real-world conditions can thus ultimately contribute to preventing “unnecessary or inappropriate (…) criminal justice tech” (DeMeola, 2023). Ultimately, where error-prone technology is used to exercise state authority and impair fundamental rights, the executive’s legitimate claim to authority is at stake.

Pre-testing and evaluation can take place in manifold ways. Volunteers could participate in the experimental space of the regulatory sandbox by providing their personal data that is not available in police databases (and without the restrictions for further processing, e.g., prior anonymization or synthetization, Art. 59[1] lit. b AIA) in a specific app they use in their everyday life – or by partaking in an experimental testing setup during the pre-market phase. In the case of AI systems that translate rare dialects, real interviews could be used for testing (Hamidov, 2025; Zhang and Feng, 2023). In more human rights-sensitive scenarios, volunteers might move around a protected space, testing the capabilities of gait recognition (Mandlik et al., 2025; Shen et al., 2024) – or provide their social media profiles for testing a scraping tool.Footnote 7 A special focus must in any case lie on the intentional implementation of privacy-enhancing technologies (Ajala et al., 2024; Lemieux and Werner, 2024; Melzi et al., 2024; O’Hara, 2022; Seamons, 2022; Shehu and Shehu, 2023) into AI tools that do not follow the paradigm of massive data collection, centralization and intense extraction of information. An example is the work of Koch (2020) on a “video surveillance framework which maintains the advantages of face recognition whilst prohibiting mass-surveillance.” A simplified sketch of this design principle is shown below.
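The following minimal sketch illustrates the “no mass retention” design principle mentioned above. It is the author’s assumption of how such a principle could look in code, not Koch’s (2020) actual framework; the function extract_embedding is a hypothetical placeholder for a face-embedding model. The architectural point is that embeddings from a video stream are compared only against a small, judicially authorized watchlist held locally, and nothing about non-matching persons is stored or centralized.

```python
# Illustrative sketch only (author's assumption, not Koch's actual framework):
# face embeddings from a video stream are matched against a small, judicially
# authorised watchlist; frames and embeddings of non-matching persons are
# discarded immediately and never stored or centralised.
from dataclasses import dataclass
from typing import Iterable, List

import numpy as np


@dataclass
class WatchlistEntry:
    case_id: str            # reference to the underlying judicial authorisation
    template: np.ndarray    # biometric template, kept locally on the device


def extract_embedding(frame: bytes) -> np.ndarray:
    """Hypothetical stand-in for an on-device face-embedding model."""
    rng = np.random.default_rng(abs(hash(frame)) % (2**32))
    vec = rng.normal(size=128)
    return vec / np.linalg.norm(vec)


def screen_stream(frames: Iterable[bytes],
                  watchlist: List[WatchlistEntry],
                  threshold: float = 0.9) -> List[str]:
    """Return the case_ids of watchlist matches; everything else is dropped."""
    alerts: List[str] = []
    for frame in frames:
        embedding = extract_embedding(frame)
        for entry in watchlist:
            if float(embedding @ entry.template) >= threshold:
                alerts.append(entry.case_id)   # only the alert itself is logged
        # no frame, embedding or identity of non-matching persons is retained
    return alerts
```

Within a sandbox, the testing question would then be whether such a design still meets operational needs – and whether the scope of the watchlist, the matching thresholds and the logging can be fixed in the legal basis and its technical safeguards.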

Admittedly, the legislator does not see regulatory sandboxing and testing in real-world conditions as Siamese twins. The two instruments of innovation do not necessarily need to take place together in the same space – the AIA rather suggests the opposite: Art. 60 AIA explicitly sets up “appropriate and sufficient guarantees and conditions” (Recital 141, p. 2 AIA) for testing in real-world conditions outside of a regulatory sandbox. The fact that the legislator only briefly mentions testing in real-world conditions within regulatory sandboxes in Art. 57(5) S. 2 AIA, whilst Art. 60 focuses on stand-alone testing, seems to follow the assumption that the safeguards of a sandbox are sufficient and do not need further legal clarification when combined with real-world testing. This is supported by Art. 58(4) S. 1 AIA, which gives competent authorities the responsibility to “specifically agree on the terms and conditions of such testing, and particularly the appropriate safeguards with the participants, with a view to protecting fundamental rights, health and safety.”

But wouldn’t it be possible for law enforcement authorities to contribute their data to stand-alone testing in real-world conditions and combine it with personal data of real people – outside of a regulatory sandbox? With regard to the text of Art. 59(2) AIA, there seems to be at least no explicit permission in the AIA to further process such data in a real-world testing environment outside of a regulatory sandbox.

It thus seems coherent that personal data can only be used in testing under real-world conditions with the explicit consent of data subjects “participating in such testing” (Art. 61[1] S. 1 AIA), who have received information on “their rights (…) in particular their right to refuse to participate” and have been handed a “dated and documented (…) copy.” However, the legislator mentions exceptions from informed consent in the case of law enforcement, both in Recital 141(3) (“with the exception of law enforcement where the seeking of informed consent would prevent the AI system from being tested”) and in Art. 60(4)(i) AIA. It remains unclear when seeking the consent of a data subject would “prevent the AI system from being tested” while “testing in real-world conditions” would “not have any negative effect on the subjects” (Art. 60[4][i] AIA). How could such personal data even be acquired from a data subject in a stand-alone test under real-world conditions, circumventing the need for the data subject to actively participate by giving their consent? Within the scope of this article, I will have to leave the specifics as an open question for future research.

4. Retrospective biometric identification with scraped data from the internet

The case of the German “security package” (see section 1.3) shows: where government and legislators want to move fast to integrate AI software that has been made available on global markets into their investigative toolbox, questions of constitutionality, compliance with EU law and technical implementation arise. A clear advantage of putting AI systems through a regulatory sandbox before drafting a legal basis, or even before starting to procure AI products in line with such a basis, is that different perspectives – ranging from effective law enforcement and fundamental rights protection to the technical design of such systems – can be brought together at an early stage and reconciled effectively. Open legal questions, such as whether § 15b Asylgesetz allows migration authorities to use an AI system that is banned under Art. 5(1) lit. e AIA – or whether such a system would merely be considered high-risk according to Art. 6(2), Annex III Nr. 7 lit. d,Footnote 8 – could at least be discussed and clarified through the direct involvement of different competent authorities and perspectives. To show how legal and technical questions are intertwined, and where further clarification is needed, I will carry out a brief legal analysis of the question: is § 15b Asylgesetz in breach of Art. 5(1) lit. e AIA – or would an AI system that migration authorities use to put the legal basis into practice merely fall into the high-risk category?

Art. 5(1) lit. e AIA puts a ban on

“the placing on the market, the putting into service for this specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage”

The corresponding Recital 43 reads as follows:

“The placing on the market, the putting into service for that specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage should be prohibited because that practice adds to the feeling of mass surveillance and can lead to gross violations of fundamental rights, including the right to privacy.”

During parliamentary negotiations and final drafting of the “security package” in the Bundestag, major changes have been made to § 15b AsylG, including several additional safeguards.Footnote 9 In order to stay in the scope of this article, and within the question whether the provision is in breach of EU law, I will focus mainly on the legal basis to use a tool for biometric internet scans, which reads:

“Das (…) biometrische Lichtbild des Ausländers darf mit allgemein öffentlich zugänglichen personenbezogenen Daten aus dem Internet mittels einer automatisierten Anwendung zur Datenverarbeitung biometrisch abgeglichen werden.”

It translates as follows:

“The (…) biometric photograph of the foreigner may be biometrically compared with generally publicly accessible personal data from the Internet by means of an automated application for data processing.”

Since the bans of the AI Act only came into force on 2 February 2025, there is little legal literature and no relevant case law on the individual bans. In the case of Art. 5(1) lit. e, there is also no literature yet, since the provision was only proposed in the EP mandate, which came last in the legislative process. On 4 February 2025, the EC did, however, publish “Guidelines on prohibited artificial intelligence practices” (C[2025] 884 final ANNEX), following its obligation in Art. 96(1) lit. b. Guidelines are soft law instruments and non-binding on competent authorities. However, they might be of help in interpreting the provisions and, after all, represent the EU Commission’s perspective, which can influence judicial and administrative decision-making (Craig and de Búrca, 2021; Klamert, 2014).

The national provision in § 15b AsylG solely describes the investigation method that a migration authority is allowed to take: to use software (“automated data processing”) in order to compare a biometric photograph taken of an asylum-seeker “biometrically” with “generally publicly accessible personal data from the internet.” It mentions neither whether an AI system (Art. 3[1] AIA) nor whether a “facial recognition database” is intended or required to fulfill this goal. At first sight, these discrepancies might come as no surprise: the AIA is a product safety law that regulates not only the “use of AI systems” by deployers but above all the “placing on the market” by a provider and the “putting into service for this specific purpose” (Art. 5[1] lit. e AIA), while § 15b AsylG is a legal basis for a migration authority in the process of identifying asylum-seekers. So while the AIA bans certain products, the legal basis in § 15b AsylG does not specify the product used or its exact way of functioning, but rather outlines in an abstract manner what migration authorities are permitted to do.Footnote 10

In any case, the purpose of § 15b AsylG is the identification of a specific person (“Feststellung der Identität,” engl. “determination of identity”), which falls inside the scope of Art. 6(2), Annex III Nr. 7 lit. d (“for the purpose of […] recognizing or identifying natural persons”) – and thus clearly falls into the tightly regulated high-risk category. But does the provision also allow migration authorities to use a banned AI system?

Regarding the explicit wording of Art. 5(1)(e) AIA, § 15b AsylG theoretically leaves room for forms of “automated data processing” for biometric scans using “generally publicly available data from the internet” (§ 15b[1] AsylG), provided that the system used is not clearly classified as an AI system banned for scraping internet data to build facial recognition databases. But are there technical solutions that take a biometric photograph as input and return matching pictures of that same person from “the internet” as output, yet do not qualify as such a banned AI system after all?

Clearview AI has served as the blueprint for the ban in Art. 5(1) lit. e AIA – a provision introduced by the EP mandate: the software, available only to public authorities, scrapes facial images from the internet using machine learning techniquesFootnote 11 and feeds them into a database for the purpose of facial recognition, which users can query with facial images of their own as input. In order to set up the database, Clearview seems to scrape the internet in an “untargeted” way, collecting every facial image it can find and thus not limiting itself to a certain group or cohort of people.Footnote 12 Ultimately, a database of all pictures of human faces on the internet is being created (Martini and Kemper, 2023a). One might argue, though, that the ban does not apply in cases where public authorities use such tools for the purpose of identifying a specific person – the scraping then being “targeted” instead of “untargeted.” However, the legislator has used “untargeted” as an adjective describing the technical process of “scraping of facial images from the internet” in creating or expanding the database – not the purpose of the intended use of the system. As a result, Clearview AI is banned; the German migration authorities will not be allowed to procure it to carry out the identification of asylum-seekers.

In its guidelines, the EC defines the term “untargeted” as follows:

“If a scraping tool is instructed to collect images or video containing human faces only of specific individuals or a pre-defined group of persons, then the scraping becomes targeted, for example to find one specific criminal or to identify a group of victims.”

This would open room for a permissible use of targeted scraping tools to clarify the identity of asylum-seekers. Yet, from a technical perspective, the relation between the “facial recognition database” and the “targeted scraping” then becomes blurry. Either there is a facial recognition database that has been built up by scraping facial images from the internet, which is then used for a targeted search for a specific person – or a “targeted” scraping of the internet takes place in which each image is analyzed in two steps: (1) is it a facial image, and (2) does it match the biometric image of the targeted person? In the second case, there would be no specific “facial recognition database,”Footnote 13 but only the output of a targeted internet scraping. This interpretation at least suggests that the provision has inconsistencies when one considers its practical application beyond the business model of Clearview AI, for which the provision has been tailored.Footnote 14 According to the EC guidelines, however, forms of “reverse engineering image search engines” shall be permissible – which would suggest that PimEyes, which adds a component of facial recognition to reverse image search,Footnote 15 is to be considered a form of – not banned – targeted scraping. Does the question whether a form of scanning the internet for biometric matches in facial images is banned or not thus depend on the technical method? The provision in § 15b AsylG abstains from any specification of the functionality of the technical system. The two architectures at issue are sketched below.
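To make the distinction tangible, the following is a minimal, purely illustrative sketch of the two architectures discussed above. It is not the implementation of Clearview AI, PimEyes or § 15b AsylG; crawl_images and face_embedding are hypothetical placeholders for a web crawler and a face-embedding model.

```python
# Illustrative contrast of the two architectures discussed in the text.
# Assumption: crawl_images() and face_embedding() are hypothetical placeholders.
from typing import Iterable, List, Tuple

import numpy as np


def face_embedding(image: bytes) -> np.ndarray:
    """Hypothetical stand-in for a biometric feature extractor."""
    rng = np.random.default_rng(abs(hash(image)) % (2**32))
    vec = rng.normal(size=128)
    return vec / np.linalg.norm(vec)


def crawl_images(source: str) -> Iterable[Tuple[str, bytes]]:
    """Hypothetical stand-in for an internet crawler yielding (url, image) pairs."""
    return []


# (a) Database-building pattern: untargeted scraping retains every face it
#     finds in a persistent facial recognition database -- the practice that
#     Art. 5(1) lit. e AIA addresses.
def build_face_database(source: str) -> List[Tuple[str, np.ndarray]]:
    database: List[Tuple[str, np.ndarray]] = []
    for url, image in crawl_images(source):
        database.append((url, face_embedding(image)))  # every face is stored
    return database


# (b) "Targeted" pattern: each crawled image is compared on the fly against a
#     single biometric template; non-matches are discarded and no database is
#     created or expanded.
def targeted_search(source: str, probe: bytes, threshold: float = 0.85) -> List[str]:
    template = face_embedding(probe)
    hits: List[str] = []
    for url, image in crawl_images(source):
        if float(face_embedding(image) @ template) >= threshold:
            hits.append(url)  # only matching sources are kept
    return hits
```

Whether variant (b) really escapes the ban is precisely the interpretive question raised above: the crawler must still detect and biometrically process every face it encounters, even if it stores none of them.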

It turns out to be not solely a legal but an inherently interdisciplinary question whether biometric scans with internet data can be carried out with software that works either (1) without a “facial recognition database”Footnote 16 in useFootnote 17 or (2) based on “untargeted” (or targeted) scraping of the internet.Footnote 18 The Federal Commissioner for Data Protection and Freedom of Information, Prof. Dr. Louisa Specht-Riemenschneider, stated in the expert hearing in the German parliament that targeted scraping of the internet for biometric matches without using a forbidden database “is unrealistic under today’s technical conditions.”Footnote 19 On the other hand, the EC considers “reverse engineering image search engines” a permitted form of targeted scraping in its guidelines (EC guidelines p. 79). So what the EC guidelines seem to imply is: Clearview AI – which has set up a database for its clients for facial recognition – is banned, while PimEyes – which reverse engineers image search engines combined with biometric face recognition – is permitted under the AI Act.

Considering the legislative intent to ban a “practice” that creates “the feeling of mass surveillance and can lead to gross violations of fundamental rights, including the right to privacy” (Recital 43 AIA), it nevertheless seems questionable to draw the line between banned and permitted along merely technical distinctions. The ban would only apply when a facial recognition database similar to that of Clearview AI is used as a buffer – and not when the internet is scraped to find individual matches via reverse engineering of image search engines. Apart from the question whether such search engines deploy a database in the background to make search results available quickly: from the citizens’ perspective, all personal data ever publicly published – intentionally or unintentionallyFootnote 20 – on the internet would be open to collection by law enforcement agencies. Would it make a difference whether facial images are ultimately stored in a facial recognition database or gained through reverse image searches with a biometric matching option? One could argue that data subjects have more influence on whether one of their pictures is findable in the open web through search engines than on a database that cannot be accessed publicly. For courts and supervisory authorities, too, it might be more difficult to control the content and scope of such a database – which could also be filled with images acquired through social media platforms, data brokers or nonpublic parts of the internet – than a browser-based application that builds on existing search engines of the public internet.

Even though there is no explicit violation of the wording of Art. 5(1) lit. e AIA – since it ultimately remains unclear and vague how the German migration authorities will make use of the legal basis in § 15b(1) AsylG – it seems at this point at least doubtful that the legal basis can be used without violating Art. 5(1) lit. e AIA. The German legislator has nevertheless decided to adopt the law. Unlike the government draft, however, the law as adopted by the Bundestag contains a provision requiring the German Federal Government to first pass a subordinate statutory regulation (Rechtsverordnung) specifying technical details and safeguards (§ 15b[11] AsylG) before migration authorities can ultimately use such software (see footnote 18 above). There is thus still room for interdisciplinary efforts to decide, while preparing the statutory regulation in consultation with the national data protection authority, whether there is a technical and legal way to implement § 15b AsylG without violating Art. 5(1) lit. e AIA. Overall, the analysis of § 15b AsylG in light of the bans of the AIA shows that there is and will be immense legal and technical uncertainty in carrying out such biometric internet scans.

I argueFootnote 21 that the described legal and technical specifics could best be clarified within a regulatory sandbox, with close regard to the actual AI system under development and in dialogue with the competent authorities for AI regulation as well as the authorities for the protection of fundamental rights. Going through a regulatory sandbox will surely take more time than just drafting a legal basis for biometric internet scans. Yet, as a thorough first step, it can potentially prevent a maldevelopment towards unlawful infringements of fundamental rights, politically unpleasant judgments of high courts, an inefficient allocation of public funds, and a loss of trust in law enforcement authorities. In my opinion, however, the outcome in the case of biometric internet scans would be: such systems are banned under Art. 5(1) lit. e AIA and cannot be implemented in a compliant manner.

5. Outlook: A sketch for a joint regulatory sandbox of different state levels and law enforcement agencies

The analysis of the provisions of the AIA has shown that different state levels can set up joint regulatory sandboxes in which not only potential providers but also law enforcement agencies as potential deployers or relevant third parties can work together in a protected and experimental environment. In a regulatory sandbox, personal data from data pools of law enforcement agencies can be further processed (Art. 59[2] AIA) and testing under real-world conditions with personal data of data subjects who give their informed consent can take place (Art. 57[5] S. 2 AIA). It remains unclear when personal data can be used without informed consent in testing under real-world conditions outside of a regulatory sandbox according to Art. 60(4)(i).

Through the active participation of national competent authorities for the AIA, public bodies for the protection of fundamental rights (e.g., data protection authorities), civil society organizations, academic experts (e.g., from the scientific panel of independent experts, Art. 68 AIA) or even civil servants from ministries who draft laws, different goals can be reached simultaneously:

  • It can be avoided that systems are developed that are banned under Art. 5 AIA or that high-risk AI systems are (unintentionally) used in breach of the provisions of Chapter III.

  • AI systems can be designed and developed in a manner that suits the needs and preconditions of the relevant law enforcement agencies and their existing IT infrastructure.

  • National particularities in the regulation of law enforcement can be considered, be it in the realms of constitutional law, police law or shared/separated responsibility on federal levels.

  • A high level of transparency on how the relevant AI systems work, including data flows and data analysis mechanisms, can be reached – potentially leading to AI software that is effectively open-source.

  • A way can be paved towards effective public procurement of AI systems for law enforcement authorities that follows the overall political goal of digital or technological sovereignty (European Commission, 2023; Floridi, 2020; Pohle and Thiel, 2020).

  • Public funds can be used efficientlyFootnote 22 if law enforcement agencies of different state levels cooperate, rather than operate separately, in procuring AI systems.

  • SMEs and start-ups that are currently not among the established suppliers of law enforcement authorities in the EU might get access to a new market segment of so-called GovTech (Bharosa, 2022; Kuziemski et al., 2022).

In the preparation of joint regulatory sandboxes in law enforcement, international best practices should be taken into close consideration, e.g., the “criminal justice tech sandbox” in Utah (Christoph, 2023), especially in order to “ensure informed, scientifically rigorous and objective evaluations.” This would contribute to the goal of “evidence-based regulatory learning” (Art. 57[9] lit. d AIA) in an optimal way.

So far, there is no specific legal basis in either national or Union law that allows such further processing in a regulatory sandbox. But, as this article has shown, the door is open for national legislator(s) to take the opportunity to tailor a national provision – at the federal and/or state level – that meets the standard of Art. 59(2) AIA, and later even to include testing under real-world conditions in the specific joint regulatory sandbox. In order to set up a joint regulatory sandbox for AI systems that can contribute to the overall goal of developing legally compliant and technologically sovereign AI systems for law enforcement agencies, the national legislator(s) need to adopt a legal basis that meets the criteria summarized in Textbox 1.

Personal data lawfully collected by law enforcement agencies can be used in a regulatory sandbox according to Art. 59(2) AIA if:

they were originally collected for the purposes of the prevention, investigation, detection or prosecution of criminal offenses or the execution of criminal penalties, including safeguarding against and preventing threats to public security;

they stay “under the control and responsibility of law enforcement authorities”;

the processing in the regulatory sandbox “is based on a specific Union or national law”; and

the conditions in Art. 59(1) AIA are cumulatively met, which means:

  • the AI system is developed to protect substantial public interest by a public authority or another natural or legal person in the area of public safety (lit. a [i] 1st alternative)

  • the data processed are necessary for complying with one or more of the requirements referred to in Chapter III, Section 2, where those requirements cannot effectively be fulfilled by processing anonymised, synthetic or other nonpersonal data;

  • there are effective monitoring mechanisms to identify if any high risks to the rights and freedoms of the data subjects, as referred to in Article 35 of Regulation (EU) 2016/679 and in Article 39 of Regulation (EU) 2018/1725, may arise during the sandbox experimentation, as well as response mechanisms to promptly mitigate those risks and, where necessary, stop the processing;

  • Any personal data to be processed in the context of the sandbox is in a functionally separate, isolated and protected data processing environment under the control of the prospective provider, and only authorized persons have access to that data.

  • Providers can further share the originally collected data only in accordance with Union data protection law; any personal data created in the sandbox cannot be shared outside the sandbox.

  • Any processing of personal data in the context of the sandbox neither leads to measures or decisions affecting the data subjects nor does it affect the application of their rights laid down in Union law on the protection of personal data.

  • Any personal data processed in the context of the sandbox is protected by means of appropriate technical and organisational measures and deleted once the participation in the sandbox has terminated or the personal data has reached the end of its retention period.

  • The logs of the processing of personal data in the context of the sandbox are kept for the duration of the participation in the sandbox, unless provided otherwise by Union or national law.

  • A complete and detailed description of the process and rationale behind the training, testing and validation of the AI system is kept together with the testing results as part of the technical documentation referred to in Annex IV.

  • A short summary of the AI project developed in the sandbox, its objectives and expected results is published on the website of the competent authorities; this obligation shall not cover sensitive operational data in relation to the activities of law enforcement, border control, immigration or asylum authorities.

A joint regulatory sandbox for law enforcement purposes could be set up permanently, making it possible to move fast when new technological trends arise or the state of the art is further developed. Joint regulatory sandboxes in the field of law enforcement would – and should – not focus solely on surveillance tools that are highly sensitive in terms of fundamental rights. AI systems in the fields of translation, text or image generation, trace analysis or data visualization can also be tailored to the specific needs of law enforcement authorities – which tend to differ from those of commercial enterprises (Kemper & Kolain, 2025).

Whether sandboxes will turn out to be “an easier sell politically” (Christoph, 2023) remains to be seen. In the case of the German “security package”, they were not – maybe due to the novelty of the provisions of the AIA as well as the fact that the mandatory regulatory sandbox (Art. 57(1) AIA) has not been set up yet. But this might change in the future. In a collaborative effort, legislators, competent supervisory authorities and law enforcement authorities of different state levels, potential providers of AI systems and research institutions can walk through the door that Chapter VI has opened.

Especially in times of political tension, polarization and a rise of digital authoritarianism (Dragu & Lupu, 2021; Hellmeier, 2016; Pearson, 2024; Taylor, 2022; Turner, 2019; Wilson, 2022), new surveillance laws that disproportionately interfere with fundamental digital rights – and are later struck down by constitutional courts or turn out to be non-functional – can destroy trust in the liberal constitutional state and in democratic law enforcement authorities. Against this backdrop, it is worth modernizing the police thoroughly and with respect for a high standard of digital human rights: by testing new police technologies in regulatory sandboxes first, and only then following up with a legal basis in conformity with constitutional law and a joint procurement plan for technologically sovereign AI systems that align with European values.

Acknowledgements

A special thanks goes to Konstantin von Notz and Dietrich Haußecker for inspiring and discussing the regulatory idea of regulatory sandboxes in law enforcement early on, to Hannah Ruschemeier for inviting me to write this paper, to Katharina Buchsbaum for assisting me in researching for and finalizing the manuscript and to Cigdem Caglayan Kolain for her emotional support.

Funding statement

There are no funders to report for this submission.

Competing interests

The author was involved as a digital policy advisor in the German Federal Parliament (Bundestag) in the final steps of the legislative process around the “Sicherheitspaket,” which is used as a case study in this article.

Michael Kolain is a legal scholar who conducts research at the interface between law, technology and public policy. His research focuses on the rule of law and digital human rights, the regulation of emerging digital technologies, digital statehood, and data (protection) law.

Footnotes

1 PimEyes has set up a large database of facial images from the internet combined with biometric detection tools: users can feed in a facial image and get matches on publicly available images. A similar product on the market is Clearview AI (Rhinelander, De Fuentes & O’Driscoll, 2024).

2 Gesetz zur Verbesserung der inneren Sicherheit und des Asylsystems, https://www.recht.bund.de/bgbl/1/2024/332/VO.html; Entwurf eines Gesetzes zur Verbesserung der Terrorismusbekämpfung (Deutscher Bundestag, 2024), https://dip.bundestag.de/vorgang/gesetz-zur-verbesserung-der-terrorismusbekämpfung/315333

3 Key measures of the 2024 draft bill include biometric internet scans, which introduce a power for law enforcement to match biometric data such as facial and voice data against publicly accessible internet data using automated technical procedures in order to identify and locate suspected terrorists and offenders. Another key measure is automated data analysis, which creates powers for the Federal Criminal Police Office (BKA) and the Federal Police to conduct automated analyses of large volumes of data. See Entwurf eines Gesetzes zur Verbesserung der Terrorismusbekämpfung (Deutscher Bundestag, 2024). Retrieved from https://dip.bundestag.de/vorgang/gesetz-zur-verbesserung-der-terrorismusbekämpfung/315333.

4 Disclaimer: the author was involved in those negotiations as digital policy advisor to a government-supporting parliamentary group.

5 This is underlined by Recital 142 AIA which states “that SMEs, including start-ups, that have a registered office or a branch in the Union” should have “priority access to the AI regulatory sandboxes” as well as Art. 58(2)(d) AIA that grants them access free of charge “without prejudice to exceptional costs that national competent authorities may recover in a fair and proportionate manner.”

6 Recital 143 AIA at least points in the direction of altering public procurement practices when it points out that the EC “should complement Member States’ efforts (…) by organizing appropriate communication campaigns to raise awareness about the obligations arising from this Regulation, and by evaluating and promoting the convergence of best practices in public procurement procedures in relation to AI systems.”

7 These are mere examples intended to illustrate how the regulatory instrument could be put into practice, not suggestions to introduce such surveillance methods through legislation. A critical question that future research might take a deeper look at is whether informed consent to participate in testing under real-world conditions could be obtained through privacy notices of third-party services, such as specific apps or even data brokers.

8 It covers AI systems used by migration or border control authorities for “the purpose of detecting, recognising or identifying natural persons, with the exception of the verification of travel documents”.

9 The Federal Government has to specify the details of the investigation method in a statutory ordinance (Rechtsverordnung) before being permitted to use it and has to consult the data protection authority before doing so; the obligations to log the use and users of the digital tools were tightened; scans of real-time data (e.g. livestreams) are prohibited; and when the automated processing tool is used to identify an asylum-seeker, it must be technically ensured that the country of origin does not obtain any knowledge of this. See Deutscher Bundestag (2024, October 16), Beschlussempfehlung und Bericht des Ausschusses für Inneres und Heimat (4. Ausschuss), p. 9 ff. Retrieved March 15, 2025, from https://dserver.bundestag.de/btd/20/134/2013413.pdf.

10 Whether the provision comes into conflict with the principle of certainty – or is too vague to allow an interference with the fundamental right to protection of personal data – ultimately stays outside the scope of this article. This complex question needs to be left to future research in the field of German constitutional law and EU fundamental rights doctrine.

11 Even though the exact functioning of these products is unknown due to commercial secrets and the proprietary character of the software code, it must be assumed that an AI system (Art. 3(1) AIA) is in place: in order to set up a database to match input data, it is necessary to analyze a huge amount of data in a short time with the objective of sorting out internet pictures containing a human face. Considering the variety of pictures of a human face apart from the obvious passport perspective (face in the background, profile view, side view, etc.), it must be assumed that forms of machine learning are used rather than purely deterministic algorithms.

12 The EC similarly defines the term in its guidelines as “without a specific focus on a given individual or group of individuals”. Just as a side note: a focus on a specific group of individuals could pose constitutional questions of discrimination and equality, e.g., when we think about a facial recognition database that stems from “targeted” scraping aimed only at people with certain ethnic features.

13 See also the EC guidelines (p. 79): “It is questionable whether the matches would appear in a ‘database.’” European Commission (2025, February 4). Commission guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act). https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-prohibited-artificial-intelligence-ai-practices-defined-ai-act

14 See the clear example on p. 79 of the EC guidelines (European Commission, 2025).

15 Cf. “PimEyes uses a reverse image search mechanism and enhances it by face recognition technology to allow you to find your face on the Internet.” Retrieved March 15, 2025, from https://pimeyes.com/en.

16 The EC guidelines define database broadly as any collection of data, or information, that is specially organized for rapid search and retrieval by a computer and a “facial image database” as one that is “capable of matching a human face from a digital image or video frame against a database of faces, comparing it to images in the database and determining whether there is a likely match between the two” (European Commission, 2025, p. 78).

17 Not being an expert in data scraping or internet crawlers, but a legal scholar, I cannot answer this question conclusively. However, it seems at least questionable whether a technical solution for the purpose of identifying asylum-seekers, which takes a biometric picture as input and produces biometric matches as output, could be designed and made functional – basically crawling the (whole) internet – without creating a database.

18 See Stellungnahme der BfDI zum Sicherheitspaket (Bundesbeauftragte für den Datenschutz und die Informationsfreiheit, 2024, p. 3).

19 See Stellungnahme der BfDI zum Sicherheitspaket (Bundesbeauftragte für den Datenschutz und die Informationsfreiheit, 2024, p. 3), i.e., the statement of the Federal Commissioner for Data Protection and Freedom of Information, Prof. Dr Louisa Specht-Riemenschneider, delivered during an expert hearing in the German parliament.

20 Be it pictures that a “momfluencer” has made public without the explicit consent or will of her child, holiday pictures in which the data subject is merely a bystander in the background, or even cases of “revenge porn” or other forms of unwanted publication. Having to suspect that law enforcement agencies are able to scrape the entire public internet and social media platforms for biometric matches certainly contributes to a feeling of mass surveillance and affects the practical exercise of the right to privacy in the future.

21 Disclaimer: I don’t just argue for this solution in this academic paper but have also suggested it as a way forward in my past role as policy advisor during the parliamentary negotiations for the “security package” between the governing parties. Yet, due to a lack of political compromise, the policy suggestion did not find its way into the law.

22 Or, as Christoph (2023) puts it very comprehensively: “society will benefit where its institutions are not wasting public funds on the acquisition of any ‘unnecessary or inappropriate,’ privately developed criminal justice tech that does not function, as it should, in the interest of truth and justice”.

References

Ajala, O. A., Arinze, C. A., Ofodile, O. C., Okoye, C. C., & Daraojimba, O. D. (2024). Reviewing advancements in privacy-enhancing technologies for big data analytics in an era of increased surveillance. World Journal of Advanced Engineering Technology and Sciences, 11(1), 294–300. https://doi.org/10.30574/wjaets.2024.11.1.0060
Almada, M., & Petit, N. (2025). The EU AI Act: Between the rock of product safety and the hard place of fundamental rights. Common Market Law Review, 62(1), 85–120. https://doi.org/10.54648/COLA2025004
Awwad, B. (Ed.). (2024). The AI revolution: Driving business innovation and research. Springer.
Gausling, T. (2020). KI und DS-GVO im Spannungsverhältnis. In J. G. Ballestrem, U. Bär, T. Gausling, S. Hack, & S. von Oelffen (Eds.), Künstliche Intelligenz: Rechtsgrundlagen und Strategien in der Praxis (1st ed., pp. 11–53). Wiesbaden: Springer Gabler.
Bäuerle, M. (2024). Karlsruhe locuta, causa non finita – Palantir, die Polizei und kein Ende. ZevEDI. Retrieved March 14, 2025, from https://zevedi.de/karlsruhe-locuta-causa-non-finita-palantir-die-polizei-und-kein-ende/
Bellanova, R., Carrapico, H., & Duez, D. (2022). Digital/sovereignty and European security integration: An introduction. European Security, 31(3), 337–355. https://doi.org/10.1080/09662839.2022.2101887
Berk, R. A. (2021). Artificial intelligence, predictive policing, and risk assessment for law enforcement. Annual Review of Criminology, 4(1), 209–237. https://doi.org/10.1146/annurev-criminol-051520-012342
Bertuzzi, L. (2023, December 7). AI Act: EU policymakers nail down rules on AI models, butt heads on law enforcement. Euractiv. Retrieved March 15, 2025, from https://www.euractiv.com/section/artificial-intelligence/news/ai-act-eu-policymakers-nail-down-rules-on-ai-models-butt-heads-on-law-enforcement/
Bharosa, N. (2022). The rise of GovTech: Trojan horse or blessing in disguise? A research agenda. Government Information Quarterly, 39(3), 101692. https://doi.org/10.1016/j.giq.2022.101692
Boura, M. (2024). The digital regulatory framework through EU AI Act: The regulatory sandboxes’ approach. Athens Journal of Law, 10(3), 385. https://doi.org/10.30958/ajl.10-3-8
Bovermann, M. A., Fink, J., & Mutter, J. (2024). PimEyes User auf den Spuren der RAF: Verbotene Früchte und der Gesetzesvorbehalt in polizeilichen Ermittlungsverfahren. https://doi.org/10.59704/ee5e12eaf02c1341
Bundesbeauftragte für den Datenschutz und die Informationsfreiheit. (2024). Stellungnahme zum Entwurf eines Gesetzes zur Verbesserung der Terrorismusbekämpfung an den Ausschuss für Inneres und Heimat des Deutschen Bundestags. Retrieved March 15, 2025, from https://www.bfdi.bund.de/SharedDocs/Downloads/DE/DokumenteBfDI/Stellungnahmen/2024/StgN_Terrorismusbekämpfung-Gesetz.html
Deutscher Bundestag. (2024, October 16). Beschlussempfehlung und Bericht des Ausschusses für Inneres und Heimat (4. Ausschuss). Retrieved March 15, 2025, from https://dserver.bundestag.de/btd/20/134/2013413.pdf
Buocz, T., Pfotenhauer, S., & Eisenberger, I. (2023). Regulatory sandboxes in the AI Act: Reconciling innovation and safety? Law, Innovation and Technology, 15(2), 357–389. https://doi.org/10.1080/17579961.2023.2245678
Casaburo, D., & Marsh, I. (2024). Ensuring fundamental rights compliance and trustworthiness of law enforcement AI systems: The ALIGNER Fundamental Rights Impact Assessment. AI and Ethics, 4(4), 1569–1582. https://doi.org/10.1007/s43681-024-00560-0
Cataleta, M. S. (2021). Humane artificial intelligence: The fragility of human rights facing AI. East West Center.
Cebral-Loureda, M., Rincón-Flores, E. G., & Sanchez-Ante, G. (Eds.). (2023). What AI can do: Strengths and limitations of artificial intelligence. CRC Press. https://doi.org/10.1201/b23345
Charisi, V., & Dignum, V. (2024). Operationalizing AI regulatory sandboxes for children’s rights and well-being. In C. Régis, J.-L. Denis, M. L. Axente, & A. Kishimoto (Eds.), Human-centered AI: A multidisciplinary perspective for policy-makers, auditors, and users (1st ed., pp. 231–249). New York: Chapman and Hall/CRC.
Chataut, R., Nankya, M., & Akl, R. (2024). 6G networks and the AI revolution—Exploring technologies, applications, and emerging challenges. Sensors, 24(6), 1888. https://doi.org/10.3390/s24061888
Chowdhury, M., & Sadek, A. W. (2012). Advantages and limitations of artificial intelligence. Artificial Intelligence Applications to Critical Transportation Issues, 6(3), 360–375.
Christoph, M. C. (2023). Criminal justice technology and the regulatory sandbox: Toward balancing justice, accountability, and innovation. University of Pittsburgh Law Review, 84, 971.
Craig, P., & de Búrca, G. (2021). EU law: Text, cases, and materials (7th ed.). Oxford University Press.
den Heijer, M., Abeelen, T. V. O. V. D., & Maslyka, A. (2019). On the use and misuse of recitals in European Union law. Amsterdam Law School Research Paper No. 2019-31.
Der Hamburgische Beauftragte für Datenschutz und Informationsfreiheit. (2020). Retrieved March 15, 2025, from https://datenschutz-hamburg.de/fileadmin/user_upload/HmbBfDI/Pressemitteilungen/2020/2020-08-18_Clearview.pdf
Donohue, L. K. (2014). Bulk metadata collection: Statutory and constitutional considerations. Harvard Journal of Law & Public Policy, 37, 757.
Dragu, T., & Lupu, Y. (2021). Digital authoritarianism and the future of human rights. International Organization, 75(4), 991–1017. https://doi.org/10.1017/S0020818320000624
Edri. (2020). Facial recognition & biometric surveillance: Document pool. Retrieved June 4, 2021, from https://edri.org/our-work/facial-recognition-document-pool/
Deutscher Bundestag. (2024). Entwurf eines Gesetzes zur Verbesserung der Terrorismusbekämpfung. Retrieved March 15, 2025, from https://dip.bundestag.de/vorgang/gesetz-zur-verbesserung-der-terrorismusbekämpfung/315333
Euractiv. (2024, August 25). German police arrest suspect in stabbing rampage. Retrieved March 13, 2025, from https://www.euractiv.com/section/defence/news/german-police-arrest-suspect-in-stabbing-rampage/
European Commission. (2023). 2023 report on the state of the digital decade. EU Digital Strategy. https://digital-strategy.ec.europa.eu/en/library/2023-report-state-digital-decade
European Commission. (2025, February 4). Commission guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act). https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-prohibited-artificial-intelligence-ai-practices-defined-ai-act
European Parliamentary Research Service (EPRS). (2020). Ideas paper: The future of digital policy in the EU. European Parliament. https://www.europarl.europa.eu/RegData/etudes/BRIE/2020/651992/EPRS_BRI(2020)651992_EN.pdf
Floridi, L. (2020). The fight for digital sovereignty: What it is, and why it matters, especially for the EU. Philosophy and Technology, 33(3), 369–378. https://doi.org/10.1007/s13347-020-00423-6
Fontes, C., & Perrone, C. (2021). Ethics of surveillance: Harnessing the use of live facial recognition technologies in public spaces for law enforcement (Research brief). Technical University of Munich, Munich Center for Technology in Society. https://ieai.mcts.tum.de/wp-content/uploads/2021/12/ResearchBrief_December_Fontes-1.pdf
Gerchick, M., & Cagle, M. (2024, February 7). When it comes to facial recognition, there is no such thing as a magic number. ACLU. Retrieved March 15, 2025, from https://www.aclu.org/news/privacy-technology/when-it-comes-to-facial-recognition-there-is-no-such-thing-as-a-magic-number
Greenstein, S. (2022). Preserving the rule of law in the era of artificial intelligence (AI). Artificial Intelligence and Law, 30(3), 291–323. https://doi.org/10.1007/s10506-021-09294-4
Großekemper, T., Höfner, R., Rosenbach, M., & Wiedmann-Schmidt, W. (2024, February 28). Ex-RAF-Terroristin: Wie Podcast-Macher beinahe Daniela Klette fanden. Der Spiegel. Retrieved March 10, 2025, from https://www.spiegel.de/panorama/justiz/ex-raf-terroristin-wie-podcast-macher-und-ein-bellingcat-rechercheur-fast-daniela-klette-fanden-a-0808c25c-06f3-44f9-9a69-9f21076dfced
Hamidov, A. (2025). Simultaneous interpretation: Bridging language barriers in real-time. International Journal of Artificial Intelligence, 1(1), 61–63.
Heidebrecht, S. (2024). From market liberalism to public intervention: Digital sovereignty and changing European Union digital single market governance. JCMS: Journal of Common Market Studies, 62(1), 205–223.
Hellmeier, S. (2016). The dictator’s digital toolkit: Explaining variation in internet filtering in authoritarian regimes. Politics & Policy, 44(6), 1158–1191. https://doi.org/10.1111/polp.12189
Helminger, J. (2022). Datenschutzrechtliche Herausforderungen bei der Verwendung von Trainingsdaten. ELSA Austria Law Review, 7(1), 46–54. https://doi.org/10.33196/ealr202201004601
Hesse, K. (1999). Grundzüge des Verfassungsrechts der Bundesrepublik Deutschland. C. F. Müller.
Jandt, S. (2017). Datenschutz durch Technik in der DS-GVO: Präventive und repressive Vorgaben zur Gewährleistung der Sicherheit der Verarbeitung. Datenschutz und Datensicherheit – DuD, 41(9), 562–566. https://doi.org/10.1007/s11623-017-0831-y
Jung, W. K., & Kwon, H. Y. (2024, October). Privacy and data protection regulations for AI using publicly available data: Clearview AI case. In Proceedings of the 17th International Conference on Theory (pp. 48–55). https://doi.org/10.1145/3680127.3680200
Kemper, C., & Kolain, M. (2025). K9 police robots: An analysis of current canine robot models through the lens of legitimate citizen-robot-state-interaction. UCLA Journal of Law and Technology (UCLA JOLT), 30(1), 195.
Khder, M. A. (2021). Web scraping or web crawling: State of art, techniques, approaches and application. International Journal of Advances in Soft Computing & Its Applications, 13(3), 145–168. https://doi.org/10.15849/IJASCA.211128.11
Klamert, M. (2014). The principle of loyalty in EU law. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199683123.001.0001
Koch, W. (2020). Privacy-preserving face recognition in large-scale video surveillance systems. Retrieved March 15, 2025, from www.william-koch.com/papers/2020-12-14-privacy-preserving-face-recognition
Kommers, D. P., & Miller, R. A. (2012). The constitutional jurisprudence of the Federal Republic of Germany (Rev. and expanded ed.). Duke University Press.
Kurmayer, N. J. (2024, August 29). After Solingen attack: Berlin clamps down on knives, sets up 2 task forces. Euractiv. Retrieved March 13, 2025, from https://www.euractiv.com/section/politics/news/after-solingen-attack-berlin-clamps-down-on-knives-sets-up-2-task-forces/
Kuziemski, M., Mergel, I., Ulrich, P., & Martinez, A. (2022). GovTech practices in the EU: A glimpse into the European GovTech ecosystem, its governance, and best practices.
Lemieux, V. L., & Werner, J. (2024). Protecting privacy in digital records: The potential of privacy-enhancing technologies. ACM Journal on Computing and Cultural Heritage, 16(4), 1–18.
Makridakis, S. (2017). The forthcoming artificial intelligence (AI) revolution: Its impact on society and firms. Futures, 90, 46–60. https://doi.org/10.1016/j.futures.2017.03.006
Mandlik, S. B., Labade, R. P., Chaudhari, S. V., & Agarkar, B. S. (2025). Review of gait recognition systems: Approaches and challenges. International Journal of Electrical & Computer Engineering, 15(1).
Martini, M., & Kemper, C. (2023a). Clearview AI: Das Ende der Anonymität? Teil 1: Zulässigkeit der App. Computer und Recht, 39(5), 341–348. https://doi.org/10.9785/cr-2023-390516
Martini, M., & Kemper, C. (2023b). Clearview AI: Das Ende der Anonymität? Teil 2: Einsatz der Clearview-App durch die Polizei. Computer und Recht, 39(6), 414–420. https://doi.org/10.9785/cr-2023-390622
Masala, P. (2024, June). Constitutional challenges and regulatory framework: Will the EU’s Artificial Intelligence Act ensure adequate protection of fundamental rights and democracy? In International KES Conference on Intelligent Decision Technologies (pp. 203–213). Singapore: Springer Nature Singapore. https://doi.org/10.1007/978-981-97-7419-7_18
Melzi, P., Rathgeb, C., Tolosana, R., Vera-Rodriguez, R., & Busch, C. (2024). An overview of privacy-enhancing technologies in biometric recognition. ACM Computing Surveys, 56(12), 1–28. https://doi.org/10.1145/3664596
Micklitz, H. W., Pollicino, O., Reichman, A., Simoncini, A., Sartor, G., & De Gregorio, G. (Eds.). (2021). Constitutional challenges in the algorithmic society. Cambridge University Press. https://doi.org/10.1017/9781108914857
Moraes, T. (2023). Regulatory sandboxes as tools for ethical and responsible innovation of artificial intelligence and their synergies with responsive regulation. Retrieved March 15, 2025, from https://direitorio.fgv.br/conhecimento/quest-ai-sovereignty-transparency-and-accountability-official-outcome-un-igf-data-and
Muller, C. (2020). The impact of artificial intelligence on human rights, democracy and the rule of law. Strasbourg: Council of Europe.
Müller-Quade, J., & Houdeau, D. (2023). Datenschatz für KI nutzen, Datenschutz mit KI wahren: Technische und rechtliche Ansätze für eine datenschutzkonforme, gemeinwohlorientierte Datennutzung. Lernende Systeme. Retrieved March 10, 2025.
Navarro, L. C. M. (2023). Court and police interpreters in the digital era: How does technology shape their workflow? In Proceedings of the International Workshop on Interpreting Technologies SAY IT AGAIN 2023, 2–3 November, Malaga, Spain (pp. 59–66). INCOMA Ltd.
O’Hara, K. (2022). Privacy, privacy-enhancing technologies & the individual.
Pearson, J. S. (2024). Defining digital authoritarianism. Philosophy and Technology, 37(2), 73. https://doi.org/10.1007/s13347-024-00754-8
Pohle, J., & Thiel, T. (2020). Digital sovereignty. Internet Policy Review. Retrieved March 15, 2025, from https://policyreview.info/concepts/digital-sovereignty
Reclaim Your Face. (n.d.). Home. Retrieved March 15, 2025, from https://reclaimyourface.eu/
Reuter, M. (2024). Von Amnesty bis Seawatch: Breite Front gegen Überwachungspaket der Ampel. netzpolitik.org. Retrieved March 15, 2025, from https://netzpolitik.org/2024/von-amnesty-bis-seawatch-breite-front-gegen-ueberwachungspaket-der-ampel/
Rezende, I. N. (2020). Facial recognition in police hands: Assessing the ‘Clearview case’ from a European perspective. New Journal of European Criminal Law, 11(3), 375–389. https://doi.org/10.1177/2032284420948161
Rhinelander, J., De Fuentes, C., & O’Driscoll, C. (2024). Clearview AI: Ethics and artificial intelligence technology. In J. Schmutzler, L. A. Palacios-Chacón, S. Burvill, & V. Andonova (Eds.), Cases on entrepreneurship and innovation (1st ed., pp. 237–246). Edward Elgar Publishing. https://doi.org/10.4337/9781802204537.00029
Ruschemeier, H. (2025). Thinking outside the box? In B. Steffen (Ed.), Bridging the gap between AI and reality: AISoLA 2023 (Lecture Notes in Computer Science). Springer. https://doi.org/10.1007/978-3-031-73741-1_20
Schreiner, M. (2020, June 27). KI-Kontrolle Berlin Südkreuz: Die Bundespolizei liegt falsch. The Decoder. Retrieved March 15, 2025, from https://the-decoder.de/ki-kontrolle-berlin-suedkreuz-die-bundespolizei-liegt-falsch/
Seamons, K. E. (2022). Privacy-enhancing technologies. https://doi.org/10.1007/978-3-030-82786-1_8
Shehu, V. P., & Shehu, V. (2023). Human rights in the technology era – Protection of data rights. European Journal of Economics, 7(2), 1–10.
Shen, C., Yu, S., Wang, J., Huang, G. Q., & Wang, L. (2024). A comprehensive survey on deep gait recognition: Algorithms, datasets, and challenges. IEEE Transactions on Biometrics, Behavior, and Identity Science.
Taylor, M. (2022). China’s digital authoritarianism (pp. 65–77). Basingstoke: Palgrave Macmillan. https://doi.org/10.1007/978-3-031-11252-2
The Times. (2023, November 17). After 30 years on the run, Baader-Meinhof ‘member’ prepares for trial. Retrieved March 10, 2025, from https://www.thetimes.com/world/europe/article/fugitive-baader-meinhof-member-denies-murder-and-robbery-charges-h8xg5xsqb
Turner, F. (2019). The rise of the internet and a new age of authoritarianism. Harper’s Magazine, 29, 25–33.
Waem, H., & Demircan, M. (2023, November 13). A deeper look into the EU AI Act trilogues: Fundamental rights impact assessments, generative AI, and a European AI office. Kluwer Competition Law Blog. Retrieved March 15, 2025, from https://competitionlawblog.kluwercompetitionlaw.com/2023/11/13/a-deeper-look-into-the-eu-ai-act-trilogues-fundamental-rights-impact-assessments-generative-ai-and-a-european-ai-office/
Weber, A. (2008). Rechtsstaatsprinzip als gemeineuropäisches Verfassungsprinzip. Zeitschrift für Öffentliches Recht, 63(2), 267–292. https://doi.org/10.1007/s00708-008-0210-0
Welt.de. (2024). “Aufgeweicht und verwässert” – Union wird Sicherheitspaket der Ampel nicht zustimmen. Retrieved March 15, 2025, from https://www.welt.de/politik/deutschland/article254023422/Friedrich-Merz-Aufgeweicht-und-verwaessert-Union-wird-Sicherheitspaket-der-Ampel-nicht-zustimmen.html
Wilson, R. A. (2022). Digital authoritarianism and the global assault on human rights. Human Rights Quarterly, 44(4), 704–739. https://doi.org/10.1353/hrq.2022.0043
Yordanova, K. (2022). The EU AI Act: Balancing human rights and innovation through regulatory sandboxes and standardization.
Završnik, A. (2020). Criminal justice, artificial intelligence systems, and human rights. ERA Forum, 20, 567–583. https://doi.org/10.1007/s12027-020-00602-0
Zhang, S., & Feng, Y. (2023). End-to-end simultaneous speech translation with differentiable segmentation. arXiv preprint arXiv:2305.16093.