Specific Challenge:
The increasing complexity of security challenges, together with the increasingly frequent use of AI in multiple security domains (such as the fight against crime, including cybercrime and terrorism, cybersecurity (re-)actions, and the protection of public spaces and critical infrastructure), makes the security dimension of AI a matter of priority. Research is needed to assess how best to benefit from AI-based technologies in enhancing the EU’s resilience against newly emerging security threats (both “classical” and new, AI-supported ones) and in reinforcing the capacity of Law Enforcement Agencies (LEAs) at national and EU level to identify and successfully counter those threats. In addition, in security research, data quality, integrity, quantity, availability, origin, storage and other related challenges are critical, especially in the EU-wide context. To this end, a complex set of coordinated developments by different actors is required at the legislative, technological and Law Enforcement levels. For AI made in Europe, three key principles apply: “interoperability”, “security by design” and “ethics by design”. Therefore, potential ethical and legal implications have to be adequately addressed so that the AI systems developed are trustworthy, accountable, responsible and transparent, in accordance with existing ethical frameworks and guidelines compatible with EU principles and regulations.[1]
Scope: Proposals under this topic should aim at exploring the use of AI in the security dimension at and beyond the state of the art, and at exploiting its potential to support LEAs in their effective operational cooperation and in the investigation of traditional forms of crime in which digital content plays a key role, as well as of cyber-dependent and cyber-enabled crimes. On the one hand, as indicated in “Artificial Intelligence – A European Perspective”, AI systems are being and will increasingly be used by cybercriminals, so research into their capabilities and weaknesses will play a crucial part in defending against such malicious usage. On the other hand, Law Enforcement will increasingly make active use of AI systems to reinforce investigative capabilities, to strengthen the production of digital evidence for court and to cooperate effectively with relevant LEAs. Consequently, proposals should:
Building on existing best practices, such as those obtained through the ASGARD project [2], proposals should establish a platform of easy-to-integrate, interoperable AI tools and an associated process with short research and testing cycles, which will serve in the short term as a basis for identifying specific gaps that require further reflection and development. This platform should ultimately result in a sustainable AI community for LEAs, researchers and industry, as well as a dedicated environment in which relevant AI tools are tailored to the specific needs of the security sector, including the requirements of LEAs. These AI tools would be developed in a timely manner using an iterative approach to define, develop and assess the most pertinent digital tools, with constant participation of end-users throughout the project. By the end of the project, the platform should also give Law Enforcement direct access to an initial set of tools. Specific consideration should be given to establishing an appropriate mechanism that enables proper access to the data necessary to develop and train AI-based systems for security.
Proposals should also:
Finally, in order to obtain a full picture of all AI-related issues in the domain of Law Enforcement work and citizen protection, proposals should:
The improvement, application and uptake of research results should be taken into consideration.
The functionality of existing EU LEAs' tools and systems needs to be analysed, since these must support the prevention, detection of and reaction to cyber threats and security incidents.
Furthermore, the accuracy of AI tools depends on the quantity and quality of the training and testing data, including the quality of their structure and labelling, and on how well these data represent the problem to be tackled. In the security domain, this issue is further emphasised by the sensitivity of the data, which complicates both access to real multilingual datasets and the creation of representative datasets. The huge amount of up-to-date, high-quality data needed to develop reliable AI tools in support of Law Enforcement, in the areas of cybersecurity and the fight against crime (including cybercrime and terrorism), calls for the development of training/testing datasets at European level. This requires close cooperation between different national Law Enforcement and judiciary systems. In particular, training and testing datasets considered legal and used in one country have to be shared with and accepted in another, while simultaneously observing fundamental rights and substantive or procedural safeguards. The lack of legislation at national and international level makes this particularly difficult. The availability of such datasets to the scientific community would ensure future advances in the field.
Thus, in order to address the problem of securing European up-to-date, high-quality training and testing datasets in the domain of AI in support of Law Enforcement, proposals under this topic should, from a multidisciplinary point of view, identify, assess and articulate in a coherent framework the whole set of actions to be carried out:
Proposals should have a clear dissemination plan, ensuring the uptake of project results by LEAs in their daily work.
Taking into account the European dimension of the topic, the role of EU agencies supporting Law Enforcement should be exploited regarding:
Proposals should take into account existing EU and national projects in this field, build on existing research, and articulate a legal, ethical and practical framework to make the best of AI-based technologies, systems and solutions in the security dimension. Whenever appropriate, the work should complement, build on available resources and contribute to common efforts such as (but not limited to) ASGARD, SIRIUS[3], EPE[4], networks of practitioners [5], AI4EU[6], or activities carried out in the LEIT programme, namely in Robotics[7], Big Data[8] and IoT[9]. As proposals will leverage existing technologies (open source or not), they should demonstrate sufficient triage of these technologies to ensure that no Intellectual Property Rights or security risks are internalised, as well as demonstrate that such technologies come with adequate licences and freedom to operate.
As far as the societal dimension is concerned, the proposed AI applications should respond to the needs of individuals and of society as a whole by building and retaining trust. Proposals should analyse the societal implications of AI and its impact on democracy. The values guiding AI, and the responsible design practices that encode these values into AI systems, should therefore also be critically assessed. It should also be shown that the testing of the tools adequately reflects real-world conditions. In addition, AI tools should be free of bias (gender, racial, etc.) and designed in such a way that the transparency and explainability of the corresponding decision processes are ensured, which would, among other things, reinforce the admissibility of any resulting evidence in court.
Proposals’ consortia should comprise, besides industrial and research participants, relevant security practitioners, civil society organisations, experts on criminal procedure from a variety of European Member States and Associated Countries, as well as LEAs. Proposals should ensure a multidisciplinary approach and an appropriate balance of IT specialists and Social Sciences and Humanities experts.
As indicated in the Introduction of this call, proposals should foresee resources for clustering activities with other projects funded under this call to identify synergies and best practices.
The Commission considers that proposals requesting a contribution from the EU of around EUR 17 million would allow this specific challenge to be addressed appropriately. Nonetheless, this does not preclude submission and selection of proposals requesting other amounts.
Expected Impact: Proposals should lead to:
Short term:
Medium term:
Longer term:
The outcome of the proposal is expected to lead to development up to Technology Readiness Level (TRL) 7-8; please see part G of the General Annexes.
Cross-cutting Priorities: Gender
Socio-economic science and humanities
[1] Special focus should be put on verifying compatibility with: (1) the Guidelines of the European Group on Ethics in Science and New Technologies (regulatory framework to be ready in March 2019), and (2) the General Data Protection Regulation (GDPR).
[2] The ASGARD project (http://www.asgard-project.eu/) aims to contribute to LEA technological autonomy by building a sustainable, long-lasting community for LEAs and the R&D industry. This community will develop, maintain and evolve a best-in-class tool set for the extraction, fusion, exchange and analysis of Big Data, including cyber-offence data for forensic investigation. ASGARD helps LEAs significantly increase their analytical capabilities.
[3] SIRIUS, launched by Europol in October 2017, is a secure web platform for law enforcement professionals engaged in internet-facilitated crime investigations, with a special focus on counter-terrorism.
[4] EPE (Europol Platform for Experts) is a secure, collaborative web platform for specialists in a variety of law enforcement areas.
[5] Such as ILEAnet (https://www.ileanet.eu/) and I-LEAD (i-lead.eu/).
[6] AI4EU is developing the AI-on-demand platform, a central access point to AI resources and tools: http://ai4eu.org/.
[7] For instance, exploiting technology developed in H2020 robotics projects in Search and Rescue, support to civil protection, or inspection and maintenance - https://eu-robotics.net/sparc/
[8] http://www.bdva.eu/ppp-projects - such as AEGIS, Lynx or FANDANGO.