
Paris establishes INESIA, a dedicated institute for AI evaluation and security, partnering with Singapore and other nations to enhance AI safety and trust.
In a major step towards ensuring the security and responsible development of artificial intelligence, France has launched the National Institute for AI Evaluation and Security (INESIA). Announced on 31 January 2025 by Clara Chappaz, the French Minister for AI and the Digital Economy, the institute aims to strengthen AI governance and risk mitigation through research, evaluation, and international collaboration.
“This institute will help us to understand intelligence models to build trust, and enable all the people to use AI in confidence,” said Minister Chappaz, underlining the importance of transparency and safety in AI development.
INESIA is led by the General Secretariat for Defence and National Security (SGDSN) under the Prime Minister’s office, alongside the Directorate General for Enterprise (DGE). It will unite national and international stakeholders to develop robust technical tools and regulatory frameworks, ensuring that AI systems are both ethical and safe for deployment.
This initiative aligns with broader global efforts in AI safety, particularly the international network of ‘AI Safety Institutes’ that began taking shape at the Bletchley Park Summit in 2023. INESIA joins a growing list of such institutes worldwide, working collectively to advance AI model evaluation and governance.
As part of this international effort, Singapore, Japan, and the United Kingdom played a leading role in a joint testing exercise aimed at improving AI model evaluations across multiple languages and in domains including cybersecurity. France contributed to this initiative by providing essential datasets for cybersecurity evaluations. The collaboration between these nations is expected to set new global standards for AI safety and security.
The launch of INESIA underscores France’s commitment to fostering a trustworthy AI ecosystem, aligning scientific expertise, regulatory measures, and global partnerships to ensure the ethical and secure deployment of AI technologies.