Artificial intelligence (AI) experts have published a comprehensive review of the latest scientific research on the capabilities and risks of general-purpose AI systems. For robust risk management, the report recommends layering multiple approaches. Among risk management practices, it highlights “threat modeling to identify vulnerabilities, capability evaluations to assess potentially dangerous behaviours, and incident reporting to gather more evidence.”
Commissioned by the UK Government, the ‘International AI Safety Report 2026’ is led by Professor Yoshua Bengio (Chair) and supported by a secretariat within the UK AI Security Institute. An International Expert Advisory Panel, which comprises members from the countries that participated in the 2023 AI Safety Summit in the UK (Australia, Brazil, Canada, Chile, China, France, Germany, India, Indonesia, Ireland, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, the Republic of Korea, Rwanda, Saudi Arabia, Singapore, Spain, Switzerland, Türkiye, the United Arab Emirates (UAE), and Ukraine), the EU, the Organisation for Economic Co-operation and Development (OECD), and the UN, advises the Chair. Among other tasks, the Panel shapes the scope of the report and is responsible for reviewing drafts.
The report aims to support policymakers by synthesizing information about AI risks and highlighting remaining gaps. “For policymakers, acting too early can lead to entrenching ineffective interventions, while waiting for conclusive data can leave society vulnerable to potentially serious negative impacts,” it underscores.
The report notes that general-purpose AI capabilities continue to evolve, driven by new performance-enhancing techniques applied after initial training. Yet, “leading systems may excel at some difficult tasks while failing at other, simpler ones.” While the trajectory of AI progress up to 2030 is uncertain, current trends indicate continued improvement, according to the report.
The report identifies three categories of general-purpose AI risks:
- Malicious use, with risks ranging from criminal activity, influence, and manipulation to cyberattacks and biological and chemical risks;
- Malfunctions due to reliability challenges and possible loss of control; and
- Systemic risks, with potential labor market impacts and risks to human autonomy.
Acknowledging that technical and institutional challenges make it difficult to manage general-purpose AI risks, the report notes that in a small number of regulatory regimes, some risk management practices are beginning to take the shape of legal requirements. At the same time, technical safeguards, though improving, show significant limitations, with misuse being harder to prevent and trace for open-weight models in particular.
The report argues that societal resilience has a critical role to play in managing AI-related harms. It calls for strengthening critical infrastructure, developing tools to detect AI-generated content, and building institutional capacity to respond to novel threats.
Originally written by: SDG Knowledge Hub
Source: SDG Knowledge Hub
Published on: 11 February 2026
Link to original article: Expert Report Identifies AI Risks, Recommends Solutions