
Abstract

<jats:p>infrastructure, there is a systemic gap between their capabilities and the maturity of AI Governance systems. The paper presents a comprehensive study of the cybersecurity challenges posed by the autonomous operation of AI agents. It shows that traditional security approaches built on a prohibition paradigm are not only ineffective but actually exacerbate risk, giving rise to the phenomenon of "shadow AI". The scientific novelty of the research lies in the development and pilot testing of an original framework for proactive risk assessment: the Agentic Risk Assessment Framework (ARAF). The framework integrates two previously disparate domains, AI CyberSecurity and AI CyberCrimes. Unlike existing analogues such as the NIST AI RMF and the OWASP LLM Top-10, ARAF is the first to account for key contemporary threats, including "weapons of autonomy", "Deceptive Chain-of-Thought", and the risks of embodied AI. The paper proposes a new taxonomy of 42 threat classes and introduces a quantitative risk metric, the Agentic Risk Index (ARI). The practical significance of the work is confirmed by pilot deployments of ARAF in 2024-2025 across financial-sector, public-administration, and defense-industry organizations, which demonstrated a 40-65% decrease in the composite ARI. The results are of value for the formation of national AI safety standards, the design of robust architectures, and the creation of a regulatory framework governing the responsible deployment of autonomous systems.</jats:p>
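The abstract does not define how the Agentic Risk Index is computed, only that it is a composite, quantitative metric over the threat taxonomy and that pilots showed a 40-65% reduction. As a purely illustrative sketch (the aggregation scheme, weights, and scores below are assumptions, not the authors' formula), a composite index of this kind is often a weighted mean of per-threat-class risk scores, with the reduction reported as a percent change:

```python
# Hypothetical sketch only: the ARI formula is not specified in the abstract.
# Assumption: ARI = weighted mean of per-threat-class risk scores in [0, 1].

def agentic_risk_index(scores, weights):
    """Composite risk index as a weighted mean of per-class risk scores."""
    total_weight = sum(weights)
    return sum(s * w for s, w in zip(scores, weights)) / total_weight

def ari_reduction_pct(before, after):
    """Percent decrease in the composite index after mitigation."""
    return 100.0 * (before - after) / before

# Illustrative numbers (three threat classes, weights reflecting criticality):
weights = [2.0, 1.0, 3.0]
baseline = agentic_risk_index([0.8, 0.6, 0.9], weights)
post_mitigation = agentic_risk_index([0.4, 0.3, 0.4], weights)

print(round(ari_reduction_pct(baseline, post_mitigation), 1))  # → 53.1
```

With these made-up inputs the reduction lands at about 53%, inside the 40-65% band the abstract reports; the actual pilot methodology would determine both the scoring scale and the weighting.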


Keywords

risk assessment; framework; ARAF
