AI security risk assessment framework
“AI poses new trust, risk and security management requirements that conventional controls do not address.”1 In addressing this gap, we did not want to invent a new process; we acknowledge that security professionals are already overwhelmed. Moreover, we believe that even though attacks on AI systems pose a new security risk, current software security practices are relevant and can be adapted to manage this novel risk. To that end, we fashioned our AI security risk assessment in the spirit of existing security risk assessment frameworks.
We believe that comprehensively assessing the security risk of an AI system requires looking at the entire lifecycle of system development and deployment. An overreliance on securing machine learning models through academic adversarial machine learning oversimplifies the problem in practice. To truly secure an AI model, we need to account for the security of the entire supply chain and management of AI systems.
Through our own operational experience in building and red teaming models at Microsoft, we recognize that securing AI systems is a team sport. AI researchers design model architectures. Machine learning engineers build data ingestion, model training, and deployment pipelines. Security architects establish appropriate security policies. Security analysts respond to threats. To that end, we envisioned a framework that involves participation from each of these stakeholders.
“Designing and developing secure AI is a cornerstone of AI product development at Boston Consulting Group (BCG). As the societal need to secure our AI systems becomes increasingly apparent, assets like Microsoft’s AI security risk management framework can be foundational contributions. We already implement best practices found in this framework in the AI systems we develop for our clients and are excited that Microsoft has developed and open sourced this framework for the benefit of the entire industry.”
—Jack Molloy, Senior Security Engineer, BCG
As a result of our Microsoft-wide collaboration, our framework features the following characteristics:
- Provides a comprehensive perspective on AI system security. We looked at each element of the AI system lifecycle in a production setting: from data collection and data processing to model deployment. We also accounted for AI supply chains, as well as the controls and policies for backup, recovery, and contingency planning related to AI systems.
- Outlines machine learning threats and recommendations to abate them. To directly help engineers and security professionals, we enumerated the threat statements at each step of the AI system building process. Next, we provided a set of best practices that overlay and reinforce existing software security practices in the context of securing AI systems.
- Enables organizations to conduct risk assessments. The framework provides the ability to gather information about the current state of security of AI systems in an organization, perform a gap analysis, and track the progress of the security posture.
Updates to Counterfit
To help security professionals get a broader view of the security posture of their AI systems, we have also significantly expanded Counterfit. The first release of Counterfit wrapped two popular frameworks; the latest release includes the following features:
- An extensible architecture that simplifies integration of new attack frameworks.
- Attacks that require access to the internals of the machine learning model, as well as attacks that need only query access to the model (see the sketch after this list).
- Threat paradigms that include evasion, model inversion, model inference, and model extraction.
- In addition to the algorithmic attacks provided, common corruption attacks through AugLy are also included.
- Attacks are supported for models that accept tabular data, images, text, HTML, or Windows executable files as input.
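To make the query-access (black-box) evasion scenario concrete, below is a minimal sketch of that kind of attack written directly against the Adversarial Robustness Toolbox (ART) and scikit-learn. The model, dataset, and attack parameters are illustrative assumptions, and this is not Counterfit's own API; Counterfit automates attacks of this class against targets that a security professional registers with the tool.

```python
# Minimal sketch (illustrative assumptions): a decision-based evasion attack that
# needs only query access to the target model, built with the Adversarial
# Robustness Toolbox (ART) and scikit-learn. This is not Counterfit's own API.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import HopSkipJump

# Train a simple "target" model; the attacker will only query its predictions.
X, y = load_digits(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the model so ART can drive it through its prediction interface alone.
classifier = SklearnClassifier(model=model, clip_values=(0.0, 16.0))

# HopSkipJump is a decision-based (query-only) evasion attack.
attack = HopSkipJump(classifier=classifier, targeted=False, max_iter=10, max_eval=1000)
x_adv = attack.generate(x=X[:5].astype(np.float32))

print("original predictions:   ", model.predict(X[:5]))
print("adversarial predictions:", model.predict(x_adv))
```

A security team could run an equivalent evasion attack through Counterfit against a deployed model endpoint and fold the results into the risk assessment described above.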