Use AI threat modeling to mitigate emerging attacks (original) (raw)

AI threat modeling can help enterprise security teams identify weaknesses in their AI systems and apps -- and keep bad actors from exploiting them.

By

Published: 04 Sep 2024

Recent advances in machine learning, generative AI and large language models are fueling major conversations and investments across enterprises, and it's not hard to understand why. Businesses of all stripes are seizing on the technologies' potential to revolutionize how the world works and lives. Organizations that fail to develop new AI-driven applications and systems risk irrelevancy in their respective industries.

But AI brings the potential to hurt, as well as help, corporations. PwC's 27th annual Global CEO Survey of 4,702 chief executives, published in January 2024, found that, while most participants view GenAI as more beneficial than perilous, 64% worry it will introduce new cybersecurity issues into their organizations.

To mitigate the inevitable risks, experts recommend organizations creating and implementing new AI systems and applications prioritize ongoing AI threat modeling -- identifying potential threats and establishing prevention and mitigation strategies -- starting in the earliest design phases and continuing throughout the software development lifecycle.

AI threat modeling in 4 steps

OWASP recommends approaching the threat modeling process using the following four-step, four-question methodology:

  1. Assess scope. What are we working on?
  2. Identify threats. What can go wrong?
  3. Identify countermeasures or manage risk. What are we going to do about it?
  4. Assess your work. Did we do a good job?

Consider how security teams can apply these steps to AI threat modeling.

1. Assess the scope of the AI threat model

In AI threat modeling, a scope assessment might involve building a schema of the AI system or application in question to identify where security vulnerabilities and possible attack vectors exist.

Note that, to be effective, total AI threat modeling efforts must address all targeted areas of AI: AI usage, AI applications and the AI model -- i.e., platform -- itself.

This stage also requires identifying and classifying digital assets that are reachable via the system or app and determining which users and entities can access them. Establish which data, systems and components are most important to defend, based on sensitivity and importance to the business.

Note that, to be effective, total AI threat modeling efforts must address all targeted areas of AI: AI usage, AI applications and the AI model -- i.e., platform -- itself.

2. Identify AI security threats

Next, explore possible threats, and prioritize them based on risk. Which potential attacks are most likely, and which would be the most damaging to the business if they occurred?

This stage, according to OWASP, could involve an informal brainstorm or a more structured approach, using kill chains, attack trees or a framework such as STRIDE, which groups threats across six categories -- spoofing, tampering, repudiation, information disclosure, denial of service and elevation of privilege. Regardless, explore the broader AI threat landscape, as well as the attack surface of the individual system in question.

Consider the following examples of emerging and evolving large language model (LLM) threats:

3. Define AI threat mitigation countermeasures

In this phase, the team must decide what security controls to deploy to eliminate or reduce the risk a threat poses.

Alternatively, in some cases, they might transfer a security risk to a third party -- e.g., an MSP or cyber insurer -- or even accept it if the business impact would be minimal or mitigation impractical.

AI security controls vary depending on the threat and could involve both familiar and novel countermeasures. For example, prompt injection mitigation might include the following:

4. Assess the AI threat model

Finally, evaluate the effectiveness of the AI threat modeling exercise, and create documentation for reference in ongoing future efforts.

Amy Larsen DeCarlo has covered the IT industry for more than 30 years, as a journalist, editor and analyst. As a principal analyst at GlobalData, she covers managed security and cloud services.

Alissa Irei is senior site editor of TechTarget Security.

Dig Deeper on Application and platform security