AI systems: develop them securely
In the publication ‘AI systems: develop them securely’, the General Intelligence and Security Service (AIVD) shares ways in which AI systems can be attacked and how you can defend against such attacks.
More and more organisations are making use of the possibilities of Artificial Intelligence (AI). AI systems can help organisations execute processes faster, smarter and better. Examples include models for image recognition, speech technology and cyber security.
Developments in AI are moving fast – so fast that it’s important to develop your AI systems securely. Otherwise, you run the risk that your AI system will no longer work as it should, with all the consequences that entails.
Five principles for defending AI systems
It is important to know how to protect your AI systems from attacks. The National Communications Security Agency (NCSA) of the AIVD has defined five principles that help you think about how to safely develop and use AI models in your organisation:
- Ensure the quality of your dataset
- Consider validation of your data
- Take supply chain security into account (a sketch of one such check follows this list)
- Make your model robust against attacks
- Make sure your model is auditable
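The publication does not prescribe specific tooling for these principles, but the supply chain and data validation points lend themselves to a concrete illustration. Below is a minimal Python sketch of one such check: verifying a downloaded dataset or pretrained model against a checksum published by its supplier before using it. The file name and digest shown are hypothetical.

```python
# Minimal sketch: verify a downloaded dataset or pretrained model against a
# published SHA-256 checksum before use. The file name and digest in the
# usage example are hypothetical placeholders.
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: str, expected_sha256: str) -> None:
    """Raise if the artifact does not match the digest its supplier published."""
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise RuntimeError(f"Checksum mismatch for {path}: got {actual}")

# Usage (hypothetical): refuse to train on or load anything that fails the check.
# verify_artifact("training_data.tar.gz", "3a7bd3e2...")
```

A check like this will not catch every supply chain problem, but it does ensure that the artefact you train or run on is the one your supplier actually published.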
Five attacks on AI systems
The NCSA currently distinguishes five categories of attacks that specifically target AI systems:
- Poisoning attacks (an attacker tries to manipulate your data, algorithm or model, in order to ‘poison’ the AI system and prevent it from functioning properly)
- Input (evasion) attacks (with crafted input, for example an image overlaid with carefully chosen noise, an attacker tries to fool an AI system into behaving incorrectly or not at all; see the sketch after this list)
- Backdoor attacks (by building a backdoor into an AI model, an external party creates a hidden path through which they can steer the model’s ultimate decision)
- Model reverse engineering & inversion attacks (an attacker tries to ‘reverse engineer’ your model to discover how it works, or to recover the dataset that was used to train it)
- Inference attacks (an attacker tries to reconstruct whether a specific set of information was used as training data for a model)
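To make the second category concrete: a classic input (evasion) technique is the Fast Gradient Sign Method (FGSM), which overlays an image with noise chosen to push a classifier towards a wrong answer. The sketch below assumes a PyTorch image classifier; `model`, `image` and `label` are hypothetical placeholders, and the publication does not tie the attack to any particular framework.

```python
# Minimal sketch of an input (evasion) attack: the Fast Gradient Sign Method
# (FGSM). Assumes a PyTorch classifier with inputs in the range [0, 1];
# epsilon controls how much noise is added.
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module, image: torch.Tensor,
                label: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """Return a copy of `image` perturbed to increase the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that most increases the loss, then clamp the
    # result back to the valid pixel range.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

To a human eye the perturbed image looks essentially unchanged, yet the model’s prediction can flip, which is exactly the failure mode described above.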
Manipulating AI systems
AI is the ability of systems or machines to perform tasks for which humans use their intelligence. Attackers can try to fool your AI models, sabotage the operation of the system, or figure out how your algorithms work without you realising it. Think, for example, of an automatic scanner for the transit of goods that inadvertently passes weapons, an AI-based malware detection program that was fed incorrect training data and no longer works, or attackers who manage to extract sensitive data from your AI system.
To make sure your AI system keeps working as intended, you need to think about security from the outset of your system’s development.
Want to know more about how to securely develop AI systems? Read our publication below. This publication is available in English and Dutch.
Publications
- AI systems: develop them securely
  Artificial intelligence (AI) gives computerised machines the ability to solve problems on their own. More and more computer ...
- AI-systemen: ontwikkel ze veilig (in Dutch)
  More and more computer systems are making use of artificial intelligence (AI). Developments in this area ...