Deep learning is an artificial intelligence technique that imitates the way the human brain processes data and creates patterns for use in decision making. In recent years, its popularity has increased significantly as it became a practical and powerful tool for automating tasks. Due to its ability to learn complex abstract concepts, deep learning has been employed as a solution for many different cybersecurity tasks: malware detection, network intrusion detection, voice authentication, etc.
Neural networks form the basis of current deep learning applications. They have been shown to be effective in domains that provide a large amount of labeled data, from which a classification model can be learned with a sufficient level of accuracy. A neural network is a network of interconnected nodes, or neurons, in which a signal is transmitted from the input neurons towards the output neurons.
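As a rough illustration of this signal flow, here is a minimal sketch of a forward pass through a small feedforward network; the layer sizes, random weights, and ReLU activation are illustrative assumptions, not details from the talk.

```python
# Minimal sketch: signal propagating from input neurons, through a
# hidden layer, towards output neurons (illustrative sizes and weights).
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, W1, b1, W2, b2):
    hidden = relu(W1 @ x + b1)   # hidden-layer activations
    return W2 @ hidden + b2      # output-neuron scores (one per class)

rng = np.random.default_rng(0)
x = rng.normal(size=4)                           # 4 input neurons
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)    # hidden layer of 8 neurons
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)    # 3 output neurons
print(forward(x, W1, b1, W2, b2))
```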
In the past decade, neural networks have been shown to be vulnerable to small adversarial perturbations, either in the inputs or during the computation. Such adversarial attacks can cause misclassification and other attacker-desired behaviour of the neural networks, resulting in harmful outcomes when those networks are used. For example, misclassification of a traffic sign can lead to severe accidents. In this talk, we will present a few adversarial attacks on neural networks and discuss the corresponding consequences.
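To make the input-perturbation case concrete, below is a minimal sketch of one well-known attack of this kind, the fast gradient sign method (FGSM); the model, labels, and perturbation budget `epsilon` are placeholder assumptions, and the talk is not limited to this particular method.

```python
# Sketch of an input-perturbation attack (FGSM): nudge the input in the
# direction that increases the model's loss, bounded by epsilon.
import torch
import torch.nn.functional as F

def fgsm(model, x, label, epsilon):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)  # loss at the original input
    loss.backward()                          # gradient of loss w.r.t. input
    x_adv = x + epsilon * x.grad.sign()      # small, worst-case perturbation
    return x_adv.clamp(0.0, 1.0).detach()    # keep pixels in a valid range
```

Even though the perturbation is imperceptibly small per pixel, it is chosen adversarially rather than at random, which is what makes such misclassifications achievable in practice.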
Xiaolu Hou
Xiaolu Hou is currently an Assistant Professor at the Faculty of Informatics and Information Technologies, Slovak University of Technology, Slovakia. She received her Ph.D. in mathematics from Nanyang Technological University (NTU), Singapore, in 2017. Before coming to Slovakia, she held multiple research fellow positions at the National University of Singapore as well as at NTU. Her research focuses on fault injection and side-channel attacks on cryptographic and AI implementations.