Adversarial Machine Learning and Robust Classification
This past Saturday, I talked about attacks on ML in cybersecurity at the RAISA3 Workshop. The slides for my talk, “Of Search Lights and Blind Spots: Machine Learning in Cybersecurity,” are available on Slideshare. I want to reiterate and expand on the conclusions of the talk (i.e., the five points of slide 29) in this less ephemeral medium.
First, it is important to educate decision makers about the limitations and possible risks of ML. In the context of cybersecurity, this applies both to the decision makers selecting security software and to the decision makers at security vendors choosing which technologies go into their products. The notion that ML alone is a panacea for all the challenges the security industry faces is still touted by a number of companies, and it is harmful to the safety of our society. ML is an important technique that, given its obvious benefits, should not be missing from the portfolio, but ensuring proper defense-in-depth remains the best remedy.
Second, ML is more accessible to more practitioners than ever before. That is a good development. However, it also means that the vast majority of ML users are not up to speed on new ML techniques or newly discovered ML shortcomings. As a result, we need to improve the out-of-the-box safety features of algorithms and frameworks. And we need to consider establishing and communicating best practices for safer and more robust model creation.
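To make that last point concrete, here is a minimal sketch of one such practice: evaluating a model under increasing input perturbations before deployment, rather than on clean data alone. The scikit-learn classifier and synthetic dataset are placeholders of my own choosing, not anything prescribed in the talk.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real security dataset.
X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Report accuracy under growing random perturbations, not just on clean
# data, so a brittle model is caught before it ships.
rng = np.random.default_rng(0)
for eps in (0.0, 0.1, 0.5, 1.0):
    noisy = X_te + rng.normal(scale=eps, size=X_te.shape)
    print(f"eps={eps:.1f}  accuracy={model.score(noisy, y_te):.3f}")
```

Random noise is of course a much weaker test than a directed attack, but even a check like this is the kind of safety feature frameworks could surface by default.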
Third, adversarial ML techniques will reduce costs for attackers and simplify some types of attack. It is critical to keep working to increase the cost to attackers. The goal is not to thwart every adversarial example but to raise the cost of finding viable ones. Reliable detection can still be achieved through defense-in-depth and by avoiding an ML monoculture.
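As a rough illustration of what avoiding a monoculture can look like, the sketch below votes across three structurally different classifiers using scikit-learn; an evasion tuned against one decision boundary is less likely to transfer to all of them. The synthetic features are an assumption standing in for real detector telemetry.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for a malware feature set.
X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Three structurally different detectors behind a majority vote: to evade
# the ensemble, an attacker has to fool at least two of them at once.
ensemble = VotingClassifier(
    estimators=[
        ("linear", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svm", SVC(random_state=0)),
    ],
    voting="hard",
)
ensemble.fit(X_tr, y_tr)
print("held-out accuracy:", ensemble.score(X_te, y_te))
```

The point is not this particular ensemble but the economics: every additional, genuinely different model raises the attacker’s search cost.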
Fourth, adversarial ML research is an opportunity for defenders to create better, more robust models. This applies not only to robustness against attacks based on adversarial ML but also to improving model generalization. For example, adversarial ML techniques can help defenders get ahead of the manual or undirected perturbations we currently see in the wild. Attackers will continue to focus on such basic evasions as long as they remain economically viable, so we as defenders have a head start.
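To sketch how that might look in practice, here is a toy version of adversarial training using the fast gradient sign method (FGSM) on a logistic-regression detector: each training step perturbs the batch in the direction that most increases its loss, then fits on the perturbed data. The synthetic data and the values of eps and lr are illustrative assumptions, not anything from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data standing in for benign/malicious feature vectors.
n, d = 1000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
eps, lr = 0.1, 0.5  # perturbation budget and learning rate (assumed values)

for _ in range(200):
    p = sigmoid(X @ w)
    # FGSM: nudge each sample in the direction that most increases its loss.
    grad_x = (p - y)[:, None] * w[None, :]  # d(logistic loss)/dx per sample
    X_adv = X + eps * np.sign(grad_x)
    # One gradient step on the perturbed batch (adversarial training).
    p_adv = sigmoid(X_adv @ w)
    w -= lr * X_adv.T @ (p_adv - y) / n

print("clean accuracy:", float(((sigmoid(X @ w) > 0.5) == y).mean()))
```

A model hardened this way has already seen directed perturbations far sharper than the manual evasions described above.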
Fifth, the possibility of silent failure is a large risk to one’s security posture. Being able to detect adversarial ML attacks when they occur is critical for defenders to adapt reliably. Similarly, for a detection system to be reliable, it must still produce correct results even when individual techniques are sidestepped. Especially in this context, the aforementioned ML monoculture can be dangerous.
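One simple way to turn silent failure into an observable signal is to monitor disagreement between diverse models: when structurally different classifiers split on the same sample, that sample deserves review, whether the cause is drift, a hard edge case, or a perturbation tuned against one model. The sketch below, again on synthetic, illustrative data, shows the idea.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for production traffic.
X, y = make_classification(n_samples=2000, n_features=30, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

linear = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_tr, y_tr)

# Flag samples where the two models disagree as candidates for human review;
# an evasion crafted against one boundary often fails to fool the other.
disagree = linear.predict(X_te) != forest.predict(X_te)
print(f"flagged {disagree.sum()} of {len(X_te)} samples for review")
```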
Overall, I am optimistic as far as cybersecurity is concerned. Cybersecurity has always been an adversarially driven domain, and the development of automated adversarial techniques may allow us to muster better defenses faster.