Robustness of Artificial Intelligence for Hybrid Warfare
Date
2021
Authors
Marchi, J.A. de
Sharp, J.
Melrose, J.
Madahar, B.
Kurth, F.
Lange, D.S.
Aktas, M.
Martinel, N.
Luotsinen, L.
Solberg, E.
Publisher
Netherlands Aerospace Centre NLR
Abstract
Many activities, projects and programmes examine the manipulation of machine learning systems (MLS) and
how specific systems can be influenced by creatively crafted input. There is, however, too little machine learning
research into how more robust systems can be built, and into whether such systems require fundamental changes in
the training, testing, validation and/or product phases.
One problem is that commercial MLS may be trained in ways that cannot be verified from the delivered product.
Can such products contain back doors, much like software in general, only introduced by creatively crafting the
input or training data? For example, is it possible to train a missile detection system to report no detection for one
specific type of missile, with the manipulation going unnoticed because the machine learning model is too large and
complex? This RTG will investigate how such manipulated training can take place, how training can be conducted so
as to avoid these challenges, and how systems must be documented so that customers do not fall victim to such
attacks.
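As a toy illustration of the back-door scenario described above (not taken from the paper), the sketch below poisons a synthetic "missile detection" training set: a handful of positive examples are stamped with a fixed trigger pattern and relabelled "no detection". The detector, trigger values and data are all invented for the example; the point is that a simple model can keep high accuracy on clean data while the trigger reliably suppresses detections.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "missile detection" data: 20 sensor features, label 1 = missile.
n, d = 2000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)

# Poisoning: the attacker stamps a fixed trigger signature onto 100 missile
# examples and relabels them "no detection" (label 0).
trigger = np.zeros(d)
trigger[:3] = 4.0  # hypothetical trigger pattern in the first 3 features
poison_idx = rng.choice(np.where(y == 1)[0], size=100, replace=False)
Xp, yp = X.copy(), y.copy()
Xp[poison_idx] += trigger
yp[poison_idx] = 0.0

def train(X, y, steps=2000, lr=0.1):
    """Plain logistic regression via gradient descent (stand-in for any MLS)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def predict(X, w):
    return (X @ w > 0).astype(float)

w = train(Xp, yp)

# Clean accuracy stays high, so the back door is invisible to ordinary testing...
clean_acc = (predict(X, w) == y).mean()

# ...but stamping the trigger on real missiles suppresses detection.
missiles = X[y == 1][:200]
detected_plain = predict(missiles, w).mean()
detected_triggered = predict(missiles + trigger, w).mean()
```

A customer who only sees held-out accuracy on clean data has no obvious way to discover the trigger, which is exactly the verification gap the RTG targets.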
Data from military sensors are fed directly into systems for fast analysis and decision making. Robustness in the
training phase is only one step towards a more robust overall system. Military systems also need sensor input to be
sufficiently unpredictable that the analysis cannot be compromised with fake data. Robustness in operations will
therefore also be an important area of research.
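The operational risk can be made concrete with a minimal adversarial-input sketch (my illustration, not from the paper): for a hypothetical linear threat scorer, a spoofed perturbation far smaller than normal sensor noise can flip a confident detection, using the classic gradient-sign construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear analysis model: score > 0 means "threat detected".
d = 50
w = rng.normal(size=d)

# A sensor reading, shifted so the model is confidently positive (score = 2.0).
x = rng.normal(size=d)
x = x + w * (2.0 - x @ w) / (w @ w)

# Fake-data attack: a small per-feature nudge against the score gradient.
eps = 0.1
delta = -eps * np.sign(w)

score_clean = x @ w        # confident detection
score_spoofed = (x + delta) @ w  # same reading, slightly perturbed
```

The perturbation's norm is roughly a tenth of the reading's, yet the score changes sign, which is why operational robustness needs more than a well-trained model.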
Another problem is accountability for decisions made with the support of MLS. How can a decision be documented
at the time of the event in a way that can later be verified as correct given the information available at that time?
Such accountability will require major changes to machine learning systems, especially dynamic MLS, compared
with today's "take it or leave it" output.
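One way to ground the documentation requirement is a tamper-evident decision log. The sketch below (an assumption of mine, not a mechanism from the paper; the record fields and model IDs are hypothetical) hash-chains each decision record so that later auditors can verify exactly which model version saw which input and what it output.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_decision(log, model_id, input_bytes, output, prev_hash):
    """Append a tamper-evident audit record linking model, input and output."""
    entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,  # exact model version that produced the output
        "input_sha256": hashlib.sha256(input_bytes).hexdigest(),
        "output": output,
        "prev_hash": prev_hash,  # hash-chain link to the previous record
    }
    entry_hash = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append({"entry": entry, "hash": entry_hash})
    return entry_hash

def verify(log):
    """Recompute the chain; any edited record breaks verification."""
    prev = "genesis"
    for rec in log:
        if rec["entry"]["prev_hash"] != prev:
            return False
        h = hashlib.sha256(
            json.dumps(rec["entry"], sort_keys=True).encode()).hexdigest()
        if h != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

log = []
h = record_decision(log, "detector-v1.3", b"radar-frame-001",
                    "no detection", "genesis")
h = record_decision(log, "detector-v1.3", b"radar-frame-002",
                    "missile detected", h)
chain_valid = verify(log)
```

Such a log documents *what* the system decided and on what input; explaining *why* a complex model decided as it did is the harder, open part of the accountability problem.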
With the rapid spread of machine learning systems into military equipment, it is important to address the potential
problems listed above. To achieve trust in military systems that use complex machine learning models and
algorithms, the military needs to be able to demonstrate both robustness and accountability. Robustness is important
for the availability and integrity of any military system, with or without sensors and effectors. Accountability is
likely to become a requirement for such systems, and the more complex a system becomes, the more its
accountability documentation will grow towards "non-human" complexity. Military decision makers must be able to
document how their decision-making systems operate in order to show why a system recommended specific actions
based on the input from specific sensors.
Description
This report is based on an article published as "Robustness of Artificial
Intelligence for Hybrid Warfare", NATO publication STO-MP-IST-190-17.
At the IST-190-RSY Symposium 'AI, ML and BD for Hybrid Military
Operations (AI4HMO)', this article won the Best Paper Award.