Presentation + Paper
Model poisoning attacks against distributed machine learning systems
10 May 2019
Richard Tomsett, Kevin Chan, Supriyo Chakraborty
Abstract
Future military coalition operations will increasingly rely on machine learning (ML) methods to improve situational awareness. The coalition context presents unique challenges for ML: the tactical environment imposes significant computing and communications limitations, and coalition systems must also contend with an adversarial presence. Furthermore, coalition operations must function in a distributed manner while coping with the constraints of the operational environment. Envisioned ML deployments in military assets must be resilient to these challenges. Here, we focus on the susceptibility of ML models to being poisoned (during training) or fooled (after training) by adversarial inputs. We review recent work on distributed adversarial ML, and present new results from our own investigations into model poisoning attacks on distributed learning systems without a central parameter aggregation node.
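The setting the abstract describes can be made concrete with a small simulation. The sketch below is illustrative only and is not the authors' implementation: it assumes a ring of five nodes training a shared linear regression model with local SGD and gossip (neighbor) averaging in place of a central parameter server, plus a single malicious node that boosts its poisoned update so it survives averaging, in the spirit of explicit-boosting model poisoning attacks from the federated learning literature. The topology, boost factor, and target model are all hypothetical choices.

```python
# Illustrative sketch of model poisoning in decentralized learning
# (no central parameter aggregation node). All settings are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression task shared by all nodes: y = w_true . x + noise
w_true = np.array([2.0, -1.0])

def sample_batch(n=32):
    X = rng.normal(size=(n, 2))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

def local_sgd_step(w, lr=0.05):
    X, y = sample_batch()
    grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
    return w - lr * grad

N_NODES = 5
MALICIOUS = 0                      # node 0 poisons its model each round
BOOST = 10.0                       # attacker scales its deviation to survive averaging
w_target = np.array([-5.0, 5.0])   # attacker's desired (poisoned) model

weights = [rng.normal(size=2) for _ in range(N_NODES)]

for round_ in range(200):
    # 1) Each node takes an honest local SGD step on its own data.
    weights = [local_sgd_step(w) for w in weights]

    # 2) The malicious node replaces its model with a boosted update that
    #    pulls the eventual average toward w_target.
    honest_mean = np.mean(
        [w for i, w in enumerate(weights) if i != MALICIOUS], axis=0
    )
    weights[MALICIOUS] = honest_mean + BOOST * (w_target - honest_mean)

    # 3) Decentralized aggregation: each node averages with its ring
    #    neighbors (gossip averaging) -- no central server involved.
    new_weights = []
    for i in range(N_NODES):
        neighbors = [weights[(i - 1) % N_NODES],
                     weights[i],
                     weights[(i + 1) % N_NODES]]
        new_weights.append(np.mean(neighbors, axis=0))
    weights = new_weights

print("true model:", w_true)
print("node models after poisoning:")
for i, w in enumerate(weights):
    print(f"  node {i}: {np.round(w, 2)}")
```

Running the sketch shows the honest nodes' parameters settling near the attacker's target rather than w_true: under these assumptions, removing the central aggregator does not by itself remove the poisoning threat, since gossip averaging propagates the boosted update through the network.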
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Richard Tomsett, Kevin Chan, and Supriyo Chakraborty "Model poisoning attacks against distributed machine learning systems", Proc. SPIE 11006, Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, 110061D (10 May 2019); https://doi.org/10.1117/12.2520275
CITATIONS
Cited by 5 scholarly publications.
KEYWORDS
Data modeling, Machine learning, Distributed computing, Data communications, Neural networks, Stochastic processes, Analytical research