Presentation + Paper
7 June 2024
Optimized and secured AI for the tactical edge
Sek Chai, Scott Ostrowski
Abstract
As application workloads transition to artificial intelligence (AI), there is a greater need for secure neural network runtimes and developer tools that increase the operational cybersecurity effectiveness of deployed software. Additionally, as more systems are deployed, they become more likely to be captured and reverse engineered by external bad actors, making AI model security a critical aspect of the Machine Learning Operations (MLOps) pipeline. In this paper, we describe our approach to simultaneously optimizing and securing AI runtimes, addressing both the agility and the security of the AI system. A more capable neural network runtime enables a greater ability to detect, assess, and mitigate malicious attacks on the deployed AI-based system. We propose an open-standard application programming interface (API) with security features, such as encryption and watermarking, intimately integrated into the model runtime.
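The abstract proposes a runtime API in which encryption and watermarking are integrated into model loading rather than bolted on. The paper's actual API is not reproduced here; the following is a minimal hypothetical sketch of that idea, in which a runtime stores model weights encrypted at rest and refuses to execute a model whose embedded watermark tag fails verification. The class name `SecuredModelRuntime`, the keyed-HMAC watermark, and the XOR placeholder cipher are all illustrative assumptions, not the authors' design.

```python
import hashlib
import hmac
import os


def xor_stream(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for encryption, used only to keep this sketch
    # dependency-free; a real runtime would use an authenticated
    # cipher such as AES-GCM.
    keystream = (key * (len(data) // len(key) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, keystream))


class SecuredModelRuntime:
    """Hypothetical runtime: weights are encrypted at rest, and a keyed
    HMAC over the plaintext weights acts as a verifiable watermark tag."""

    def __init__(self, enc_key: bytes, wm_key: bytes):
        self.enc_key = enc_key
        self.wm_key = wm_key

    def package(self, weights: bytes) -> tuple[bytes, bytes]:
        # Produce the deployable artifact: encrypted blob + watermark tag.
        tag = hmac.new(self.wm_key, weights, hashlib.sha256).digest()
        return xor_stream(weights, self.enc_key), tag

    def load(self, blob: bytes, tag: bytes) -> bytes:
        # Decrypt, then verify the watermark before releasing weights
        # for inference; constant-time compare avoids timing leaks.
        weights = xor_stream(blob, self.enc_key)
        expected = hmac.new(self.wm_key, weights, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("watermark verification failed; refusing to load")
        return weights


rt = SecuredModelRuntime(os.urandom(16), os.urandom(16))
blob, tag = rt.package(b"model-weights")
assert rt.load(blob, tag) == b"model-weights"
```

Gating inference on watermark verification is one way a runtime could both prove provenance and detect tampering after a device is captured, consistent with the threat model the abstract describes.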
Conference Presentation
(2024) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Sek Chai and Scott Ostrowski "Optimized and secured AI for the tactical edge", Proc. SPIE 13054, Assurance and Security for AI-enabled Systems, 130540P (7 June 2024); https://doi.org/10.1117/12.3014167
KEYWORDS: Reverse modeling, Artificial intelligence, Computer security, Process modeling, Digital watermarking, Network security, Neural networks