As application workloads transition to artificial intelligence (AI) based workloads, there is a growing need for more secure neural network runtimes and developer tools that increase the operational cybersecurity effectiveness of deployed software. Additionally, as more systems are deployed, they become more likely to be captured and reverse engineered by external bad actors, making AI model security a critical aspect of the Machine Learning Operations (MLOps) pipeline. In this paper, we describe our approach to jointly optimizing and securing AI runtimes, addressing both the agility and the security of the AI system. A more capable neural network runtime enables a greater ability to detect, assess, and mitigate malicious attacks on the deployed AI-based system. We propose an open-standard application programming interface (API) with security features such as encryption and watermarking integrated directly into the model runtime.