We’re on the cusp of an artificial intelligence revolution, and the generative AI trend shows no signs of slowing down. A McKinsey study found that 72% of organizations used generative AI in one or more business functions in 2024, up from 56% in 2021.
As businesses explore how generative AI can streamline workflows and unlock new efficiencies, security teams are actively evaluating how best to secure the technology. The one big gap in many AI security strategies today? Generative AI workloads.
While many are familiar with the methods used to secure AI models from providers such as OpenAI and Anthropic, AI workloads are a completely different animal. Security teams must evaluate not only how the underlying model was developed and trained, but also the surrounding architecture and how users interact with the model. In addition, AI security operates under a cloud-like shared responsibility model: workload responsibilities vary depending on whether the AI integration is based on Software as a Service (SaaS), Platform as a Service (PaaS), or Infrastructure as a Service (IaaS).
By considering only the risks associated with the AI model, security teams miss the big picture and fail to address all aspects of the workload. Instead, cyber defenders must take a multi-pronged approach, using cloud-native security solutions to safely prepare for and deploy generative AI workloads.
How layered protection defends generative AI workloads
By implementing security measures at every stage of the AI lifecycle, security teams add redundancies that better protect AI workloads, along with the data and systems they touch. It starts with vetting how your chosen model was developed and trained. Because of AI’s potential to produce harmful or unintended outputs, models must be developed responsibly and ethically: preventing bias, operating transparently, and protecting privacy. For companies grounding commercial AI workloads in their own proprietary data, you must also ensure that the data is of high enough quality, and in sufficient quantity, to produce strong results.
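To make that quality-and-quantity check concrete, here is a minimal sketch of how a team might sanity-check a tabular grounding dataset before use. The file name, thresholds, and helper function are all hypothetical, and a real pipeline would add domain-specific checks such as bias audits and PII scans.

```python
# Minimal sketch of a pre-use data sanity check for a tabular dataset.
# Thresholds are illustrative assumptions, not recommendations.
import pandas as pd

MIN_ROWS = 10_000         # assumed minimum size for meaningful results
MAX_MISSING_RATIO = 0.05  # assumed tolerance for missing values per column
MAX_DUPLICATE_RATIO = 0.01

def check_dataset(path: str) -> list[str]:
    """Return a list of data-quality findings for the dataset at `path`."""
    df = pd.read_csv(path)
    findings = []
    if len(df) < MIN_ROWS:
        findings.append(f"only {len(df)} rows; below minimum of {MIN_ROWS}")
    for column, ratio in df.isna().mean().items():
        if ratio > MAX_MISSING_RATIO:
            findings.append(f"column '{column}' is {ratio:.1%} missing")
    dup_ratio = df.duplicated().mean()
    if dup_ratio > MAX_DUPLICATE_RATIO:
        findings.append(f"{dup_ratio:.1%} of rows are exact duplicates")
    return findings

if __name__ == "__main__":
    for finding in check_dataset("training_data.csv"):  # hypothetical file
        print("WARNING:", finding)
```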
Next, defenders must understand their workload responsibilities under the AI shared responsibility model. Is it a SaaS-style model, where the provider secures everything from the AI infrastructure and plugins to the data itself, leaving the end customer responsible primarily for identity and access? Or (more likely) is it a PaaS-style arrangement, where the internal security team manages everything from building a secure data infrastructure and mapping ownership and access controls to model configuration, deployment, and AI output controls?
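One way to reason about this split is to write it down as data, as in the sketch below. The exact division varies by provider and contract, so this mapping is an illustrative assumption, not an authoritative responsibility matrix.

```python
# Illustrative encoding of the AI shared responsibility model.
# The task/owner split below is an assumption for illustration only.
from dataclasses import dataclass

@dataclass
class Responsibility:
    task: str
    owner: str  # "provider" or "customer"

RESPONSIBILITY_MATRIX = {
    "SaaS": [
        Responsibility("AI infrastructure and plugins", "provider"),
        Responsibility("Model training and hosting", "provider"),
        Responsibility("Identity and access management", "customer"),
        Responsibility("Data classification and governance", "customer"),
    ],
    "PaaS": [
        Responsibility("Physical hosts and network fabric", "provider"),
        Responsibility("Secure data infrastructure", "customer"),
        Responsibility("Ownership mapping and access controls", "customer"),
        Responsibility("Model configuration and deployment", "customer"),
        Responsibility("AI output controls", "customer"),
    ],
}

def customer_duties(service_model: str) -> list[str]:
    """List the tasks the internal security team owns for a service model."""
    return [r.task for r in RESPONSIBILITY_MATRIX[service_model]
            if r.owner == "customer"]

print(customer_duties("PaaS"))
```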
When these generative AI workloads run in highly connected, highly dynamic multi-cloud environments, security teams must also monitor and protect every other component the workload touches during operation. This includes the pipelines used to deploy AI workloads, the access controls that protect storage accounts where sensitive data resides, the APIs that call the AI model, and more.
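As one concrete example of auditing those storage access controls, the sketch below uses the Azure Python SDK to flag storage accounts that still allow public blob access. It assumes the azure-identity and azure-mgmt-storage packages and a caller-supplied subscription ID; a production audit would also cover network rules, private endpoints, and key policies.

```python
# Minimal sketch: flag storage accounts that allow public blob access.
# Assumes azure-identity and azure-mgmt-storage are installed and the
# caller has Reader access to the subscription.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

def find_public_storage_accounts(subscription_id: str) -> list[str]:
    """Return names of storage accounts with public blob access enabled."""
    client = StorageManagementClient(DefaultAzureCredential(), subscription_id)
    risky = []
    for account in client.storage_accounts.list():
        # allow_blob_public_access may be None on older API versions;
        # treat anything other than an explicit False as worth reviewing.
        if account.allow_blob_public_access is not False:
            risky.append(account.name)
    return risky

if __name__ == "__main__":
    # "<subscription-id>" is a placeholder, not a real subscription.
    for name in find_public_storage_accounts("<subscription-id>"):
        print(f"Review public access settings on: {name}")
```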
Cloud-native security tools such as cloud security posture management (CSPM) and extended detection and response (XDR) are especially useful here because they can scan the underlying code and wider infrastructure of multiple clouds for vulnerabilities and other posture risks while monitoring for and responding to threats at runtime. And because multi-cloud environments are so dynamic and interconnected, security teams should also consolidate their cloud security tooling under a cloud-native application protection platform (CNAPP) to better coordinate and contextualize alerts.
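The sketch below illustrates the kind of correlation a CNAPP performs: joining CSPM posture findings with XDR runtime alerts on the same cloud resource, so that each alert arrives with posture context attached. The finding and alert shapes here are hypothetical, not any vendor's schema.

```python
# Hypothetical example of enriching runtime alerts with posture findings.
from collections import defaultdict

posture_findings = [  # hypothetical CSPM output
    {"resource_id": "vm-ai-inference-01", "issue": "outdated OS image"},
    {"resource_id": "storage-train-data", "issue": "public blob access"},
]

runtime_alerts = [  # hypothetical XDR output
    {"resource_id": "storage-train-data", "alert": "anomalous mass download"},
]

def contextualize(alerts, findings):
    """Attach known posture issues to each runtime alert's resource."""
    by_resource = defaultdict(list)
    for finding in findings:
        by_resource[finding["resource_id"]].append(finding["issue"])
    return [
        {**alert, "posture_context": by_resource.get(alert["resource_id"], [])}
        for alert in alerts
    ]

for enriched in contextualize(runtime_alerts, posture_findings):
    print(enriched)
```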
Fully securing generative AI in multi-cloud deployments
Ultimately, the specific components of your layered security strategy are shaped by the environment itself. After all, securing generative AI workloads in a traditional on-premises environment is very different from securing them in a hybrid or multi-cloud environment. But by examining all the layers that affect an AI workload, security teams can fully secure their multi-cloud environment while maximizing the transformative power of AI.
For more insight into securing generative AI workloads, check out our series, “Security using Azure Native services.”