A security analysis of the assets hosted on major cloud providers’ infrastructure shows that many companies are opening security holes in their rush to build and deploy AI applications. Common findings include insecure default settings for AI-related services, the use of vulnerable AI packages, and failure to follow security-hardening guidelines.
The analysis, conducted by researchers at Orca Security, involved scanning the workload and configuration data of billions of assets hosted on AWS, Azure, Google Cloud, Oracle Cloud, and Alibaba Cloud between January and August. Among the researchers’ findings: exposed API access keys, exposed AI models and training data, overprivileged access roles and users, misconfigured resources, lack of encryption of data at rest and in transit, tools with known vulnerabilities, and more.
“The pace of AI development continues to increase, as new AI services introduce features that prioritize ease of use over security considerations,” Orca researchers wrote in their 2024 State of AI Security report. “Improper configuration of resources often accompanies the rollout of a new service. Users are neglecting to properly configure settings related to roles, buckets, users, and other assets, which presents serious risks to the environment.”
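The misconfigurations described above, publicly accessible storage buckets, missing encryption at rest, and overly broad roles, can be surfaced with simple programmatic checks. The following is a minimal, hypothetical sketch in Python using boto3 that flags S3 buckets lacking a public-access block or a default encryption rule; it is illustrative only and is not drawn from the Orca report or its tooling, and it assumes AWS credentials are already configured for the environment it runs in.

```python
# Illustrative audit sketch (not from the Orca report): flag S3 buckets that
# lack a public-access block or a default server-side encryption rule.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")


def has_public_access_block(bucket: str) -> bool:
    """Return True if the bucket has a public-access block configuration."""
    try:
        s3.get_public_access_block(Bucket=bucket)
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            return False
        raise


def has_default_encryption(bucket: str) -> bool:
    """Return True if the bucket has a default server-side encryption rule."""
    try:
        s3.get_bucket_encryption(Bucket=bucket)
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            return False
        raise


if __name__ == "__main__":
    # Walk every bucket in the account and report the gaps found.
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        issues = []
        if not has_public_access_block(name):
            issues.append("no public-access block")
        if not has_default_encryption(name):
            issues.append("no default encryption")
        if issues:
            print(f"{name}: {', '.join(issues)}")
```

Equivalent checks exist for the other providers named in the report (for example, Azure Storage and Google Cloud Storage expose similar configuration APIs), and cloud security posture tools automate the same kind of inventory at scale.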