“AI can produce secure code, but it has no true understanding of the organization’s threat model, compliance requirements, or the nature of different risks,” Moockandani said.
The CISO of Tusira cites two major concerns: first, that security code produced by AI may not hold up against real-world attack strategies; second, that it may fail to reflect the specific security context and needs of the organization. Additionally, AI-generated code can create a false sense of security, as developers, especially inexperienced ones, often assume it is secure by default.
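To make the first concern concrete, here is a minimal, hypothetical sketch of the kind of database lookup code assistants commonly emit; the table, function names, and payload are illustrative, not taken from the article. The unsafe version interpolates user input directly into the SQL string, a classic injection flaw, while the safe version uses a parameterized query.

```python
import sqlite3

# Throwaway in-memory database purely for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Pattern assistants frequently generate: the input is
    # interpolated into the SQL string, enabling injection.
    query = f"SELECT name, role FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver treats the payload as a
    # literal value, so the injection attempt is neutralized.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # returns every row -> injectable
print(find_user_safe(payload))    # returns [] -> payload inert
```

Both functions look plausible at a glance, which is exactly the false sense of security the quote describes: the flaw only surfaces under an adversarial input.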
In addition, there is the risk of violating licensing or compliance standards, which can result in legal issues down the line. “Many AI tools, especially those that generate open-source code, can inadvertently introduce unlicensed or improperly licensed code,” said O’Brien.
Open-source licenses, for example, often carry specific requirements around attribution, redistribution, and modification, and AI-generated code can violate these licenses by accident. “This is especially dangerous in the context of cybersecurity software development, where open-source licensing is not just a legal obligation but also contributes to security assurance,” O’Brien adds. “The risk of intellectual property infringement or legal liability is significant.”
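As a rough illustration of how a team might begin checking the licensing point O’Brien raises, the sketch below lists installed Python packages whose declared license metadata is not on an approved list. The APPROVED set and the audit function are hypothetical placeholders; real compliance work would use dedicated software-composition-analysis tooling, since declared license metadata is often missing or inconsistent.

```python
from importlib import metadata

# Licenses the organization has approved for redistribution.
# This set is a placeholder, not a legal recommendation.
APPROVED = {"MIT", "BSD", "Apache 2.0", "Apache Software License"}

def audit_installed_licenses():
    """Flag installed packages whose declared license is not approved."""
    flagged = []
    for dist in metadata.distributions():
        name = dist.metadata.get("Name") or "unknown"
        license_field = dist.metadata.get("License") or "UNKNOWN"
        if license_field not in APPROVED:
            flagged.append((name, license_field))
    return flagged

if __name__ == "__main__":
    for name, lic in audit_installed_licenses():
        print(f"review needed: {name} ({lic})")
```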