Building a foundation for secure Generative AI

Generative artificial intelligence is a transformative technology that is capturing the interest of companies worldwide and is rapidly being integrated into enterprise IT roadmaps. Despite the promise and pace of change, business and cybersecurity leaders remain cautious about adoption because of security risks and concerns. A recent ISMG survey found that the leakage of sensitive data was the top concern for both business leaders and cybersecurity professionals, followed by the entry of malicious data.

Cybersecurity leaders can mitigate many of these concerns by reviewing and updating internal IT security practices to account for generative AI. Areas of focus include implementing a Zero Trust model and adopting basic cyber hygiene standards, which remarkably still protect against 99% of attacks. However, generative AI providers also play an important role in safe enterprise use. Given this shared responsibility, cybersecurity leaders may want to gain a deeper understanding of how security is handled throughout the generative AI supply chain.

Best practices for secure generative AI development continue to evolve and require a holistic approach that considers the technology, its users, and society at large. Within that broader context, however, there are four foundational areas of security that are especially relevant to enterprise security efforts: data privacy and ownership, transparency and accountability, user guidance and policy, and security by design.

  1. Data privacy and ownership

Generative AI providers should have clearly written data privacy policies. When evaluating vendors, customers should confirm that their chosen provider will allow them to retain control of their information and will not use it to train foundation models or share it with other customers without their express consent.

  2. Transparency and accountability

Providers must stand behind the integrity of the content their tools create. Like people, generative AI can sometimes get things wrong. While perfection cannot be expected, transparency and accountability can. To that end, generative AI providers should, at a minimum: 1) use authoritative data sources to promote accuracy; 2) provide visibility into reasoning and sources to support transparency; and 3) provide a mechanism for user feedback to support continuous improvement.

  3. User guidance and policy

Enterprise security teams are responsible for ensuring the safe and responsible use of generative AI within their organizations. AI providers can support their efforts in a number of ways.

Malicious misuse by insiders, however unlikely, is one such consideration. This could include attempts to engage generative AI in harmful activities such as generating or executing malicious code. AI providers can help mitigate this type of risk by building safety protocols into their system design and setting clear limits on what the AI can and cannot do, as illustrated in the sketch below.
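
A minimal sketch of what such limits might look like in practice, assuming a hypothetical assistant that may only trigger actions from an explicit allowlist. The action names and handlers below are illustrative assumptions, not any provider's actual controls:

```python
from typing import Callable

# Hypothetical allowlist of actions the assistant may perform on a user's behalf.
ALLOWED_ACTIONS: dict[str, Callable[[str], str]] = {
    "summarize_document": lambda name: f"[summary of {name}]",
    "draft_email": lambda topic: f"[draft email about {topic}]",
}

def execute_action(action: str, argument: str) -> str:
    """Run an AI-requested action only if it appears on the allowlist."""
    handler = ALLOWED_ACTIONS.get(action)
    if handler is None:
        # Deny by default: anything not explicitly permitted, such as running
        # arbitrary code, is refused.
        raise PermissionError(f"Action '{action}' is not permitted by policy.")
    return handler(argument)

if __name__ == "__main__":
    print(execute_action("summarize_document", "incident-report.txt"))
    try:
        execute_action("run_shell_command", "rm -rf /tmp/scratch")
    except PermissionError as err:
        print(err)
```

The design choice in this sketch is deny-by-default: any request the policy does not explicitly permit, including arbitrary code execution, is refused rather than attempted.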

The most common area of concern is user overreliance. Generative AI is meant to assist workers in their day-to-day tasks, not to replace them. Users should be encouraged to think critically about the information AI provides. Providers can help by clearly citing sources and using carefully considered language that promotes thoughtful use.

  4. Security by design

Generative AI technologies must be designed and developed with security in mind, and technology providers should be transparent about their secure development practices. Security development lifecycles can be adapted to account for the new threat vectors introduced by generative AI. This includes updating threat modeling requirements to address AI- and machine learning-specific threats and implementing robust input validation and sanitization of user-supplied prompts, as sketched below. AI-aware red teaming, which can be used to probe for exploitable vulnerabilities as well as issues such as the generation of potentially harmful content, is another important security enhancement. Red teaming has the advantage of adaptability and can be applied both before and after a product's release.
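
As a minimal sketch of the input validation and sanitization step, the following hypothetical pre-processing function rejects empty, oversized, or obviously suspicious prompts before they reach the model. The length limit and patterns are assumptions for illustration, not any vendor's actual policy:

```python
import re

MAX_PROMPT_LENGTH = 4000  # assumed limit, for illustration only

# Hypothetical patterns that often signal prompt-injection attempts.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all|any|previous) instructions", re.IGNORECASE),
    re.compile(r"reveal (the )?system prompt", re.IGNORECASE),
]

def sanitize_prompt(raw_prompt: str) -> str:
    """Validate and sanitize a user-supplied prompt, raising ValueError if rejected."""
    # Strip control characters that could confuse downstream parsing or logging.
    cleaned = "".join(ch for ch in raw_prompt if ch.isprintable() or ch in "\n\t")

    if not cleaned.strip():
        raise ValueError("Prompt is empty after sanitization.")
    if len(cleaned) > MAX_PROMPT_LENGTH:
        raise ValueError("Prompt exceeds the maximum allowed length.")

    # Flag, rather than silently rewrite, inputs that match known injection patterns
    # so a stricter policy or human review can take over.
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(cleaned):
            raise ValueError(f"Prompt rejected by input policy: {pattern.pattern}")

    return cleaned

if __name__ == "__main__":
    print(sanitize_prompt("Summarize our Q3 security review in three bullet points."))
```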

While these fundamentals are a solid start, security leaders who wish to dive deeper can look to a number of promising industry and government initiatives that aim to ensure the secure development and use of generative AI. One such initiative is the NIST AI Risk Management Framework, which gives organizations a common methodology for mitigating concerns while supporting confidence in generative AI systems.

Undoubtedly, the secure enterprise use of generative AI must be supported by strong enterprise IT security practices and guided by a carefully considered strategy that includes implementation planning, clear usage policies, and related governance. But leading providers of generative AI technology understand that they have an essential role to play and are willing to share information about their efforts to develop safe, secure, and trustworthy AI. Working together will not only promote secure use but also foster the confidence needed for generative AI to deliver on its full promise.

To learn more, visit us here.

