Mitigating this effect, especially the risk of oversimplification, unfortunately requires more. Specifically, discussion of the basic functionality of AI systems needs to be tied to as many different outcomes as possible in order to convey the technology's full potential.
AI education and training should emphasize the diversity of outcomes that flow from social, political, commercial, and defense decisions. Cybersecurity personnel should be guided as much as possible toward understanding how results depend on the method in use, with respect to variables such as differences in the data used for training, biases in how incoming information is fed to and interpreted by the system, and so on.
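To make the point about data-dependent outcomes concrete, here is a minimal sketch (with entirely hypothetical data and a deliberately simple nearest-centroid classifier, not any specific production system): the same learning procedure, trained on two differently sampled datasets, classifies the identical input differently.

```python
def nearest_centroid(train, query):
    """Classify `query` by the label of the nearest class centroid."""
    groups = {}
    for features, label in train:
        groups.setdefault(label, []).append(features)
    best_label, best_dist = None, float("inf")
    for label, rows in groups.items():
        centroid = [sum(col) / len(rows) for col in zip(*rows)]
        dist = sum((a - b) ** 2 for a, b in zip(centroid, query))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Two hypothetical training sets drawn from different environments.
dataset_a = [((0.0, 0.0), "benign"), ((1.0, 1.0), "malicious")]
dataset_b = [((0.4, 0.4), "benign"), ((5.0, 5.0), "malicious")]

query = (0.8, 0.8)
print(nearest_centroid(dataset_a, query))  # -> "malicious"
print(nearest_centroid(dataset_b, query))  # -> "benign"
```

Nothing about the algorithm changed between the two runs; only the training sample did. This is the kind of outcome variability that security staff evaluating AI tools should be trained to anticipate.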
A particular opportunity here would be to establish penetration-testing requirements that involve different segments of the workforce in using new AI tools. In other words, new platforms or systems must be tested with a sample representative of the diverse security personnel who might use them, which would require adopters or developers to provide accessibility-testing options for users at the lowest skill levels.