Organizations should implement proactive measures, including rigorous vetting of plugins through comprehensive vendor risk assessments (VRAs). From an operational perspective, strong protection involves enforcing the use of company-managed browsers, blocking all plugins by default, and approving only verified plugins through a controlled whitelist. Additionally, organizations should be wary of open-source plugins.
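In practice, the whitelist is usually enforced through managed browser policies (for example, Chrome's enterprise ExtensionInstallBlocklist and ExtensionInstallAllowlist settings), but a lightweight audit script can catch drift between what is approved and what is actually installed. The following is a minimal sketch in Python, assuming a standard Chrome profile layout on Windows and a hypothetical APPROVED_EXTENSIONS list maintained by the security team; it simply flags installed extension IDs that are not on the approved list.

"""
Minimal sketch of a browser plugin allowlist check.
Assumptions: default Chrome profile path on Windows, and a hypothetical
APPROVED_EXTENSIONS set maintained by the security team. Chrome stores
each installed extension in a folder named after its 32-character ID.
"""
from pathlib import Path

# Hypothetical allowlist of vetted extension IDs (placeholder value only).
APPROVED_EXTENSIONS = {
    "abcdefghijklmnopabcdefghijklmnop",
}

# Default Chrome extensions directory on Windows; adjust per OS and profile.
EXTENSIONS_DIR = Path.home() / "AppData/Local/Google/Chrome/User Data/Default/Extensions"

def find_unapproved(extensions_dir: Path, approved: set[str]) -> list[str]:
    """Return installed extension IDs that are not on the allowlist."""
    if not extensions_dir.exists():
        return []
    installed = {p.name for p in extensions_dir.iterdir() if p.is_dir()}
    return sorted(installed - approved)

if __name__ == "__main__":
    for ext_id in find_unapproved(EXTENSIONS_DIR, APPROVED_EXTENSIONS):
        print(f"Unapproved extension installed: {ext_id}")

A script like this is only a detective control; the preventive control remains the managed-browser policy that blocks installs by default.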
CONFIRMATION: At the time of writing, it was announced that as many as 16 Chrome extensions were found to be vulnerable, exposing more than 600,000 users to potential compromise. This is just the beginning, and I expect it to get worse through 2025-2026, especially with the growth of AI plugins. Do you really have full control over browser plugin risks in your organization? If you don't, you had better start now.
3. Dangers of Agentic AI: Robots are evil
The growth of agentic AI systems capable of autonomous decision-making presents significant risks as adoption scales through 2025. Companies and employees may be eager to use agentic AI bots to streamline workflows and perform tasks at scale, but the power of these systems cuts both ways: misuse is a looming threat. Adversarial attacks and indirect manipulation of optimization objectives can turn these bots into liabilities. For example, attackers can manipulate reinforcement learning algorithms to issue unsafe instructions or hijack feedback loops, exploiting automated workflows for malicious purposes. In some cases, AI that controls industrial machinery could be exploited to overload systems or halt operations altogether, creating safety hazards and costly shutdowns. We are still in the early stages of this, and companies need strong code reviews, regular penetration tests, and regular audits to ensure system integrity; without them, these risks could flare up and cause serious business disruption. The International Organization for Standardization (ISO) and the National Institute of Standards and Technology (NIST) have good frameworks to follow, as does ISACA with its AI assessment tools; expect more guidance in 2025.
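One practical mitigation, complementary to the reviews and audits above, is to put a policy layer between the agent and anything it can execute, so that unsafe instructions are blocked or escalated to a human rather than acted on automatically. The sketch below is illustrative only: it assumes a hypothetical agent that emits action requests as simple dictionaries, and the ALLOWED_ACTIONS set and require_human_approval() hook are placeholders for whatever controls your environment actually uses.

"""
Minimal sketch of an action guardrail for an agentic AI system.
Assumptions: the agent emits requests as dicts such as
{"action": "stop_line", "target": "conveyor_3"}; ALLOWED_ACTIONS and
require_human_approval() are hypothetical stand-ins for real controls.
"""
from typing import Callable

# Actions the agent may take autonomously vs. those needing human sign-off.
ALLOWED_ACTIONS = {"read_sensor", "generate_report"}
APPROVAL_REQUIRED = {"stop_line", "adjust_setpoint"}

def require_human_approval(request: dict) -> bool:
    """Placeholder: route the request to an operator and return their decision."""
    print(f"Approval requested for: {request}")
    return False  # default deny until a human explicitly approves

def execute(request: dict) -> None:
    """Placeholder for the real actuator or API call."""
    print(f"Executing: {request}")

def guarded_execute(request: dict,
                    approve: Callable[[dict], bool] = require_human_approval) -> None:
    """Allow safe actions, escalate sensitive ones, and deny everything else."""
    action = request.get("action")
    if action in ALLOWED_ACTIONS:
        execute(request)
    elif action in APPROVAL_REQUIRED and approve(request):
        execute(request)
    else:
        # Deny by default and keep an audit trail for later review.
        print(f"Blocked action: {request}")

if __name__ == "__main__":
    guarded_execute({"action": "read_sensor", "target": "line_4_temp"})
    guarded_execute({"action": "stop_line", "target": "conveyor_3"})
    guarded_execute({"action": "override_safety", "target": "press_2"})

The design choice here is deny-by-default: anything the agent requests that is not explicitly permitted is blocked and logged, which limits the blast radius if an attacker does manage to steer the agent's behavior.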