CISOs may be intimately familiar with the myriad forms of authentication protecting their privileged environments, but a very different problem arises in environments where authentication never was, and still is not, required.
Domains such as sales call centers and public-facing websites are fast becoming the primary battlegrounds where AI bots and humans collide, and where CISOs struggle to reliably and quickly distinguish one from the other.
“Bad bots have become more sophisticated; attackers are analyzing defenses and sharing exploits on marketplaces and message boards. They are also becoming more accessible, with bot services available to anyone willing to pay for them,” Forrester researchers wrote in the company’s latest Forrester Wave: Bot Management Software, Q3 2024. “[Bad bots commit] fraud, such as credential stuffing attacks, or they may participate in large-scale attacks on applications, performing scraping or reconnaissance to help target subsequent activities.”
Forrester estimates that 30% of today’s internet traffic comes from malicious bots.
The bot problem goes beyond the cost of fake network traffic, however. For example, bot DDoS attacks can be launched against a sales call center, tying up lines with fake customers in an attempt to frustrate real customers into calling competitors instead. Or bots may be turned loose on text-based customer service applications, tying up your service bots in round-robin chats with attacker bots.
Confirming humanity
What makes these AI-powered bots so dangerous is that they can be scaled almost infinitely at relatively low cost. That means an attacker can easily overwhelm even the world’s largest call centers, which often don’t want to add the complexity that authentication methods involve.
“This is a big issue,” said Jay Meier, SVP of North American operations at identity company FaceTec. “These deepfake attacks are automated, so there’s no way for a call center to scale as quickly or efficiently as a server array. This is the new DDoS attack, and it will easily shut down a call center.”
Meier’s use of the term deepfake is worth noting, as today’s deepfakes are typically thought of as convincing impersonations of a specific person, such as the CFO of the targeted business. But in bot attacks like these, the bots impersonate a composite human who may not exist at all.
Another recently announced effort to thwart such bot attacks comes from a group of major vendors, including OpenAI and Microsoft, working with researchers from MIT, Harvard, and the University of California, Berkeley. Their paper describes a system in which government offices would issue “personhood credentials,” in response to the fact that older web mechanisms designed to block bots, such as CAPTCHA, have been rendered useless because generative AI can now pick out images of, say, road signs as well as, if not better than, people can.
A personhood credential (PHC), the researchers argue, “gives its holder the ability to prove to digital service providers that they are a person without revealing anything else. Building on related concepts such as proof-of-personhood and anonymous credentials, these credentials can be stored digitally on holders’ devices and verified through zero-knowledge proofs.”
This way, the system would not reveal anything else about the person’s identity. But, the researchers point out, a PHC system would have to meet two important requirements. First, credential limits would need to be set: “The issuer of a PHC gives at most one credential to an eligible person,” according to the researchers. Second, “service-specific” pseudonyms would need to be employed so that a user’s digital activity “is untraceable by the issuer and unlinkable across service providers, even if service providers and issuers collude.”
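To make that second requirement concrete, here is a minimal sketch of the service-specific pseudonym idea. It is not from the researchers’ paper, it omits the anonymous-credential and zero-knowledge machinery a real PHC scheme would rely on, and the function name is hypothetical; it only illustrates how one secret kept on the holder’s device can yield pseudonyms that a single service can recognize and rate-limit, but that cannot be linked across services or traced back by the issuer.

```python
# Minimal sketch of service-specific pseudonyms (illustrative only; a real PHC
# scheme would rely on anonymous credentials and zero-knowledge proofs rather
# than a bare HMAC). The function name derive_pseudonym is hypothetical.
import hashlib
import hmac
import secrets

def derive_pseudonym(holder_secret: bytes, service_id: str) -> str:
    """Derive a stable pseudonym for one service.

    The same holder always presents the same pseudonym to the same service,
    so that service can rate-limit or ban it, but pseudonyms for different
    services share no visible relationship and the issuer never sees them.
    """
    digest = hmac.new(holder_secret, service_id.encode(), hashlib.sha256).digest()
    return digest.hex()

# One secret kept only on the holder's device yields distinct,
# unlinkable pseudonyms per service.
holder_secret = secrets.token_bytes(32)
print(derive_pseudonym(holder_secret, "callcenter.example.com"))
print(derive_pseudonym(holder_secret, "shop.example.org"))
```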
One of the report’s authors, Tobin South, a senior security researcher and PhD candidate at MIT, said such a system is important because “there are no tools today that can stop thousands of seemingly legitimate queries.”
Government offices could be used to issue the credentials, or perhaps retail stores, because, as South says, bots are growing in sophistication and “the only thing we’re sure of is that they can’t come from just anywhere.”
Challenges of identity proofing
Although interesting, the personhood credential system has fundamental problems. First, the credentials could themselves be forged with the help of gen AI systems. Second, customers may be hard-pressed to justify the time and effort of gathering documents and waiting in line at a government office to prove their identity simply to visit public websites or call sales call centers.
Some argue that the mass creation of personhood credentials would simply create another significant cybersecurity vulnerability.
“What if I get control of the devices that have a credential on them?” asked Meier of FaceTec. “The Chinese could then have a million valid credentials under one person’s control.”
Brian Levine, managing director of cybersecurity at Ernst & Young, believes that, while such a system may be useful in the short term, it is unlikely to effectively protect businesses in the long term.
“It’s the same cat-and-mouse game” cybersecurity vendors constantly play with attackers, Levine said. “As soon as you create software to identify the bot, the bot will change its information to fool that software.”
Is all hope lost?
Sandy Carielli, principal analyst at Forrester and lead author of the Forrester bot report, says the key to any bot protection program is not to slow down good bots, such as legitimate search engine crawlers, in the effort to block the bad ones.
“The crux of any bot management program should be that it never introduces friction for good bots and certainly not for legitimate customers. You need to pay close attention to customer friction,” says Carielli. “If you annoy your human customers, they won’t stay.”
Some of the best bot protection programs today use deep learning to sniff out deceptive bot behavior. While some question whether such systems can stop an attack, such as a bot DDoS attack, quickly enough, Carielli believes the better systems play a longer game. They may not stop the first wave of a bot attack, but they are generally effective at identifying the characteristics of the attacking bots and stopping subsequent waves, which often follow within minutes of the first, she says.
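As a rough illustration of that wave-to-wave learning (a hypothetical sketch, not any vendor’s actual product), a defense might record the traffic characteristics observed during a confirmed first wave and drop matching requests when the next wave arrives:

```python
# Hypothetical sketch of wave-based bot mitigation (not a vendor implementation):
# fingerprint the traffic characteristics observed during a confirmed first
# attack wave, then block matching requests in subsequent waves.
from dataclasses import dataclass

@dataclass(frozen=True)
class RequestFingerprint:
    tls_ja3: str      # TLS client fingerprint
    user_agent: str   # claimed browser/client
    asn: int          # network the request originated from
    path: str         # endpoint being hit

class WaveBlocker:
    def __init__(self) -> None:
        self.known_bad: set[RequestFingerprint] = set()

    def learn_from_attack(self, flagged: list[RequestFingerprint]) -> None:
        """Record fingerprints seen in a confirmed attack wave."""
        self.known_bad.update(flagged)

    def should_block(self, req: RequestFingerprint) -> bool:
        """Drop requests that match a previously learned attack fingerprint."""
        return req in self.known_bad

# Usage: once the first wave is confirmed (by an ML model or an analyst),
# its fingerprints are learned so follow-on waves are dropped at the edge.
blocker = WaveBlocker()
first_wave = [
    RequestFingerprint("e7d705a3286e19ea42f587b344ee6865", "python-requests/2.32", 64496, "/contact"),
]
blocker.learn_from_attack(first_wave)
print(blocker.should_block(first_wave[0]))  # True
```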
“They are designed to stop every attack, not just the first attack. [The enterprise] will be able to continue doing business,” said Carielli.
CISOs must also work with their C-suite colleagues to get the bot strategy right, she adds.
“If you take it seriously but you’re not talking to fraud, marketing, e-commerce, and so on, you don’t have a coherent strategy,” she said. “So you may not be solving every problem. You should have that conversation with everyone involved.”
However, Carielli believes that bot defenses need to accelerate. “The speed of adapting to new rules and new attacks with bots is much faster than with attacks on your traditional applications,” she said.
Steve Zalewski, longtime CISO of Levi Strauss until 2021, when he became a cybersecurity consultant, is also worried about how quickly malicious bots can adapt to countermeasures.
Asked how software can protect against the latest bot attacks, Zalewski replied: “Quite simply, they can’t today. Today’s IAM infrastructure is not prepared for this level of sophistication in the authentication attacks hitting help desks.”
Zalewski encourages CISOs to think carefully about their goals when shaping their bot protection strategy.
“What kind of mutual trust relationship do we want? Is it ‘Are you a living person on the other end of the phone?’ versus ‘Are you a living person that I trust?’” he asks.
Most AI-powered productivity bots aren’t designed to sound human, Zalewski said, citing bank customer service bots as an example. These bots should not fool anyone into thinking they are human. But attack bots are designed to do just that.
And that’s another important point. People who are used to dealing with a customer service bot may be quick to dismiss the threat because they think that bots that use plain language are easy to spot.
“But with a malicious bot attacker,” Zalewski said, “they’re using an awful lot of effort.”
That’s because there’s a lot riding on tricking you into thinking you’re communicating with a real person.