Arkose Labs has analyzed and reported on tens of billions of bot attacks from January through September 2023, collected via the Arkose Labs Global Intelligence Network.
Bots are automated processes that operate over the internet. Some serve useful purposes, such as indexing the web, but the majority are Bad Bots designed for malicious ends. Bad Bots are increasing dramatically: Arkose estimates that as of Q3 2023, 73% of all internet traffic comprises Bad Bots and related fraud farm traffic.
The top five categories of Bad Bot attacks are fake account creation, account takeovers, scraping, account management, and in-product abuse. These haven’t changed from Q2, other than in-product abuse replacing card testing. The biggest increases in attacks from Q2 to Q3 are SMS toll fraud (up 2,141%), account management (up 160%), and fake account creation (up 23%).
The top five targeted industries are technology (Bad Bots comprise 76% of its internet traffic), gaming (29%), social media (46%), e-commerce (65%), and financial services (45%). When a bot fails in its purpose, criminals increasingly switch to human-operated fraud farms. Arkose estimates there were more than 3 billion fraud farm attacks in H1 2023. These fraud farms appear to be located primarily in Brazil, India, Russia, Vietnam, and the Philippines.
The prevalence of Bad Bots is likely to keep growing for two reasons: the arrival and general availability of artificial intelligence (primarily gen-AI), and the increasing business professionalism of the criminal underworld, with new crime-as-a-service (CaaS) offerings.
From Q1 to Q2, intelligent bot traffic nearly quadrupled. “Intelligent [bots] employ sophisticated techniques like machine learning and AI to mimic human behavior and evade detection,” notes the report (PDF). “This makes them skilled at adaptation as they target vulnerabilities in IoT devices, cloud services, and other emerging technologies.” They are widely used, for example, to circumvent two-factor authentication (2FA) defenses against phishing.
Separately, the rise of artificial intelligence may or may not relate to a dramatic rise in ‘scraping’ bots that gather data and images from websites. From Q1 to Q2, scraping increased by 432%. Scraping social media accounts can gather the type of personal data that can be used by gen-AI to mass produce compelling phishing attacks. Other bots could then be used to deliver account takeover emails, romance scams, and so on. Scraping also targets the travel and hospitality sectors.
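Detection and rate limiting are where defenses against high-volume scraping typically begin. As a purely illustrative sketch (not Arkose's method, and easily defeated by the "intelligent" bots described above; the window and threshold values are arbitrary assumptions), a naive sliding-window rate check might look like:

```python
from collections import defaultdict, deque
import time

# Illustrative sketch only: flag clients whose request rate exceeds
# a cap within a sliding window. The values below are assumptions,
# not recommendations from the report.
WINDOW_SECONDS = 10   # assumed sliding-window length
MAX_REQUESTS = 20     # assumed per-window request cap

_history = defaultdict(deque)  # client id -> recent request timestamps

def looks_like_scraper(client_id, now=None):
    """Record one request and return True once the client exceeds the cap."""
    now = time.monotonic() if now is None else now
    q = _history[client_id]
    q.append(now)
    # Drop timestamps that have fallen outside the window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_REQUESTS

# Usage: 25 rapid requests from one (hypothetical) client trip the flag.
flags = [looks_like_scraper("203.0.113.7", now=i * 0.1) for i in range(25)]
print(flags[0], flags[-1])
```

Heuristics like this illustrate why the economics favor attackers: a single threshold is trivial to tune around, which is exactly the adaptation the report attributes to intelligent bots.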
“This is a website you can use to make sure your bots aren’t getting prevented by a website,” Kevin Gosschalk, founder and CEO of Arkose Labs, told SecurityWeek, referring to a specific provider he declined to name. “You can purchase this software. It has enterprise support and so on. But it is purpose built to commit crime. That is what it does. And there are many other different websites like this, but they look like legitimate businesses. It is a good example of a product purpose built to commit fraud.”
It is also a good example of crime-as-a-service, which enables wannabe criminals who may have the intent but not the skills to engage in cybercrime. “The massive rise of CaaS has completely changed the economics for adversaries,” continued Gosschalk. “It’s much cheaper to attack companies and the attacks are just better because it’s a dev shop that is doing the attacks instead of just individual cybercriminals.”
The continuing increase in the volume of Bad Bots suggests they remain profitable for the criminals. The arrival of gen-AI will improve the performance of Bad Bots, while the growth of CaaS will increase the number of Bad Bot operators; so the problem will get worse. The only solution is Bad Bot detection and mitigation to limit the bots’ access to their human or system targets. If it is not profitable, the criminals won’t do it.