The legal and ethical implications of sharing the web with bots
Bots have evolved rapidly in recent years, and with the new era of Artificial Intelligence (AI), they continue to progress
A report shows that in 2022, 47.4% of all internet traffic came from bots, a 5.1% increase over the previous year. Meanwhile, human traffic, at 52.6%, was the lowest in eight years.
Internet bots are most often associated with harmful or suspicious activity, like distributed denial-of-service (DDoS) attacks or spreading misinformation on social media. However, bots are also widely used for task automation and efficiency. Therefore, it is necessary for businesses to learn how to distinguish between the two.
Now that AI has made various tasks, including coding, easier to scale, cybercriminals will no doubt continue employing bad bots to attack businesses and disrupt operations. Good bots, however, will keep evolving for the same reason, offering undeniable benefits by automating tedious manual business processes.
The good bots: intent, behavior, impact
One of the best practices that helps to identify whether a bot is good or bad is looking at three key factors: intent, behavior, and impact. A good bot has a legitimate purpose. For example, automating time-consuming work or doing tasks that would simply be impossible to perform manually, such as gathering large-scale public web data and building real-time automated data flows.
Good bots follow a certain code of conduct. Overall, they positively impact websites and their users by doing tasks such as indexing pages for search engines, helping people find information or compare prices, and identifying malicious activities on the web.
Below are some examples of the most prevalent good bots:
Data automation bots
Web intelligence software collects publicly available data, such as product prices and descriptions for market research, travel fares for price comparison, or brand mentions for brand protection and anti-counterfeiting purposes. Data automation bots are employed by ecommerce price comparison websites and travel fare aggregators.
Search engine crawlers
Also known as web crawlers or spiders, these bots review content on web pages and index it. Once the content is indexed, it appears on search engine result pages. These bots are essential for search engine optimization. Most sites want them to crawl and index their pages as soon as possible after publishing.
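Part of the "code of conduct" that separates good crawlers from bad ones is respecting a site's robots.txt rules before fetching any page. A minimal sketch of that check, using Python's standard library and an assumed bot name and rules (not any specific search engine's implementation):

```python
from urllib import robotparser

# Hypothetical robots.txt rules; a real crawler would fetch them
# from https://example.com/robots.txt before crawling the site.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
    "Crawl-delay: 10",
])

# A polite crawler checks permission for each URL it plans to fetch
# and honors the requested delay between requests.
allowed = rp.can_fetch("ExampleBot", "https://example.com/products")   # True
blocked = rp.can_fetch("ExampleBot", "https://example.com/private/x")  # False
delay = rp.crawl_delay("ExampleBot")                                   # 10
```

Bad bots typically skip this check entirely, which is one reason ignoring robots.txt is a common red flag in bot detection.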
Site monitoring bots
This software monitors sites for backlinks or system outages. It can alert users in case of a major change or downtime, which enables teams to react quickly and restore their services without considerable losses.
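The core of such a monitoring bot is simple: probe the site, classify each response, and alert only after repeated failures so a single transient blip doesn't page anyone. A minimal sketch with assumed thresholds (the status codes, timing cutoff, and failure count are illustrative, not from any particular product):

```python
def classify_probe(status_code: int, response_ms: float,
                   slow_threshold_ms: float = 2000) -> str:
    """Classify one HTTP probe of the monitored site."""
    if status_code == 0 or status_code >= 500:
        return "down"       # no response, or server error
    if response_ms > slow_threshold_ms:
        return "degraded"   # responding, but too slowly
    return "healthy"

def should_alert(recent: list, failures_required: int = 3) -> bool:
    """Alert only after N consecutive non-healthy probes."""
    if len(recent) < failures_required:
        return False
    return all(s != "healthy" for s in recent[-failures_required:])
```

For example, `classify_probe(503, 80)` returns `"down"`, and `should_alert(["down", "down", "down"])` returns `True`, while a single failure among healthy probes does not trigger an alert.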
Chatbots
Chatbots are programmed to answer certain questions. Many companies integrate these bots into their websites to take workload off customer support teams. The chatbot market is growing rapidly as more and more companies employ generative AI chatbots, and it is predicted to reach $1.25 billion in 2025.
The bad bots
We can identify bad bots by considering the same three key identifiers—purpose, behavior, and impact. Bad bots’ intent is exploitative or harmful toward websites and their users. Their behavior is unethical and, in most cases, illegal as this software accesses unauthorized pages and performs unauthorized actions, such as personal data theft, DDoS attacks, and malware spread.
Malicious bots usually do not respect server capacity, overloading servers with requests and slowing the target site's performance.
One of the most popular “use cases” for bad bots is ad fraud, aimed at generating fake traffic and ad metrics, such as CTR, by employing bots that generate clicks, views, or impressions. Below are more examples of the most prevalent bad bots:
Account takeover bots
Most people are familiar with credential stuffing and cracking. These are automated threats that can result in identity theft or grant illegal access to user accounts. Account takeover bots can run mass login attempts, straining infrastructure and potentially causing significant business losses.
Spamming bots
These bots spread fake news and propaganda and post fake reviews on competitor products and services. Spamming bots can also hide malicious content, such as malware, inside clickbait links. In more elaborate cases, this can lead to fraud.
Scalper bots
While scalping bots have been around for a while, they became especially active during the pandemic. This software automates the bulk purchase of goods or services, causing them to sell out quickly. The items are then resold at a much higher price, as is often seen with event tickets or limited-edition goods.
Legal and ethical implications
Specific tactics, ranging from behavioral analysis to user-agent strings and traffic patterns, allow website owners to identify bad bots more easily. Unfortunately, in the age of AI and the boom of commercial bot farms, it is a constant battle. The ethical issues and implications of using bad bots are more than evident. Legal regulation, however, is still lacking, with bot activity often falling into the gray area.
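Two of the tactics mentioned above, user-agent signatures and traffic patterns, can be sketched as a simple heuristic: flag a client if its user agent matches a known automation signature, or if it exceeds a request-rate limit within a sliding window. The signatures, limits, and class below are illustrative assumptions; production systems layer on far more signals (behavioral analysis, fingerprinting, challenge pages).

```python
from collections import defaultdict, deque
import time

# Hypothetical signatures of common automation tools; real lists
# are much longer and are combined with other signals.
KNOWN_BOT_SIGNATURES = ("curl/", "python-requests", "scrapy")

class RateTracker:
    """Flag clients by user-agent signature or request rate."""

    def __init__(self, max_requests: int = 100, window_s: float = 60.0):
        self.max_requests = max_requests
        self.window_s = window_s
        self.hits = defaultdict(deque)  # client_ip -> request timestamps

    def is_suspicious(self, client_ip: str, user_agent: str,
                      now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.hits[client_ip]
        q.append(now)
        # Drop timestamps that fell out of the sliding window.
        while q and now - q[0] > self.window_s:
            q.popleft()
        ua = (user_agent or "").lower()
        if any(sig in ua for sig in KNOWN_BOT_SIGNATURES):
            return True
        return len(q) > self.max_requests
```

For instance, with `max_requests=3`, the fourth request from the same IP within the window is flagged, as is any request whose user agent contains `curl/`. Heuristics like these are easy to evade, which is why detection remains the constant battle described above.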
In 2019, California passed its Bolstering Online Transparency Act (the B.O.T. Act), which mandates clear disclosure and transparency for using bots, meaning that bots are not allowed to hide their identities. The B.O.T. Act primarily targets automated software that aims to influence purchasing and voting behavior. However, at least theoretically, it can also address the bot-related challenge of disinformation, fake news, and artificially inflated social media metrics.
In the EU, areas such as AI-assisted deepfakes and disinformation should be addressed by the EU AI Act. However, as of today, it is not yet in force.
Even though legal regulation remains patchy, there are explicit legal and financial risks that businesses must consider before using bots, even if they think their bots are "good". For example, a chatbot can give bad advice, resulting in reputational damage and legal liability.
Even more extreme situations might happen in cases of data mismanagement. In 2020, the UK’s Ticketmaster was fined £1.25 million over a data leak that occurred due to a security breach via their chatbot.
Summary
Knowing good bots from bad ones is essential for any business. But the world is rarely just black or white. Some bots may not be inherently good or bad; what pushes them to one side or the other are their intent, behavior, and impact. If you make sure the bot you are using has a reasonable and fair intent, respects website rules, and doesn't cause harm, you will most probably find yourself on the good side.
Nevertheless, examples show that even the most innocent bots can cause trouble sometimes, ranging from reputational damage to legal and financial liability due to data mishandling. Therefore, it is vital to know the risks before implementing business bots, no matter whether they are simple chatbots or complex web intelligence collection tools.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Julius Cerniauskas, CEO, Oxylabs.