The legal and ethical implications of sharing the web with bots


Bots have rapidly evolved over the years, and in the new era of Artificial Intelligence (AI), they continue to advance. A report shows that in 2022, 47.4% of all internet traffic came from bots, a 5.1% increase over the previous year. Meanwhile, human traffic, at 52.6%, was the lowest in eight years.

Internet bots are most often associated with harmful or suspicious activity, like distributed denial-of-service (DDoS) attacks or spreading misinformation on social media. However, bots are also widely used for task automation and efficiency. Therefore, it is necessary for businesses to learn how to distinguish between the two.

Now that AI has made various tasks, including coding, easier to scale, there is no doubt that cybercriminals will continue employing bad bots to attack businesses and disrupt operations. Good bots, however, should also continue evolving, driven by the same AI progress and offering undeniable benefits by automating tedious manual business processes.

Julius Cerniauskas

CEO at Oxylabs.

The good bots: intent, behavior, impact

One of the best practices for identifying whether a bot is good or bad is to look at three key factors: intent, behavior, and impact. A good bot has a legitimate purpose, for example, automating time-consuming work or performing tasks that would simply be impossible to do manually, such as gathering large-scale public web data and building real-time automated data flows.

Good bots follow a certain code of conduct. Overall, they positively impact websites and their users by doing tasks such as indexing pages for search engines, helping people find information or compare prices, and identifying malicious activities on the web.
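One concrete piece of that code of conduct is respecting a site's robots.txt rules. As a minimal sketch (the rules and bot name below are illustrative, not from any real site), a well-behaved crawler can check permissions with Python's standard-library `urllib.robotparser` before fetching anything:

```python
from urllib import robotparser

# Illustrative robots.txt content: disallow one directory, ask for a delay
rules = """User-agent: *
Disallow: /private/
Crawl-delay: 10
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# A polite bot checks each URL against the rules before requesting it
print(rp.can_fetch("ExampleBot", "https://example.com/products"))      # allowed
print(rp.can_fetch("ExampleBot", "https://example.com/private/data"))  # disallowed
print(rp.crawl_delay("ExampleBot"))  # seconds to wait between requests
```

In practice, a crawler would load the live file with `rp.set_url(...)` and `rp.read()`; parsing an inline string here just keeps the sketch self-contained.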

Below are some examples of the most prevalent good bots:

Data automation bots 

Web intelligence software collects publicly available data, such as product prices and descriptions for market research, travel fares for price comparison, or brand mentions for brand protection and counterfeit detection. Data automation bots are employed by ecommerce price comparison websites and travel fare aggregators.

Search engine crawlers 

Also known as web crawlers or spiders, these bots review content on web pages and index it. Once the content is indexed, it appears on search engine result pages. These bots are essential for search engine optimization. Most sites want them to crawl and index their pages as soon as possible after publishing. 

Site monitoring bots 

This software monitors sites for backlinks or system outages. It can alert users in case of a major change or downtime, which enables teams to react quickly and restore their services without considerable losses. 
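The core of such a monitoring bot is a probe-and-alert decision. A minimal sketch of that logic, with thresholds that are assumptions rather than industry standards, might look like this:

```python
def should_alert(status_code: int, response_ms: float,
                 latency_threshold_ms: float = 2000.0) -> bool:
    """Return True when a probe suggests an outage or degraded service."""
    if status_code >= 500:                   # server fault or outage
        return True
    if response_ms > latency_threshold_ms:   # unusually slow response
        return True
    return False

print(should_alert(200, 150.0))    # healthy probe: no alert
print(should_alert(503, 90.0))     # service unavailable: alert
print(should_alert(200, 5000.0))   # up, but badly degraded: alert
```

A production monitor would run such probes on a schedule, track trends, and notify a team channel or paging system rather than printing.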

Chatbots 

Chatbots are programmed to answer certain questions. Many companies integrate these bots into their websites to take workload off customer support teams. The chatbot market is growing rapidly as more and more companies deploy generative AI chatbots, and it is predicted to reach $1.25 billion in 2025.

The bad bots

We can identify bad bots by considering the same three key factors: intent, behavior, and impact. Bad bots' intent is exploitative or harmful toward websites and their users. Their behavior is unethical and, in most cases, illegal, as this software accesses unauthorized pages and performs unauthorized actions, such as stealing personal data, launching DDoS attacks, and spreading malware.

Malicious bots usually do not respect server capacity, overloading servers with requests and slowing the target site's performance.

One of the most popular “use cases” for bad bots is ad fraud, which aims to generate fake traffic and inflate ad metrics, such as click-through rate (CTR), by employing bots that generate clicks, views, or impressions. Below are more examples of the most prevalent bad bots:

Account takeover bots 

Most people are familiar with credential stuffing and credential cracking. These automated threats can result in identity theft or illegal access to user accounts. Account takeover bots can run mass login attempts, leading to broken infrastructure or even business losses.

Spamming bots 

These bots spread fake news and propaganda and post fake reviews of competitors' products and services. Spamming bots can also hide malicious content, such as malware, inside clickbait links. In more elaborate cases, this can lead to fraud.

Scalper bots 

While scalping bots have been around for a while, they became especially active during the pandemic. This software automates bulk purchases of goods or services, causing stock to sell out quickly. The items or services are then resold at a much higher price, as is often seen with event tickets and limited-edition goods.

Specific tactics, ranging from behavioral analysis to user-agent strings and traffic patterns, allow website owners to identify bad bots more easily. Unfortunately, in the age of AI and the boom of commercial bot farms, it is a constant battle. The ethical issues and implications of using bad bots are more than evident. Legal regulation, however, is still lacking, with bot activity often falling into the gray area.
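Two of the tactics mentioned above, user-agent inspection and traffic-pattern analysis, can be sketched in a few lines. The keyword list and request threshold below are illustrative assumptions, not a real rule set, and a production detector would combine many more signals:

```python
from collections import defaultdict

# Illustrative assumptions: UA substrings often seen in unidentified
# automation, and a per-IP request cap for one monitoring window.
SUSPICIOUS_UA_KEYWORDS = ("curl", "python-requests", "scrapy")
MAX_REQUESTS_PER_WINDOW = 120

request_counts = defaultdict(int)  # ip -> requests seen in current window

def looks_like_bad_bot(ip: str, user_agent: str) -> bool:
    """Flag a request using two simple heuristics: UA string and volume.

    A real system would reset counts per time window and add behavioral
    analysis, IP reputation, and challenge-based checks on top.
    """
    request_counts[ip] += 1
    ua = user_agent.lower()
    if any(keyword in ua for keyword in SUSPICIOUS_UA_KEYWORDS):
        return True
    return request_counts[ip] > MAX_REQUESTS_PER_WINDOW

print(looks_like_bad_bot("203.0.113.7", "Mozilla/5.0 (Windows NT 10.0)"))
print(looks_like_bad_bot("203.0.113.8", "python-requests/2.31.0"))
```

Note that sophisticated bots spoof browser user agents, which is why volume and behavioral signals matter more than the UA string alone.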

In 2019, California passed its Bolstering Online Transparency Act (the B.O.T. Act), which mandates clear disclosure and transparency for using bots, meaning that bots are not allowed to hide their identities. The B.O.T. Act primarily targets automated software that aims to influence purchasing and voting behavior. However, at least theoretically, it can also address the bot-related challenge of disinformation, fake news, and artificially inflated social media metrics.

In the EU, areas such as AI-assisted deepfakes and disinformation will hopefully be tackled by the EU AI Act. However, as of this writing, it is not yet in force.

Even though legal regulation is still obscure, there are explicit legal and financial risks that businesses must consider before using bots—even if they think their bots are “good”. For example, a chatbot can give bad advice, resulting in reputational damage and legal liability.

Even more extreme situations can arise from data mismanagement. In 2020, Ticketmaster UK was fined £1.25 million over a data leak that occurred through a security breach via its chatbot.

Summary

Knowing good bots from bad ones is essential for any business. But the world is rarely just black or white. Some bots are not inherently good or bad; what pushes them to one side or the other is their intent, behavior, and impact. If you make sure the bot you are using has a reasonable and fair intent, respects website rules, and does not cause harm, you will most probably find yourself on the good side.

Nevertheless, examples show that even the most innocent bots can cause trouble sometimes, ranging from reputational damage to legal and financial liability due to data mishandling. Therefore, it is vital to know the risks before implementing business bots, no matter whether they are simple chatbots or complex web intelligence collection tools.


This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Julius Cerniauskas, CEO, Oxylabs.
