AI and Machine Learning will not save the planet (yet)
AI isn't developed enough to save the environment
Artificial General Intelligence, when it exists, will be able to perform many tasks better than humans. For now, the machine learning systems and generative AI solutions on the market are a stopgap to ease the cognitive load on engineers until machines that think like people exist.
Generative AI currently dominates headlines, but its backbone, neural networks, has been in use for decades. These Machine Learning (ML) systems have historically acted as cruise control for large systems that would be difficult to maintain constantly by hand. The latest algorithms also respond proactively to errors and threats, alerting teams and logging unusual activity. These systems have developed further still, and can even predict certain outcomes based on previously observed patterns.
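The "cruise control" behavior described above can be made concrete with a minimal sketch. The function, threshold, and readings below are illustrative assumptions, not taken from any particular monitoring product: a simple statistical check flags data points that sit far outside the recent norm, the kind of signal such a system would turn into an alert.

```python
import statistics

def detect_anomalies(values, threshold=2.5):
    """Flag indices of points more than `threshold` standard
    deviations from the mean of the series."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return []
    return [i for i, v in enumerate(values)
            if abs(v - mean) / stdev > threshold]

# Steady CPU readings with one spike: only the spike is flagged.
readings = [41, 42, 40, 43, 41, 42, 95, 41, 40, 42]
print(detect_anomalies(readings))  # → [6]
```

Real systems use far more sophisticated models, but the principle is the same: learn what "normal" looks like, then surface deviations for humans to investigate.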
This ability to learn and respond is being adapted to all kinds of technology, and one persistent trend is the use of AI tools in envirotech. Whether it's enabling new technologies with vast data processing capabilities, or intelligently adjusting the inputs of existing systems to improve their efficiency, AI at this stage of development is so open-ended it could theoretically be applied to any task.
AI’s undeniable strengths
GenAI isn’t inherently energy intensive. A model or neural network is no more energy-hungry than any other piece of software while it is running; it is the training and development of these AI tools that generates the majority of the energy costs. The justification for this consumption is that the future benefits of the technology are worth the cost in energy and resources.
Some reports suggest many AI applications are ‘solutions in search of a problem’, with developers using vast amounts of energy to build tools that could produce dubious energy savings at best. One of the genuine benefits of machine learning is its ability to read through large amounts of data and summarize insights for humans to act on. Reporting is a laborious and frequently manual process; time saved on reporting can be shifted to actioning machine learning insights and actively addressing business-related emissions.
Businesses are under increasing pressure to start reporting on Scope 3 emissions, which are the hardest to measure and the biggest source of emissions for most modern companies. Capturing and analyzing these disparate data sources would be a smart use of AI, but would still ultimately require regular human guidance. Monitoring solutions already exist on the market to reduce the demand on engineers, so taking this a step further with AI is an unnecessary and potentially damaging innovation.
Replacing the engineer with an AI agent reduces human labor, but it removes one complex interface only to place equally complex programming in front of it. That isn’t to say innovation should be discouraged; it’s a noble aim. But don’t be sold the fairy tale that this will happen without hiccups. Some engineers will eventually be replaced by this technology, but the industry should approach it carefully.
Consider self-driving cars. They're here, and they're doing better than the average human driver, but in some edge cases they can be dangerous. The difference is that this danger is easy to see, whereas the potential risks of AI are not.
Today’s ‘clever’ machines are like naive humans
AI agents at the present stage of development are comparable to human employees: they need training and supervision, and will gradually go out of date unless retrained from time to time. Similarly, as has been observed with ChatGPT, models can degrade over time. The mechanics driving this degradation are not clear, but these systems are delicately calibrated, and that calibration is not a permanent state. The more flexible the model, the more likely it is to misfire and function suboptimally. This can manifest as data or concept drift, where a model invalidates itself over time. This is one of many inherent issues with attaching probabilistic models to deterministic tools.
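Data drift can be made concrete with a small illustrative check. One common heuristic, the Population Stability Index (PSI), compares the distribution a model was trained on against the distribution of live inputs; the data, bin count, and thresholds below are assumptions for the sketch, not figures from this article (a PSI above roughly 0.2 is a frequently cited rule of thumb for significant drift).

```python
import math

def psi(expected, actual, bins=5):
    """Population Stability Index between a baseline ('expected')
    distribution and a live ('actual') one. Higher = more drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins

    def fractions(data):
        counts = [0] * bins
        for x in data:
            i = min(int((x - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        # Tiny constant avoids log-of-zero on empty bins.
        return [(c + 1e-6) / len(data) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [float(i % 10) for i in range(100)]        # stable baseline data
live = [float(i % 10) + 4.0 for i in range(100)]   # shifted live data
print(psi(train, train) < 0.1, psi(train, live) > 0.2)  # → True True
```

Checks like this are how teams detect that a model's world has moved on, but they only raise the flag: deciding whether and how to retrain still requires the human supervision described above.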
A concerning area of development is the use of natural language inputs, which aim to make complex systems accessible to less technical employees and decision makers so businesses can save on hiring engineers. Natural language outputs are ideal for translating the expert, subject-specific output of monitoring systems into a form accessible to those who are less data literate. Despite this strength, even summarizations can be subject to hallucinations, where data is fabricated. This issue persists in LLMs and could create costly errors where AI is used to summarize mission-critical reports.
The risk is that we create AI overlays for systems that require deterministic inputs. Trying to lower the barrier to entry for complex systems is admirable, but these systems require precision. AI agents cannot explain their reasoning, or truly understand a natural language input and work out the real request the way a human can. Moreover, this adds another layer of energy-consuming software to a tech stack for minimal gain.
We can’t leave it all to AI
The rush to ‘AI everything’ is producing a tremendous amount of wasted energy. With 14,000 AI startups currently in existence, how many will actually produce tools that benefit humanity? While AI can improve the efficiency of a data center by managing resources, that rarely manifests as a meaningful energy saving: in most cases the freed capacity is simply channeled into another application, consuming any saved resource headroom, plus the cost of yet more AI-powered tools.
Can AI help achieve sustainability goals? Probably, but most advocates fall down at the ‘how’ part of that question, in some cases suggesting that AI itself will come up with new technologies. Climate change is now an existential threat with so many variables that it stretches the comprehension of the human mind. Rather than tackling the problem directly, technophiles defer responsibility to AI in the hope it will provide a solution at some point in the future. The future is unknown, and climate change is happening now. Banking on AI to save us is simply crossing our fingers and hoping for the best, dressed up as neo-futurism.
This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Roman Khavronenko, Co-Founder of VictoriaMetrics.