Wasteful AI ecosystems
A race to introduce AI capabilities risks overloading an already pressured infrastructure, negatively impacting companies' financials, and damaging users' trust
The current race to offer AI functionalities is having a tremendous systemic impact through the proliferation of AI agents and the evolution of APIs to support AI models. This will allow an interconnection of AI-powered systems that can manage specific problems and orchestrate different agents.
One can see this as a more powerful version of today's non-AI API[1] based ecosystems, which already manage specific problems and orchestrate different applications interacting with each other.
However, any human activity, particularly in the tech industry, produces waste. And AI is not immune to that.
One example of waste in the tech world is the cryptocurrency phase, which led to a surge in electricity consumption, mostly to fuel scams and money laundering operations.
All the electricity spent mining useless coins, and all the traffic generated by that data, was a waste for society as a whole.
AI is not even remotely similar to cryptocurrency. It has proven to have incredible potential to take care of tedious tasks or to enhance people's capabilities when exploring certain topics. But it is undeniable that with the increase in popularity came a hype that is currently causing many people and organizations to sprinkle AI functionalities everywhere, hoping to see the advertised 10X performance improvements (probably the lowest multiplier advertised by most companies).
Some AI costs are going down, but the whole market is growing and propagating its effects. Improvements in algorithmic efficiency appear not to be the predominant factor in driving down costs (source: epoch.ai). Hopefully this will change, but for now big tech companies are desperate for electricity to run more and more data centres to feed the growth that the introduction of AI claims to bring to customers.
I argue that the race will bring excessive usage of AI agents where a simple, well-thought-out and well-maintained non-agentic API could save time for users and data traffic for the company. This mirrors the early mobile period, when everything became an app even when it brought no value to end users and merely satisfied the need for a marketing stunt or enabled companies to harvest more data from users.
Eventually, I believe, a balance will be found, but I wonder (as a thought exercise, if nothing else) what the world will look like until that balance is reached.
This "abuse" of AI agents can be explained by the fact that we are in a world of tech companies competing to gain ground or upsell customers in:
saturated markets
monopolies
In these situations consumers of tech products (e.g. apps) will have little or no choice about whether these tools are present in the software they use. Users will be helpless because even if they flee one piece of software, the next option will most likely suffer from the same level of enshittification due to market dynamics. Imagine both iOS and Android forcefully introduce some AI feature that nobody likes: which OS will you use? You would need to chase down a custom Android build, but is that a reasonable choice for the average consumer?
Let’s see an example
Imagine you use an app to read news headlines, and you have selected a set of newspapers you want news from in your personalized feed. Let's imagine you even pay a subscription to this app to avoid ads.
One day, someone in the product team responsible for the app decides that instead users could ask an AI agent to fetch the list of news they are interested in. The team thinks this will give users more power and better access to information. Now users are stuck having to ask the agent for their list of articles. Every time.
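To make the comparison concrete, here is a minimal sketch of the two request shapes. Everything in it is hypothetical (the endpoints, the parameters, the response format); it only illustrates how a pre-computed feed lookup differs from an agent round trip that involves a prompt and model inference on every open of the app.

```python
import requests  # third-party HTTP client; endpoints below are invented for illustration

# Conventional, non-AI approach: one small request returns the
# pre-computed personalized feed the user already configured.
def get_feed_via_api(user_id: str) -> list[dict]:
    resp = requests.get(
        "https://api.example-news.com/v1/feed",   # hypothetical endpoint
        params={"user": user_id},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["articles"]  # a small JSON payload, likely served from a cache

# Agentic approach: the same result now requires a natural-language
# prompt and an LLM call on the provider's side, every single time.
def get_feed_via_agent(user_id: str) -> list[dict]:
    resp = requests.post(
        "https://api.example-news.com/v1/agent",  # hypothetical endpoint
        json={
            "user": user_id,
            "prompt": "Please list today's headlines from my selected newspapers",
        },
        timeout=30,  # inference is slower than a cached lookup
    )
    resp.raise_for_status()
    return resp.json()["articles"]
```

The first call can be answered with a database read or a cache hit; the second requires a model to run on every request, even though the user's preferences have not changed.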
Is this an unrealistic scenario? Maybe. But not far from what’s already happening (see “Meta's AI-everywhere push raises hackles”).
The investor race fuels the enshittification
Unfortunately, with the introduction of AI agents into existing processes acting as a powerful magnet for capital investment, the temptation, and the risk, of leaning too heavily on AI without first understanding whether there is a problem that needs solving is high. When companies are more focused on using a technology than on solving users' problems, the result is solutions that help nobody.
And that's waste. Waste in the electricity to run AI models and to transfer the data, since an AI agent arguably moves more data than a simple non-AI API request that extracts information using conventional approaches (e.g. reading a pre-calculated value stored in a database). Waste in the time of developers building and maintaining an overly complex system. Waste in the time of users who, every day they are forced to use a convoluted and over-engineered app, see their happiness eroded a bit.
Interestingly enough, OpenAI admits that even users typing "please" and "thank you" incurs unnecessary costs for them through extra network and infrastructure usage. Looking at what is already happening, we can predict that once AI is deployed across systems throughout the world, the systemic effects of using AI agents for everything will be significant. And so will the effect of any wasteful activity these systems cause.
Why is waste a risk for the introduction and development of AI applications? Because it can compromise users' trust and create resistance even when AI is correctly introduced. Advocating for correct usage of AI is in the best interest of whoever is fully invested in the AI wave that is taking over the tech world; the better we do at recognizing the actual problems that need AI, the steadier and more reliable the growth of the ecosystem will be.
Finding a balance will be necessary to create sustainable growth and benefits for everybody involved (companies AND users). Unfortunately, preventing this issue will most likely require fixing the economic and policy systems that allow these acts of enshittification to flourish and persist.
And that's not a coding problem, so it is not so easy to fix.
[1] "non-AI API" in this document indicates any API not powered by an AI agent or LLM of some sort.