In the ever-evolving landscape of artificial intelligence (AI), the trends point toward an insatiable appetite for larger, more powerful models. Large language models (LLMs) have become the torchbearers of this trend and epitomize the relentless quest for more data, more parameters, and inevitably, more computational power.
But this progress comes at a cost, one not adequately accounted for by Silicon Valley or its patrons — a carbon cost.
The equation is straightforward yet alarming: Larger models mean more parameters, which necessitate more computation. That computation, in turn, translates to higher energy consumption and a more substantial carbon footprint. While the benefits of AI, which range from predicting weather disasters to aiding in cancer research, are clear, the environmental viability of less critical applications, such as generating AI-based superhero selfies, is more open to question.
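The chain from computation to carbon can be sketched with back-of-envelope arithmetic: emissions equal energy consumed multiplied by the carbon intensity of the grid that supplied it. The figures below are purely illustrative placeholders, not measurements of any real model.

```python
# Back-of-envelope estimate of training emissions.
# All numbers below are illustrative, not measured values.

def training_emissions_tonnes(energy_kwh: float, grid_kg_co2_per_kwh: float) -> float:
    """Emissions (tonnes of CO2) = energy used x carbon intensity of the grid."""
    return energy_kwh * grid_kg_co2_per_kwh / 1000.0  # convert kg to tonnes

# Hypothetical large model: 1 GWh of training energy on a 0.4 kg CO2/kWh grid.
print(training_emissions_tonnes(1_000_000, 0.4))   # 400.0 tonnes

# The same run on a much cleaner grid (0.05 kg CO2/kWh):
print(training_emissions_tonnes(1_000_000, 0.05))  # 50.0 tonnes
```

The point of the comparison is that the same training run can carry an order-of-magnitude different footprint depending solely on where, and when, the electricity comes from.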
This predicament brings us to the heart of a significant challenge in modern computing: Moore’s Law. For decades, this observation has anticipated exponential growth in computing power. That growth, however, has not been matched by a proportional increase in energy efficiency. Indeed, the environmental impact of computing, especially in the field of AI, is becoming increasingly untenable.
These ecological costs are profound. Data centers, the backbone of AI computations, are notorious for their high energy demands. The carbon emissions from these centers, which often rely on fossil fuels, contribute significantly to global warming and stand at odds with the growing global emphasis on sustainability and environmental responsibility.
In the era of net zero, corporate environmental responsibility is under intense scrutiny, and numerous companies are quick to trumpet their commitment to energy efficiency. Often they acquire carbon credits to balance their carbon footprint, even as critics dismiss such measures as mere accounting maneuvers rather than a substantive change in operational behavior.
In contrast, Microsoft and other select industry leaders are pioneering a more proactive approach. These firms are optimizing their energy consumption by conducting energy-intensive processes during off-peak hours and synchronizing their operations with periods of maximum solar output and other times of higher renewable energy availability. This strategy, known as “time-shifting,” not only mitigates their environmental impact but also underscores a tangible shift toward sustainability.
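The core of time-shifting can be sketched in a few lines: a flexible batch job is deferred to the hour with the lowest forecast grid carbon intensity. The forecast values here are hypothetical; a real system would pull them from a grid-data provider rather than hard-code them.

```python
# Minimal sketch of "time-shifting": defer a flexible batch job to the
# hour with the lowest forecast grid carbon intensity.
# The forecast values are hypothetical placeholders.

def pick_greenest_hour(forecast_kg_co2_per_kwh: list[float]) -> int:
    """Return the hour offset with the lowest forecast carbon intensity."""
    return min(range(len(forecast_kg_co2_per_kwh)),
               key=forecast_kg_co2_per_kwh.__getitem__)

# Hypothetical six-hour forecast: solar output peaks mid-window.
forecast = [0.45, 0.40, 0.22, 0.18, 0.30, 0.42]
print(pick_greenest_hour(forecast))  # 3 -> run the job three hours from now
```

In practice the scheduler must also respect deadlines and capacity constraints, but the principle is exactly this: move the work to the energy, rather than the energy to the work.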
Enter the realm of environmental, social, and governance (ESG) regulation, a framework that encourages companies to operate in a socially responsible way and consider their environmental costs. ESG scores, which rate companies based on their adherence to these principles, are becoming a crucial part of investment decisions. AI development, with its high energy demands, faces a unique challenge in this regard. Companies involved in AI research and development must now reconcile their pursuit of technical innovation with the necessity of maintaining a favorable ESG score. But have the ESG vendors caught on to this hot problem?
In response to these challenges, concepts such as carbon-aware computing, green AI, and eco AI are gaining traction. These initiatives advocate for more energy-efficient algorithms, the use of renewable energy sources, and more environmentally conscious approaches to AI development. This shift is not just a moral imperative but also a practical necessity, as investors and consumers increasingly favor companies that demonstrate a commitment to sustainability.
The AI community is at a crossroads. On one hand, the pursuit of larger and more complex models is propelling us toward new frontiers in technology and science. On the other, we cannot ignore the associated environmental costs. The challenge, therefore, is to strike a balance — to continue the pursuit of groundbreaking AI innovations while minimizing their ecological toll.
This balancing act is not just the responsibility of AI researchers and developers. It extends to policymakers, investors, and end-users. Policy interventions that encourage the use of renewable energy sources in data centers, investment in green AI start-ups, and a conscious effort by users to favor environmentally friendly AI applications can collectively make a positive difference.
The journey of AI is a story of technological achievement, but it must also be one of environmental responsibility. As we continue to push the boundaries of what AI can accomplish, we must also innovate in how we power these advancements. The future of AI should not just be smart; it must also be sustainable. Only then can we ensure that the benefits of AI are enjoyed not just by current generations but by the many generations to come.
All posts are the opinion of the author. As such, they should not be construed as investment advice, nor do the opinions expressed necessarily reflect the views of CFA Institute or the author’s employer.
Image credit: ©Getty Images / Jordan Lye