
Balancing Performance with Sustainability

The acceleration of artificial intelligence (AI) and machine learning (ML) usage creates a conundrum for organizations: how to scale their AI and ML capabilities while honoring their energy-consumption constraints and sustainability commitments. While Alexander the Great solved his problem with a single slice of his sword, infrastructure and operations leaders have no single tool that cuts through their Gordian knot; they must combine multiple tools and strategies.

Optimizing Data Management

  • Data Pruning: Reducing the amount of redundant data through pruning can significantly decrease the computational load (see the deduplication sketch after this list).
  • Data Curation: Errors in the training sets, such as incorrectly tagged unstructured data, lead to errors in the model output and lost precision. The training may then need larger data sets and more GPU time to achieve the required precision. Background work to curate the data set and eliminate these errors improves efficiency across multiple training runs.
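
A minimal sketch of the pruning idea, assuming a simple exact-duplicate filter over text records; the record format and function name are hypothetical placeholders, and real pipelines often go further with near-duplicate detection (e.g., MinHash):

```python
# Minimal sketch: prune exact duplicates from a text training set.
# The {"text": ...} record format is a hypothetical placeholder.
import hashlib

def dedupe(records):
    """Keep only the first record seen for each normalized-text hash."""
    seen = set()
    kept = []
    for rec in records:
        key = hashlib.sha256(rec["text"].strip().lower().encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            kept.append(rec)
    return kept

if __name__ == "__main__":
    records = [{"text": "Hello World"}, {"text": "hello world "}, {"text": "Other"}]
    print(f"kept {len(dedupe(records))} of {len(records)} records")  # kept 2 of 3
```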

Leveraging Energy-Efficient Hardware

  • Distributed Computing: The traditional CPU-centric systems architecture relied upon moving all the data to one central, general-purpose processor (typically an x86 processor). With the massive increase in data generation (which shows no signs of slowing), the CPU-centric architecture is fraught with inefficiency: excessive energy is spent moving data, bottlenecks leave processors starved for data, and the CPUs themselves become bottlenecks. Transitioning to a data-centric architecture that utilizes distributed, domain-specific processors boosts efficiency. GPUs, TPUs, SmartNICs, and computational storage drives (CSDs) are examples of domain-specific processors. Deploying purpose-built processors and system-on-chip (SoC) components for specific functions such as packet inspection, data compression, and graphics processing is one tactic for improved power efficiency. Arm-based processors, known for their low power consumption and high efficiency, are often integrated into these specialty SoCs and can significantly cut energy usage.
  • Efficient Data Storage: Carefully matching data storage systems to their workloads helps optimize the balance among performance, power, and cost. Meta and Microsoft have both conducted extensive research on their infrastructures to determine the right mix of HDD, SSD, and memory. New technologies such as CXL and CSDs shift that balance.
  • AI-Centric GPUs: GPUs are no longer generically optimized for graphics; there are now classes of GPUs specialized for graphics, scientific simulations, AI and deep learning, mobile, and crypto mining. Employing GPUs designed for AI tasks can optimize performance while maintaining energy efficiency, since they are tailored to the large-scale data processing required to train complex AI models (see the mixed-precision sketch after this list).
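
As one concrete way to exploit AI-oriented GPU features, the sketch below uses PyTorch's automatic mixed precision (AMP) to run the forward pass in float16 on tensor-core-equipped GPUs, a common technique for raising throughput per watt. The tiny model and random data are placeholders for illustration, not a recommended training setup:

```python
# Hedged sketch: mixed-precision training with PyTorch AMP.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"

model = nn.Linear(512, 10).to(device)           # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)
loss_fn = nn.CrossEntropyLoss()

for step in range(10):
    x = torch.randn(64, 512, device=device)     # placeholder batch
    y = torch.randint(0, 10, (64,), device=device)
    optimizer.zero_grad()
    # Forward pass runs in float16 where safe; master weights stay float32.
    with torch.autocast(device_type="cuda", enabled=use_amp):
        loss = loss_fn(model(x), y)
    scaler.scale(loss).backward()                # scale to avoid fp16 underflow
    scaler.step(optimizer)
    scaler.update()
```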

Virtualization and Cloud Solutions

  • Cloud-centric computing: This is not to be confused with moving all workloads to a cloud service provider (CSP) rather than maintaining a combination of on-premises hardware and CSP hosting. Rather, cloud-style virtualized and composable solutions enhance resource utilization and scalability by eliminating resource silos. Pooling and sharing compute, storage, and memory resources across workloads increases hardware utilization and shrinks the carbon footprint (a back-of-envelope illustration follows this list).
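
Here is that illustration, with purely hypothetical demand figures: because per-team peaks rarely coincide, a shared pool can be provisioned for the coincident peak rather than the sum of individual peaks.

```python
# Illustrative numbers only, not measured data.
team_peaks = [14, 9, 22, 6]   # assumed per-team peak GPU demand in siloed setups
siloed = sum(team_peaks)      # each silo must be provisioned for its own peak
pooled = 38                   # assumed coincident peak across the shared pool

print(f"GPUs provisioned: siloed={siloed}, pooled={pooled}")
print(f"hardware (and idle-power) reduction: {1 - pooled / siloed:.0%}")
```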

Challenges in Deploying Game-Changing Technologies

Deploying new technologies and changing architectures can be challenging. Even though the benefits can be tremendous, the upfront investment in learning new technologies, working through perceived and real risks, securing budgets, and navigating deployments are all potential blockers. For example:

Software Impacts

  • Application Integrations: Deploying new hardware technologies can deliver serious power- and performance-efficiency improvements, but rewriting mission-critical applications to realize those improvements can be a showstopper. Hardware and solutions vendors must make deployment as seamless as possible and minimize the integration work required of infrastructure and operations teams.

Cost Considerations

  • Initial Investment: The upfront cost of transitioning to new hardware and optimizing data centers for energy efficiency can be substantial. This includes not just the purchase of new equipment and potential downtime during the transition, but also the cost, in dollars and in the IT team's precious time, of learning and adapting to the new technologies. While the long-term return on investment (ROI) from reduced energy costs and improved efficiency is compelling, the immediate financial outlay can be a barrier for many organizations (a simple payback calculation follows).
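
That payback calculation can make the trade-off concrete; every figure below is a hypothetical placeholder to be replaced with real quotes, tariffs, and migration estimates:

```python
# Hypothetical payback-period estimate; substitute your own figures.
upfront_cost = 250_000.0            # new hardware, downtime, staff ramp-up (USD, assumed)
old_annual_energy_cost = 120_000.0  # assumed
new_annual_energy_cost = 70_000.0   # assumed

annual_savings = old_annual_energy_cost - new_annual_energy_cost
print(f"annual savings: ${annual_savings:,.0f}")
print(f"simple payback period: {upfront_cost / annual_savings:.1f} years")
```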

Role of Data Complexity in Driving Energy Consumption

The sheer volume of data required to train sophisticated AI models is a primary driver of energy consumption. More complex models necessitate larger datasets, which in turn require more computational power and energy.

As data complexity increases, the computational requirements for processing and analyzing this data also rise. This includes the need for more advanced algorithms and greater computational resources.
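
To make the scaling concrete, the sketch below applies the widely used approximation of roughly 6 FLOPs per parameter per training token for dense transformer training. Every input is an assumption chosen for illustration, and the result covers accelerator energy only (no cooling or other facility overhead):

```python
# Rough training-energy scaling estimate; all inputs are assumptions.
params = 7e9                       # model size: 7B parameters (assumed)
tokens = 1e12                      # training data: 1T tokens (assumed)
train_flops = 6 * params * tokens  # common approximation for dense transformers

sustained_flops_per_gpu = 300e12   # assumed sustained FLOP/s per accelerator
gpu_power_watts = 700              # assumed per-accelerator board power

gpu_seconds = train_flops / sustained_flops_per_gpu
energy_kwh = gpu_seconds * gpu_power_watts / 3.6e6  # joules -> kWh
print(f"{train_flops:.2e} FLOPs -> ~{energy_kwh:,.0f} kWh of accelerator energy")
```

Doubling either the parameter count or the token count doubles the estimate, which is why data pruning and curation pay off directly in energy.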

Mitigating Energy Consumption Through Near-Data Processing

Near-data processing technologies, such as computational storage, help data center and AI operations teams get more “work per watt” out of their infrastructure. Moving massive amounts of data between devices, systems, and sites consumes a large portion of the total energy used in data centers. Shifting from “moving the data to the processor” to “moving the processing to the data” reduces the energy spent simply shuttling data around.

This is not to say that the entirety of a complex computational process can be moved into a storage device. However, computational storage solutions can significantly reduce the energy required for data transfer by (1) enabling a portion of the work to be done in the drive so that less data needs to leave the drive, (2) increasing the effective capacity of the drives to reduce the thrash of local caches and hence the fetches of data from remote storage, (3) using specialized processing engines that operate more power-efficiently than general-purpose cores, and (4) improving the performance of local storage to reduce wasted CPU cycles.
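
A toy comparison of point (1), with assumed numbers: when a filter is pushed into the drive and only matching data crosses the storage interface, interface traffic (and the energy spent moving it) drops roughly in proportion to the filter's selectivity.

```python
# Toy comparison, illustrative numbers only.
dataset_bytes = 10 * 2**40         # 10 TiB scanned (assumed)
selectivity = 0.02                 # 2% of the data matches the filter (assumed)

host_filter_bytes = dataset_bytes               # ship everything, filter on the CPU
csd_filter_bytes = dataset_bytes * selectivity  # filter in-drive, ship only matches

print(f"bytes over the interface: host = {host_filter_bytes:.2e}, "
      f"in-drive = {csd_filter_bytes:.2e}")
print(f"interface traffic reduced by {1 - csd_filter_bytes / host_filter_bytes:.0%}")
```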

Summing it Up

In conclusion, strategically planning AI and ML deployments to balance performance with sustainability requires a multi-faceted approach: optimizing data management, leveraging energy-efficient hardware, adopting green data center practices, implementing advanced AI algorithms, and managing workloads intelligently.

The integration of computational storage solutions and Arm-based computing into existing IT infrastructures presents challenges but also offers significant opportunities for enhancing efficiency and reducing energy consumption. Addressing data complexity through localized processing and data compression can further mitigate energy consumption and drive sustainable AI and ML practices.
