This video belongs to the openHPI course clean-IT: Towards Sustainable Digital Technologies.
About this video
Today, GPU computing is widely used across many application domains, with machine learning being a particularly prevalent use case. There, large amounts of energy are spent on millions of GPU-hours for training deep neural networks.
In preliminary experiments, we have identified two straightforward strategies that can reduce energy consumption across various GPU workloads in scenarios where a slight increase in processing time is acceptable: First, balanced GPU hardware can yield higher energy efficiency than high-end GPU models. Second, the efficiency of high-end hardware can be improved by slightly decreasing the clock speed of the GPUs, which causes only mildly increased processing times.
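As a hedged illustration of the second strategy, the minimal sketch below caps the GPU clock through NVML's Python bindings. The package name (nvidia-ml-py, imported as pynvml), the use of the first GPU, and the 1200 MHz target are assumptions for illustration, not values taken from the video; locking clocks typically also requires administrative privileges.

```python
# Minimal sketch (assumptions noted above): lower the GPU clock to trade a
# small amount of runtime for reduced energy consumption, via pynvml.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

# Current power draw in milliwatts, as a rough before/after reference point.
print("Power draw:", pynvml.nvmlDeviceGetPowerUsage(handle), "mW")

# Lock the graphics clock to a reduced value (MHz). A real experiment would
# pick a supported clock, e.g. via nvmlDeviceGetSupportedGraphicsClocks.
pynvml.nvmlDeviceSetGpuLockedClocks(handle, 1200, 1200)

# ... run and time the GPU workload here, measuring energy externally ...

# Restore the driver's default clock management afterwards.
pynvml.nvmlDeviceResetGpuLockedClocks(handle)
pynvml.nvmlShutdown()
```

Locking the minimum and maximum to the same value keeps the clock fixed during the measurement; resetting afterwards returns control to the driver's normal boost behaviour.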
Max Plauth is a PhD candidate in the Operating Systems and Middleware Group at the Hasso Plattner Institute. In 2017, he was awarded the IBM Ph.D. Fellowship Award for his work on integrating hardware accelerators in virtualized environments. Recently, he has focused his research efforts on energy-aware computing and heterogeneous systems.