
Sustainability in the Digital Age: Efficient AI Techniques in the LLM Era

Taught by Haojin Yang

Welcome to the "Sustainability in the Digital Age" series

In an era where digital technologies are reshaping industries and daily life, the environmental impact of AI systems has become a growing concern. This course explores efficient AI methodologies to address these challenges. From deep learning model compression to low-bit quantization and collaborative inference, we delve into techniques that enhance computational efficiency and reduce energy consumption. In Week 2, we focus on low-bit quantization specifically for large language models (LLMs), showcasing cutting-edge open-source tools and models. Join us to learn how to build sustainable AI systems while pushing the boundaries of innovation.
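To give a flavor of what low-bit quantization means in practice, here is a minimal sketch of symmetric 8-bit weight quantization. The function names and the specific scheme are illustrative assumptions for this description, not material from the course itself.

```python
# Minimal sketch of symmetric low-bit (int8) weight quantization.
# Illustrative only: function names and the quantization scheme are
# assumptions for this example, not taken from the course material.

def quantize_int8(weights):
    """Map float weights to integers in [-127, 127] with one symmetric scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the quantized values."""
    return [x * scale for x in q]

weights = [0.42, -1.27, 0.05, 0.89]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Each restored weight is within one quantization step of the original,
# while the stored values now fit in 8 bits instead of 32.
assert all(abs(w - r) <= scale for w, r in zip(weights, restored))
```

Storing weights in 8 bits instead of 32 cuts memory traffic roughly fourfold, which is one reason quantization reduces the energy cost of inference.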


This course is part of the Sustainability in the Digital Age series, a collaborative project between colleagues from Stanford University, SAP and the Hasso Plattner Institute.

May 27, 2025 – June 10, 2025
Language: English
Advanced

Course information

Is this course for me?

Prerequisites
  • Basic understanding of Machine Learning and Deep Learning principles
  • Proficiency in Python programming
  • Familiarity with neural networks is recommended
Knowledge

We found two preparatory course options that should provide suitable preparation for this course:

Time required: The course runs for two weeks with a total workload of approximately 6-10 hours.

All learning materials (video lectures and slides, additional reading materials and case studies, self-tests) are available from the start of the course. The final exam is activated at the end of the first week and remains open until the course ends, giving participants two weeks to complete the content and one week for the exam.

What you will learn

  • Efficient deep learning techniques with a focus on sustainability
  • Principles of model compression and low-bit quantization
  • Collaborative inference strategies to optimize resource usage
  • Application of low-bit quantization for large language models (LLMs)
  • Overview and practical use of open-source tools for efficient AI

Who this course is for

  • Students
  • Professionals
  • Lifelong learners

Enroll in this course

The course is free. Just create an account on openHPI and you can take the course!
Enroll now

Certificate requirements

  • Earn a Record of Achievement by scoring more than 50% of the maximum number of points from all graded assignments.
  • Earn a Confirmation of Participation by completing at least 50% of the course material.

To find out more, see the certificate guide.

This course is taught by

Haojin Yang

PD Dr. Haojin Yang is a senior researcher and leader of the multimedia and machine learning (MML) research group at the Hasso Plattner Institute (HPI). He received his habilitation (the qualification for a professorship) in 2019. His research focuses on efficient deep learning, model acceleration and compression, and AI agentic systems.