Introduction

The carbon footprint of the global Information and Communication Technologies (ICT) sector continues to grow as digital infrastructure and computational demand expand. This motivates a central research interest of mine: how to make computing accessible to everyone while operating at a fraction of today’s emissions. The rapid rise of Artificial Intelligence (AI) has intensified this challenge, as modern machine learning (ML) systems require substantial compute, memory bandwidth, and power. Reducing the energy intensity of ML workloads has therefore become an important direction for sustainable computing.

Among the many possible approaches, the combination of Hyperdimensional Computing (HDC) and Compute-in-Memory (CiM) offers a particularly promising path, pairing HDC’s algorithmic robustness with CiM’s potential for low-power analog computation. DimCiM examines this intersection and evaluates how these two paradigms might jointly enable more energy-efficient machine learning.
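As a rough illustration of the robustness property HDC relies on, the Python sketch below (not drawn from the DimCiM codebase; the dimensionality and noise rate are arbitrary choices) shows that a high-dimensional bipolar hypervector remains easily identifiable even after a substantial fraction of its components are corrupted. This tolerance to component-level errors is what makes noisy, low-precision analog CiM hardware a plausible substrate for HDC.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality; HDC typically uses thousands of dimensions

# Two random bipolar hypervectors are nearly orthogonal in high dimensions.
a = rng.choice([-1, 1], size=D)
b = rng.choice([-1, 1], size=D)

def cosine(x, y):
    return float(x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))

# Flip 20% of a's components to emulate analog noise or low-precision readout.
noisy_a = a.copy()
flip = rng.random(D) < 0.20
noisy_a[flip] *= -1

print(f"cos(a, noisy_a) = {cosine(a, noisy_a):.2f}")  # ~0.60: still clearly recognizable as 'a'
print(f"cos(a, b)       = {cosine(a, b):.2f}")        # ~0.00: an unrelated vector
```

Because classification in HDC reduces to this kind of similarity comparison, the corrupted vector still matches its original far more strongly than any unrelated one, hinting at how accuracy can survive analog imprecision.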

This project was developed during the Fall 2025 offering of CS349H: Software Techniques for Emerging Hardware Platforms, a Stanford University research seminar examining the opportunities and challenges of novel hardware substrates and the software techniques required to make them practical at scale. Over the quarter, the work progressed through an annotated bibliography, a focused research survey, and an initial research project culminating in a quantitative study of HDC-on-CiM precision and noise tolerance.

This website documents the current state of the DimCiM initiative, including the motivation, background research, and experimental results obtained during the seminar. It also outlines directions for continued work beyond the course, with the goal of evolving DimCiM into a broader investigation of energy-efficient machine learning on emerging analog hardware.