Today, machine learning pervades nearly every aspect of our lives. While machine learning powers a wide variety of AI applications, it is well understood that its success rests on well-designed languages and frameworks (such as TensorFlow and PyTorch), architectural innovations (multi-core CPUs, GPUs, FPGAs, and now TPUs), optimized libraries for specific operations (such as BLAS, MKL, and NumPy), and "plain old" good system design (such as supporting runtimes).
At the heart of successful machine learning frameworks (TensorFlow, PyTorch, TVM, Diesel, etc.) lie compilers that act as the bridge between (general-purpose and domain-specific) languages and (general-purpose and domain-specific) efficient architectures.
Presently, many organizations (including large ones such as Google, Facebook, and Apple, as well as quite a few startups) are designing efficient compilers for specialized hardware using mathematical models such as polyhedral compilation techniques. Mainstream compiler infrastructures like LLVM are also adding explicit support for machine learning (for example, the Multi-Level Intermediate Representation, MLIR).
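To give a flavor of the loop restructuring that polyhedral techniques automate, below is a hand-tiled matrix multiply, a minimal NumPy sketch; the function name `matmul_tiled` and the tile size are illustrative choices of ours, not from any course system. Polyhedral compilers derive tilings like this automatically from the iteration domain of the loop nest.

```python
import numpy as np

def matmul_tiled(A, B, tile=32):
    """Tiled matrix multiply: blocking the loops improves cache locality,
    the kind of transformation polyhedral compilers perform automatically."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m), dtype=A.dtype)
    # Iterate over tiles; NumPy slicing handles ragged edge tiles.
    for ii in range(0, n, tile):
        for jj in range(0, m, tile):
            for kk in range(0, k, tile):
                C[ii:ii+tile, jj:jj+tile] += (
                    A[ii:ii+tile, kk:kk+tile] @ B[kk:kk+tile, jj:jj+tile]
                )
    return C
```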
In this 1-credit course, we will focus on understanding these issues by studying some of these systems. The primary focus will be on TensorFlow-related systems, but other systems will be touched upon as well.
The following are some of the areas that we plan to study:

- Introduction to ML systems
- JIT and AOT compilation systems
- TensorFlow and XLA in depth
- A survey of systems: XLA, TVM, Glow, nGraph, etc.
- MLIR in depth
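As a small taste of the JIT compilation we will study, the sketch below (a minimal example assuming TensorFlow 2.x with XLA support; the function `dense_relu_sum` is our own illustrative name) uses the `jit_compile` flag of `tf.function` to ask TensorFlow to compile a computation with XLA:

```python
import tensorflow as tf

# jit_compile=True asks TensorFlow to JIT-compile this function with XLA,
# which can fuse the matmul, bias add, ReLU, and reduction into fewer kernels.
@tf.function(jit_compile=True)
def dense_relu_sum(x, w, b):
    return tf.reduce_sum(tf.nn.relu(tf.matmul(x, w) + b))

x = tf.random.normal([8, 16])
w = tf.random.normal([16, 4])
b = tf.zeros([4])
print(dense_relu_sum(x, w, b))  # runs the XLA-compiled computation
```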
The course takes its name from the C4ML (Compilers for Machine Learning) workshop. (With thanks to Prof. Albert Cohen for suggesting it.)
| Activity | Weight |
|---|---|
| Class Participation | 10% |
| Class Presentations | 10% + 20% + 25% |
| Assignments | 35% |