Vineeth N Balasubramanian


Our research group, Lab 1055, works at the intersection of the theory and application of machine learning, with a focus on applications in computer vision. With a strong interest in mathematical fundamentals and a passion for real-world application, our group aims to be at the forefront of the field by carrying out impactful research in deep learning, machine learning and computer vision, guided by application contexts drawn from real-world use.

Keywords: Deep Learning, Machine Learning, Computer Vision, Explainable AI

Our problems of interest in recent times have focused on learning reliable and robust AI/ML systems in ever-evolving environments. Some of the problems we tackle include:

  • Explainable and robust machine/deep learning: This includes work on explainable AI (largely ante-hoc, inherently interpretable methods), the use of causality in machine learning, adversarial and attributional robustness, disentanglement of latent variables, and compositionality in deep learning models

  • Organic lifelong learning: This direction focuses on learning continuously in evolving environments with whatever data and labels are at hand. It includes settings such as continual learning, zero-shot learning, few-shot learning, active learning, domain adaptation, domain generalization and, more importantly, the amalgamation of these settings that can organically arise in real-world deployments.

  • Multimodal vision-language models: Our efforts here focus on the recent emergence of multimodal vision-language models that parse images/videos and text to perceive, represent, and communicate with users effectively.

From an application standpoint, problems of our recent interest include:

  • Agriculture: Plant phenotyping using computer vision

  • Drone-based vision: Detection of objects in drone imagery, including low-resolution imagery

  • Autonomous navigation: Adding levels of autonomy to driving vehicles in developing countries, with a focus on India

  • Human behavior understanding: Detection of emotions, body poses, gestures, etc. from images and videos

This interview, featured in the IEEE Signal Processing Newsletter (Feb 2021 issue), also describes our ongoing research efforts.

Ongoing Projects (Selected)

  • Exploring Connections between Adversarial Robustness and Explainability (Google Research Scholar Award, Microsoft Research Postdoctoral Research Grant)

  • Learning with Limited Labeled Data: Solving the Next Generation of Machine Learning Problems (DST-JSPS Indo-Japan Collaborative Research program)

  • Learning with Weak Supervision for Autonomous Vehicles (Funded by Intel and SERB IMPRINT program)

  • Explainable Deep Learning (Funded by Adobe)

  • Deep Generative Models: Going Beyond Supervised Learning (Funded by Intel)

  • Towards Next-Generation Deep Learning: Faster, Smaller, Easier (Funded by DST/SERB ICPS program)

Completed Projects (Selected)

  • Object Detection in Drone Images (Funded by MeitY, MoE)

  • Explainable Machine Learning (Funded by MHRD, DST and Honeywell through the MHRD UAY program)

  • Deep Learning for Agriculture (A DST-JST SICORP Collaborative Project with Univ of Tokyo, IIT-B, IIIT-H, and PJTSAU)

  • Non-convex Optimization and Deep Learning (Funded by Intel PhD Fellowship and SERB MATRICS program)

  • Object Detection in Unconstrained Settings and Low-resolution Thermal Images (Funded by DRDO and IBM)

My past research focused extensively on the use of machine learning and computer vision in assistive technology applications. Please see this link for past/completed projects.


We are grateful to the following organizations whose support sustains our research.

Funding Organizations