# Research

## Research Interests
My research focuses on developing robust and efficient machine learning methods with real-world applicability. Specific areas of interest include:
- Representation Learning – learning compact, transferable feature representations from limited labelled data.
- Uncertainty Quantification – building models that know what they do not know.
- Scientific Machine Learning – applying ML to accelerate discovery in physics, biology, and materials science.
- Explainability & Fairness – ensuring models are interpretable and that their predictions are fair across groups.
## Publications
### 2024

### 2023
[Title of Paper 3]
Co-Author D, Nakul Padalkar, Co-Author E
Journal Name, 2023
[PDF] [Code]
Brief abstract or one-sentence description of the contribution goes here.
## Preprints
[Title of Preprint]
Nakul Padalkar, Co-Author F
arXiv, 2024
[arXiv]
Brief description of the preprint.
## Projects

### Active Projects
| Project | Description | Links |
|---|---|---|
| Project Alpha | Efficient neural architecture search for scientific domains. | GitHub |
| Project Beta | Benchmark dataset and evaluation suite for uncertainty estimation. | GitHub |
### Past Projects
| Project | Description | Links |
|---|---|---|
| Project Gamma | Reproducible ML pipelines using Quarto and DVC. | GitHub |
## Collaborators
I am fortunate to collaborate with researchers across several institutions. If you are interested in collaborating, please feel free to get in touch.