I am a PhD candidate at the Center for Cybersecurity, New York University (NYU), advised by Prof. Brandon Reagen. I’m broadly interested in cryptographically secure machine learning and work at the intersection of deep learning and applied cryptography (specifically, homomorphic encryption and multiparty computation) as part of the DPRIVE projects. My primary research focuses on co-designing deep neural networks and cryptographic primitives to achieve real-time inference on encrypted data. Beyond research, I’ve served as an invited reviewer for NeurIPS'23, ICLR'24, and CVPR'24.
Before joining NYU, I completed my M.Tech. in Computer Science and Engineering at IIT Hyderabad, where my research centered on hardware-aware co-optimization of deep convolutional neural networks. Prior to that, I spent two years as an electrical design engineer at Seagate Technology, Bangalore (India), on a solid-state drive (SSD) development team.
I wholeheartedly embrace collaborative opportunities, especially when our research interests align. If you think we could benefit from working together, don’t hesitate to shoot me an email expressing your interest!
Ph.D. in Privacy-preserving Deep Learning, 2020 - present
New York University
M.Tech. (Research Assistant) in Computer Science and Engineering, 2017 - 2020
Indian Institute of Technology Hyderabad
B.Tech. in Electronics and Communication Engineering, 2009 - 2013
National Institute of Technology Surat
Circa reduces the runtime overhead of the ReLU operation by 1.9× by decoupling the sign-evaluation and multiplication steps of the garbled circuit, with no loss in accuracy. Further, it achieves a total runtime reduction of 4.7× by approximating the sign computation in the garbled circuit, exploiting the error tolerance of neural networks, while staying within a 1% accuracy margin.
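The decomposition Circa builds on can be shown with a minimal plain-Python sketch (function names are illustrative, not from the Circa implementation): ReLU(x) factors into a sign bit times the input, so only the sign bit needs the expensive garbled-circuit evaluation, while the multiplication can use cheaper arithmetic sharing.

```python
def sign_bit(v):
    # The only nonlinear step: 1 if v >= 0 else 0.
    # In Circa, this bit alone is evaluated inside the garbled circuit;
    # the multiplication below can then use cheaper arithmetic sharing.
    return 1.0 if v >= 0 else 0.0

def relu_decomposed(xs):
    # ReLU(x) = sign_bit(x) * x, computed elementwise over a list.
    return [sign_bit(v) * v for v in xs]
```

The decomposition is exact, which is why the 1.9× speedup comes with no accuracy loss; the further 4.7× reduction comes from approximating `sign_bit` itself.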
DeepReDuce is a set of optimizations for the judicious removal of ReLUs to reduce private-inference latency, leveraging the heterogeneity of ReLUs' importance across layers in classical networks. DeepReDuce strategically drops up to 4.9× (on CIFAR-100) and 5.7× (on TinyImageNet) of the ReLUs in ResNet18 with no loss in accuracy. Compared to the state of the art in private inference, DeepReDuce improves accuracy by up to 3.5% (iso-ReLU) and reduces ReLU count by up to 3.5× (iso-accuracy).
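The intuition behind judicious ReLU removal can be sketched as a toy budgeter (the layer names, ReLU counts, and criticality scores below are hypothetical and not taken from the paper): given a per-layer importance score, keep ReLUs only in the most critical layers until a ReLU budget is exhausted, and replace the rest with identity.

```python
# Hypothetical per-layer ReLU counts and criticality scores for illustration.
layers = {"stem": 65536, "stage1": 32768, "stage2": 16384, "stage3": 8192}
criticality = {"stem": 0.1, "stage1": 0.9, "stage2": 0.7, "stage3": 0.8}

def drop_relus(layers, criticality, budget):
    """Greedily keep ReLUs from the most critical layers while the total
    stays within `budget`; layers left out have their ReLUs replaced by
    identity (i.e., dropped for private inference)."""
    kept = {}
    for name in sorted(layers, key=lambda n: -criticality[n]):
        if sum(kept.values()) + layers[name] <= budget:
            kept[name] = layers[name]
    return kept
```

With a budget of 65,536 ReLUs, this toy pass keeps the three high-criticality stages and drops the low-criticality stem, illustrating how unequal layer importance lets large ReLU reductions preserve accuracy.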