I'm a dedicated AI researcher and Master's student at the University of Michigan-Dearborn, driven to bridge cutting-edge research and practical applications. With a 3.7 GPA in my AI program, I specialize in computer vision, representation learning, and energy-based approaches for extracting robust features across modalities.
My academic foundation includes a B.Tech in Computer Science from Medi-Caps University (CGPA: 8.12), where I first developed my passion for data-driven solutions. During my undergraduate studies, I led star-based navigation and RF interference mitigation projects at IIT Indore, gaining hands-on experience in deep learning and signal processing.
At the UMich TAI Lab, I currently lead a five-member team building an innovative authentication and gesture-recognition system based on polarized-tape patterns on illuminated nails, in close collaboration with Prof. Xiao Zhang.
Connect with me on LinkedIn.
Implemented a Siamese CNN to learn a meaningful distance metric on MNIST, achieving high verification accuracy and smooth latent-space embeddings.
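The project itself used a convolutional network; as a minimal, dependency-free sketch, here is the contrastive objective a Siamese network of this kind typically optimizes (function name and margin value are illustrative, not taken from the project):

```python
import math

def contrastive_loss(x1, x2, same, margin=1.0):
    """Contrastive loss for one Siamese pair.

    x1, x2 : embedding vectors (lists of floats) from the twin branches
    same   : 1 if both inputs show the same digit, else 0
    margin : minimum distance enforced between dissimilar pairs
    """
    # Euclidean distance between the two embeddings
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x1, x2)))
    # Pull similar pairs together; push dissimilar pairs apart up to the margin
    return same * d ** 2 + (1 - same) * max(0.0, margin - d) ** 2

# Identical embeddings of the same digit incur zero loss
print(contrastive_loss([0.0, 0.0], [0.0, 0.0], same=1))          # → 0.0
# A dissimilar pair closer than the margin is penalised
print(contrastive_loss([0.0, 0.0], [0.3, 0.4], same=0))          # d = 0.5 → 0.25
```

Minimizing this loss over many pairs is what produces the smooth latent-space embeddings mentioned above.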
Fine-tuned U-Net on low-light driving datasets (A2D2, BDD100K), using advanced augmentation to maintain accuracy under challenging conditions.
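One common way to augment for low-light robustness is gamma compression of normalized pixel intensities; the sketch below illustrates the idea (the function name and gamma value are illustrative, and the project's actual augmentation pipeline may have differed):

```python
def darken(pixels, gamma=2.2):
    """Simulate low-light capture by gamma-compressing pixel values.

    pixels : iterable of intensities normalized to [0, 1]
    gamma  : values > 1 darken midtones, a common low-light proxy
    """
    return [p ** gamma for p in pixels]

row = [0.0, 0.25, 0.5, 1.0]
print(darken(row, gamma=2.0))  # → [0.0, 0.0625, 0.25, 1.0]
```

Training on such darkened copies alongside the originals encourages the segmentation network to stay accurate when real illumination drops.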
Combined CNN+LSTM to capture spatio-temporal features for autonomous control in Mario Kart 64, interfacing via Mupen64Plus for real-time testing.
Fine-tuned a latent diffusion model to generate images from speech prompts, integrating audio feature extraction for multimodal synthesis.
Developed a hierarchical convolutional autoencoder for large-scale image compression, enabling efficient encoding of high-resolution inputs with minimal loss.
Built a Tkinter GUI for Meta's Segment Anything Model (SAM), enabling intuitive region selection and mask refinement on arbitrary images.
Implemented real-time hand-tracking with OpenCV to enable 3D air-drawing, capturing gestures and rendering strokes on screen.
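Raw fingertip positions from a tracker jitter frame to frame, so air-drawing pipelines usually smooth them before rendering. As a minimal, OpenCV-free sketch of that step (the exponential-moving-average approach and all names here are illustrative, not the project's exact code):

```python
def smooth_path(points, alpha=0.5):
    """Exponential moving average over tracked fingertip coordinates.

    points : list of (x, y) tuples emitted by the hand tracker
    alpha  : weight of the newest sample (lower = smoother but laggier)
    """
    if not points:
        return []
    sx, sy = points[0]
    out = [(sx, sy)]
    for x, y in points[1:]:
        # Blend each new sample with the running estimate
        sx = alpha * x + (1 - alpha) * sx
        sy = alpha * y + (1 - alpha) * sy
        out.append((sx, sy))
    return out

print(smooth_path([(0, 0), (10, 0), (10, 10)], alpha=0.5))
# → [(0, 0), (5.0, 0.0), (7.5, 5.0)]
```

The smoothed points are then connected into line segments on the canvas, which is what makes the rendered strokes look continuous rather than shaky.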