I am currently a Robotics Master's student at Worcester Polytechnic Institute, graduating in May 2024.
I am interested in full-time roles in Robotics, specifically Perception, Computer Vision, and Deep Learning, starting May 2024! Previously, I was an intern at MathWorks, where I worked on enabling C++ code generation support for Simulink models. At WPI, I worked on multiple projects in 3D perception, including Structure from Motion, NeRF, semantic point cloud segmentation and mapping, and feature matching. I worked under Prof. Ziming Zhang in the VISlab on the analysis and comparison of the state-of-the-art feature matching algorithms SuperGlue and LoFTR.
Education
Master of Science in Robotics Engineering
Worcester Polytechnic Institute, Massachusetts, USA (Aug 2022 - Present)
- Expected Graduation - May 2024
- CGPA - 4.0/4.0
- Specialization - Perception, CV, Deep Learning
Bachelor of Technology, Electronics and Telecommunication Engineering
College of Engineering Pune, India (Aug 2018 - May 2022)
- CGPA - 8.05/10.0
Experience
Software Intern, EDG Group, MathWorks (mentored by Zijad Galijasevic)
Natick, MA (May 2023 - Aug 2023)
Enabling C++ code generation support for Xilinx Zynq SoC boards and embedded Linux boards:
Enabled C++ code generation support for the Xilinx Zynq SoC Blockset in the Embedded Processor Modelling team, and extended C++ support to Linux-based embedded development boards to improve development tools for robotics applications.
Research Intern supervised by Prof. Arpita Sinha
Mumbai, India (June 2021 - Aug 2021)
Simulating Trochoidal Patterns using Multiple Drones in Gazebo:
Simulated trochoidal coverage patterns in Gazebo using multiple drones for surveillance of hilly and steep regions. Implemented a generalized consensus strategy for single-integrator kinematic agents for precise control of the drones, using the PX4 and MAVROS packages in ROS, and verified the results in MATLAB.
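The consensus strategy above can be sketched in a few lines. This is a minimal, illustrative example (not the project code), assuming the standard linear consensus law for single-integrator agents, x_i' = -Σ_j a_ij (x_i - x_j), discretized with Euler steps:

```python
import numpy as np

# Minimal sketch (assumed form, not the project code): linear consensus
# for single-integrator agents, x' = -L x, where L = D - A is the graph
# Laplacian, discretized with a small Euler step dt.
def consensus_step(x, A, dt=0.01):
    L = np.diag(A.sum(axis=1)) - A   # graph Laplacian from adjacency A
    return x - dt * (L @ x)

# Three agents on a line graph converge to the average of their states.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
x = np.array([0.0, 3.0, 9.0])
for _ in range(2000):
    x = consensus_step(x, A)
print(np.round(x, 3))  # all entries close to the initial average, 4.0
```

With a symmetric adjacency matrix the agents converge to the average of their initial states, which is what makes this law useful for coordinating drone formations.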
Project Intern
Pune, India (Oct 2020 - Dec 2020)
Keywords: ROS, SLAM, Navigation
Mobile Robots: Worked on mobile robots for autonomous navigation through ICUs to assist hospital staff. Worked on localization and mapping (SLAM) using a 2D LiDAR, implementing Hector SLAM with the Adaptive Monte Carlo Localization (amcl) and move_base packages. Deployed the ROS Navigation stack on a Raspberry Pi 4 running ROS on Ubuntu 18.04.
Undergraduate Researcher
Pune, India (Oct 2020 - Jan 2021)
Keywords: STM32, Arduino, Altium Designer, OpenCV, PyTorch
Led a team of 15 students in ABU Robocon with Team COEP for 3 years. Responsible for programming, perception, sensor fusion, circuit and PCB design, and robot testing.
Projects
AutoPano
Stitched multiple images into a panorama using both classical image processing and Deep Learning methods. Implemented concepts such as homography estimation, RANSAC, Adaptive Non-Maximal Suppression, corner detection, and feature matching. Also implemented a supervised Deep Learning model using the HomographyNet CNN architecture in PyTorch.
Deep Learning based Robotic Grasping of unknown objects (PyTorch, OpenCV, MoveIt! | Aug 2021 - May 2022)
Developed a pipeline to optimally grasp objects of variable shape, size, and orientation with a robotic arm using vision. Robotic grasping remains an open problem, with many approaches trying to generalize grasp predictions to unseen and dynamic environments. We explored two approaches: one based on transfer learning, and another using the popular GG-CNN grasp detection model. In the transfer learning approach, we tried two base models, VGG-16 and ResNet-50; ResNet-50 performed better, with a testing accuracy of 83.3% versus 78.2% for VGG-16.
To test the model on a real robot, we built a custom 3D-printed 5-DoF arm with a parallel-plate gripper and added full ROS and MoveIt! support. The processed RGB-D image from the KinectV2 camera is fed to the model, which predicts a 5-D grasp configuration. We designed the electronic system and its PCB to control the arm. The predicted 5-D grasp configuration is transformed into the object pose w.r.t. the robot's base link frame, and a ROS node automates picking objects in different positions and orientations, sending joint angle values over pyserial to the Arduino on the PCB. Together, this forms a complete pipeline for Deep Learning based robotic grasping.
V-SLAM Implementation and object detection using Kinect V2
Implemented RTAB-Map with the KinectV2 depth camera and simulated the same setup with a TurtleBot3 in Gazebo. Performed CNN-based object detection using the YOLOv3-Tiny framework in combination with map generation from RTAB-Map.
Point Cloud Feature Detection for Visual Grasping
Designed an algorithm to generate a convolution mask for optimal grasp estimation of multiple objects. Performed point cloud segmentation and projected the point cloud to a 2D image, then convolved the generated mask over the 2D image to obtain the grasp parameters.
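The convolve-a-mask idea can be shown with a toy example. The mask shape and scoring below are my assumptions for illustration, not the project's algorithm: slide a gripper-footprint mask over the projected 2D image and take the highest-response window as the grasp center.

```python
import numpy as np

# Toy illustration (mask shape and scoring are assumptions): slide a
# gripper-footprint mask over the 2D projection of the point cloud and
# return the center of the highest-scoring window as the grasp point.
def grasp_center(img, mask):
    h, w = img.shape
    mh, mw = mask.shape
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(h - mh + 1):
        for c in range(w - mw + 1):
            score = np.sum(img[r:r + mh, c:c + mw] * mask)
            if score > best_score:
                best_score = score
                best_pos = (r + mh // 2, c + mw // 2)  # window center
    return best_pos

img = np.zeros((20, 20))
img[8:12, 8:12] = 1.0      # one segmented object in the 2D projection
mask = np.ones((4, 4))     # simplistic square gripper footprint
print(grasp_center(img, mask))  # lands in the object's center region
```

A production version would vectorize this as a 2D convolution and extend the score to encode grasp angle and gripper width, which is where the "grasp parameters" above come from.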