Harsha Aduri

ML Researcher • PC and Tech Enthusiast • Inquisitive Reader


2017 - Now
I am an Applied Research Scientist at Amazon working on problems that cut across several domains, including Natural Language Processing (NLP/NLU), Computer Vision, and clickstream and behavioral analytics.

Currently leading the effort to build ML solutions that improve the Seller experience in the Partner Support and Solutions (PSAS) org.
2016
I interned at Akamai as a machine learning scientist on the Ghost security team.

Helped them preface their rule-based WAF with a suite of positive and negative security models, bringing a roughly 50x speed-up per request (1 ms vs. 20 µs).
2015 - 2017
As a grad student at UC San Diego, I researched problems in Robotics and Computer Vision (autonomous driving, health robotics) and in Vision/NLP-based interaction (scene description, visual question answering).

Worked as a Research Assistant in the Contextual Robotics Institute on detecting motion in a scene while the camera itself is mounted on a fast-moving platform such as a car or drone.
2014 - 2015
Worked as a Software Engineer in Samsung's Android Advanced Solutions R&D group.

Built several features into the Samsung calendar app, including D-Day, Rich Media events, and Smart events for subscription calendars. Spent some time at Samsung headquarters optimizing apps for folding dual-screen prototypes.
2010 - 2014
B.Tech from the National Institute of Technology, Surat. My research focused on problems at the intersection of Computer Vision and Perception.

Worked as a Research Assistant in the Computer Vision lab, where I generated dense 3D reconstructions by fusing depth images from a Kinect sensor with its RGB camera, and accelerated the processing using a CUDA pipeline. The project was funded by TEQIP II (Govt. of India).
misc projects
Hand of ROS is an OWI-535 robotic arm mounted on top of an iRobot Roomba. We use a combination of a BeagleBone, an Odroid XU4, stereo cameras, and a few ultrasonic depth sensors to create an autonomous navigation vehicle. The vehicle uses SLAM, intelligent obstacle avoidance, and dynamic path planning to navigate an arena, competing with other bots to pick up as many rewards as possible.
PyROS bot is an autonomous firefighting robot. It is built on top of the iRobot Create 2 platform and uses a pair of cameras, ultrasonic depth sensors, and SLAM to navigate.
xDCTCP is a custom datacenter congestion control protocol that improves on the popular DCTCP by differentiating between long and short flows in the event of congestion. We effectively reduce the latency of short flows without compromising the throughput of longer flows.