Development of a Socially-Aware Robot

University of Toronto Professor Tim Barfoot and his team of researchers are developing a new approach that lets robots avoid colliding with humans by predicting the future positions of the dynamic obstacles in their path.

The study will be presented at the International Conference on Robotics and Automation in Philadelphia at the end of May. The simulation results, which have not yet been peer-reviewed, are available on the arXiv preprint server.

The robot decides where to move using Spatiotemporal Occupancy Grid Maps (SOGMs). These are 3D grids held in the robot's memory, in which each 2D cell encodes predicted activity at that location at a particular moment in time. The robot plans its next steps by feeding these maps into existing trajectory-planning algorithms.
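
As a concrete illustration, the sketch below shows how such a map might be represented in code. The class, parameters, and planner check are assumptions made for illustration, not the team's implementation; the essential structure is a stack of 2D occupancy grids indexed by time.

```python
import numpy as np

# Minimal sketch of a Spatiotemporal Occupancy Grid Map (SOGM),
# assuming a fixed resolution and time step: indexing by (t, i, j)
# gives the predicted probability that a cell is occupied at that
# future moment.
class SOGM:
    def __init__(self, num_steps, width, height, resolution=0.1, dt=0.1):
        self.resolution = resolution  # metres per grid cell
        self.dt = dt                  # seconds per time step
        # grid[t, i, j] = predicted occupancy probability in [0, 1]
        self.grid = np.zeros((num_steps, width, height), dtype=np.float32)

    def occupancy_at(self, t_seconds, x_metres, y_metres):
        """Predicted occupancy of the cell containing (x, y) at time t."""
        t = int(t_seconds / self.dt)
        i = int(x_metres / self.resolution)
        j = int(y_metres / self.resolution)
        return self.grid[t, i, j]

# A trajectory planner can then reject candidate paths that cross
# cells whose predicted occupancy exceeds a collision threshold.
sogm = SOGM(num_steps=30, width=200, height=200)
risk = sogm.occupancy_at(t_seconds=1.5, x_metres=4.2, y_metres=7.8)
```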

Another key instrument is lidar (light detection and ranging), a remote sensing technology comparable to radar but using light instead of radio waves. Each lidar ping generates a point that is stored in the robot's memory. The team's previous work focused on labeling these points according to their dynamic properties, which lets the robot distinguish different types of objects in its surroundings.
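
To make the labeling idea concrete, here is a hedged sketch. The three class names and the simple map-lookup rule are stand-ins invented for illustration; the article only says that points are labeled by their dynamic properties, and the team's actual classifier is certainly more sophisticated.

```python
import numpy as np

# Hypothetical labeling of lidar points by dynamics: returns near the
# floor are ground, returns matching a prior map of fixed structure
# are static, and anything unexplained is treated as dynamic.
GROUND, STATIC, DYNAMIC = 0, 1, 2

def label_points(points, ground_height, static_map, resolution=0.1):
    """points: (N, 3) lidar returns in the map frame.
    static_map: 2D boolean grid of cells known to hold fixed structure."""
    labels = np.empty(len(points), dtype=np.int64)
    for k, (x, y, z) in enumerate(points):
        if z < ground_height + 0.05:           # near-floor return
            labels[k] = GROUND
        elif static_map[int(x / resolution), int(y / resolution)]:
            labels[k] = STATIC                 # matches known structure
        else:
            labels[k] = DYNAMIC                # unexplained, likely moving
    return labels
```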

Dr. Tim Barfoot and his team are hopeful that, with this work, robots will be able to navigate crowded spaces in a socially aware manner.

In the paper, the team reports strong simulation results for the technique. The next challenge is to reproduce this performance in real-world conditions, where human behavior can be unpredictable. The team tested its system on the first floor of the University of Toronto's Myhal Centre for Engineering Innovation & Entrepreneurship, where the robot was able to make its way past crowds of students.

“When we run simulation experiments, we have agents that are programmed with a given behavior, and they will get to a certain spot by taking the optimal route to get there,” Thomas explains. “But that is not how people behave in real life.”

As people move through a space, they may rush, stop unexpectedly to talk to someone, or turn off in an entirely different direction. To cope with this kind of behavior, the team's network uses self-supervised learning, a machine learning approach.

Self-supervised learning differs from other machine learning approaches, such as reinforcement learning, in which an algorithm learns to accomplish a task by maximizing a reward signal through trial and error. While that method works well for some tasks, such as a computer learning to play chess or Go, it is not ideal for this kind of navigation.

Self-supervised learning, by contrast, is simpler and more direct, making it easier to understand how the robot makes its decisions. The technique is also point-centric rather than object-centric, so the network interprets raw sensor data at a lower level, which allows for multimodal predictions.
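
The sketch below illustrates the self-supervised setup in its simplest form: training targets come from what the robot's own lidar actually observed moments later, so no human annotation is needed. The function names and structure are assumptions for illustration, not the team's code.

```python
import numpy as np

# Build a (past, future) training pair from one recorded traversal,
# assuming `to_occupancy` converts a single lidar scan into a 2D
# occupancy grid. The future grids act as free training labels.
def make_training_pair(scans, t, horizon, to_occupancy):
    x = to_occupancy(scans[t])                    # what the robot saw
    y = np.stack([to_occupancy(scans[t + k])      # what actually happened
                  for k in range(1, horizon + 1)])
    return x, y

# Every recorded run yields many such pairs, and the network is
# trained to map x to y: predicting future occupancy from experience.
```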

“Many older approaches recognize humans as discrete objects and plot their paths. But because our model is point-centric, our system detects the areas where people are likely to be rather than quantifying them as individual objects. And when the number of people increases, that space expands,” explains Thomas.

“This research points toward a promising route that could have significant implications in fields such as autonomous driving and robot delivery, where the environment is not completely predictable.”

The team hopes to extend its network in the future so it can pick up more nuanced cues from the dynamic elements in a scene.

“A lot more training data will be necessary,” Barfoot says. “However, it should be achievable because we’ve set ourselves up to collect data in a more autonomous manner: the robot can gather more data as it navigates, train better prediction models while not in operation, and then use them the next time it navigates a space.”
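
That collect-train-redeploy cycle might be organized roughly as follows; the class and method names here are hypothetical, sketching only the loop Barfoot describes.

```python
# Hypothetical lifelong-learning loop: navigate with the current
# predictor, keep the scans, retrain offline, and redeploy the
# improved model on the next run.
class LifelongNavigator:
    def __init__(self, model):
        self.model = model   # current occupancy-prediction network
        self.logs = []       # lidar data gathered while driving

    def run_session(self, robot):
        """Navigate once, recording every scan for later training."""
        self.logs.extend(robot.navigate(predictor=self.model))

    def retrain_offline(self, trainer):
        """Between sessions: fit a better predictor on the new data."""
        self.model = trainer.fit(self.model, self.logs)
```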
