Studies

Proxemics and comfort

Background: As autonomous mobile robots become more common in social and organizational settings, human comfort alongside these robots becomes increasingly important. Proxemic theory describes and defines personal space; in robotics, it is used to understand and improve robots' social navigation among pedestrians. Although proxemic theory serves as a guide in human-robot interaction (HRI), developing socially competent robot behavior remains challenging. The purpose of this study is to measure comfort as a function of the distance between a participant and another person or object, to determine at what distance comfort levels change.

Methods: In a preregistered study, participants watched videos of a human confederate being approached by either a human or a robot from different directions. Participants stopped the video at the moment their comfort changed to discomfort. Participants then completed the Negative Attitudes Toward Robots scale to test whether attitudes toward robots moderated comfort levels.

Results: Participants allowed a robot to approach closer than a human from the front, but allowed a human to approach closer than a robot from the side. Negative attitudes did not significantly moderate comfort in either the human or the robot approach condition.

Potential impact: This study can inform the development of social navigation in robots by integrating the psychological concept of proxemics.

Key words: human-robot interaction, proxemics, comfort, anxiety, social navigation 

Mutual Anticipation study

As robots begin to integrate into pedestrian environments, it is important to understand how to create robots with socially competent behaviors. In this pioneering field of human-robot interaction, it is important that robot behavior be grounded in real human behavioral data. The purpose of this study is to explore how humans walk and self-organize in crowds and to use those data to improve social navigation in robots.

Participants will be randomly assigned to two groups. The two groups will be placed at opposite ends of a sectioned-off space and instructed to walk toward each other as they normally would, such as when changing classes. Participants will wear colored hats to help the researchers identify and map self-organization and lane-formation patterns.

This study could further inform the development of socially competent robot behavior among pedestrians.

Machine Learning in Human-Robot Interaction

The following videos showcase various neural networks simulating pedestrian behavior. In each video, the black dots represent the pedestrians in the scene, while the red lines represent walls. The pedestrians cannot cross these walls, which allows us to construct obstacles and model different environments. Each pedestrian has a goal destination and a preferred speed, which drives them to move from their starting location. The neural networks shown here use a simulated LIDAR scanner to collect information about each pedestrian's local environment. Through supervised learning on simulation data generated with ORCA and the Social Force Model, the neural networks learn to move toward their goals while avoiding collisions with other pedestrians and with obstacles in their environment. Notably, the neural networks never see any obstacles during training, but thanks to the LIDAR representation they still learn to avoid colliding with them.
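To illustrate what the simulated LIDAR input looks like, here is a minimal sketch of a 2D ray-cast scan against line-segment walls. The ray count, maximum range, and wall representation are illustrative assumptions for this sketch, not the actual implementation used in the videos:

```python
import numpy as np

def cross2(u, v):
    """Scalar (z-component) cross product of two 2D vectors."""
    return u[0] * v[1] - u[1] * v[0]

def lidar_scan(pos, walls, n_rays=16, max_range=10.0):
    """Cast n_rays evenly spaced rays from pos and return the distance
    to the nearest wall along each ray (max_range if nothing is hit).

    walls: list of ((ax, ay), (bx, by)) line segments.
    """
    pos = np.asarray(pos, dtype=float)
    angles = np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False)
    scan = np.full(n_rays, max_range)
    for i, theta in enumerate(angles):
        d = np.array([np.cos(theta), np.sin(theta)])  # ray direction
        for a, b in walls:
            a = np.asarray(a, dtype=float)
            b = np.asarray(b, dtype=float)
            seg = b - a
            denom = cross2(d, seg)
            if abs(denom) < 1e-9:
                continue  # ray parallel to this wall segment
            t = cross2(a - pos, seg) / denom  # distance along the ray
            u = cross2(a - pos, d) / denom    # position along the segment
            if t >= 0.0 and 0.0 <= u <= 1.0:
                scan[i] = min(scan[i], t)
    return scan

# Example: a vertical wall at x = 2, scanned from the origin.
scan = lidar_scan((0.0, 0.0), [((2.0, -5.0), (2.0, 5.0))], n_rays=4)
```

A scan like this, concatenated with the pedestrian's goal vector and preferred speed, is the kind of fixed-size input a neural network can learn from: because obstacles appear only as distance readings, the network can generalize to walls it never saw during training.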