Over the last few decades, roboticists around the world have created a wide range of robots to help humans. These include robots that assist the elderly and act as companions to improve their well-being and quality of life. Ideally, companion robots should possess human-like qualities, and many computer scientists have been working to give robots the qualities commonly observed in human caregivers. Now, researchers from Japan's Hitachi R&D Group and the University of Tsukuba have created a novel approach for synthesizing emotional speech, which might allow companion robots to replicate how human caregivers interact with elderly or vulnerable patients.
The method combines speech synthesis with emotional speech recognition. The researchers first trained a machine-learning model on a collection of human voice recordings gathered at various points throughout the day; during training, the model's emotion recognition component learned to detect emotions in human speech. The researchers then carried out a series of experiments to evaluate the model's effectiveness, and the results were highly promising. This new approach could help roboticists develop more advanced companion robots that adapt the emotion in their speech to the time of day they are interacting with users, matching users' levels of wakefulness and arousal.
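The core idea of adapting a robot's vocal emotion to a user's expected wakefulness can be sketched in a few lines. The sketch below is a hypothetical illustration, not the researchers' actual model: the hour-to-arousal mapping and the emotion labels are assumptions chosen for clarity.

```python
def target_arousal(hour: int) -> float:
    """Return a rough wakefulness/arousal level in [0, 1] for an hour (0-23).

    The bands below are illustrative assumptions, not values from the study.
    """
    if not 0 <= hour <= 23:
        raise ValueError("hour must be in 0..23")
    if 7 <= hour <= 11:     # morning: rising alertness
        return 0.8
    if 12 <= hour <= 17:    # afternoon: moderately alert
        return 0.6
    if 18 <= hour <= 21:    # evening: winding down
        return 0.4
    return 0.2              # late night / early morning: low arousal


def emotion_for_arousal(arousal: float) -> str:
    """Pick an emotion label for speech synthesis that matches the target arousal."""
    if arousal >= 0.7:
        return "cheerful"
    if arousal >= 0.5:
        return "neutral"
    return "calm"


def emotion_for_hour(hour: int) -> str:
    """Map a clock hour directly to a synthesis emotion label."""
    return emotion_for_arousal(target_arousal(hour))
```

In a full system, a label chosen this way would be passed as a conditioning input to an emotional text-to-speech engine, so a morning greeting sounds energetic while a bedtime reminder sounds soothing.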