Emotional artificial intelligence is AI that detects, processes, interprets and reproduces human emotions. Based on the recognized emotional state of its interlocutor, a voice assistant will be able to choose how to communicate: tell a joke if you are in a good mood, offer encouragement, or ask after your health. Technologies are also being developed that will add intonation to synthesized speech, making it sound less artificial and more human.
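As a rough illustration only (not part of the original article), the sketch below shows how a recognized emotional state might be mapped to a response strategy; the emotion labels and responses are hypothetical placeholders, not any real assistant's API.

```python
# Toy mapping from a recognized emotional state to a conversational strategy,
# mirroring the behaviours described above. All labels are illustrative.
RESPONSE_STRATEGIES = {
    "joy": "tell a joke",
    "sadness": "offer encouragement",
    "fear": "ask about wellbeing",
}


def choose_strategy(detected_emotion: str) -> str:
    """Pick a response strategy for the detected emotion, with a neutral fallback."""
    return RESPONSE_STRATEGIES.get(detected_emotion, "continue the conversation neutrally")


print(choose_strategy("sadness"))  # -> "offer encouragement"
```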

Why you need emotional AI
It is a common situation: a friend or acquaintance asks, “Are you all right? You seem sad.” You can tell them about everything you are feeling, or hide it behind a dutiful “I’m fine.” Either way, you notice the other person’s concern, and they become closer to you. The ability to pay attention to another person’s feelings and emotions is called empathy. The more accurately you can interpret emotions, intentions and motivations, the higher your level of emotional intelligence.

Satya Nadella, CEO of Microsoft, also speaks about empathy: “You can’t just listen to customers and give them what they want. In fact, they want you to go beyond their desires. That requires empathy.”

The ability to empathize is important not only for a person as a participant in communication, but also for an innovator. But why does robotics need it? First, marketing. Empathy builds trust and attachment, including to a brand. Even now you enjoy it when an online radio service picks new tracks based on your preferences. Imagine if your playlist were also shaped by the emotions recognized when you unlock your phone with Face ID, and Siri let you talk through a hard day, offered some advice, and then recommended a film to unwind with. “Alice” already has more than 30,000 skills, and emotional intelligence could be next.

Technologies in the field of affective computing will help brands communicate with customers on a much deeper, more personal level.

Second, the internet is changing social processes. The speed of information transfer, and the pace of life in general, has increased tremendously, yet people are becoming lonelier. Real communication is being displaced by virtual communication, which creates an alternate reality where problems are rare. The ability to hold a face-to-face conversation is also being lost, to say nothing of empathy for the people around us. There is even an entire discipline, digital anthropology, that studies the impact of technology on society.

By 2022 the market for personal robotics is expected to reach $35 billion. Robot companions that satisfy the need for communication, understanding and support will be in demand.

Japan already has digital companions whose functionality goes beyond mere consultants: they become helpers, friends, and even wives. The West is not far behind: 47 million Americans, almost 20% of all adults, use a smart speaker such as Amazon Echo or Google Home. Moreover, recent studies show that smart speakers are used for more than giving commands: 25% of owners take them to bed, 20% joke with them, and 15% use them as a babysitter — the speaker tells a story and helps keep a child occupied.

Third, the emergence of the ability to recognize emotions is a natural step in the development of AI. The industry distinguishes several levels of artificial intelligence: weak, strong and general. Weak AI already exists: the well-known Siri, an opponent in online chess, a selection of news and articles tailored to your interests.

The theory of strong artificial intelligence suggests that computers can acquire the ability to think and to be aware of themselves as individuals, with a thought process similar to a human’s. None of that is possible without empathy.

AI today

The creation of emotional artificial intelligence is a matter for the future. At some stage of its development, robots and digital assistants will have personality, character, and motivation to act. Some experts, however, believe that AI will never fully possess this uniquely human ability to perceive and express emotion, let alone surpass us at it. People also did not believe that a computer could beat a human at chess — until 1996 and then 1997, when IBM’s supercomputer Deep Blue defeated world champion Garry Kasparov.

In the book “The Evolution of the Mind, or the Infinite Possibilities of the Human Brain Based on Pattern Recognition”, Ray Kurzweil, a director of engineering at Google, says that “machines will possess the most varied emotional experience and will be able to make us laugh and cry. And they will be extremely unhappy if we insist that they do not possess consciousness.”

Today we can speak of automatic systems that recognize emotions in video and audio, and of the first attempts at expressive speech synthesis.

Most recognition technologies focus on facial expressions or the voice and can distinguish six states: joy, sadness, surprise, anger, fear, and a neutral state. Multimodal emotion recognition across several channels — facial expression, voice, body movement, pulse, breathing rate — is what leading research laboratories aspire to. Such systems are more accurate: for example, when a robot cannot see its interlocutor’s face, it can rely on the tone of voice instead.
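To illustrate how such a multimodal system might combine its channels, here is a minimal late-fusion sketch (an assumption for illustration, not a description of any particular product); the channel weights and the per-channel probabilities are hypothetical placeholders that a real system would learn from data.

```python
# Late-fusion sketch: each channel (face, voice, ...) produces its own
# probability distribution over the six states named above; unavailable
# channels (e.g. the face is out of view) are skipped, so the system falls
# back on whatever signals remain. Channel weights are illustrative only.
from typing import Dict, Optional

EMOTIONS = ["joy", "sadness", "surprise", "anger", "fear", "neutral"]

# Hypothetical reliability weights per channel.
CHANNEL_WEIGHTS = {"face": 0.5, "voice": 0.3, "body": 0.1, "pulse": 0.05, "breathing": 0.05}


def fuse_emotions(channel_probs: Dict[str, Optional[Dict[str, float]]]) -> Dict[str, float]:
    """Weighted average of per-channel distributions, skipping absent channels."""
    fused = {e: 0.0 for e in EMOTIONS}
    total_weight = 0.0
    for channel, probs in channel_probs.items():
        if probs is None:  # channel unavailable, e.g. the face is not visible
            continue
        weight = CHANNEL_WEIGHTS.get(channel, 0.0)
        total_weight += weight
        for emotion in EMOTIONS:
            fused[emotion] += weight * probs.get(emotion, 0.0)
    if total_weight == 0.0:
        return {e: 1.0 / len(EMOTIONS) for e in EMOTIONS}  # no evidence: uniform guess
    return {e: p / total_weight for e, p in fused.items()}


if __name__ == "__main__":
    # The face is hidden, so only the voice channel contributes.
    result = fuse_emotions({"face": None, "voice": {"sadness": 0.7, "neutral": 0.3}})
    print(max(result, key=result.get))  # -> "sadness"
```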

Laboratories specializing in the affective sciences want to give machines the ability to recognize social cues — the non-verbal behavior people use in communication. Such technologies have not yet reached commercial applications, and manufacturers limit themselves to a simple set of emotions. The reasons are the complexity of development and the higher cost of a robot packed with such sophisticated hardware.

In other industries, where understanding a wide range of emotions is not required, the rise of affective technologies is only beginning. Alfa-Bank was the first in Russia to launch a pilot project analyzing the emotions of clients in its branches. The startup HireVue supplements candidates’ video interviews with emotional intelligence analysis. Meanwhile, Samsung is investing millions of dollars in Looxid Labs, a South Korean startup working at the intersection of VR, emotional intelligence and biometrics, and Apple has acquired the company Emotient.

Emotional intelligence and robotics

The emotion recognition market is estimated at $20-30 billion, but so far vendors do not know how to approach it. Algorithms have become better over time, yet the question of the technology’s practical benefit remains open. Developers lack channels for testing hypotheses, tools, and ideas for implementation, and service robots could become an excellent instrument for that.

CES 2017 and 2018 will be remembered for the headlines about emotional robots. Those robots had limited functionality and could, for example, change color depending on the emotions expressed by the person in front of them.

In fact, commercial affective robotics has not advanced much since the late twentieth century. Kismet, created in an MIT lab in the 1990s, could already recognize social behavior and respond to it. It relied on visual and auditory information and on data about the interlocutor’s movements to interpret their actions. In response, the robot could turn its head to change the direction of its gaze, wiggle its ears, and speak.

The next goal that developers of service robotics have set for themselves is the ability to recognize complex emotions and cognitive states. In August 2018 such plans were announced by the major companies Affectiva and SoftBank. With the help of Affectiva’s technology, the robot Pepper will have to recognize subtle displays of emotion and, in addition to ordinary smiles and surprise, distinguish a smirk from confusion.
