Google works to recognize sign language in video calls

The development team aims to make video calling more accessible for people who use this communication system

Google is working on technologies to make video calls more accessible and has developed a new system that detects in real time when a participant is using sign language, so that they can be highlighted in group calls.

Most video calling services highlight whoever is speaking aloud in a group meeting. Since signing produces no audio to trigger that mechanism, it puts people with hearing impairments at a disadvantage when they communicate in sign language.

To solve this problem, a team of researchers at Google Research built a real-time sign language detection model based on pose estimation that can flag signing participants as active speakers.

The system developed by Google, presented at the European Conference on Computer Vision (ECCV 2020), employs a lightweight design that keeps the CPU load required to run it low, so call quality is not affected.

The tool uses a pose estimation model known as PoseNet, which reduces each video frame to a set of landmarks on the user's eyes, nose, shoulders and hands, among other points, so that movement can be tracked from frame to frame.
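
As a rough illustration of that idea, the following Python sketch shows how per-frame pose landmarks could be condensed into a single motion value. The landmark layout, the shoulder indices and the normalization step are assumptions made for this example, not code published by Google.

import numpy as np

def motion_signal(prev_landmarks: np.ndarray, curr_landmarks: np.ndarray) -> float:
    """Rough per-frame motion measure from pose landmarks.

    Each landmarks array is assumed to have shape (K, 2): K keypoints
    (eyes, nose, shoulders, hands, ...) as (x, y) image coordinates.
    """
    # Frame-to-frame displacement of every keypoint.
    displacement = np.linalg.norm(curr_landmarks - prev_landmarks, axis=1)
    # Normalize by shoulder width so the signal does not depend on how
    # close the person sits to the camera (indices 5 and 6 are assumed
    # here to be the left and right shoulders).
    shoulder_width = np.linalg.norm(curr_landmarks[5] - curr_landmarks[6])
    return float(displacement.sum() / max(shoulder_width, 1e-6))

Feeding a classifier a stream of such compact values instead of raw video frames is the kind of reduction that keeps the computational cost of detection low.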

This Google development detects people using sign language with about 80% accuracy when it uses only 0.000003 seconds of data, and accuracy rises to 83.4% when the previous 50 frames are also taken into account.
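
For illustration only, a classifier over such a 50-frame window might look like the Python sketch below. The architecture, layer sizes and decision threshold are assumptions, since the article does not describe Google's exact model.

import numpy as np
import tensorflow as tf

WINDOW = 50  # number of preceding frames the classifier looks at

# Hypothetical model: a small recurrent network that maps a window of
# per-frame motion values to the probability that the person is signing.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

def is_signing(motion_window: np.ndarray, threshold: float = 0.5) -> bool:
    """motion_window: shape (WINDOW,) of per-frame motion values."""
    prob = model.predict(motion_window.reshape(1, WINDOW, 1), verbose=0)
    return bool(prob[0, 0] > threshold)

A sliding window like this is what lets the detector weigh recent movement history rather than reacting to a single frame, which is consistent with the accuracy gain the article reports when the previous 50 frames are used.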

K. Tovar

Source: dpa
