This article originally appeared on Badass Times and is reproduced with permission.
Amid all the upheaval in the world right now, remarkable inventions are still emerging. For instance, MIT researchers have been developing an AI model that can detect COVID-19 infection from a digitally recorded cough.
It turns out that some infected people have no symptoms at all and therefore never seek a test. Yet they can still transmit the infection and unknowingly spread the virus.
“We hypothesized that COVID-19 subjects, especially including asymptomatics, could be accurately discriminated only from a forced-cough cell phone recording using Artificial Intelligence. To train our MIT Open Voice model, we built a data collection pipeline of COVID-19 cough recordings through our website between April and May 2020 and created the largest audio COVID-19 cough balanced dataset reported to date with 5,320 subjects.”
The MIT researchers created an AI model that distinguishes asymptomatic infected individuals from healthy ones using forced-cough recordings, which people submitted voluntarily through their phones and computers. The model correctly identified 98.5% of coughs from people confirmed to have COVID-19, including 100% of coughs from those who had the virus but no symptoms.
Now the team is working on a user-friendly app. Deployed at scale, it could be a convenient, easy-to-use tool for identifying people infected with COVID-19, even if they are asymptomatic. Users could simply cough into their phones, instantly learn whether they might be infected, and then seek a formal test.
“The effective implementation of this group diagnostic tool could diminish the spread of the pandemic if everyone uses it before going to a classroom, a factory, or a restaurant,” said Brian Subirana, a research scientist at MIT’s Auto-ID Laboratory and co-author of the paper describing the model.
Research groups had already been training algorithms on cough recordings to diagnose conditions such as pneumonia and asthma. The MIT team, meanwhile, had been developing an AI model to detect Alzheimer’s disease, which is also associated with neuromuscular degradation such as weakened vocal cords.
For that work, the team first trained a neural network to discriminate sounds associated with varying degrees of vocal cord strength. A second neural network was developed to distinguish emotional states evident in speech; actors expressing a range of emotions provided the dataset for this speech classifier.
The researchers then developed a third neural network to discern changes in lung and respiratory performance, and layered on an algorithm to detect muscular degradation. Combined, the three networks proved effective at diagnosing the disease.
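To make the idea of combining several specialized networks concrete, here is a minimal illustrative sketch in Python. It is not MIT’s actual architecture: the three feature extractors are stand-in placeholder functions (the real ones are trained neural networks), and the combining step is a simple logistic-regression-style head with made-up weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder "biomarker" extractors. In the real system, each would be a
# trained neural network producing an embedding from a cough recording.
def vocal_cord_features(audio):
    return np.array([np.mean(audio), np.std(audio)])

def sentiment_features(audio):
    return np.array([np.max(audio), np.min(audio)])

def respiratory_features(audio):
    return np.array([np.median(audio), np.ptp(audio)])

def combined_score(audio, weights, bias):
    # Concatenate the three biomarker embeddings into one feature vector,
    # then apply a logistic head to produce a probability-like score.
    x = np.concatenate([
        vocal_cord_features(audio),
        sentiment_features(audio),
        respiratory_features(audio),
    ])
    z = float(weights @ x + bias)
    return 1.0 / (1.0 + np.exp(-z))

# Fake one-second recording at 16 kHz and arbitrary demo weights.
audio = rng.standard_normal(16000)
weights = rng.standard_normal(6) * 0.1
prob = combined_score(audio, weights, bias=0.0)
```

The point of the sketch is the structure: each network captures one biomarker, and a final layer fuses them, so degradation signals that are weak in any single channel can still add up to a confident prediction.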
When the pandemic began, Subirana wondered whether the Alzheimer’s framework might also work for diagnosing the coronavirus, since people infected with COVID-19 showed similar symptoms, such as temporary neuromuscular impairment.
The researchers collected 70,000 recordings, each containing several coughs, for roughly 200,000 forced-cough audio samples in total. Around 2,500 of the recordings came from people confirmed to have COVID-19, including asymptomatic individuals.
The researchers are working with a company to develop a free pre-screening app based on their AI model. They are also partnering with several hospitals worldwide to collect a larger, more diverse set of cough recordings, which will help train and strengthen the model’s accuracy.
“Pandemics could be a thing of the past if pre-screening tools are always on in the background and constantly improved,” Subirana says.