Researchers at the Massachusetts Institute of Technology (MIT) have found that asymptomatic carriers of COVID-19 may differ from healthy people in how they cough. The differences are imperceptible to the human ear, but it turns out they can be picked up by artificial intelligence.

The article, published in the IEEE Open Journal of Engineering in Medicine and Biology, describes an artificial intelligence model that distinguishes asymptomatic carriers of the virus from healthy people using forced-cough recordings, which people voluntarily made on smartphones and submitted to the developers.

The researchers trained their model on tens of thousands of cough samples. When cough samples from a held-out test group were fed to the model, it correctly identified 98.5% of coughs from people confirmed to have COVID-19, including 100% of coughs from those who were asymptomatic.

The team is working to integrate the model into a user-friendly mobile app that, if approved by medical experts and regulators, could become a free, convenient, non-invasive pre-screening tool for detecting COVID-19 in people without symptoms. A user could check in daily, cough into their phone, and instantly learn whether they might be infected and should confirm with a formal test.

Sound biomarkers

Even before the pandemic began, research teams were training algorithms on cough recordings made with mobile phones to accurately diagnose conditions such as pneumonia and asthma. In a similar way, the MIT team had been developing artificial intelligence models to analyze forced-cough recordings to see whether they could detect signs of Alzheimer's disease, which is associated with memory loss and with neuromuscular degradation, such as weakening of the vocal cords.

First, the scientists trained a general machine learning algorithm, or neural network, known as ResNet50, to recognize sounds and gauge the strength of the vocal cords behind them. Research has shown that the quality of the sound "mmmm" can indicate how strong or weak a person's vocal cords are. The neural network was trained on an audiobook dataset with more than 1,000 hours of speech to learn to pick out the word "them" from other words such as "the" and "then."
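In broad strokes, this kind of pipeline converts an audio clip into a spectrogram image and classifies it with a convolutional network. Below is a minimal sketch of that idea, assuming PyTorch, torchaudio, and torchvision; the file name, two-class head, and preprocessing choices are illustrative assumptions, not the team's published code.

```python
# Minimal sketch: audio -> mel spectrogram "image" -> ResNet50 classifier.
# The file path and the two output classes are hypothetical.
import torch
import torchaudio
import torchvision

waveform, sample_rate = torchaudio.load("cough.wav")  # assumes a mono recording

# A mel spectrogram in decibels is a standard image-like view of audio.
mel = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate, n_mels=128)(waveform)
mel_db = torchaudio.transforms.AmplitudeToDB()(mel)   # shape: (1, 128, time)

# ResNet50 expects a 3-channel image; replicate the spectrogram across channels.
image = mel_db.repeat(3, 1, 1).unsqueeze(0)           # shape: (1, 3, 128, time)

# Swap the final layer for a 2-way head (e.g., strong vs. weak vocal cords).
model = torchvision.models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.eval()

with torch.no_grad():
    probs = model(image).softmax(dim=1)  # class probabilities for this clip
```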

The team trained a second neural network to distinguish emotional states in speech, because patients with Alzheimer's disease and other neurological disorders show frustration or a flattened affect more often than they express happiness or calm. The researchers built this speech-based emotion classifier by training it on recordings of actors intoning emotional states such as neutral, calm, happy, and sad.

The researchers then trained a third neural network on a database of coughs to discern changes in lung function and respiration.

Finally, the team combined all three models and overlaid an algorithm that detects muscular degradation. This algorithm essentially simulates an audio mask, or layer of noise, and distinguishes strong coughs (those audible over the noise) from weaker ones.
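As a rough illustration of how three such models might be combined, one common approach is late fusion: each network produces an embedding of the same recording, the embeddings are concatenated, and a small head maps the result to a single risk score. The sketch below assumes this strategy, along with hypothetical embedding sizes and stand-in models; it is not the published architecture.

```python
# Hedged sketch of late fusion over three audio models. The embedding
# dimensions, stand-in models, and concatenation strategy are assumptions.
import torch

def fuse(features, vocal_model, emotion_model, lung_model):
    e1 = vocal_model(features)    # vocal cord strength embedding
    e2 = emotion_model(features)  # sentiment / affect embedding
    e3 = lung_model(features)     # lung and respiratory embedding
    return torch.cat([e1, e2, e3], dim=-1)  # simple late fusion

# A small head maps the fused embedding to a single risk score in [0, 1].
head = torch.nn.Sequential(
    torch.nn.Linear(3 * 512, 128),  # assumes each model emits a 512-dim embedding
    torch.nn.ReLU(),
    torch.nn.Linear(128, 1),
    torch.nn.Sigmoid(),
)

# Dummy stand-ins so the sketch runs end to end.
vocal_model = torch.nn.Linear(128, 512)
emotion_model = torch.nn.Linear(128, 512)
lung_model = torch.nn.Linear(128, 512)

features = torch.randn(1, 128)  # placeholder pooled audio features
risk = head(fuse(features, vocal_model, emotion_model, lung_model))
```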

Using this new artificial intelligence framework, the team analyzed audio recordings, including some from Alzheimer's patients, and found that the system could identify the disease better than existing diagnostic models. The results showed that vocal cord strength, sentiment, lung and respiratory performance, and muscular degradation together form effective biomarkers for diagnosing the disease.

When the coronavirus pandemic began, the MIT team wondered whether the artificial intelligence framework they had built for detecting Alzheimer's could also work for diagnosing COVID-19, since there was growing evidence that infected patients experience similar neurological symptoms, such as temporary neuromuscular impairment.

Big data processing

In April, the team set out to collect as many cough recordings as possible, including from patients with COVID-19. They created a website where people can record their cough using a mobile phone or any other web-enabled device. In addition, participants fill out a questionnaire describing their current symptoms, whether or not they have COVID-19, and whether they were diagnosed through an official test. Participants also indicate their gender, place of residence, and native language.

To date, the researchers have collected more than 70,000 recordings, each containing several coughs, for a total of about 200,000 forced-cough audio samples, which, according to the team, is the largest research cough dataset they know of. Around 2,500 of the recordings were submitted by people confirmed to have COVID-19, including some who were asymptomatic.

The team used the 2,500 COVID-positive cough recordings together with 2,500 more that they randomly selected from the collection to balance the dataset. They used 4,000 of these samples to train the AI model, then fed the remaining 1,000 into the model to see whether it could accurately distinguish the coughs of COVID-19 patients from those of healthy people.
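For concreteness, the balanced 4,000/1,000 split described above can be sketched in a few lines of Python; the file names and random seed here are illustrative assumptions.

```python
# Sketch of a balanced train/test split: 2,500 COVID-positive samples plus
# 2,500 randomly drawn others, shuffled, then split 4,000 / 1,000.
import random

covid_samples = [f"covid_{i}.wav" for i in range(2500)]  # hypothetical file names
other_samples = [f"other_{i}.wav" for i in range(2500)]  # randomly drawn non-COVID records

dataset = [(path, 1) for path in covid_samples] + [(path, 0) for path in other_samples]
random.seed(0)           # fixed seed so the split is reproducible
random.shuffle(dataset)

train_set = dataset[:4000]  # 4,000 samples for training
test_set = dataset[4000:]   # 1,000 held out for evaluation
```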

Asymptomatic diagnosis

This artificial intelligence model is not intended to diagnose people who already have symptoms, since those symptoms may be caused by other illnesses, such as flu or asthma. The system's strength is its ability to distinguish an asymptomatic carrier's cough from a healthy person's cough.

The team is developing a free pre-screening app based on the artificial intelligence model. They are also working with several hospitals around the world to expand the database of cough samples, which will be used to train the model further and improve its accuracy.

The developers believe that pandemics could be a thing of the past if pre-screening tools like this were always at hand.

Ultimately, they envision that audio AI models like the one they developed could be embedded in ubiquitous smart devices so that people can conveniently get an initial assessment of their disease risk, possibly on a daily basis.

This study was supported in part by Takeda Pharmaceutical Company Limited, a Japanese pharmaceutical company.
