Covid-19: scientists should build on solid ground

The emergency triggered by the Covid-19 outbreak is, as we all know, enormous. Several categories of workers are being urged to do their best, especially those financed by governments. Scientists are among them.

Clinicians, practitioners and health researchers are doing a great deal on the front line. Engineers and computer scientists are standing back, with the feeling of being useless. This is what has pushed many Machine Learning scientists to look for solutions to the epidemic.

Since January 2020 there has been a flood of scientific work aiming to exploit novel learning algorithms for screening, classification and prediction. That is a good idea, of course. But what are the odds that this effort introduces noise and confusion at a moment of emergency? Very high, I argue.

THE CASE

Let us take one specific task that serves as a good example: computer vision for detecting Covid-19 positivity from CT (computed tomography) chest scans. Simplifying a bit: deciding whether a patient is positive for Covid-19 from their chest images. Automatic detection of health problems from diagnostic imaging has become a common task in the scientific literature over the last few years.

(image taken from this article)
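
To make the task concrete, here is a minimal sketch of what such a classifier can look like in PyTorch. The architecture is deliberately tiny and purely illustrative; it is not taken from any of the papers discussed here, and real systems use far deeper networks and careful preprocessing.

```python
import torch
import torch.nn as nn

class CTSliceClassifier(nn.Module):
    """Toy CNN producing one logit: is this CT slice Covid-19 positive?"""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # grayscale slice in
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # global average pooling
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One random 256x256 slice as a stand-in for a real, preprocessed scan.
model = CTSliceClassifier()
prob = torch.sigmoid(model(torch.randn(1, 1, 256, 256)))  # P(positive)
```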

However, knowing how important data quality is to Machine Learning, and how easily bias can creep into the data, there are solid grounds for skepticism about using ML in delicate tasks that relate to health. I have occasionally reviewed such works as a referee for neural network conferences, and I have had to push back hard on several very flimsy papers.

Am I totally wrong in doing this? Apparently... not!
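
To give a concrete flavor of the bias problem, consider one classic pitfall in CT studies: splitting the data at the slice level instead of the patient level. Slices from the same patient are highly correlated, so a slice-level split leaks information into the test set and inflates accuracy. A toy sketch with scikit-learn, on random data, just to show the mechanics:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit, train_test_split

# Toy setup: 50 patients, 20 CT slices each, one label per patient.
rng = np.random.default_rng(0)
patient_ids = np.repeat(np.arange(50), 20)
X = rng.normal(size=(1000, 64))                  # stand-in slice features
y = np.repeat(rng.integers(0, 2, size=50), 20)   # patient-level labels

# WRONG: slice-level split; the same patient lands in both sets,
# so the model is partly tested on data it has effectively seen.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2)

# RIGHT: patient-level split; every patient is entirely in one set.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=patient_ids))
```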

Several studies have been quickly published since January 2020, providing promising results for Covid-19 detection from CT scans. After consulting a few doctor friends, however, I have come away with a rather different picture. Currently, the best method for Covid-19 detection is RT-PCR on swab samples, and any other fancy technique may be less accurate and add strain to the healthcare system. A recent review paper argues against the use of CT for Covid-19 diagnosis (and it does not even consider the practical difficulties). Its conclusions are quite blunt on the topic:

"To date, the radiology literature on COVID-19 has consisted of limited retrospective studies that do not substantiate the use of CT as a diagnostic test for COVID-19."

[link to article]

I believe that research in such a delicate field should be pursued in collaboration with experts. A position paper recently published by a panel of physicians suggests guidelines for the diagnosis and triage of Covid-19. In particular, CT may be of help in certain cases. These are the recommendations:

  • Imaging is not indicated in patients with suspected COVID-19 and mild clinical features unless they are at risk for disease progression.

  • Imaging is indicated in a patient with COVID-19 and worsening respiratory status.

  • In a resource-constrained environment, imaging is indicated for medical triage of patients with suspected COVID-19 who present with moderate-severe clinical features and high pre-test probability of disease.

[link to article]
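
The third recommendation hinges on pre-test probability: a positive CT means much more when the disease is already likely. Bayes' rule makes this concrete. The sensitivity and specificity figures below are made up purely for illustration:

```python
def positive_predictive_value(sensitivity, specificity, pretest_prob):
    """Bayes' rule: probability of disease given a positive test."""
    true_pos = sensitivity * pretest_prob
    false_pos = (1 - specificity) * (1 - pretest_prob)
    return true_pos / (true_pos + false_pos)

sens, spec = 0.90, 0.70  # hypothetical CT performance, for illustration only

# Low pre-test probability (mild symptoms, low local prevalence):
print(positive_predictive_value(sens, spec, 0.05))  # ~0.14
# High pre-test probability (moderate-severe features, outbreak area):
print(positive_predictive_value(sens, spec, 0.60))  # ~0.82
```

In the low-probability scenario most positive scans are false alarms, which is exactly why the panel restricts imaging to patients in whom disease is already plausible.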

Therefore, data collection should be performed by clinical experts, and data analysis and algorithm design should be conducted together with clinical experts, not by computer scientists alone.

But let's move on. While scientific societies deeply involved in the advancement of signal processing and computing technologies, such as the IEEE, are celebrating the surge of algorithms at the service of diagnostics (see, e.g., this IEEE Spectrum article), clinical scientists have to turn them down. A recent open-access paper published in the BMJ (British Medical Journal) reports a screening of 27 papers dealing with diagnosis and prognosis in Covid-19 patients. The result is damning; the conclusions state that:

"This review indicates that proposed models are poorly reported, at high risk of bias, and their reported performance is probably optimistic.

[...]

Methodological guidance should be followed because unreliable predictions could cause more harm than benefit in guiding clinical decisions."

[link to article]
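
The "optimistic" performance the review refers to has a simple mechanical illustration: with a small cohort and many candidate features, a model can look perfect on the data it was fitted on and still be worthless on new patients. A toy sketch with purely random, non-clinical data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# 26 "patients", 500 random features -- none of them actually predictive.
rng = np.random.default_rng(0)
X = rng.normal(size=(26, 500))
y = rng.integers(0, 2, size=26)

model = LogisticRegression(max_iter=1000)
model.fit(X, y)
print("apparent accuracy:", model.score(X, y))                        # ~1.0
print("cross-validated:", cross_val_score(model, X, y, cv=5).mean())  # ~0.5, chance level
```

This is exactly the pattern the review flags: small samples, many predictors, and performance reported on the very data used to fit the model.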