
Covid-19: scientists should build on solid ground

The emergency sparked by the outbreak of the Covid-19 disease is huge, as we all know. Several categories of workers are being urged to do their best, especially those financed by governments. Among them are scientists.

All clinicians, practitioners and health researchers are doing a lot at the front line. Engineers and computer scientists are standing back, with the feeling of being useless. This is what has pushed many Machine Learning scientists to seek solutions for the epidemic.

Since January 2020 there has been a flood of scientific work aiming to exploit novel learning algorithms for screening, classification and prediction. That is a good idea, of course. But what are the odds that this effort introduces noise and confusion at a moment of emergency? Very high, I argue.

THE CASE

Let us take one specific task that serves as a good example: computer vision for Covid-19 positivity detection from CT (computed tomography) chest scans. Simplifying a bit: detecting whether a patient is positive for Covid-19 from their chest scans. Automatic detection of health problems from diagnostic imaging has become a common task in the scientific literature over the last few years.


However, knowing how important data quality is to Machine Learning, and how much bias can creep into the data, there are solid grounds for skepticism about using ML in delicate health-related tasks. I have occasionally reviewed such works as a referee for neural networks conferences, and I have had to push back hard on several very thin papers.

Am I totally wrong in doing this? Apparently... not!

Several studies published in a hurry since January 2020 have reported promising results on Covid-19 detection from CT scans. After consulting a few doctor friends, I have come to a rather different view. Currently, the best method for Covid-19 detection is RT-PCR on swab samples, and any fancier technique may be less accurate and add trouble to the healthcare system. A recent review paper argues against the use of CT for Covid-19 diagnosis (without even considering the practical difficulties). Its conclusions are quite blunt on the topic:

"To date, the radiology literature on COVID-19 has consisted of limited retrospective studies that do not substantiate the use of CT as a diagnostic test for COVID-19."

[link to article]

I believe that research in such a delicate field should be pursued in collaboration with experts. A position paper recently published by a panel of physicians suggests guidelines for the diagnosis and triage of Covid-19. Specifically, CT diagnosis may be of help in specific cases. These are the recommendations:

  • Imaging is not indicated in patients with suspected COVID-19 and mild clinical features unless they are at risk for disease progression.

  • Imaging is indicated in a patient with COVID-19 and worsening respiratory status.

  • In a resource-constrained environment, imaging is indicated for medical triage of patients with suspected COVID-19 who present with moderate-severe clinical features and high pre-test probability of disease.

[link to article]

Therefore, data collection should be performed by clinical experts. Data analysis and algorithm design should be conducted with clinical experts, not by computer scientists alone.

But let's move on. While scientific societies deeply involved in the advancement of signal processing and computer technologies, such as the IEEE, are celebrating the surge of algorithms in the service of diagnostics (see, e.g., this IEEE Spectrum article), clinical scientists, on the other hand, have to turn them down. A recent open-access paper published in the BMJ (British Medical Journal) reports the screening of 27 papers dealing with diagnosis and prognosis in Covid-19 patients. The result is terrible; the conclusions state that:

"This review indicates that proposed models are poorly reported, at high risk of bias, and their reported performance is probably optimistic.

[...]

Methodological guidance should be followed because unreliable predictions could cause more harm than benefit in guiding clinical decisions."

[link to article]

Is this bad? Yes and no. The paper also states that:

"Immediate sharing of well documented individual participant data from covid-19 studies is needed for collaborative efforts to develop more rigorous prediction models and validate existing ones. The predictors identified in included studies could be considered as candidate predictors for new models."

This means that early research results are useful if taken for what they are: early. Validation is necessary, and guidelines must be followed to improve them. Unfortunately, in research, many works never see a continuation, and many academics will simply be glad to have a paper with the keyword "covid-19" on it, worn like a pin. Conspiracy theorists will also indulge their confirmation biases by seizing on early papers that seem to support their claims, even if those papers are retracted or proved wrong a day later.

BOTTOM LINE

My thought is that we, as information engineers (computer scientists, electronic engineers, etc.), should be careful and stick to the basics, grounding our research on solid work during an emergency. The newest branches of Machine Learning are too weak and overstretched to be robust and reliable at this time. There is so much debate and trouble in the recent Deep Learning literature that we are not going to save lives by merely hurrying up. As a referee I often have to reject papers on Deep Learning: arbitrary choices, no methodology, poor understanding of the model, mislabelled data and plenty of biases affect them. As an author I often struggle to make sense of my own experiments, and I see that this is common.

We have to be careful, since nowadays a handful of computer experiments can turn into a paper.

To conclude, and turning back to Covid-19: unlike Deep Learning, control theory is solid knowledge. Developed since the WWII years, it has become a fundamental building block of engineering, biology and sociology. What we can do as information engineers is use this theory to explain in simple terms how a pandemic can be modelled and how it can be controlled within our framework. This IEEE Spectrum article explains it nicely. This is useful knowledge, and it should be easy to make it comprehensible to the average citizen. Many of us engineers could be better at dissemination than at finding practical solutions, leaving the medical work to those who are competent.
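To make the point concrete, here is a minimal sketch of the kind of model those articles build on: the classic SIR (Susceptible-Infected-Recovered) compartmental model, integrated with simple Euler steps. All parameter values below are illustrative, not fitted to Covid-19 data; the "control action" is just a lower transmission rate, standing in for a lockdown.

```python
# Minimal SIR epidemic model, integrated with forward Euler steps.
# Illustrative parameters only -- NOT fitted to Covid-19 data.

def simulate_sir(beta, gamma, i0=0.001, days=160, dt=0.1):
    """Return daily (S, I, R) population fractions for transmission
    rate beta and recovery rate gamma (both per day)."""
    s, i, r = 1.0 - i0, i0, 0.0
    history = [(s, i, r)]
    steps_per_day = int(round(1 / dt))
    for _ in range(days):
        for _ in range(steps_per_day):
            new_infections = beta * s * i * dt   # dS/dt = -beta*S*I
            new_recoveries = gamma * i * dt      # dR/dt = gamma*I
            s -= new_infections
            i += new_infections - new_recoveries
            r += new_recoveries
        history.append((s, i, r))
    return history

# A "lockdown" is a control action that lowers beta: compare the
# peak infected fraction with and without it.
uncontrolled = simulate_sir(beta=0.5, gamma=0.1)   # basic R0 = 5
controlled = simulate_sir(beta=0.15, gamma=0.1)    # basic R0 = 1.5

peak_u = max(i for _, i, _ in uncontrolled)
peak_c = max(i for _, i, _ in controlled)
print(f"peak infected fraction: {peak_u:.2f} uncontrolled, "
      f"{peak_c:.2f} with control")
```

Running it shows the familiar "flatten the curve" effect: reducing the transmission rate drastically lowers the epidemic peak. That, in control-theory terms, is feedback acting on a dynamical system, and it is exactly the sort of simple, solid reasoning we can help communicate.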

Naturally, as things evolve we will see whether my assessment holds; there is no certainty at the moment.

REFERENCED ARTICLES

  • Raptis, C.A. et al. "Chest CT and Coronavirus Disease (COVID-19): A Critical Review of the Literature to Date", American Journal of Roentgenology

  • Rubin, G.D. et al. "The Role of Chest Imaging in Patient Management during the COVID-19 Pandemic: A Multinational Consensus Statement from the Fleischner Society", Radiology (RSNA)

  • Wynants, L. et al. "Prediction models for diagnosis and prognosis of covid-19 infection: systematic review and critical appraisal", BMJ
