Learning to See – Navigating the scientific experiments and studies during COVID-19


This is a guest post offering a medical perspective from Mark Surg. In recent weeks and months we have seen a flurry of research to do with the COVID-19 pandemic shared and commented on, and it seems at times that people tend to choose the research that suits their own narrative. I thought it would be helpful to ask Mark to write about how we should treat such research and experiments when they are published.

Mark works as a surgeon in Australia but was trained and worked in the UK. He is interested in the intersection of art, politics and faith, as well as finding a perfect balance between work, family and church.

My profession (Surgery) carries with it a rather sobering history – if I look back a century before my birth, ectopic pregnancies were a death sentence, as were many other ailments. Even more concerning, going to see the surgeon was an act of blind faith before aseptic technique, antibiotics and judicious anaesthesia became the norm. Part of the reason we got so far was a lot of trial and error – if you put trial and error into some organised form, it ends up being “research”, which has bit by bit allowed us (along with technological advances) to move to a place where, in the West, most ectopic pregnancies end well for the patient.


Strangely, there’s never been a time in my lifetime when medical research has been such major news – it’s hard to go a full day without seeing it reported in the media, whether it be the antibody response to Covid or the latest vaccine results.


It’s been a challenging time for Christian doctors: not only have our jobs become immeasurably more stressful than ever before, but we are constantly asked by patients and families, siblings and other church members to comment on research and media reports that we are not always familiar with. We often feel that if we don’t comment, the person will go online to find an answer which may be correct or very much incorrect.

However, a lot of the time we use fairly simple methods to evaluate new research and how it relates to the way we practise medicine. I’ll try to take you through that process of evaluating evidence. Here are the simple steps we tend to use.

Go to the source and bypass the middleman 


Certain websites and news sources have a specific agenda. The Daily Mail, for example, was recently caught fiddling a graph (https://fullfact.org/health/mail-deaths-chart/) to help it make an anti-lockdown point. The Spectator had to change the title of one of its articles because it misled readers as to the use of masks. (The original title, “Landmark Danish study shows face masks have no significant effect”, was flagged by Facebook as fake news; it was retitled “Landmark Danish study finds no significant effect for face mask wearers”.)


In this context, I would advise you to look at the original research. You can easily find the abstract (that’s the small summary written by the authors of the research with the main findings) which will give you a general idea of the research for free. Often if it relates to Covid, the full paper PDF is available for free.

Where is it published?

If it’s been published in a peer-reviewed journal, then it will have been reviewed by enough qualified people to ensure that it’s not some college student’s glorified blog. Pre-prints (papers not yet published) have been bandied about a lot, and they range from papers that will end up in the Lancet to complete garbage that no serious journal would ever look at. One example of the latter is a paper (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3674138) that was used endlessly to explain Sweden’s high death rate, yet when I read it, it was clear it would never get published – it was riddled with mistakes and odd logic, and contained no statistical analysis. It remains a “work in progress” 5 months later – the reason for that is glaringly obvious, yet it has been read almost 30,000 times online. So it remains a situation of “buyer beware”!

What questions are they answering?


So now we have the abstract or the paper and have assessed its origin, and the question is “What were they looking for?”. There’s usually a working hypothesis such as “Dexamethasone reduces mortality in Covid patients”. This question is the string that will lead you throughout the paper, and even if it answers different questions along the way (such as “Blood pressure control was significantly improved in the dexamethasone group”), you should end the paper with that main question being answered _within the limitations of that paper_.

What is the structure of the experiment?


Let’s take the recent Danish mask paper as an example (https://pubmed.ncbi.nlm.nih.gov/33205991/). The ideal experiment would be to divide all of Denmark into two, give one half masks to be worn and forbid the other half to wear masks, then have an experimenter follow them around all day to make sure they wore them properly and changed them regularly, collecting data along the way. At the end you would test all of the participants, see how many in each group got Covid, and look at whether the overall Covid infection rate in Denmark as a whole went up or not.

As you can tell, that’s impossible to do. We always have to replicate things on a smaller scale and, as a result, a lot of data and precision is lost. That’s not to say that if we can’t do the perfect experiment we should just give up, but we do have to bear in mind the limiting factors of each experiment. In the case of a randomised controlled trial like this, we are going to be limited by numbers (as the sketch below illustrates). The researchers were also dependent on the mask group following instructions (over half didn’t comply at all times), and they measured only whether the wearers themselves would get Covid, not the transmission rate from the wearers – a very different question from whether wearing a mask will stop a Covid-positive person spreading it to others.
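To get a feel for why numbers matter so much, here is a rough sketch in Python of the kind of sample-size (“power”) calculation researchers run before a trial like this. The infection rates below are purely illustrative assumptions of mine, not figures from the Danish study, and the statsmodels functions are just one common way of doing the sum:

```python
# Rough sketch: how many participants per group would be needed to reliably
# detect a modest difference in infection rates, e.g. 2.0% without masks
# versus 1.5% with masks? (Illustrative numbers only, not from any real trial.)
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

effect = proportion_effectsize(0.020, 0.015)   # standardised size of the difference

n_per_group = NormalIndPower().solve_power(
    effect_size=effect,
    alpha=0.05,   # conventional 5% significance threshold
    power=0.8,    # 80% chance of detecting the difference if it is real
)

print(f"Roughly {n_per_group:.0f} participants needed in each group")
```

Even for a fairly modest difference like that, the answer runs into several thousand participants per group – which is why trials of this kind so often struggle to show a clear result.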

So, as you can see, just by reading the abstract we have a much better grasp of the question the research was asking, and we can already see whether they found a significant difference (more on that in a bit!) and what that difference relates to.


Significant differences

I know the entire idea of statistics is enough to make any lay person glaze over and give up but you don’t have to know exactly how stats work to get the general gist of things. 

We look at differences between groups using a whole battery of statistics, but the key question is whether we find a significant difference – in layperson’s terms, “Did the group getting the medication/mask/intervention show a difference from the group that wasn’t?”.

Take statins for example (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5895306/) – we know that if you give them to enough people above a certain age, and compare them to people of the same age who don’t take statins, the group taking the statin (we call them the “experimental group”) will have fewer heart attacks than the “control group” (those not taking statins). That difference is “significant”, meaning it is a difference that is unlikely to have arisen by chance. The strength of the result is expressed as the p value – the probability of a difference this large arising by chance. Arbitrarily, most journals use a p value of less than 0.05 (i.e. 5%) – meaning there is less than a 5% probability that the difference we saw was just bad luck rather than a genuine difference. With a lower p value (0.01, i.e. 1%), the likelihood of the difference arising by chance is even smaller – which is the result for the meta-analysis of statin use. That means we are fairly sure that taking a statin is a good idea to reduce your heart attack risk, especially as you get older, hence why most patients in their 70s and over are on them.
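If you want to see what a significance test actually does, here is a minimal sketch in Python. The infection counts are made-up numbers of my own, not data from any paper mentioned here, and the chi-square test from scipy is just one common way of comparing two groups:

```python
# Minimal sketch: compare infection counts in a hypothetical "intervention"
# group and "control" group, and ask whether the difference could plausibly
# be due to chance. (Made-up numbers, for illustration only.)
from scipy.stats import chi2_contingency

# rows: intervention group, control group; columns: infected, not infected
table = [
    [42, 2350],   # hypothetical intervention group
    [53, 2340],   # hypothetical control group
]

chi2, p_value, dof, expected = chi2_contingency(table)

print(f"p value: {p_value:.3f}")
if p_value < 0.05:
    print("Significant at the conventional 5% threshold")
else:
    print("The difference could plausibly have arisen by chance")
```

The p value the test spits out is exactly the number the journals report: the smaller it is, the less plausible it is that the difference is just luck.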

Draw your own conclusions

At this point, you have a fairly good idea of the differences that were seen and of what the researchers were looking for. Now you have to evaluate how well (or not) the research was designed and how applicable it is to real life.

Back to the Danish mask study: what we now know from reading it is that it had nothing to say about, for example, using masks on public transport in an area with a high rate of Covid, so I’m not going to make it say something it doesn’t. The same goes for some other research looking at mask wearing and Covid rates – the researchers cannot be sure how well the mandate to wear masks was being implemented in the different countries, which does make the data hard to interpret at times. For example, if you take a country with a mask mandate where the Covid rate stays the same as in a country with none, can we be sure masks don’t work? Not really – maybe the second country had very little circulating Covid, and therefore the mask mandate was never going to have much effect. I write this in New South Wales, where we have passed three weeks with no community Covid transmission and have had no mask mandate – yet we did not get there because we lacked a mandate, but rather through a lot of good fortune and our overall isolation from most of the rest of the world.

Conclusion

I hope that this whistle-stop tour was useful and, if anything, will embolden you to look beyond the headlines fed to us by the media. If there is one thing this pandemic has taught us, it is that science rarely delivers a silver bullet of evidence; rather, it builds a gradual accumulation of research that acts as clues to what is happening in the real world.
