Aha, I thought, just the report that might convince a family member who recovered from COVID-19 that vaccination is still important. Off it went in an email.
The article was a summary of research suggesting that natural antibodies after recovery from COVID-19 may not be effective against new variants.
A response came quickly.
“Ah, but there has been no peer review,” our relative wrote back. Well, yes, the article did include that information. And I had failed to look for any supporting research or articles.
We seniors, I suspect, are not alone in finding it hard to evaluate the accelerating flood of medical articles, research and advice.
The Harvard Business Review seems to have recognized the perplexity.
“The COVID-19 pandemic has produced a tidal wave of data, but how much of it is any good?” a May article asked.
When research is flawed, the researchers or publications are expected to issue retractions, but journalists in The Economist last month raised still another question.
“What happens to scientific studies that are retracted?” they wrote.
The answer was not reassuring. “Our data analysis shows that retracted papers often have long afterlives,” they reported.
The Economist article, after describing a method that my long-ago statistics and related courses failed to prepare me to follow well, concluded that it appears some scientists studying COVID-19 “did not notice that the evidence they cited in their work was, in fact, no longer evidence at all.”
There are now searchable databases of retracted articles dating back to the 1970s (and a few older, including one from 1756 that involved Benjamin Franklin!), but obviously the problem continues.
Science Magazine found that 52.5% of recent articles citing two COVID-19 papers, reported in a couple of prestigious medical journals, failed to note that the studies had subsequently been retracted.
Ouch. So, how do we evaluate what we read?
There are some common red flags, the Harvard Business Review article suggests. But even these are challenging.
For example, the authors suggest skepticism of reports that are too broad, perhaps worldwide; too specific, such as very localized; or that simply lack context.
“Look for how the data, technology or recommendations are presented,” the article advises. The more transparent providers are about their data and analytic methods, and the more open they are to public scrutiny, the better the conclusions.
Additionally, we as readers can easily misinterpret the information.
The health minister in Israel told a radio interviewer that 40% to 50% of new COVID-19 cases were among people who were vaccinated. He didn’t say, “Half of vaccinated people were stricken.” It makes a difference.
Additionally, just because a variant is more contagious, meaning it leads to more infections, does not necessarily mean it is more severe, The New York Times reported last week. Journalists and some experts talk about the new variant being “worse,” “riskier” or “more dangerous” — broad concepts that muddy the difference between contagion and severity.
For most of us, I suspect, it comes down to reading carefully and from multiple reliable sources rather than grabbing at a single report, as I did in an attempt to influence a vaccine protester.
His reaction was more critical.
“The whole POINT to science is that experiments should be reliable and repeatable or discarded,” he said in his email. “Of course there is a layering of money and politics over science,” he added.
My husband would add that the media’s rush to be first to report any new research, statistic or concern is also a factor.
Certainly the quality of information and reports, and the speed of research, challenge all of us as consumers of that information.
Without a doubt I will take a closer look before sending any other research reports in hopes of encouraging a positive attitude toward vaccination.