A recently published large clinical study affirmed what previous studies had already revealed: ivermectin, once touted as a treatment for COVID-19, does not yield statistically significant benefits that warrant its use as a COVID-19 therapeutic.
The researchers concluded that “treatment with ivermectin did not result in a lower incidence of medical admission to a hospital due to progression of Covid-19 or of prolonged emergency department observation among outpatients with an early diagnosis of Covid-19.”
In the absence of evidence supporting its effectiveness, the Food and Drug Administration (FDA) has gone so far as to post a consumer update entitled “Why You Should Not Use Ivermectin to Treat or Prevent COVID-19.” Ivermectin is a drug used in various forms to prevent parasites in animals; tablets are also approved for human use “at very specific doses to treat some parasitic worms.”
Some will still call foul, believing from personal experience that ivermectin works to treat COVID-19 and prevent its worst outcomes.
How can one product elicit such variable clinical responses?
For a pharmaceutical to be labeled as “effective,” it must undergo a rigorous review based on data that can tease out both its benefits and any possible harms. This is how the FDA assesses new products to ensure that they can be used to achieve desired clinical outcomes and remain safe so that the treatment benefits outweigh any possible health risks. The gold standard for such an assessment is the double-blind, randomized study.
This means that neither the researchers nor the participants are aware of who is getting the pharmaceutical under study (typically labeled the “treatment group”) or who is getting something else, such as a placebo or another therapeutic with known efficacy (typically labeled the “control group”). This lack of knowledge is designed to eliminate conscious or unconscious biases that may creep into the study. People are assigned to the treatment or control group at random, which keeps the two groups’ characteristics similar and mitigates the impact of unidentified features of the participants that could otherwise bias the results.
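The randomization step can be sketched in a few lines. The simulation below is purely illustrative (the participant count and the unmeasured “risk” score are invented, not taken from any study): because assignment is random, even a characteristic the researchers never measured ends up balanced across the two groups.

```python
import random

random.seed(42)  # for a reproducible illustration

# Hypothetical trial: 1,000 volunteers, each with some unmeasured
# characteristic (here a "risk" score) that could influence outcomes.
participants = [{"risk": random.random()} for _ in range(1000)]

# Random assignment to treatment or control -- the "randomized" part.
for p in participants:
    p["group"] = random.choice(["treatment", "control"])

treatment = [p for p in participants if p["group"] == "treatment"]
control = [p for p in participants if p["group"] == "control"]

# Randomization balances unidentified features across the groups, so
# the average risk score should be nearly identical in both.
avg_treatment = sum(p["risk"] for p in treatment) / len(treatment)
avg_control = sum(p["risk"] for p in control) / len(control)
print(f"treatment: n={len(treatment)}, avg risk={avg_treatment:.3f}")
print(f"control:   n={len(control)}, avg risk={avg_control:.3f}")
```

Blinding would additionally hide the group labels from both researchers and participants until the analysis is complete.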
Conducting large-scale double-blind, randomized studies is expensive. Many are funded in the United States by the National Institutes of Health (NIH). Others are funded by the pharmaceutical industry, which wishes to assess the clinical benefits of products it has developed and/or owns, and subsequently make them available as therapeutics. This could create a natural or perceived conflict of interest, demanding some distance between the researchers conducting the study and the organization paying for it.
Researchers running double-blind, randomized studies must also adhere to biomedical ethical principles, including informed consent. Study participants, who are typically volunteers, are made aware of the potential benefits and risks of the products under study. A randomized study in which risks clearly outweigh benefits, such as the benefits of parachutes compared to placebo when jumping out of an airplane, would never be undertaken.
When double-blind studies are infeasible, the next best option is an observational study. Such studies use data from populations of people based on behaviors they freely choose to undertake, or products they choose to use. Treatment and control groups are then assembled based on these choices and further partitioned based on the characteristics of the people in the two populations.
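To see why groups assembled from people’s own choices can mislead, consider a small, entirely hypothetical simulation (all numbers invented): here the treatment has no true effect at all, yet because healthier people are more likely to choose it, the treated group still shows better outcomes.

```python
import random

random.seed(1)  # reproducible illustration

# 10,000 hypothetical people, each with an underlying risk of a bad
# outcome (e.g., hospitalization) drawn uniformly from [0, 1).
people = [random.random() for _ in range(10000)]

# Observational setting: people self-select into treatment, and
# healthier (lower-risk) people are more likely to choose it.
groups = {"treated": [], "untreated": []}
for risk in people:
    chooses_treatment = random.random() < (1 - risk)
    groups["treated" if chooses_treatment else "untreated"].append(risk)

# The treatment itself does NOTHING here: each person's outcome
# depends only on their underlying risk.
rates = {
    name: sum(1 for risk in members if random.random() < risk) / len(members)
    for name, members in groups.items()
}
print(rates)  # the "treated" rate comes out lower despite zero true effect
```

The apparent benefit is entirely confounding by who chose the treatment; random assignment is precisely what removes this distortion.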
The double-blind, randomized study on ivermectin outweighs the observational studies that have supported its use as an effective treatment for COVID-19. Anecdotal observations of small groups of individuals who used ivermectin and believe it helped them recover are inadequate to establish its effectiveness at the population level. For example, observations from countries in Africa that use ivermectin to treat parasitic infections provide insufficient evidence to assert that it serves as an effective COVID-19 treatment.
Double-blind, randomized studies are not always needed to affirm the benefits of a treatment. But because many pharmaceutical benefits and risks are subtle, such studies are designed to tease out these effects, so that people who use the treatments are informed of what to expect. Without such organized protocols, medical advances would be made in an ad hoc manner, making it more difficult to assess the benefits and risks of new products.
Double-blind, randomized studies permit the benefits and risks of a product to be assessed in a controlled environment. They also have their limitations, given that they seek to capture population effects, not individual responses.
If people who believe in the benefits of ivermectin against COVID-19 remain unconvinced by the data-driven conclusion that it is not an effective treatment, they are rejecting insights from the best available scientific method for assessing its value.
Whether the product is a therapeutic like ivermectin or one of the available COVID-19 vaccines, a scientific method is needed to understand the benefits and risks it offers. The recently published clinical trial of ivermectin serves such a role, even if some remain skeptical of its conclusions.
Sheldon H. Jacobson, Ph.D., is a founder professor of Computer Science and the Carle Illinois College of Medicine at the University of Illinois at Urbana-Champaign. As a data scientist, he applies his expertise in data-driven risk-based decision-making to evaluate and inform public policy.