Sunday 20 October 2019

ARTICLE 5: Two-Step Manuscript Submission (in Academia)

Back in Episode 29 of the Podcast-Audiobook (Ch. 29: Is Science Dumb? Part 5 - The Epic Conclusion), we discussed issues that slow down scientific progress or, worse, cause some people to distrust science, such as P-Hacking. P-Hacking basically means a scientist, consciously or not, keeps collecting, trimming, or re-analysing the data from an experiment until the results look favourable enough to publish. You can learn more in this video:

Crash Course Statistics #30

You can also learn more about P-Hacking, and about the issue I will discuss next, in this video:

SciShow: P-Values Broke Scientific Statistics - Can We Fix Them?
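
To make the idea a little more concrete, here is a minimal sketch of my own (an illustration, not something taken from either video) showing how slicing the same noisy data enough different ways can produce 'significant' findings even when no real effect exists:

# P-hacking toy simulation: analyse pure noise many different ways and keep
# the best-looking result. Everything below is illustrative, not real data.

import random
import statistics
from math import sqrt, erf

def p_value_two_sample(a, b):
    # Rough two-sample z-test p-value; good enough for an illustration.
    se = sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

random.seed(1)
experiments = 1000
false_positives = 0
for _ in range(experiments):
    # Two groups drawn from the SAME distribution: there is no real effect.
    group_a = [random.gauss(0, 1) for _ in range(30)]
    group_b = [random.gauss(0, 1) for _ in range(30)]
    # The "hack": test 20 arbitrary subsets of the data and keep the best one.
    best_p = min(
        p_value_two_sample(random.sample(group_a, 15), random.sample(group_b, 15))
        for _ in range(20)
    )
    if best_p < 0.05:
        false_positives += 1

print(f"'Significant' results found in pure noise: {false_positives / experiments:.0%}")

Even though no effect exists anywhere in the simulated data, a worrying share of these imaginary experiments still turn up something that looks publishable.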

Before we dive into the proposed solution, let's first review how bias can occur when a scientist submits their research to a journal. We explored the problem itself in Episode 29, and today we'll explore a 'new' solution to address some of the biases that occur in science. When I say new, I mean this solution has been on the table for over six years, so it might really be time for journals to update their processes.


The Scientific Method


We're going to start by discussing a concept many of you probably learned in grade school: the Scientific Method. The Scientific Method is a framework that gives scientists a guideline for advancing a field of science while trying to minimize the human emotions and biases we have discussed in previous episodes. It's important to remember that the Scientific Method has advantages and disadvantages, and it's not the only process used in scientific progress.

Here comes another fast and simplistic explanation of the scientific method, and we'll use comic book movies as another one of our terrible analogies to illustrate the concept:


1) Make an observation: Marvel movies appear to do better than DC movies at the box office.

2) Ask a question: Does this mean that the general public prefers Marvel movies to DC movies?

3) Formulate a hypothesis: Marvel movies are preferred by the general public.

4) Conduct an experiment: I will randomly gather 10 people and have them watch a Marvel movie and a DC movie. They will have a button they can push while watching the movies. Pushing the button indicates that the person enjoys something that was on the screen.

5) Analyze the data: Count the button pushes for each movie. The movie that results in the most button pushes is determined to be the better movie.

6) Draw a conclusion: Either DC or Marvel will reign supreme.


Now, maybe you already spot some flaws in the way I conducted my experiment. For instance, I was not very specific in the instructions I provided to the test subjects. What does enjoyment actually mean? The number of jokes, how clever the characters and plot points are, or the number of space battles taking place?

Also, I could skew the results in my favour if I consciously or subconsciously wanted one company, Marvel or DC, to win. Using box office results or review sites, I could pick movies for my test subjects that make the comparison arguably unfair. For instance, I could have my test subjects compare DC's Wonder Woman to Marvel's Hulk. They are both origin stories, but Wonder Woman grossed substantially more money worldwide than both Hulk movies combined (Superhero, N.D.). Conversely, the first Avengers movie doubled the box office take of the Justice League movie (Superhero, N.D.). There are other issues too, such as the small sample size of participants in the experiment.
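
To put a rough number on that last flaw, here is a quick sketch with made-up numbers (hypothetical button counts, not real audience data): even if viewers have no preference at all, a panel of only 10 people will still crown a 'winner', sometimes by a convincing-looking margin, purely by chance:

# Small-sample toy simulation: 10 viewers press a button a random number of
# times for each movie, drawn from the same distribution, i.e. no preference.

import random

random.seed(42)
runs = 10000
marvel_wins = 0
big_margins = 0
for _ in range(runs):
    marvel_pushes = sum(random.randint(0, 20) for _ in range(10))
    dc_pushes = sum(random.randint(0, 20) for _ in range(10))
    if marvel_pushes > dc_pushes:
        marvel_wins += 1
    if abs(marvel_pushes - dc_pushes) >= 20:
        big_margins += 1

print(f"Marvel 'wins' in {marvel_wins / runs:.0%} of repeated experiments")
print(f"One movie leads by 20+ pushes in {big_margins / runs:.0%} of them")

With a sample this small, the experiment always declares a winner one way or the other, and the margin can easily look decisive even though the simulated audience is completely indifferent.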



Peer Review

Isolating variables such as these, and ensuring outside factors such as the scientist's own bias don't affect the results, are some of the challenges in science. This is the reason for peer review, in which other experts in the field scrutinize the work before it is published, and for replication, in which other scientists repeat the experiment to see if they come to the same results. The more times the experiment's results are reproduced, the more likely it is that the conclusions are accurate. The important thing to note is that this whole process takes time, and is at the mercy of things like bias, funding, or even prestige. But before we get into that, we need to briefly explain the way peer review works, and we should mention that there isn't exactly one formal process in place, so what we're explaining isn't necessarily set in stone.

Once a scientist completes their research, they submit a paper outlining their results to a journal. This journal publishes papers in specific fields. For instance, if I was submitting my earlier findings, I would submit them to a journal specializing in blockbuster movies, not that I think such a thing exists.

The journal's editors decide whether the paper will be included, which means the scientist usually starts with the more prestigious journals and then works their way down the list until they find a journal that thinks their paper has merit in the field. To lower the potential for bias, approaches like single-blind or double-blind review can be used. These keep the identity of the scientist submitting the paper, and/or the identities of the reviewers, anonymous so the review process is as fair as possible.

What this means is, if I were an editor at a prestigious journal and Carlos submitted a paper for review, I'd make sure it went right to the top, cause Carlos and I are the best of Bros. But this is obviously a biased approach, so it would be in science's best interest for me not to know that Carlos wrote the paper. Or let's say, as a reviewer, I prefer Wonder Woman and I favourably review the papers that share my opinion.

You can already start to see some potential problems with this system, and yes, this sort of thing does happen. Scientists are always concerned with their reputations and their sources of funding, cause sadly, many fields of science may not garner the same amount of interest or financial support as my proposal to determine whether Marvel or DC is better. This may discourage some scientists from pursuing ideas that are unpopular or controversial.


Two-Step Manuscript Submission


In a two-step process, the design of the study is reviewed and accepted first, and the results are only submitted afterwards. Using the previous analogy, Carlos would submit the experiment itself to my journal, without the results of the study accompanying this initial 'first' submission. At this stage, I would decide whether the journal I represent is interested in the study regardless of what its results turn out to be. This means I would be unable to accept only the submissions with outcomes that I favoured. This prevents the bias of only accepting 'positive' results, which is a problem known to occur within science (Smulders, 2013), and removes a lot of a scientist's motivation for faking data.
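
To see why this matters, here is a small simulation of my own (toy numbers, not taken from the Smulders paper): many labs study the same weak effect, but only the studies that happen to land a 'positive', statistically significant result get published, so the published record ends up exaggerating the effect:

# Publication-bias toy simulation: a weak true effect, many small studies,
# and a journal that only prints "positive" results below p < 0.05.

import random
import statistics
from math import sqrt, erf

TRUE_EFFECT = 0.1   # the real (small) difference between the two groups
N = 20              # participants per group in each study

def one_study():
    control = [random.gauss(0, 1) for _ in range(N)]
    treated = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
    effect = statistics.mean(treated) - statistics.mean(control)
    se = sqrt(statistics.variance(control) / N + statistics.variance(treated) / N)
    p = 2 * (1 - 0.5 * (1 + erf(abs(effect / se) / sqrt(2))))
    return effect, p

random.seed(7)
all_effects, published = [], []
for _ in range(5000):
    effect, p = one_study()
    all_effects.append(effect)
    if p < 0.05 and effect > 0:   # only flattering results make it into print
        published.append(effect)

print(f"True effect:                  {TRUE_EFFECT:.2f}")
print(f"Average across ALL studies:   {statistics.mean(all_effects):.2f}")
print(f"Average of PUBLISHED studies: {statistics.mean(published):.2f}")

Reviewing and accepting the study design before any results exist is one way to stop that filter from operating.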

Here is a quote from a commentary article that appeared in a 2013 edition of the Journal of Clinical Epidemiology:

“Although the awareness of publication bias is not at all new, it seems we have not been willing or able to learn to reduce it. On the contrary, there are signs that publication bias is increasing over the years.”

Regardless of whether the preferable fix is the two-step manuscript submission process or a switch to Bayesian statistics, as explained and suggested in the second video, it seems ridiculous to me that, more than six years on, we are still discussing this issue with no solution implemented.

Think this isn't important? Well, what's to stop another problem like 'Climategate' from occurring (Carrington, 2011)?


If the general public's faith in science wasn't so shaky to begin with, due to well-known recurring problems like P-Hacking, then maybe scientists would initially be given the benefit of the doubt when situations like this arise.

Scientists appear to update their processes about as quickly as Journalists update the world about an ongoing escalating Crisis…



#YouBuying?
#ViableUnderdogs


*************************************************************************************

References 


Carrington, D. (2011). Q&A: 'Climategate'. The Guardian.
https://www.theguardian.com/environment/2010/jul/07/climate-emails-question-answer

Crash Course (2018). P-Hacking: Crash Course Statistics #30.
https://www.youtube.com/watch?v=Gx0fAjNHb1M

SciShow (2019). P-Values Broke Scientific Statistics – Can We Fix Them?
https://www.youtube.com/watch?v=tLM7xS6t4FE

Smulders, M. (2013). A two-step manuscript submission process can reduce publication bias. Journal of Clinical Epidemiology, 66, 946-947.
https://www.gwern.net/docs/statistics/peerreview/2013-smulders.pdf

Superhero (N.D.). Superhero: 1978-Present. Box Office Mojo.
https://www.boxofficemojo.com/genres/chart/?id=superhero.htm

Viable Underdogs (2019). Uncage Human Ingenuity: A Realistic, Profitable Transition to Sustainability within 10 Years. The book can be purchased here:

Viable Underdogs Presents: Uncage Human Ingenuity
