The 10 Most Ridiculous Scientific Studies

Important news from the world of science: if you happen to suffer a traumatic brain injury, don’t be surprised if you experience headaches as a result. In other breakthrough findings: knee surgery may interfere with your jogging, alcohol has been found to relax people at parties, and there are multiple causes of death in very old people. Write the Nobel speeches, people, because someone’s going to Stockholm!

Okay, maybe not. Still, every one of those not-exactly-jaw-dropping studies is entirely real—funded, peer-reviewed, published, the works. And they’re not alone. Here—with their press release headlines unchanged—are the ten best from science’s recent annals of “duh.”

Study shows beneficial effect of electric fans in extreme heat and humidity: You know that space heater you’ve been firing up every time the temperature climbs above 90º in August? Turns out you’ve been going about it all wrong. If you don’t have air conditioning, it seems that “fans” (which move “air” with the help of a cunning arrangement of rotating “blades”) can actually make you feel cooler. That, at least, was the news from a study in the Journal of the American Medical Association (JAMA) last February. Still to come: “Why Snow-Blower Use Declines in July.”

Study shows benefit of higher quality screening colonoscopies: Don’t you just hate those low-quality colonoscopies? You know, the ones where the doctor looks at your ears, checks your throat and pronounces, “That’s one fine colon you’ve got there, friend”? Now there’s a better way to go about things, according to JAMA, and that’s to be sure to have timely, high-quality screenings instead. That may be bad news for “Colon Bob, Your $5 Colonoscopy Man,” but it’s good news for the rest of us.

Holding on to the blues: Depressed individuals may fail to decrease sadness: This one apparently came as news to the folks at the Association for Psychological Science and they’ve got the body of work to stand behind their findings. They’re surely the same scientists who discovered that short people often fail to increase inches, grouchy people don’t have enough niceness and folks who wear dentures have done a terrible job of hanging onto their teeth. The depression findings in particular are good news, pointing to exciting new treatments based on the venerable “Turn that frown upside down” method.

Quitting smoking after heart attack reduces chest pain, improves quality of life: Looks like you can say goodbye to those friendly intensive care units that used to hand out packs of Luckies to post-op patients hankering for a smoke. Don’t blame the hospitals though; blame those buzz-kill folks at the American Heart Association who are responsible for this no-fun finding. Next in the nanny-state crosshairs: the Krispy Kreme booth at the diabetes clinic.

Older workers bring valuable knowledge to the job: Sure, they bring other things too: incomprehensible jokes, sensible shoes, the last working Walkman in captivity. But according to a study in the Journal of Applied Psychology, they also bring what the investigators call “crystallized knowledge,” which comes from “knowledge born of experience.” So yes, the old folks in your office say corny things like “Show up on time,” “Do an honest day’s work,” and “You know that plan you’ve got to sell billions of dollars’ worth of unsecured mortgages, bundle them together, chop them all up and sell them to investors? Don’t do that.” But it doesn’t hurt to humor them. They really are adorable sometimes.

Being homeless is bad for your health: Granted, there’s the fresh air, the lean diet, the vigorous exercise (no sitting in front of the TV for you!). But living on the street is not the picnic it seems. Studies like the one in the Journal of Health Psychology show it’s not just the absence of a fixed address that hurts, but the absence of luxuries like, say, walls and a roof. That’s especially true in winter—and spring, summer and fall too, follow-up studies have found. So quit your bragging, homeless people. You’re no healthier than the rest of us.

The more time a person lives under a democracy, the more likely she or he is to support democracy: It’s easy to fall for a charming strongman—that waggish autocrat who promises you stability, order and no silly distractions like civil liberties and an open press. Soul-crushing annihilation of personal freedoms? Gimme some of that, big boy. So it came as a surprise that a study in Science found that when you give people even a single taste of the whole democracy thing, well, it’s like what they say about potato chips: you want to eat the whole bag. But hey, let’s keep this one secret. Nothing like a peevish dictator to mess up a weekend.

Statistical analysis reveals Mexican drug war increased homicide rates: That’s the thing about any war—the homicide part is kind of the whole point. Still, as a paper in The American Statistician showed, it’s always a good idea to crunch the numbers. So let’s run the equation: X – Y = Z, where X is the number of people who walked into the drug war alive, Y is the number who walked out and Z is, you know, the dead guys. Yep, looks like it adds up. (Don’t forget to show your work!)

Middle-aged congenital heart disease survivors may need special care: Sure, but they may not, too. Yes, you could always baby them, like the American Heart Association recommends. But you know what they say: A middle-aged congenital heart disease survivor who gets special care is a lazy middle-aged congenital heart disease survivor. Heck, when I was a kid, our middle-aged congenital heart disease survivors worked for their care—and they thanked us for it too. This is not the America I knew.

Scientists Discover a Difference Between the Sexes: Somewhere, in the basement warrens of Northwestern University, dwell the scientists who made this discovery—androgynous beings, reproducing by cellular fission, they toiled in darkness, their light-sensitive eye spots needing only the barest illumination to see. Then one day they emerged blinking into the light, squinted about them and discovered that the surface creatures seemed to come in two distinct varieties. Intrigued, they wandered among them—then went to a kegger and haven’t been seen since. Spring break, man; what are you gonna do?

How to avoid getting duped by bad research, in 5 case studies


If you want to know for sure whether something works, you have to design a good test. Yet every day, many studies are published that are so poorly designed that you shouldn't even wipe your table with them, let alone use them to inform personal health or policy decisions.

Still, that doesn't stop the media and policymakers from writing about and acting on shoddy research. This problem inspired a health researcher, physician, and journalist to team up to write a new guide on "study design for the perplexed." Published by the Centers for Disease Control and Prevention, the paper outlines five of the most common study biases and gives readers a sense of how to avoid being duped by bad studies.

Some key problems they looked at include:

Healthy user bias: when studies compare healthier users of a medical intervention with less-healthy people who didn't get the intervention. This can make medical treatments seem more effective than they really are. For example, people who seek out and receive a flu shot might be healthier overall than those who don't.  If you compare the two groups in a study, "healthy user bias" could give the false picture that it's the vaccine that made the difference. As they explain, "Healthy user bias is a type of selection bias that occurs when investigators fail to account for the fact that individuals who are more health conscious and actively seek treatment are generally destined to be healthier than those who do not."
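As a toy illustration (ours, not an example from the guide, with all numbers invented), the short Python simulation below builds a world in which the flu shot does nothing at all, yet a naive comparison of vaccinated and unvaccinated people still makes the shot look protective, simply because healthier people are more likely to get it.

```python
import random

random.seed(0)

def simulate_person():
    # Latent health status drives BOTH flu-shot uptake and outcomes.
    healthy = random.random() < 0.5
    got_shot = random.random() < (0.8 if healthy else 0.3)
    # The shot itself has no effect in this toy model; bad outcomes
    # depend only on underlying health.
    bad_outcome = random.random() < (0.05 if healthy else 0.20)
    return got_shot, bad_outcome

people = [simulate_person() for _ in range(100_000)]
shot = [bad for got, bad in people if got]
no_shot = [bad for got, bad in people if not got]

print(f"Bad outcomes, vaccinated:   {sum(shot) / len(shot):.3f}")
print(f"Bad outcomes, unvaccinated: {sum(no_shot) / len(no_shot):.3f}")
# The vaccinated group looks healthier even though the shot did nothing.
# That gap is healthy user bias, not a treatment effect.
```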

Confounding by indication: This is another way the true cause of an outcome can get hidden. This type of bias can "occur because physicians choose to preferentially treat or avoid patients who are sicker, older, or have had an illness longer," the authors write. "In these scenarios, it is the trait (eg, dementia) that causes the adverse event (eg, a hip fracture), not the treatment itself (eg, benzodiazepine sedatives)." In other words, it wasn't necessarily the drugs that caused the hip fractures; it was that doctors tended to prescribe the drugs to populations (such as older folks) who are more likely to suffer hip fractures, regardless of what they are taking.
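The same kind of invented simulation (again ours, not the authors') illustrates confounding by indication: the sedative below has no effect on fractures, but because it is prescribed mostly to older patients, who fracture more often regardless, a naive comparison makes the drug look harmful.

```python
import random

random.seed(1)

def simulate_patient():
    older = random.random() < 0.4
    # Doctors preferentially prescribe the sedative to older patients.
    on_sedative = random.random() < (0.6 if older else 0.1)
    # The drug does not cause fractures in this toy model;
    # fracture risk depends only on age.
    fracture = random.random() < (0.10 if older else 0.02)
    return on_sedative, fracture

patients = [simulate_patient() for _ in range(100_000)]
treated = [frac for sed, frac in patients if sed]
untreated = [frac for sed, frac in patients if not sed]

print(f"Fracture rate on the sedative:  {sum(treated) / len(treated):.3f}")
print(f"Fracture rate off the sedative: {sum(untreated) / len(untreated):.3f}")
# The drug looks dangerous only because of who tends to receive it.
```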

To learn how to spot three other classic biases, read the full guide here.


Research findings that are probably wrong cited far more than robust ones, study finds

Academics suspect papers with grabby conclusions are waved through more easily by reviewers

Scientific research findings that are probably wrong gain far more attention than robust results, according to academics who suspect that the bar for publication may be lower for papers with grabbier conclusions.

Studies in top science, psychology and economics journals that fail to hold up when others repeat them are cited, on average, more than 100 times as often in follow-up papers as work that stands the test of time.

The finding – which is itself not exempt from the need for scrutiny – has led the authors to suspect that more interesting papers are waved through more easily by reviewers and journal editors and, once published, attract more attention.

“It could be wasting time and resources,” said Dr Marta Serra-Garcia, who studies behavioural and experimental economics at the University of California in San Diego. “But we can’t conclude that something is true or not based on one study and one replication.” What is needed, she said, is a simple way to check how often studies have been repeated, and whether or not the original findings are confirmed.

The study in Science Advances is the latest to highlight the “replication crisis”, in which results, mostly in social science and medicine, fail to hold up when other researchers try to repeat experiments. Following an influential paper in 2005 titled Why most published research findings are false, three major projects have found replication rates as low as 39% in psychology journals, 61% in economics journals, and 62% in social science studies published in Nature and Science, two of the most prestigious journals in the world.

Working with Uri Gneezy, a professor of behavioural economics at UCSD, Serra-Garcia analysed how often studies in the three major replication projects were cited in later research papers. Studies that failed replication accrued, on average, 153 more citations in the period examined than those whose results held up. For the social science studies published in Science and Nature, those that failed replication typically gained 300 more citations than those that held up. Only 12% of the citations acknowledged that replication projects had failed to confirm the relevant findings.

The academic system incentivises journals and researchers to publish exciting findings, and citations are taken into account for promotion and tenure. But history suggests that the more dramatic the results, the more likely they are to be wrong. Dr Serra-Garcia said publishing the name of the overseeing editor on journal papers might help to improve the situation.

Prof Gary King, a political scientist at Harvard University, said the latest findings may be good news. He wants researchers to focus their efforts on claims that are subject to disagreement, so that they can gather more data and figure out the truth. “In some ways, then, we should regard the results of this interesting article as great news for the health of the scholarly community,” he said.

Prof Brian Nosek at the University of Virginia, who runs the Open Science Collaboration to assess reproducibility in psychology research, urged caution. “We presume that science is self-correcting. By that we mean that errors will happen regularly, but science roots out and removes those errors in the ongoing dialogue among scientists conducting, reporting, and citing each other’s research. If more replicable findings are less likely to be cited, it could suggest that science isn’t just failing to self-correct; it might be going in the wrong direction.”

“The evidence is not sufficient to draw such a conclusion, but it should get our attention and inspire us to look more closely at how the social systems of science foster self-correction and how they can be improved,” he added.


Examples of Good and Bad Research Questions

Written by Scribendi

So, you've got a research grant in your sights or you've been admitted to your school of choice, and you now have to write up a proposal for the work you want to perform. You know your topic, have done some reading, and you've got a nice quiet place where nobody will bother you while you try to decide where you'll go from here. The question looms:     

What Is a Research Question?

Your research question will be your focus, the sentence you refer to when you need to remember why you're researching. It will encapsulate what drives you and be something your field needs an answer for but doesn't have yet. 

Whether it seeks to describe a phenomenon, compare things, or show how one variable influences another, a research question always does the same thing: it guides research that will be judged based on how well it addresses the question.

So, what makes a research question good or bad? This article will provide examples of good and bad research questions and use them to illustrate both of their common characteristics so that you can evaluate your research question and improve it to suit your needs.

How to Choose a Research Question

At the start of your research paper, you might be wondering, "What is a good research question?"

A good research question focuses on one researchable problem relevant to your subject area.

To write a research paper, first make sure you have a strong, relevant topic. Then, conduct some preliminary research around that topic. It's important to complete these two initial steps because your research question will be formulated based on this research.

With this in mind, let's review the steps that help us write good research questions.

1. Select a Relevant Topic

When selecting a topic to form a good research question, it helps to start broad. What topics interest you most? It helps when you care about the topic you're researching!

Have you seen a movie recently that you enjoyed? How about a news story? If you can't think of anything, research different topics on Google to see which ones intrigue you the most and can apply to your assignment.

Also, before settling on a research topic, make sure it's relevant to your subject area or to society as a whole. This is an important aspect of developing your research question, because, in general, your research should add value to existing knowledge.

2. Thoroughly Research the Topic

Now that you've chosen a broad but relevant topic for your paper, research it thoroughly to see which avenues you might want to explore further.

For example, let's say you decide on the broad topic of search engines. During this research phase, try skimming through sources that are unbiased, current, and relevant, such as academic journals or sources in your university library.

Check out: 21 Legit Research Databases for Free Articles in 2022

Pay close attention to the subtopics that come up during research, such as the following: Which search engines are the most commonly used? Why do some search engines dominate specific regions? How do they really work or affect the research of scientists and scholars?

Be on the lookout for any gaps or limitations in the research. Identifying the groups or demographics that are most affected by your topic is also helpful, in case that's relevant to your work.

3. Narrow Your Topic to a Single Point

Now that you've spent some time researching your broad topic, it's time to narrow it down to one specific subject. A topic like search engines is much too broad to develop a research paper around. What specifically about search engines could you explore?

When refining your topic, be careful not to be either too narrow or too broad. You can ask yourself the following questions during this phase:

Can I cover this topic within the scope of my paper, or would it require longer, heavier research? (In this case, you'd need to be more specific.)

Conversely, is there not enough research about my topic to write a paper? (In this case, you'd need to be broader.)

Keep these things in mind as you narrow down your topic. You can always expand your topic later if you have the time and research materials.

4. Identify a Problem Related to Your Topic

When narrowing down your topic, it helps to identify a single issue or problem on which to base your research. Ask open-ended questions, such as why is this topic important to you or others? Essentially, have you identified the answer to "so what"?

For example, after asking these questions about our search engine topic, we might focus only on the issue of how search engines affect research in a specific field. Or, more specifically, how search engine algorithms manipulate search results and prevent us from finding the critical research we need.

Asking these "so what" questions will help us brainstorm examples of research questions we can ask in our field of study.

5. Turn Your Problem into a Question

Now that you have your main issue or problem, it's time to write your research question. Do this by reviewing your topic's big problem and formulating a question that your research will answer.

For example, ask, "so what?" about your search engine topic. You might realize that the bigger issue is that you, as a researcher, aren't getting the relevant information you need from search engines.

How can we use this information to develop a research question? We might phrase the research question as follows:

"What effect does the Google search engine algorithm have on online research conducted in the field of neuroscience?"

Note how specific we were with the type of search engine, the field of study, and the research method. It's also important to remember that your research question should not have an easy yes or no answer. It should be a question with a complex answer that can be discovered through research and analysis.

How to Find Good Research Topics for Your Research

It can be fun to browse a myriad of research topics for your paper, but there are a few important things to keep in mind.

First, make sure you've understood your assignment. You don't want to pick a topic that's not relevant to the assignment goal. Your instructor can offer good topic suggestions as well, so if you get stuck, ask them!

Next, try to search for a broad topic that interests you. Starting broad gives you more options to work with. Some research topic examples include infectious diseases, European history, and smartphones.

Then, after some research, narrow your topic to something specific by extracting a single element from that subject. This could be a current issue on that topic, a major question circulating around that topic, or a specific region or group of people affected by that topic.

1. Focused

It's important that your research topic is focused. Focus lets you clearly demonstrate your understanding of the topic with enough details and examples to fit the scope of your project.

For example, if Jane Austen is your research topic, that might be too broad for a five-page paper! However, you could narrow it down to a single book by Austen or a specific perspective.

To keep your research topic focused, try creating a mind map. This is where you put your broad topic in a circle and create a few circles around it with similar ideas that you uncovered during your research. 

Mind maps can help you visualize the connections between topics and subtopics. This could help you simplify the process of eliminating broad or uninteresting topics or help you identify new relationships between topics that you didn't previously notice. 

Keeping your research topic focused will help you when it comes to writing your research question!

2. Researchable

A researchable question should have enough available sources to fill the scope of your project without being overwhelming. If you find that the research is never-ending, you're going to be very disappointed at the end of your paper—because you won't be able to fit everything in! If you are in this fix, your research question is still too broad.

Search for your research topic's keywords in trusted sources such as journals, research databases, or dissertations in your university library. Then, assess whether the research you're finding is feasible and realistic to use.

If there's too much material out there, narrow down your topic by industry, region, or demographic. Conversely, if you don't find enough research on your topic, you'll need to go broader. Try choosing two works by two different authors instead of one, or try choosing three poems by a single author instead of one.

3. Reasonable

Make sure that the topic for your research question is a reasonable one to pursue. This means it's something that can be completed within your timeframe and offers a new perspective on the research.

Research topics often end up being summaries of a topic, but that's not the goal. You're looking for a way to add something relevant and new to the topic you're exploring. To do so, here are two ways to uncover strong, reasonable research topics as you conduct your preliminary research:

Check the ends of journal articles for sections with questions for further discussion. These make great research topics because they haven't been explored!

Check the sources of articles in your research. What points are they bringing up? Is there anything new worth exploring? Sometimes, you can use sources to expand your research and more effectively narrow your topic.

4. Specific

For your research topic to stand on its own, it should be specific. This means that it shouldn't be easily mistaken for another topic that's already been written about.

If you are writing about a topic that has been written about, such as consumer trust, it should be distinct from everything that's been written about consumer trust so far.

There is already a lot of research done on consumer trust in specific products or services in the US. Your research topic could focus on consumer trust in products and services in a different region, such as a developing country.

If your research feels similar to existing articles, make sure to drive home the differences.

5. Complex

Whether it's developed for a thesis or another assignment, a good research question should be complex enough to let you expand on it within the scope of your paper.

For example, let's say you took our advice on researching a topic you were interested in, and that topic was a new Bridezilla reality show. But when you began to research it, you couldn't find enough information on it, or worse, you couldn't find anything scholarly.

In short, Bridezilla reality shows aren't complex enough to build your paper on. Instead of broadening the topic to all reality TV shows, which might be too overwhelming, you might consider choosing a topic about wedding reality TV shows specifically.

This would open you up to more research that could be complex enough to write a paper on without being too overwhelming or narrow.

6. Relevant

Because research papers aim to contribute to research that's already been explored, the relevance of your topic within your subject area can't be overstated.

Your research topic should be relevant enough to advance understanding in a specific area of study and build on what's already been researched. It shouldn't duplicate research or try to add to it in an irrelevant way.

For example, you wouldn't choose a research topic like malaria transmission in Northern Siberia if the mosquito that transmits malaria lives in Africa. This research topic simply isn't relevant to the typical location where malaria is transmitted, and the research could be considered a waste of resources.

Do Research Questions Differ between the Humanities, Social Sciences, and Hard Sciences?

The art and science of asking questions is the source of all knowledge. 

–Thomas Berger

First, a bit of clarification: While there are constants among research questions, no matter what you're writing about, you will use different standards for the humanities and social sciences than for hard sciences, such as chemistry. The former depends on subjectivity and the perspective of the researcher, while the latter requires answers that must be empirically tested and replicable.

For instance, if you research Charles Dickens' writing influences, you will have to explain your stance and observations to the reader before supporting them with evidence. If you research improvements in superconductivity in room-temperature material, the reader will not only need to understand and believe you but also duplicate your work to confirm that you are correct.

Do Research Questions Differ between the Different Types of Research?

Research questions help you clarify the path your research will take. They are answered in your research paper and usually stated in the introduction.

There are two main types of research—qualitative and quantitative. 

If you're conducting quantitative research, it means you're collecting numerical, quantifiable data that can be measured, such as statistical information.

Qualitative research aims to understand experiences or phenomena, so you're collecting and analyzing non-numerical data, such as case studies or surveys.

The structure and content of your research question will change depending on the type of research you're doing. However, the definition and goal of a research question remains the same: a specific, relevant, and focused inquiry that your research answers.

Below, we'll explore research question examples for different types of research.

Examples of Good and Bad Research Questions

Comparative Research

Comparative research questions are designed to determine whether two or more groups differ based on a dependent variable. These questions allow researchers to uncover similarities and differences between the groups tested.

Because they compare two groups with a dependent variable, comparative research questions usually start with "What is the difference in…"

A strong comparative research question example might be the following:

"What is the difference in the daily caloric intake of American men and women?" ( Source .)

In the above example, the dependent variable is daily caloric intake and the two groups are American men and women.

A poor comparative research example might not aim to explore the differences between two groups or it could be too easily answered, as in the following example:

"Does daily caloric intake affect American men and women?"

Always ensure that your comparative research question is focused on a comparison between two groups based on a dependent variable.
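To make the mechanics concrete, here is a hypothetical sketch (not taken from the source) of how data collected for that caloric-intake question might be compared: a simple two-sample (Welch's) t statistic on made-up numbers.

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical daily caloric intake (kcal) for two sampled groups.
men = [2700, 2450, 2900, 2600, 2800, 2500, 2750, 2650]
women = [2100, 1950, 2200, 2050, 2300, 2000, 2150, 2250]

def welch_t(a, b):
    """Welch's t statistic for the difference in group means."""
    var_a, var_b = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / sqrt(var_a + var_b)

print(f"Mean difference: {mean(men) - mean(women):.0f} kcal")
print(f"Welch's t:       {welch_t(men, women):.2f}")
# A t value that is large relative to its degrees of freedom suggests the
# two groups really do differ on the dependent variable (caloric intake).
```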

Descriptive Research

Descriptive research questions help you gather data about measurable variables. Typically, researchers asking descriptive research questions aim to explain how, why, or what.

These research questions tend to start with the following:

What percentage?

How likely?

What proportion?

For example, a good descriptive research question might be as follows:

"What percentage of college students have felt depressed in the last year?" ( Source .)

A poor descriptive research question wouldn't be as precise. This might be something similar to the following:

"What percentage of teenagers felt sad in the last year?"

The above question is too vague, and the data would be overwhelming, given the number of teenagers in the world. Keep in mind that specificity is key when it comes to research questions!
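As a quick illustration with invented numbers (not data from the source), answering a descriptive "what percentage" question usually comes down to estimating a proportion and putting an uncertainty range around it:

```python
from math import sqrt

# Hypothetical survey: 1,200 college students asked whether they felt
# depressed in the last year; 460 said yes.
n, yes = 1200, 460

p = yes / n                    # sample proportion
se = sqrt(p * (1 - p) / n)     # standard error of the proportion
low, high = p - 1.96 * se, p + 1.96 * se  # approximate 95% confidence interval

print(f"Estimated percentage: {100 * p:.1f}%")
print(f"Approximate 95% CI:   {100 * low:.1f}% to {100 * high:.1f}%")
```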

Correlational Research

Correlational research measures the statistical relationship between two variables, with no influence from any other variable. The idea is to observe the way these variables interact with one another. If one changes, how is the other affected?

When it comes to writing a correlational research question, remember that it's all about relationships. Your research would encompass the relational effects of one variable on the other.

For example, having an education (variable one) might positively or negatively correlate with the rate of crime (variable two) in a specific city. An example research question for this might be written as follows:

"Is there a significant negative correlation between education level and crime rate in Los Angeles?"

A bad correlational research question might not use relationships at all. In fact, correlational research questions are often confused with causal research questions, which imply cause and effect. For example:

"How does the education level in Los Angeles influence the crime rate?"

The above question wouldn't be a good correlational research question because cause and effect are already built into it: we are assuming that the education level in Los Angeles affects the crime rate in some way, which makes it a causal question rather than a correlational one.

Be sure to use the right format if you're writing a correlational research question.
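For illustration only (hypothetical data, not from the source), a correlational question like the Los Angeles example would typically be answered by computing a correlation coefficient, which quantifies the relationship without claiming that either variable causes the other.

```python
from statistics import correlation  # Python 3.10+

# Hypothetical neighbourhood-level data: average years of education
# and crimes per 1,000 residents.
education = [10.2, 11.5, 12.0, 12.8, 13.5, 14.1, 15.0, 16.2]
crime_rate = [48.0, 45.5, 41.0, 38.2, 35.9, 30.1, 27.4, 22.8]

r = correlation(education, crime_rate)  # Pearson's r
print(f"Pearson r = {r:.2f}")
# A value near -1 supports a strong negative correlation, but it says
# nothing about which variable, if either, causes the other.
```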

How to Avoid a Bad Question

Ask the right questions, and the answers will always reveal themselves. 

–Oprah Winfrey

If finding the right research question was easy, doing research would be much simpler. However, research does not provide useful information if the questions have easy answers (because the questions are too simple, narrow, or general) or answers that cannot be reached at all (because the questions have no possible answer, are too costly to answer, or are too broad in scope).

For a research question to meet scientific standards, its answer cannot consist solely of opinion (even if the opinion is popular or logically reasoned) and cannot simply be a description of known information.

However, an analysis of what currently exists can be valuable, provided that there is enough information to produce a useful analysis. If a scientific research question offers results that cannot be tested, measured, or duplicated, it is ineffective.

Bad Research Question Examples

Here are examples of bad research questions with brief explanations of what makes them ineffective for the purpose of research.

"What's red and bad for your teeth?"

This question has an easy, definitive answer (a brick), is too vague (What shade of red? How bad?), and isn't productive.

"Do violent video games cause players to act violently?"

This question also requires a definitive answer (yes or no), does not invite critical analysis, and allows opinion to influence or provide the answer.

"How many people were playing balalaikas while living in Moscow on July 8, 2019?"

This question cannot be answered without expending excessive amounts of time, money, and resources. It is also far too specific. Finally, it doesn't seek new insight or information, only a number that has no conceivable purpose.

How to Write a Research Question

The quality of a question is not judged by its complexity but by the complexity of thinking it provokes. 

–Joseph O'Connor

What makes a good research question? A good research question is clear and focused. If the reader has to waste time wondering what you mean, you haven't phrased it effectively.

It also needs to be interesting and relevant, encouraging the reader to come along with you as you explain how you reached an answer. 

Finally, once you explain your answer, there should be room for astute or interested readers to use your question as a basis to conduct their own research. If there is nothing for you to say in your conclusion beyond "that's the truth," then you're setting up your research to be challenged.

Good Research Question Examples

Here are some examples of good research questions. Take a look at the reasoning behind their effectiveness.

"What are the long-term effects of using activated charcoal in place of generic toothpaste for routine dental care?"

This question is specific enough to prevent digressions, invites measurable results, and concerns information that is both useful and interesting. Testing could be conducted in a reasonable time frame, without excessive cost, and would allow other researchers to follow up, regardless of the outcome.

"Why do North American parents feel that violent video game content has a negative influence on their children?"

While this does carry an assumption, backing up that assumption with observable proof will allow for analysis of the question, provide insight on a significant subject, and give readers something to build on in future research. 

It also discusses a topic that is recognizably relevant. (In 2022, at least. If you are reading this article in the future, there might already be an answer to this question that requires further analysis or testing!)

"To what extent has Alexey Arkhipovsky's 2013 album, Insomnia , influenced gender identification in Russian culture?"

While it's tightly focused, this question also presents an assumption (that the music influenced gender identification) and seeks to prove or disprove it. This allows for the possibilities that the music had no influence at all or had a demonstrable impact.

Answering the question will involve explaining the context and using many sources so that the reader can follow the logic and be convinced of the author's findings. The results (be they positive or negative) will also open the door to countless other studies.

How to Turn a Bad Research Question into a Good One

If something is wrong, fix it if you can. But train yourself not to worry. Worry never fixes anything.

–Ernest Hemingway

How do you turn something that won't help your research into something that will? Start by taking a step back and asking what you are expected to produce. While there are any number of fascinating subjects out there, a grant paying you to examine income disparity in Japan is not going to warrant an in-depth discussion of South American farming pollution. 

Use these expectations to frame your initial topic and the subject that your research should be about, and then conduct preliminary research into that subject. If you spot a knowledge gap while researching, make a note of it, and add it to your list of possible questions.

If you already have a question that is relevant to your topic but has flaws, identify the issues and see if they can be addressed. In addition, if your question is too broad, try to narrow it down enough to make your research feasible.

Especially in the sciences, if your research question will not produce results that can be replicated, determine how you can change it so a reader can look at what you've done and go about repeating your actions so they can see that you are right.

Moreover, if you would need 20 years to produce results, consider whether there is a way to tighten things up to produce more immediate results. This could justify future research that will eventually reach that lofty goal.

If all else fails, you can use the flawed question as a subtopic and try to find a better question that fits your goals and expectations.

Parting Advice

When you have your early work edited, don't be surprised if you are told that your research question requires revision. Quite often, results or the lack thereof can force a researcher to shift their focus and examine a less significant topic—or a different facet of a known issue—because testing did not produce the expected result. 

If that happens, take heart. You now have the tools to assess your question, find its flaws, and repair them so that you can complete your research with confidence and publish something you know your audience will read with fascination.

Of course, if you receive affirmation that your research question is strong or are polishing your work before submitting it to a publisher, you might just need a final proofread to ensure that your confidence is well placed. Then, you can start pursuing something new that the world does not yet know (but will know) once you have your research question down.


  • Open access
  • Published: 02 June 2022

Tolerating bad health research: the continuing scandal

  • Stefania Pirosca 1 ,
  • Frances Shiely 2 , 3 ,
  • Mike Clarke 4 &
  • Shaun Treweek   ORCID: orcid.org/0000-0002-7239-7241 1  

Trials, volume 23, Article number: 458 (2022)


At the 2015 REWARD/EQUATOR conference on research waste, the late Doug Altman revealed that his only regret about his 1994 BMJ paper ‘The scandal of poor medical research’ was that he used the word ‘poor’ rather than ‘bad’. But how much research is bad? And what would improve things?

We focus on randomised trials and look at scale, participants and cost. We randomly selected up to two quantitative intervention reviews published by all clinical Cochrane Review Groups between May 2020 and April 2021. Data including the risk of bias, number of participants, intervention type and country were extracted for all trials included in selected reviews. High risk of bias trials were classed as bad. The cost of high risk of bias trials was estimated using published estimates of trial cost per participant.

We identified 96 reviews authored by 546 reviewers from 49 clinical Cochrane Review Groups that included 1659 trials done in 84 countries. Of the 1640 trials providing risk of bias information, 1013 (62%) were high risk of bias (bad), 494 (30%) unclear and 133 (8%) low risk of bias. Bad trials were spread across all clinical areas and all countries. Well over 220,000 participants (or 56% of all participants) were in bad trials. The low estimate of the cost of bad trials was £726 million; our high estimate was over £8 billion.

We have five recommendations: trials should be neither funded (1) nor given ethical approval (2) unless they have a statistician and methodologist; trialists should use a risk of bias tool at design (3); more statisticians and methodologists should be trained and supported (4); there should be more funding into applied methodology research and infrastructure (5).

Conclusions

Most randomised trials are bad and most trial participants will be in one. The research community has tolerated this for decades. This has to stop: we need to put rigour and methodology where it belongs — at the centre of our science.

At the 2015 REWARD/EQUATOR conference on research waste, the late Doug Altman revealed that his only regret about his 1994 BMJ paper ‘The scandal of poor medical research’ [ 1 ] was that he used the word ‘poor’ rather than ‘bad’. Towards the end of his life, Doug had considered writing a sequel with a title that included not only ‘bad’ but ‘continuing’ [ 2 ].

That ‘continuing’ is needed should worry all of us. Ben Van Calster and colleagues have recently highlighted the paradox that science consistently undervalues methodology that would underpin good research [ 3 ]. The COVID-19 pandemic has generated an astonishing amount of research and some of it has transformed the way the virus is managed and treated. But we expect that much COVID-19 research will be bad because much of health research in general is bad [ 3 ]. This was true in 1994 and it remains true in 2021 because how research is done allows it to be so. Research waste seems to be baked into the system.

In this commentary, we do not intend to list specific examples of research waste. Rather, we want to talk about scale, participants and money and then finish with five recommendations. All of the latter will look familiar — Doug Altman and others [ 3 , 4 , 5 , 6 , 7 , 8 ] have suggested them many times — but we hope our numbers on scale, participants and money will lend the recommendations an urgency they have always deserved but never had.

So, how much research is bad?

That research waste is common is not in doubt [ 3 , 4 , 5 , 6 , 7 , 8 ] but we wanted to put a number on something more specific: how much is bad research that is not just wasteful but which we could have done without and lost little or nothing? Rather than trying to tackle all of health research, we have chosen to focus on randomised trials because that is the field we know best and, in addition, they play a central role in decisions regarding the treatments that are offered to patients.

With this in mind, we aimed to estimate the proportion of trials that are bad, how many participants were involved and how much money was spent on them.

Selecting a cohort of trials

We used systematic reviews as our starting point because these bodies of trial evidence often underpin clinical practice through guideline recommendations and policy. We specifically chose Cochrane systematic reviews because they are standardised, high-quality systematic reviews. We were only interested in recent reviews because these represent the most up-to-date bodies of evidence.

Moreover, Cochrane reviews record the review authors’ judgements about the risk of bias of included trials, in other words, they assess the extent to which the trial’s findings can be believed [ 9 ]. We consider that to be a measure of how good or bad a trial is. Cochrane has three categories of overall risk of bias: high, uncertain and low. We considered a high risk of bias trial to be bad, a low risk of bias trial to be good and an uncertain risk of bias trial to be exactly that, uncertain. We did not attempt to look at which type (or ‘domain’) of bias drove the overall assessment. We share the view given in the Cochrane Handbook (Chapter 8) [ 9 ] that the overall risk of bias is the least favourable assessment across the domains of bias. If one domain is high risk, then the overall assessment is high risk. No domain is more or less important than any other and if there is a high risk of bias in even just one domain, this calls into question the validity of the trial’s findings.
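For readers who prefer to see that rule written out, the snippet below is an illustrative Python sketch of the ‘least favourable domain’ logic we applied (it is not software used in this study or by Cochrane).

```python
def overall_risk_of_bias(domains):
    """Overall judgement is the least favourable assessment across domains:
    any 'high' domain makes the trial high risk of bias; otherwise any
    'unclear' domain makes it unclear; only an all-'low' trial is low risk."""
    if "high" in domains:
        return "high"
    if "unclear" in domains:
        return "unclear"
    return "low"

# One problematic domain is enough to call the trial's findings into question.
print(overall_risk_of_bias(["low", "low", "high", "unclear", "low", "low"]))  # -> high
```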

We used the list randomiser at random.org to randomly select two reviews published between May 2020 and April 2021 from each of the 53 clinical Cochrane Review Groups. To be included, a review had to consider intervention effects rather than being a qualitative review or a review of reviews. We then extracted basic information (our full dataset is at https://osf.io/dv6cw/?view_only=0becaacc45884754b09fd1f54db0c495 ) about every included trial in each review, including the overall risk of bias assessment. Our aim was to make no judgements about the risk of bias ourselves but to take what the review authors had provided. We did not contact the review or trial authors for additional information. Extracted data were put into Excel spreadsheets, one for each Cochrane Review Group.

To answer our question about the proportion of bad trials and how many participants were in them, we used simple counts across reviews and trials. Counts across spreadsheets were done using R and our code is at https://osf.io/dv6cw/?view_only=0becaacc45884754b09fd1f54db0c495 . To estimate how much money might have been spent on the trials, we used three estimates of the cost-per-participant to give a range of possible values for total spend:

Estimate 1: An estimate of the cost-per-participant for the UK’s National Institute for Health Research Health Technology Assessment (NIHR HTA) Programme trials of 2987 GBP. This was calculated based on a median cost per NIHR HTA trial of 1,433,978 GBP for 2011–2016 [ 10 ] and a median final recruitment target for NIHR HTA trials of 480 for 2004–2016 [ 11 ].

Estimate 2: The median cost-per-participant of 41,413 USD found for pivotal clinical benefit trials supporting the US approval of new therapeutic agents, 2015–2017 [ 12 ].

Estimate 3: The 2012 average cost-per-participant for UK trials of 9758 EUR found by Europe Economics [ 13 ].

These estimates were all converted into GBP using https://www.currency-converter.org.uk to get the exchange rate on 1st January in the latest year of trials covered by the estimate (i.e. 2017 for E2 and 2012 for E3). These were then all converted to 2021 GBP on 11 August 2021 using https://www.inflationtool.com , making E1 £3,256, E2 £35,918 and E3 £9,382. We acknowledge that these are unlikely to be exact for any given trial in our sample, but they were intended to give ballpark average figures to promote discussion.

Scale, participants and money

We extracted data for 1659 randomised trials spread across 96 reviews from 49 of the 53 clinical Cochrane Review Groups. The remaining four Review Groups published no eligible reviews in our time period. The 96 included reviews involved 546 review authors. Trials in 84 countries, as well as 193 multinational trials, are included. Risk of bias information was not available for 19 trials, meaning our risk of bias sample is 1640 trials. Almost all reviews (94) exclusively used Cochrane’s original risk of bias tool (see Supplementary File 1 ) rather than the new Risk of Bias tool (version 2.0) [ 14 ]. Cochrane RoB 1.0 has six domains of bias (sequence generation; allocation concealment; blinding of participants, personnel and outcome assessors; incomplete outcome data; selective outcome reporting; other sources of bias), while RoB 2.0 has five domains (randomisation process; assignment and adherence to intervention; incomplete outcome data; outcome measurement; selective reporting). Where the old tool was used, we used review authors’ assessment of the overall risk of bias. For the two reviews that used Risk of Bias 2, we did not make individual risk of bias judgements for domains, but we did take a view on the overall risk of bias if the review authors did not do this. We did this by looking across the individual domains and making a choice of high, uncertain or low overall risk of bias based on the number of individual domains falling into each category. This was a judgement; we did not use a hard-and-fast rule. We had to do this for 40 trials.

The majority of trials (1013, or 62%) were high risk of bias (Table 1 ). These trials were spread across all 49 Cochrane Review Groups and over half of the Groups (28, or 57%) had zero low risk of bias trials included in the reviews we randomly selected. The clinical area covered by the Anaesthesia Review Group had the highest proportion of low risk of bias trials at 60% but this group included 19 trials with no risk of bias information (see Fig. 1 ).

Fig. 1: Risk of bias for included trials in randomly selected systematic reviews published between May 2020 and April 2021 by 49 Cochrane Review Groups

Some of the 84 countries in our sample contributed very few trials but Table 2 shows risk of bias data for the 17 countries that contributed 20 or more trials, as well as for multinational trials. The percentage of a country’s trials that were judged as low risk of bias reached double figures for multinational trials (23%) and five individual countries: Australia (10%), France (13%), India (10%), Japan (10%) and the UK (11%). The full country breakdown is given in Supplementary File 2 .

Participants

The 1659 included trials involved a total of 398,410 participants. The majority of these (222,850, or 56%) were in high risk of bias trials (Table 1 ).

Table 3 shows estimates for the amount of money spent on trials in each of the three risk of bias categories.

Using our low estimate for cost-per-participant (estimate 1 from NIHR HTA trials), we get an estimated spend of £726 million on high risk of bias trials. Our high estimate (estimate 2 from USA drug approval trials) gives an equivalent figure of over £8 billion. Based on an annual spend of £76 million for the UK’s NIHR HTA programme [ 15 ], the first figure, our lowest estimate, would be sufficient to fund the programme for almost a decade, while the second figure would fund it for over a century.
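The arithmetic behind these figures is simple multiplication; the short Python sketch below (ours, for illustration only) reproduces the ballpark totals from the 222,850 participants in high risk of bias trials and the three 2021 GBP cost-per-participant estimates given above.

```python
# Participants in high risk of bias ("bad") trials in our sample.
bad_trial_participants = 222_850

# Inflation-adjusted cost-per-participant estimates (2021 GBP).
estimates = {
    "E1 (NIHR HTA)": 3_256,
    "E2 (US approval trials)": 35_918,
    "E3 (Europe Economics)": 9_382,
}

for name, cost_per_participant in estimates.items():
    total = bad_trial_participants * cost_per_participant
    print(f"{name}: about £{total / 1e9:.2f} billion spent on bad trials")
# E1 gives roughly £0.73 billion (£726 million) and E2 roughly £8.0 billion,
# matching the low and high estimates quoted in the text.
```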

While looking at scale, participants and money, we made a few other secondary observations. To avoid distracting attention from our main points, we present these observations in Supplementary File 3 .

Bad trials — ones where we have little confidence in the results — are not just common, they represent the majority of trials across all clinical areas in all countries. Over half of all trial participants will be in one. Our estimates suggest that the money spent on these bad trials would fund the UK’s largest public funder of trials for anything between a decade and a century. It is a wide range but either way, it is a lot of money. Had our random selection produced a different set of reviews, or we had assessed all those published in the last 1, 5, 10 or 20 years, we have no reason to believe that the headline result would have been different. Put simply, most randomised trials are bad.

Despite this, we think our measure of bad is actually conservative because we have only considered the risk of bias. We have not attempted to judge whether trials asked important research questions, whether they involved the right participants and whether their outcomes were important to decision-makers such as patients and health professionals nor have we attempted to comment on the many other decisions that affect the usefulness of a trial [ 16 , 17 ]. In short, the picture our numbers paint is undoubtedly gloomy, but the reality is probably worse.

Five recommendations for change

Plenty of ideas have been suggested about what must change [ 1 , 3 , 4 , 5 , 6 , 7 , 8 ], but we propose just five here because the scale of the problem is so great that providing focus might avoid being overwhelmed into inaction. We think these five recommendations, if implemented, would reduce the number of bad trials and could do so quite quickly.

Recommendation 1: do not fund a trial unless the trial team contains methodological and statistical expertise

Doing trials is a team sport. These teams need experienced methodologists and statisticians. We do not know how many trials fail to involve experienced methodologists and statisticians but we expect it to be a high proportion given the easily avoidable design errors seen in so many trials. It is hard to imagine doing, say, bowel surgery without involving people who have been trained in, and know how to do, bowel surgery. Sadly, the same does not seem to be true for trial design and statistical analysis of trial data. Our colleague Darren Dahly, a trial statistician, neatly captured the problem in a series of ironic tweets sent at the end of 2020:

Fig. a: Screenshots of the tweets

These raise a smile but make a very serious point: we would not tolerate statisticians doing surgery so why do we tolerate the reverse? Clearly, this is not about surgeons, it is about not having the expertise needed to do the job properly.

Recommendation 2: do not give ethical approval for a trial unless the trial team contains methodological and statistical expertise

As for recommendation 1, but for ethical approval. All trials need ethical approval, and the use of poor methods should be seen as an ethical concern [ 3 ]. No patient or member of the public should be in a bad trial, and ethics committees, like funders, have a duty to stop this happening. Ethics committees should always consider whether there is adequate methodological and statistical expertise within the trial team. Indeed, we think public and patient contributors on ethics committees should routinely ask the question ‘Who is the statistician and who is the methodologist?’ and, if the answer is unsatisfactory, ethical approval should not be awarded until a name can be put against these roles.

Recommendation 3: use a risk of bias tool at trial design

This is the simplest of our recommendations. Risk of bias tools were developed to support the interpretation of trial results in systematic reviews. However, as Yordanov and colleagues pointed out in 2015 [ 5 ], by the time a trial is being assessed in a review the horse has bolted and nothing can be changed. They considered 142 high risk of bias trials and found the four most common methodological problems to be exclusion of patients from analysis (50 trials, 35%), lack of blinding with a patient-reported outcome (27 trials, 19%), lack of blinding when comparing a non-drug treatment to nothing (23 trials, 16%) and poor methods to deal with missing data (22 trials, 15%). They judged the first and last of these to be easy to fix at the design stage, while the two blinding problems were more difficult but not impossible to deal with. Sadly, the trial teams themselves had not addressed any of these problems.

Applying a risk of bias tool at the trial design phase, having the methodological and statistical expertise to correctly interpret the results and then making any necessary changes to the trial, would help to avoid some of the problems we and others [ 3 , 4 , 5 , 6 , 7 , 8 ] highlight. Funders could ask to see the completed risk of bias tool, as could ethics committees. No trial should be high risk of bias.
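To make this concrete, here is a minimal sketch, in Python, of what a design-stage walk-through of Cochrane-style risk of bias domains could look like. The domain names are those of the Cochrane tool referred to above; everything else (the data structure, the function, the example answers) is our own illustration, not something described in this study.

```python
# Minimal sketch: a design-stage walk-through of Cochrane-style risk of bias domains.
# The domain names follow the Cochrane risk of bias tool; the data structure, function
# and example answers below are illustrative assumptions, not part of the published study.

DOMAINS = [
    "Random sequence generation",
    "Allocation concealment",
    "Blinding of participants and personnel",
    "Blinding of outcome assessment",
    "Incomplete outcome data",
    "Selective reporting",
]

def design_stage_review(design_answers):
    """Return an overall judgement from per-domain answers ('low', 'unclear', 'high')."""
    judgements = [design_answers.get(d, "unclear") for d in DOMAINS]
    if "high" in judgements:
        return "high risk of bias: revise the design before seeking funding or ethical approval"
    if "unclear" in judgements:
        return "unclear risk of bias: specify the missing methodological detail"
    return "low risk of bias"

# Example: a draft protocol that has not yet specified how allocation will be concealed.
draft = {
    "Random sequence generation": "low",
    "Allocation concealment": "unclear",
    "Blinding of participants and personnel": "low",
    "Blinding of outcome assessment": "low",
    "Incomplete outcome data": "low",
    "Selective reporting": "low",
}
print(design_stage_review(draft))
```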

Recommendation 4: train and support more methodologists and statisticians

Recommendations 1, 2 and 3 all lead to a need for more methodologists and statisticians. This has a cost but it would probably be much less than the money wasted on bad trials. See recommendation 5.

Recommendation 5: put more money into applied methodology research and supporting infrastructure

Methodology research currently runs mostly on love, not money. This seems odd when over 60% of trials are so methodologically flawed that we cannot believe their results, and we are uncertain whether we should believe the results of another 30%.

In 2015, David Moher and Doug Altman proposed that 0.1% of funders’ and publishers’ budgets could be set aside for initiatives to reduce waste and improve the quality, and thus value, of research publications [ 6 ]. That was for publications but the same could be done for trials, although we would suggest a figure closer to 10% of funders’ budgets. All organisations that fund trials should also be funding applied work to improve trial methodology, including supporting the training of more methodologists and statisticians. There should also be funding mechanisms to ensure methodology knowledge is effectively disseminated and implemented. Dissemination is a particular problem and the UK’s only dedicated methodology funder, the Medical Research Council-NIHR ‘Better Methods, Better Research’ Panel, acknowledges this in its Programme Aims [ 18 ].

Implementing these five recommendations will require effort and investment, but doing nothing is not an option that anyone should accept. We have shown that 220,850 people were enrolled in trials judged to be so methodologically flawed that we can have little confidence in their results. A further 127,290 people joined trials where it is unclear whether we should believe the results. These numbers represent 88% of all trial participants in our sample. This is a betrayal of those participants’ hopes, goodwill and time. Even our lowest cost-per-participant estimate would suggest that more than £1 billion was spent on these bad and possibly bad trials.

The question for everyone associated with designing, funding and approving trials is this: how many good trials never happen because bad ones are done instead? The cost of this research waste is not only financial. Randomised trials have the potential to improve health and wellbeing, change lives for the better and support economies through healthier populations. But poor evidence leads to poor decisions [ 19 ]. Society will only see the potential benefits of randomised trials if these studies are good, and, at the moment, most are not.

In this study, we have concentrated on risk of bias. What makes our results particularly troubling is that the changes needed to move a trial from high risk of bias to low risk of bias are often simple and cheap. However, this is also positive in relation to changing what will happen in the future. For example, Yordanov and colleagues estimated that easy methodological adjustments at the design stage would have made important improvements to 42% (95% confidence interval = 36 to 49%) of trials with risk of bias concerns [ 5 ]. Their explanation for these adjustments not being made in the trials was a lack of input from methodologists and statisticians at the trial planning stage combined with insufficient knowledge of research methods among the trial teams. If we were to ask a statistician to operate on a patient, we would rightly fear for the patient: proposing that a trial is designed and run without research methods expertise should induce the same fear.

In 2009, Iain Chalmers and Paul Glasziou estimated that 85% of research spending is wasted due to, among other things, poor design and incomplete reporting [ 7 ]. Over a decade later, our estimate is that 88% of trial spending is wasted. Without addressing the fundamental problem of trials being done by people ill-equipped to do them, a similar study a decade from now will once again find that the majority of trials across all clinical areas in all countries are bad.

Our work, and that of others before us [ 1 , 3 , 4 , 5 , 6 , 7 , 8 ], makes clear that a large amount of the money we put into trials globally is being wasted. Some of that money should be repurposed to fund our five recommendations. This may well lead to fewer trials overall but it would generate more good trials and mean that a greater proportion of trial data is of the high quality needed to support and improve patient and public health.

That so much research, and so many trials, are bad is indeed a scandal. That it continues decades after others highlighted the problem is a bigger scandal. Even the tiny slice of global research featured in our study describes trials that involved hundreds of thousands of people and cost hundreds of millions of pounds, but which led to little or no useful information.

The COVID-19 pandemic has been a time for many things, including reflection. As many countries start to look to what can be learnt, all of us connected with trials should put rigour and methodology where it belongs — at the centre of our science. We think our five recommendations are a good place to start.

To quote Doug Altman ‘We need less research, better research, and research done for the right reasons’ [ 1 ]. Quite so.

Availability of data and materials

All our data are available at https://osf.io/dv6cw/?view_only=0becaacc45884754b09fd1f54db0c495 .

Abbreviations

BMJ: British Medical Journal

COVID-19: Coronavirus disease 2019

GBP: British Pound Sterling

HTA: Health Technology Assessment

NIHR: National Institute for Health Research

UK: United Kingdom

USA: United States of America

USD: United States Dollar

References

1. Altman DG. The scandal of poor medical research. BMJ. 1994;308:283.
2. Matthews R, Chalmers I, Rothwell P. Douglas G Altman: statistician, researcher, and driving force behind global initiatives to improve the reliability of health research. BMJ. 2018;362:k2588.
3. Van Calster B, Wynants L, Riley RD, van Smeden M, Collins GS. Methodology over metrics: current scientific standards are a disservice to patients and society. J Clin Epidemiol. 2021;S0895-4356(21)00170-0. https://doi.org/10.1016/j.jclinepi.2021.05.018
4. Glasziou P, Chalmers I. Research waste is still a scandal—an essay by Paul Glasziou and Iain Chalmers. BMJ. 2018;363:k4645.
5. Yordanov Y, Dechartres A, Porcher R, Boutron I, Altman DG, Ravaud P. Avoidable waste of research related to inadequate methods in clinical trials. BMJ. 2015;350:h809.
6. Moher D, Altman DG. Four proposals to help improve the medical research literature. PLoS Med. 2015;12(9):e1001864. https://doi.org/10.1371/journal.pmed.1001864
7. Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374(9683):86–9.
8. Macleod MR, Michie S, Roberts I, Dirnagl U, Chalmers I, Ioannidis JPA, et al. Biomedical research: increasing value, reducing waste. Lancet. 2014;383(9912):101–4.
9. Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al. Cochrane handbook for systematic reviews of interventions, version 6.2 (updated February 2021), chapters 7 and 8. Cochrane; 2021. Available from www.training.cochrane.org/handbook
10. Chinnery F, Bashevoy G, Blatch-Jones A, et al. National Institute for Health Research (NIHR) Health Technology Assessment (HTA) Programme research funding and UK burden of disease. Trials. 2018;19:87. https://doi.org/10.1186/s13063-018-2489-7
11. Walters SJ, Bonacho dos Anjos Henriques-Cadby I, Bortolami O, et al. Recruitment and retention of participants in randomised controlled trials: a review of trials funded and published by the United Kingdom Health Technology Assessment Programme. BMJ Open. 2017;7:e015276. https://doi.org/10.1136/bmjopen-2016-015276
12. Moore TJ, Heyward J, Anderson G, et al. Variation in the estimated costs of pivotal clinical benefit trials supporting the US approval of new therapeutic agents, 2015–2017: a cross-sectional study. BMJ Open. 2020;10:e038863. https://doi.org/10.1136/bmjopen-2020-038863
13. Hawkes N. UK must improve its recruitment rate in clinical trials, report says. BMJ. 2012;345:e8104. https://doi.org/10.1136/bmj.e8104
14. Sterne JAC, Savović J, Page MJ, Elbers RG, Blencowe NS, Boutron I, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ. 2019;366:l4898. https://doi.org/10.1136/bmj.l4898
15. Williams H. The NIHR Health Technology Assessment Programme: research needed by the NHS. https://www.openaccessgovernment.org/nihr-health-technology-assessment-programme-nhs/85065/ [Accessed 11/10/2021].
16. Chalmers I, Bracken MB, Djulbegovic B, Garattini S, Grant J, Gülmezoglu AM, et al. How to increase value and reduce waste when research priorities are set. Lancet. 2014;383:156–65.
17. Ioannidis JPA, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, et al. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383:166–75.
18. MRC-NIHR Better Methods, Better Research. Programme aims. https://mrc.ukri.org/funding/science-areas/better-methods-better-research/overview/#aims [Accessed 30/9/2021].
19. Heneghan C, Mahtani KR, Goldacre B, Godlee F, Macdonald H, Jarvies D. Evidence based medicine manifesto for better healthcare: a response to systematic bias, wastage, error and fraud in research underpinning patient care. Evid Based Med. 2017;22:120–2.

Acknowledgements

We would like to thank Brendan Palmer for helping SP with R coding and Darren Dahly for confirming that he was happy for us to use his tweets. The Health Services Research Unit, University of Aberdeen, receives core funding from the Chief Scientist Office of the Scottish Government Health Directorates. This work was done as part of the Trial Forge initiative to improve trial efficiency ( https://www.trialforge.org ).

This work was funded by Ireland’s Health Research Board through the Trial Methodology Research Network (HRB-TMRN) as a summer internship for SP.

Author information

Authors and affiliations

Health Services Research Unit, University of Aberdeen, Foresterhill, Aberdeen, AB25 2ZD, UK

Stefania Pirosca & Shaun Treweek

Trials Research and Methodologies Unit, HRB Clinical Research Facility, University College Cork, Cork, Ireland

Frances Shiely

School of Public Health, University College Cork, Cork, Ireland

Northern Ireland Methodology Hub, Queen’s University Belfast, Belfast, UK

Mike Clarke

Contributions

ST had the original idea for the work. ST, FS and MC designed the study. SP identified reviews and trials, extracted data and did the analysis, in discussion with ST and FS. ST and SP wrote the first draft and all authors contributed to further drafts. All authors approved the final draft.

Corresponding author

Correspondence to Shaun Treweek .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

ST is an Editor-in-Chief of Trials . MC, FS and ST are actively involved in initiatives to improve the quality of trials and all seek funding to support these initiatives and therefore have an interest in seeing funding for trial methodology increased. SP has no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

The Cochrane Collaboration’s ‘old’ tool for assessing risk of bias.

Additional file 2.

Risk of bias data for all countries in our sample.

Additional file 3.

Additional observations.


About this article

Cite this article

Pirosca, S., Shiely, F., Clarke, M. et al. Tolerating bad health research: the continuing scandal. Trials 23 , 458 (2022). https://doi.org/10.1186/s13063-022-06415-5

Received : 26 November 2021

Accepted : 18 May 2022

Published : 02 June 2022

DOI : https://doi.org/10.1186/s13063-022-06415-5


  • Randomised trials
  • Research waste
  • Risk of bias
  • Statisticians
  • Methodologists



The 7 deadly sins of research

The most common stumbling blocks.

Gemma Conroy

Credit: erhui1979/Getty

10 December 2019


While strategies such as p-hacking, double-dipping, and salami slicing may boost the chances of obtaining exciting and novel results, these dodgy practices are driving down the quality of research.

Below are seven common traps that researchers can fall into when pursuing big discoveries.

1. P-hacking

In 2005, Stanford University epidemiologist John Ioannidis made a bold claim: most published research findings are false. The main culprits? Bias, small sample sizes, and p-hacking.

For almost a century, scientists have used the ‘p-value’ to determine whether their results are due to a real effect or simply chance. With the pressure to publish only significant results, some researchers may be tempted to push their p-values below the cutoff.

P-hacking can be as simple as removing pesky outliers during analysis or as laborious as running several analyses until significant results are achieved.

Scientists may also collect more data to increase their chances of a publishable result, says Robert MacCoun, a social psychologist at Stanford.

“The problem isn't that it is bad to collect more data,” says MacCoun. “The problem is created when we only do that for hypotheses we like and want to support, and when editors only publish the subset of studies that cross that threshold.”
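To see why this matters, here is a small simulation we have added for illustration (it is not from MacCoun or the studies above). It compares two groups drawn from the same distribution, peeks at the p-value after every extra batch of data, and stops as soon as p < 0.05; the sample sizes and batch size are arbitrary assumptions.

```python
# Sketch: optional stopping ("peek after every batch, stop when p < 0.05") on pure noise.
# With no true effect, the nominal 5% false-positive rate is inflated well above 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def peeking_trial(n_start=20, n_max=100, batch=10, alpha=0.05):
    a = list(rng.normal(size=n_start))
    b = list(rng.normal(size=n_start))   # both groups drawn from the same distribution
    while len(a) <= n_max:
        p = stats.ttest_ind(a, b).pvalue
        if p < alpha:                     # "significant" -- stop and report
            return True
        a.extend(rng.normal(size=batch))  # otherwise collect another batch and re-test
        b.extend(rng.normal(size=batch))
    return False

false_positives = sum(peeking_trial() for _ in range(2000))
print(f"False-positive rate with peeking: {false_positives / 2000:.1%}")  # well above 5%
```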

2. HARKing (Hypothesizing After Results Are Known)

Presenting ‘post-results hypotheses’ as if they were created before an experiment and leaving out the original hypotheses when reporting results are examples of HARKing.

A 2017 analysis of six recent surveys on HARKing self-admission rates found that 43% of researchers have HARKed at least once in their career.

Kevin Murphy, a psychologist at the University of Limerick in Ireland, says that using results to develop hypotheses “distorts the scientific process”.

“HARKing creates the real possibility that you will make up a story to ‘explain’ what is essentially a sampling error,” says Murphy, who studies research practices. “This will mislead you and your readers into thinking that you have gained some new understanding of a meaningful phenomenon.”

3. Cherry-picking data

Glenn Begley, an oncologist at BioCurate, spoke about an alarming interaction he had with a respected oncologist whose name he declined to make public.

Begley, who was vice president and global head of hematology and oncology research at US pharmaceutical company, Amgen, had attempted to replicate a high-profile study on tumour growth in animal models.

After several tries, he failed to produce the results that were published in Cancer Cell, one of the world’s most well-regarded cancer journals.

A chance meeting with the lead author resulted in a shocking admission, which Begley recounted to Science in 2011: "He said, 'We did this experiment a dozen times, got this answer once, and that's the one we decided to publish.'"

This is a classic case of cherry-picking data, where researchers only publish the results that best support their hypothesis.

Some of the consequences of cherry-picked data include biased results and making wide-reaching generalizations based on limited samples.
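The arithmetic behind why “run it a dozen times, publish the one that worked” is so misleading is easy to check. The snippet below is our own illustration, assuming no real effect and a conventional 5% significance threshold.

```python
# Sketch: if there is no real effect and each run has a 5% chance of a false positive,
# the chance that at least one of 12 independent runs looks "publishable" is substantial.
alpha = 0.05
n_runs = 12
p_at_least_one = 1 - (1 - alpha) ** n_runs
print(f"P(at least one false positive in {n_runs} runs) = {p_at_least_one:.0%}")  # about 46%
```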

4. Data fabrication

From fake stem cell lines to duplicated graphs, it’s surprisingly easy for scientists to publish fabricated or “made up” data in prominent journals.

A 2009 systematic review and meta-analysis of 39 global surveys on research misconduct published in PLoS One found that almost 2% of scientists on average admitted to fabricating data at least once, but around 14% of researchers have witnessed colleagues falsifying results.

Last year, a Retraction Watch report on roughly 10,500 papers revealed that half of all retractions over the past two decades were due to fraud.

5. Salami slicing

For some researchers, a large dataset is considered a surefire way to publish multiple papers for the price of one.

Known as salami slicing, the process involves splitting a dataset into smaller chunks and writing separate papers on these ‘sliced’ results. ‘Salami publications’ share the same hypotheses, samples, and methods, which can result in misleading findings.

A 2019 analysis of more than 55,000 health science papers revealed that the number of publications per study increased by more than 20% between 1970 and 2014.

6. Not publishing negative results

Journals are biased towards publishing studies that find effects, and this makes it difficult for researchers to publish non-significant results. This bias can deter researchers from even attempting to publish their ‘failed’ studies, perpetuating the cycle further.

“People only publish when something works, and never when it doesn’t,” says Jean-Jacques Orban de Xivry, a neuroscientist at the Catholic University of Leuven in Belgium. “This means that we only publish half of the studies that we produce in laboratories. That’s troubling to me.”

According to a study of 221 social science experiments, only 21% of null findings make their way into the pages of a journal and two-thirds are never written up. By contrast, more than 95% of the experiments that produced positive findings are included in manuscripts, and more than half end up being published.

7. Double-dipping

Circular analysis, also known as double-dipping, involves using the same data multiple times to achieve a significant result.

As Orban de Xivry explains , double-dipping also includes “analyzing your data based on what you see in the data,” leading to inflated effects, invalid conclusions, and false positive results.

Double-dipping is particularly rife in neuroimaging studies, according to a 2009 paper led by neuroscientist, Nikolaus Kriegeskorte, from the US National Institute of Mental Health. The analysis found that 42% of papers on functional magnetic resonance imaging (fMRI) experiments published in leading journals were bolstered by circular analysis.

In a 2019 paper published in eLife, Orban de Xivry recommends that researchers develop a clear outline of analysis criteria before diving into the data, to avoid falling into the double-dipping trap.
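As an illustration of how circular analysis inflates effects, here is a short simulation we have added (it is not from the papers above): it picks the “most responsive” measures out of pure noise and then estimates their effect on the same data versus on an independent dataset. All numbers are arbitrary assumptions.

```python
# Sketch: double-dipping. Select the "best" measures using one dataset, then estimate
# their effect size on (a) the same data and (b) independent data. Pure noise throughout.
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_measures, top_k = 20, 200, 10

data = rng.normal(size=(n_subjects, n_measures))   # no true effect anywhere
selected = np.argsort(data.mean(axis=0))[-top_k:]  # pick measures with the largest means

circular = data[:, selected].mean()                # effect estimated on the same data
fresh = rng.normal(size=(n_subjects, n_measures))  # independent replication data
unbiased = fresh[:, selected].mean()               # effect estimated on new data

print(f"Circular estimate:    {circular:+.2f}")    # clearly above zero despite no effect
print(f"Independent estimate: {unbiased:+.2f}")    # close to zero
```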

A cautionary tale

In 2011, an influential psychology paper appeared to show that listening to certain songs can change a person’s age.

The point of the paper was to show that, in many cases, a researcher is more likely to find evidence that a false effect exists (a false-positive finding) than to correctly find evidence that it does not.

Most often it’s not the result of malicious intent, but is due to the array of decisions researchers must make throughout a study: Should more data be collected? Should some observations be excluded? Which control variables should be considered? Should specific measures be combined or transformed, or both?

“When we as researchers face ambiguous analytic decisions, we will tend to conclude, with convincing self-justification, that the appropriate decisions are those that result in statistical significance,” the team wrote.

With a few tweaks to how they analyzed, interpreted and reported the data, the researchers, from the University of Pennsylvania and the University of California, Berkeley, achieved a scientifically impossible outcome from their two experiments. But, as they wrote in Psychological Science, “Everything reported here actually happened.”

Bad Studies: Yes, They Exist. How Can You Identify Them?

Vaccine Education Center

Bad studies are those in which the standards of scientific protocol were not met. For example, a study may not use appropriate controls — or any controls for that matter. The study may have been done using a method that doesn’t make sense for the kind of question being asked, or inappropriate statistical tests could have been employed. Bad and misleading studies are published all the time. Bad studies can even be published in good journals. So what to do?

Dr. Offit talks about how to evaluate scientific studies using the two pillars of science — peer review and reproducibility.

Paul Offit, MD: Hi, my name is Paul Offit. I am talking to you today from the Vaccine Education Center, here at Children’s Hospital of Philadelphia. I think one thing parents and doctors and the media, frankly, struggle with is what to do with studies that aren’t very good. So for example, there are in this world about 6,500 medical and scientific journals; those journals publish about 4,000 papers every day. So, those papers not surprisingly follow a bell-shaped curve: some are excellent, some are awful, most are more or less mediocre. But the point being that there is a lot of bad and misleading science published all the time. So, what to do? How can we sort all this out?

I think that scientific studies basically stand on two pillars. One is peer review, meaning that when you submit a paper, your paper is reviewed by people who are experts in your field, who will recommend its publication or not. And the second — I think the stronger pillar on which it stands — is reproducibility, which is to say that if you’re right, you’ll ultimately be shown to be right when other investigators in other parts of this world or other parts of this country reproduce what you’ve found.

And it’s not even just that bad studies get published in bad journals, or so-called predatory journals; bad studies can be published in good journals. So for example, in the early 1980s there was a paper published in The New England Journal of Medicine, arguably the best clinical medical journal in the United States. They claimed that excess coffee drinking increased your risk of pancreatic cancer. The first author of that study was Brian MacMahon. He was at the Harvard School of Public Health; he was an epidemiologist and a good one. But that study was wrong. Excess coffee drinking didn’t cause an increased risk of pancreatic cancer, and we came to know that as more and more people tried to find what Brian MacMahon had found but couldn’t.

Another example would be a paper that was published in The Lancet in the late 1990s. This was published by a British intestinal surgeon named Andrew Wakefield, claiming that the combination measles, mumps, rubella vaccine caused autism. Now, The Lancet is an excellent journal; it’s the oldest, frankly, of the general medical journals. And again, this was a thin study at best, really nothing was studied; it was just a case series of eight children who had autism within a month of receiving the vaccine, proving, frankly, only that the MMR vaccine doesn’t prevent autism. And now 17 different studies have been published in seven different countries, on several different continents, involving hundreds of thousands of children, showing that that hypothesis was wrong.

So, I do think that my advice would be that when you see a study that makes a surprising or outlandish claim, wait — wait to see whether or not it’s reproduced by others, because that’s really the strongest pillar on which good science stands.

Wordvice

How to Write a Research Hypothesis: Good & Bad Examples

What is a research hypothesis?

A research hypothesis is an attempt at explaining a phenomenon or the relationships between phenomena/variables in the real world. Hypotheses are sometimes called “educated guesses”, but they are in fact (or let’s say they should be) based on previous observations, existing theories, scientific evidence, and logic. A research hypothesis is also not a prediction—rather, predictions are (or should be) based on clearly formulated hypotheses. For example, “We tested the hypothesis that KLF2 knockout mice would show deficiencies in heart development” is an assumption or prediction, not a hypothesis.

The research hypothesis at the basis of this prediction is “the product of the KLF2 gene is involved in the development of the cardiovascular system in mice”—and this hypothesis is probably (hopefully) based on a clear observation, such as that mice with low levels of Kruppel-like factor 2 (which KLF2 codes for) seem to have heart problems. From this hypothesis, you can derive the idea that a mouse in which this particular gene does not function cannot develop a normal cardiovascular system, and then make the prediction that we started with. 

What is the difference between a hypothesis and a prediction?

You might think that these are very subtle differences, and you will certainly come across many publications that do not contain an actual hypothesis or do not make these distinctions correctly. But considering that the formulation and testing of hypotheses is an integral part of the scientific method, it is good to be aware of the concepts underlying this approach. The two hallmarks of a scientific hypothesis are falsifiability (an evaluation standard that was introduced by the philosopher of science Karl Popper in 1934) and testability —if you cannot use experiments or data to decide whether an idea is true or false, then it is not a hypothesis (or at least a very bad one).

So, in a nutshell, you (1) look at existing evidence/theories, (2) come up with a hypothesis, (3) make a prediction that allows you to (4) design an experiment or data analysis to test it, and (5) come to a conclusion. Of course, not all studies have hypotheses (there is also exploratory or hypothesis-generating research), and you do not necessarily have to state your hypothesis as such in your paper. 

But for the sake of understanding the principles of the scientific method, let’s first take a closer look at the different types of hypotheses that research articles refer to and then give you a step-by-step guide for how to formulate a strong hypothesis for your own paper.

Types of Research Hypotheses

Hypotheses can be simple , which means they describe the relationship between one single independent variable (the one you observe variations in or plan to manipulate) and one single dependent variable (the one you expect to be affected by the variations/manipulation). If there are more variables on either side, you are dealing with a complex hypothesis. You can also distinguish hypotheses according to the kind of relationship between the variables you are interested in (e.g., causal or associative ). But apart from these variations, we are usually interested in what is called the “alternative hypothesis” and, in contrast to that, the “null hypothesis”. If you think these two should be listed the other way round, then you are right, logically speaking—the alternative should surely come second. However, since this is the hypothesis we (as researchers) are usually interested in, let’s start from there.

Alternative Hypothesis

If you predict a relationship between two variables in your study, then the research hypothesis that you formulate to describe that relationship is your alternative hypothesis (usually H1 in statistical terms). The goal of your hypothesis testing is thus to demonstrate that there is sufficient evidence that supports the alternative hypothesis, rather than evidence for the possibility that there is no such relationship. The alternative hypothesis is usually the research hypothesis of a study and is based on the literature, previous observations, and widely known theories. 

Null Hypothesis

The hypothesis that describes the other possible outcome, that is, that your variables are not related, is the null hypothesis ( H0 ). Based on your findings, you choose between the two hypotheses—usually that means that if your prediction was correct, you reject the null hypothesis and accept the alternative. Make sure, however, that you are not getting lost at this step of the thinking process: If your prediction is that there will be no difference or change, then you are trying to find support for the null hypothesis and reject H1. 

Directional Hypothesis

While the null hypothesis is obviously “static”, the alternative hypothesis can specify a direction for the observed relationship between variables—for example, that mice with higher expression levels of a certain protein are more active than those with lower levels. This is then called a one-tailed hypothesis. 

Another example for a directional one-tailed alternative hypothesis would be that 

H1: Attending private classes before important exams has a positive effect on performance. 

Your null hypothesis would then be that

H0: Attending private classes before important exams has no/a negative effect on performance.

Nondirectional Hypothesis

A nondirectional hypothesis does not specify the direction of the potentially observed effect, only that there is a relationship between the studied variables—this is called a two-tailed hypothesis. For instance, if you are studying a new drug that has shown some effects on pathways involved in a certain condition (e.g., anxiety) in vitro in the lab, but you can’t say for sure whether it will have the same effects in an animal model or maybe induce other/side effects that you can’t predict and potentially increase anxiety levels instead, you could state the two hypotheses like this:

H1: The drug, which has so far only been tested in the lab, (somehow) affects anxiety levels in an anxiety mouse model.

You then test this nondirectional alternative hypothesis against the null hypothesis:

H0: The drug, which has so far only been tested in the lab, has no effect on anxiety levels in an anxiety mouse model.
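If it helps to see how the directional/nondirectional distinction plays out in an analysis, here is a minimal sketch using made-up exam scores; the data, group names and the use of SciPy are assumptions purely for illustration.

```python
# Sketch: the same data tested against a nondirectional (two-sided) and a
# directional (one-sided) alternative hypothesis. Data are made up for illustration.
from scipy import stats  # the `alternative` keyword requires SciPy >= 1.6

private_classes = [78, 85, 90, 74, 88, 81, 93, 79]   # exam scores, attended private classes
no_classes      = [72, 80, 69, 75, 71, 83, 68, 77]   # exam scores, no private classes

# H1 (nondirectional): the groups differ in performance.
two_sided = stats.ttest_ind(private_classes, no_classes, alternative="two-sided")

# H1 (directional): private classes have a positive effect on performance.
one_sided = stats.ttest_ind(private_classes, no_classes, alternative="greater")

print(f"Two-sided p = {two_sided.pvalue:.3f}")
print(f"One-sided p = {one_sided.pvalue:.3f}")   # half the two-sided p for this positive t
```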

How to Write a Hypothesis for a Research Paper

Now that we understand the important distinctions between different kinds of research hypotheses, let’s look at a simple process of how to write a hypothesis.

Writing a Hypothesis Step:1

Ask a question, based on earlier research. Research always starts with a question, but one that takes into account what is already known about a topic or phenomenon. For example, if you are interested in whether people who have pets are happier than those who don’t, do a literature search and find out what has already been demonstrated. You will probably realize that yes, there is quite a bit of research that shows a relationship between happiness and owning a pet—and even studies that show that owning a dog is more beneficial than owning a cat! Let’s say you are so intrigued by this finding that you wonder:

What is it that makes dog owners even happier than cat owners? 

Let’s move on to Step 2 and find an answer to that question.

Writing a Hypothesis Step 2:

Formulate a strong hypothesis by answering your own question. Again, you don’t want to make things up, take unicorns into account, or repeat/ignore what has already been done. Looking at the dog-vs-cat papers your literature search returned, you see that most studies are based on self-report questionnaires on personality traits, mental health, and life satisfaction. What you don’t find is any data on actual (mental or physical) health measures, and no experiments. You therefore come up with the carefully thought-through hypothesis that it’s maybe the lifestyle of the dog owners, which includes walking their dog several times per day, engaging in fun and healthy activities such as agility competitions, and taking them on trips, that gives them that extra boost in happiness. You could therefore answer your question in the following way:

Dog owners are happier than cat owners because of the dog-related activities they engage in.

Now you have to verify that your hypothesis fulfills the two requirements we introduced at the beginning of this resource article: falsifiability and testability . If it can’t be wrong and can’t be tested, it’s not a hypothesis. We are lucky, however, because yes, we can test whether owning a dog but not engaging in any of those activities leads to lower levels of happiness or well-being than owning a dog and playing and running around with them or taking them on trips.  

Writing a Hypothesis Step 3:

Make your predictions and define your variables. We have verified that we can test our hypothesis, but now we have to define all the relevant variables, design our experiment or data analysis, and make precise predictions. You could, for example, decide to study dog owners (not surprising at this point), let them fill in questionnaires about their lifestyle as well as their life satisfaction (as other studies did), and then compare two groups of active and inactive dog owners. Alternatively, if you want to go beyond the data that earlier studies produced and analyzed and directly manipulate the activity level of your dog owners to study the effect of that manipulation, you could invite them to your lab, select groups of participants with similar lifestyles, make them change their lifestyle (e.g., couch potato dog owners start agility classes, very active ones have to refrain from any fun activities for a certain period of time) and assess their happiness levels before and after the intervention. In both cases, your independent variable would be “ level of engagement in fun activities with dog” and your dependent variable would be happiness or well-being . 

Examples of a Good and Bad Hypothesis

Let’s look at a few examples of good and bad hypotheses to get you started.

Good Hypothesis Examples

Bad Hypothesis Examples

Tips for Writing a Research Hypothesis

If you understood the distinction between a hypothesis and a prediction we made at the beginning of this article, then you will have no problem formulating your hypotheses and predictions correctly. To refresh your memory: We have to (1) look at existing evidence, (2) come up with a hypothesis, (3) make a prediction, and (4) design an experiment. For example, you could summarize your dog/happiness study like this:

(1) While research suggests that dog owners are happier than cat owners, there are no reports on what factors drive this difference. (2) We hypothesized that it is the fun activities that many dog owners (but very few cat owners) engage in with their pets that increases their happiness levels. (3) We thus predicted that preventing very active dog owners from engaging in such activities for some time and making very inactive dog owners take up such activities would lead to an increase and decrease in their overall self-ratings of happiness, respectively. (4) To test this, we invited dog owners into our lab, assessed their mental and emotional well-being through questionnaires, and then assigned them to an “active” and an “inactive” group, depending on… 

Note that you use “we hypothesize” only for your hypothesis, not for your experimental prediction, and “would” or “if – then” only for your prediction, not your hypothesis. A hypothesis that states that something “would” affect something else sounds as if you don’t have enough confidence to make a clear statement—in which case you can’t expect your readers to believe in your research either. Write in the present tense, don’t use modal verbs that express varying degrees of certainty (such as may, might, or could), and remember that you are not drawing a conclusion: without exaggerating, you are making a clear statement that you then, in a way, try to disprove. And if that happens, that is not something to fear but an important part of the scientific process.

Similarly, don’t use “we hypothesize” when you explain the implications of your research or make predictions in the conclusion section of your manuscript, since these are clearly not hypotheses in the true sense of the word. As we said earlier, you will find that many authors of academic articles do not seem to care too much about these rather subtle distinctions, but thinking very clearly about your own research will not only help you write better but also ensure that even that infamous Reviewer 2 will find fewer reasons to nitpick about your manuscript. 

Perfect Your Manuscript With Professional Editing

Now that you know how to write a strong research hypothesis for your research paper, you might be interested in our free AI proofreader , Wordvice AI, which finds and fixes errors in grammar, punctuation, and word choice in academic texts. Or if you are interested in human proofreading , check out our English editing services , including research paper editing and manuscript editing .

On the Wordvice academic resources website , you can also find many more articles and other resources that can help you with writing the other parts of your research paper , with making a research paper outline before you put everything together, or with writing an effective cover letter once you are ready to submit.

Unreliable Sources for Your Research Project


In conducting research for homework or an academic paper, you are basically conducting a search for facts: little tidbits of truth that you will assemble and arrange in an organized fashion to make an original point or claim. Your responsibility as a researcher is to understand the difference between fact and fiction, as well as the difference between fact and opinion .

When beginning your next assignment that requires sources, consider the credibility of those sources before including them in your final project.

Here are some common sources to avoid; each of these may include opinions and works of fiction disguised as facts.

Blogs

As you know, anyone can publish a blog on the Internet. The problem with using a blog as a research source is that there is no way to know the credentials of many bloggers or to get an understanding of the writer’s level of expertise.

People often create blogs to give themselves a forum to express their views and opinions. And many of these people consult less than reliable sources to form their beliefs. You could use a blog for a quote, but never use a blog as a serious source of facts for a research paper.

Personal Web Sites

A personal web page is much like a blog when it comes to being an unreliable research source. Web pages are created by the public, so you have to be careful when choosing them as sources. It's sometimes difficult to determine which websites are created by experts and professionals on a given topic.

If you think about it, using information from a personal web page is much like stopping a perfect stranger on the street and collecting information from him or her.

Wikis

Wiki websites can be informative, but they can also be untrustworthy. Wiki sites allow groups of people to add and edit the information contained on the pages. So it's easy to see how a wiki source might contain unreliable information.

The question that often arises when it comes to homework and research is whether it’s okay to use Wikipedia as a source of information. Wikipedia is a fantastic site with a wealth of great information, and it is the possible exception to the rule. Your teacher can tell you for certain if you can use Wikipedia as a source. At a minimum, Wikipedia offers a reliable overview of a topic to give you a strong foundation to start with. It also provides a list of resources where you can continue your own research.

Movies

Teachers, librarians, and college professors will tell you that students often believe things they’ve seen in movies. Whatever you do, don’t use a movie as a research source. Movies about historical events can contain kernels of truth, but unless it's a documentary, a movie is not an educational resource.

Historical Novels

Students often believe that historical novels are trustworthy sources because they indicate that they are “based on facts.” There is a significant difference between a factual work and a work that is based on facts. A novel that is based on a single fact can still contain ninety-nine percent fiction. Therefore, it's not advisable to use a historical novel as a historical resource .


Open Access

Peer-reviewed

Research Article

Poor statistical reporting, inadequate data presentation and spin persist despite editorial advice

Roles Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Software, Visualization, Writing – original draft, Writing – review & editing

Affiliations Sydney Medical School, University of Sydney, Sydney, NSW, Australia, Neuroscience Research Australia (NeuRA), Randwick, NSW, Australia

Roles Investigation, Methodology, Project administration, Writing – review & editing

Affiliations Neuroscience Research Australia (NeuRA), Randwick, NSW, Australia, University of New South Wales, Randwick, NSW, Australia

Roles Funding acquisition, Investigation, Methodology, Project administration, Writing – review & editing

* E-mail: [email protected]

Roles Conceptualization, Formal analysis, Investigation, Methodology, Project administration, Resources, Software, Visualization, Writing – review & editing

  • Joanna Diong, 
  • Annie A. Butler, 
  • Simon C. Gandevia, 
  • Martin E. Héroux

  • Published: August 15, 2018
  • https://doi.org/10.1371/journal.pone.0202121

Abstract

The Journal of Physiology and British Journal of Pharmacology jointly published an editorial series in 2011 to improve standards in statistical reporting and data analysis. It is not known whether reporting practices changed in response to the editorial advice. We conducted a cross-sectional analysis of reporting practices in a random sample of research papers published in these journals before (n = 202) and after (n = 199) publication of the editorial advice. Descriptive data are presented. There was no evidence that reporting practices improved following publication of the editorial advice. Overall, 76-84% of papers with written measures that summarized data variability used standard errors of the mean, and 90-96% of papers did not report exact p-values for primary analyses and post-hoc tests. 76-84% of papers that plotted measures to summarize data variability used standard errors of the mean, and only 2-4% of papers plotted raw data used to calculate variability. Of papers that reported p-values between 0.05 and 0.1, 56-63% interpreted these as trends or statistically significant. Implied or gross spin was noted incidentally in papers before (n = 10) and after (n = 9) the editorial advice was published. Overall, poor statistical reporting, inadequate data presentation and spin were present before and after the editorial advice was published. While the scientific community continues to implement strategies for improving reporting practices, our results indicate stronger incentives or enforcements are needed.

Citation: Diong J, Butler AA, Gandevia SC, Héroux ME (2018) Poor statistical reporting, inadequate data presentation and spin persist despite editorial advice. PLoS ONE 13(8): e0202121. https://doi.org/10.1371/journal.pone.0202121

Editor: Bart O. Williams, Van Andel Institute, UNITED STATES

Received: April 28, 2018; Accepted: July 27, 2018; Published: August 15, 2018

Copyright: © 2018 Diong et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data are within the paper and its Supporting Information files.

Funding: This work is supported by the National Health and Medical Research Council ( https://www.nhmrc.gov.au/ ), APP1055084. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Introduction

The accurate communication of scientific discovery depends on transparent reporting of methods and results. Specifically, information on data variability and results of statistical analyses are required to make accurate inferences.

The quality of statistical reporting and data presentation in scientific papers is generally poor. For example, one third of clinical trials in molecular drug interventions and breast cancer selectively report outcomes [ 1 ], 60-95% of biomedical research papers report statistical analyses that are not pre-specified or are different to published analysis plans [ 2 ], and one third of all graphs published in the prestigious Journal of the American Medical Association cannot be interpreted unambiguously [ 3 ]. In addition, reported results may differ from the actual statistical results. For example, distorted interpretation of statistically non-significant results (i.e. spin) is present in more than 40% of clinical trial reports [ 4 ].

Many reporting guidelines (e.g. the Consolidated Standards of Reporting Trials; CONSORT [ 5 ]) have been developed, endorsed and mandated by key journals to improve the quality of research reporting. Furthermore, journals have published editorial advice to advocate better reporting standards [ 6 – 9 ]. Nevertheless, it is arguable whether reporting standards have improved substantially [ 10 – 12 ].

In response to the poor quality of statistical reporting and data presentation in physiology and pharmacology, the Journal of Physiology published an editorial series to provide authors with clear, non-technical guidance on best-practice standards for data analysis, data presentation and reporting of results. Co-authored by the Journal of Physiology’s senior statistics editor and a medical statistician, the editorial series by Drummond and Vowler was jointly published in 2011 under a non-exclusive licence in the Journal of Physiology, the British Journal of Pharmacology, as well as several other journals. (The editorial series was simultaneously published, completely or in part, in Experimental Physiology, Advances in Physiology Education, Microcirculation, the British Journal of Nutrition, and Clinical and Experimental Pharmacology and Physiology.) The key recommendations by Drummond and Vowler include instructions to (1) report variability of continuous outcomes using standard deviations instead of standard errors of the mean, (2) report exact p-values for primary analyses and post-hoc tests, and (3) plot raw data used to calculate variability [ 13 – 15 ]. These recommendations were made so authors would implement them in future research reports. However, it is not known whether reporting practices in these journals have improved since the publication of this editorial advice.
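The reasoning behind the first of these recommendations is easy to demonstrate numerically. The short simulation below is our own illustration (simulated data, arbitrary parameters): the standard deviation describes the spread of the observations and stays roughly constant, whereas the standard error of the mean shrinks as the sample grows, which is why SEM-based summaries can make variable data look deceptively tidy.

```python
# Sketch: SD describes variability among observations; SEM describes the precision of the
# mean and shrinks with sample size, which is why SEM error bars can make noisy data look tidy.
import numpy as np

rng = np.random.default_rng(2)
population_sd = 10.0

for n in (10, 100, 1000):
    sample = rng.normal(loc=50.0, scale=population_sd, size=n)
    sd = sample.std(ddof=1)
    sem = sd / np.sqrt(n)
    print(f"n = {n:4d}   SD = {sd:5.2f}   SEM = {sem:5.2f}")
# The SD stays near 10 regardless of n; the SEM keeps shrinking as n grows.
```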

We conducted a cross-sectional analysis of research papers published in the Journal of Physiology and the British Journal of Pharmacology to assess reporting practices. Specifically, we assessed statistical reporting, data presentation and spin in a random sample of papers published in the four years before and four years after the editorial advice by Drummond and Vowler was published.

Materials and methods

PubMed search and eligibility criteria

All papers published in the Journal of Physiology and the British Journal of Pharmacology in the years 2007-2010 and 2012-2015 and indexed on PubMed were extracted using the search strategy: (J Physiol[TA] OR Br J Pharmacol[TA]) AND yyyy:yyyy[DP] NOT (editorial OR review OR erratum OR comment OR rebuttal OR crosstalk). Papers were excluded if they were editorials, reviews, errata, comments, rebuttals, or part of the Journal of Physiology’s Crosstalk correspondence series. From the eligible papers, a random sample published in the four years before the 2011 editorial advice by Drummond and Vowler (2007-2010) and the four years after (2012-2015) was extracted ( S1 File ), and full-text PDFs were obtained.
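The same search strategy can also be run programmatically. The sketch below is our own illustration using Biopython's Entrez utilities; the authors do not state how they executed the search, and the email address is a placeholder.

```python
# Sketch: running the paper's PubMed search strategy programmatically with Biopython.
# Illustration only; the authors do not state how they executed the search.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI asks for a contact address; placeholder here

query = ('(J Physiol[TA] OR Br J Pharmacol[TA]) AND 2007:2010[DP] '
         'NOT (editorial OR review OR erratum OR comment OR rebuttal OR crosstalk)')

handle = Entrez.esearch(db="pubmed", term=query, retmax=10000)
record = Entrez.read(handle)
handle.close()

pmids = record["IdList"]
print(f"{record['Count']} papers matched; first PMIDs: {pmids[:5]}")
```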

Question development and pilot testing

Ten questions and scoring criteria were developed to assess statistical reporting, data presentation and spin in the text and figures of the extracted papers. Questions assessing statistical reporting in the text (Q1-5) determined if and how written measures that summarize variability were defined, and if exact p-values were reported for primary analyses and post-hoc tests. Questions assessing data presentation in figures (Q6-8) determined if and how plotted measures that summarize variability were defined, and if raw data used to calculate the variability were plotted. Questions assessing the presence of spin (Q9-10) determined if p-values between 0.05 and 0.1 were interpreted as trends or statistically significant.

A random sample of 20 papers before and 20 papers after the editorial advice was used to assess the clarity of the scoring instructions and scoring agreement between raters. These papers were separate from those included in the full audit. All papers were independently scored by three raters (AAB, JD, MEH). Scores that differed between raters were discussed to reach agreement by consensus. The wording of the questions, scoring criteria, and scoring instructions were refined to avoid different interpretations by raters. The questions are shown in Fig 1 . Scoring criteria and additional details of the questions are provided in the scoring information sheets in the supporting information ( S2 File ).

Fig 1. Counts and proportions of papers that fulfilled the scoring criteria for each question before (white) and after (gray) the editorial advice was published. Abbreviations: SEM, standard error of the mean; ANOVA, analysis of variance.

https://doi.org/10.1371/journal.pone.0202121.g001

Data collection, outcomes and analysis

Each rater (AAB, JD, MEH, SCG) had to score and extract data independently from 50 papers before and 50 papers after the editorial advice was published. For each rater, papers from the 2007-2010 and 2012-2015 periods were audited in an alternating pattern to avoid an order or block effect. One rater unintentionally audited an additional 2007-2010 paper, and another rater unintentionally audited a 2007-2010 paper instead of a 2012-2015 paper. Thus, data from a random sample of 202 papers before and 199 papers after the editorial advice were analysed. When scoring was completed, papers that were difficult or ambiguous to score (less than 5% of all papers) were reviewed by all raters and scoring determined by consensus.

It was difficult to score some papers unambiguously on some of the scoring criteria. For example, in question 3 it was sometimes difficult to determine what a paper’s primary analyses, main effects and interactions were, in order to determine whether p-values for these were reported or implied. When raters could not unambiguously interpret the data, either individually or as a team, we scored papers to give authors the benefit of the doubt.

Counts and proportions of papers that fulfilled the scoring criteria for each question were calculated; no statistical tests were performed and only descriptive data are reported. All data processing and analysis were performed using Python (v3.5). Raw data, analysis code and results are available in the supporting information ( S3 File ).
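
The raw scores and the analysis code are in S3 File. As a rough illustration only, counts and proportions per question and period could be tabulated from such a CSV along these lines (the file name and the 'period'/'Q1'-'Q10' column names are assumptions, not the S3 File layout):

```python
# Rough illustration only; file name and column names are assumed,
# not taken from the S3 File data dictionary.
import pandas as pd

scores = pd.read_csv("audit_scores.csv")  # hypothetical file of 0/1 scores per question

question_cols = ["Q{0}".format(i) for i in range(1, 11)]
summary = (
    scores.groupby("period")[question_cols]
    .agg(["sum", "mean"])  # sum = count fulfilling the criterion, mean = proportion
    .rename(columns={"sum": "count", "mean": "proportion"})
)
print(summary.round(2))
```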

The random sample of audited papers was reasonably representative of the number of papers published each year in the Journal of Physiology and the British Journal of Pharmacology in the two periods of interest ( Table 1 ).

Table 1. https://doi.org/10.1371/journal.pone.0202121.t001

The proportions of audited papers that fulfilled the scoring criteria are presented in Fig 1 . The figure shows there was no substantial difference in statistical reporting, data presentation or the presence of spin after the editorial advice was published. Overall, 76-84% of papers with written measures that summarized data variability used standard errors of the mean, and 90-96% of papers did not report exact p-values for primary analyses and post-hoc tests. Similarly, 76-84% of papers that plotted measures to summarize variability used standard errors of the mean, and only 2-4% of papers plotted the raw data used to calculate variability.

Of papers that reported p-values between 0.05 and 0.1, 56-63% interpreted such p-values as trends or statistically significant. Examples of such interpretations include:

  • “A P < 0.05 level of significance was used for all analyses. […] As a result, further increases in the tidal P oes /VT (by 2.64 and 1.41 cmH 2 O l −1 , P = 0.041) and effort–displacement ratios (by 0.22 and 0.13 units, P = 0.060) were consistently greater during exercise …” (PMID 18687714)
  • “The level of IL-15 mRNA tended to be lower in vastus lateralis than in triceps (P = 0.07) ( Fig 1A )” (PMID 17690139)
  • “… where P < 0.05 indicates statistical significance […] was found to be slightly smaller than that of basal cells (−181 ± 21 pA, n = 7, P27–P30) but the difference was not quite significant (P = 0.05)” (PMID 18174213)
  • “… resting activity of A5 neurons was marginally but not significantly higher in toxin-treated rats (0.9 ± 0.2vs. 1.8 ± 0.5 Hz, P = 0.068)” (PMID 22526887)
  • “… significantly smaller than with fura-6F alone (P = 0.009), and slightly smaller than with fura-6F and EGTA (P = 0.08).” (PMID 18832426)
  • “The correlation becomes only marginally significant if the single experiment with the largest effect is removed (r = 0.41, P = 0.057, n = 22).” (PMID 17916607)
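
Borderline p-values such as those quoted above could in principle be flagged automatically; the sketch below is a hypothetical helper and is not part of the published audit, which was scored by human raters.

```python
# Hypothetical helper for flagging p-values between 0.05 and 0.1 in manuscript text;
# not part of the published audit.
import re

P_VALUE = re.compile(r"[Pp]\s*=\s*(0?\.\d+)")


def borderline_p_values(text, low=0.05, high=0.1):
    """Return reported p-values that fall in the (0.05, 0.1] window."""
    return [float(v) for v in P_VALUE.findall(text) if low < float(v) <= high]


sentence = ("The level of IL-15 mRNA tended to be lower in vastus lateralis "
            "than in triceps (P = 0.07)")
print(borderline_p_values(sentence))  # [0.07]
```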

Implied or gross spin (i.e. spin other than interpreting p-values between 0.05 and 0.1 as trends or statistically significant) was noted incidentally in papers before (n = 10) and after (n = 9) the editorial advice was published. Examples of statements where implied or gross spin was present include:

  • “However, analysis of large spontaneous events (>50 pA in four of six cells) ( Fig 1G–1I ) showed the frequency to be increased from 0.4 ± 0.1 Hz to 0.8 ± 0.2 Hz (P < 0.05) ( Fig 1G and 1H ) and the amplitude by 20.3 ± 15.6 pA (P > 0.1) …” (PMID 24081159)
  • “… whereas there was only a non-significant trend in the older group.” (no p-value, PMID 18469848)

Post-hoc analyses revealed that audit results were comparable between raters ( S4 File ). Additional post-hoc analyses revealed that audit results were relatively consistent across years and journals ( S5 File ). A notable exception was the British Journal of Pharmacology, which had lower rates of reporting p-values for main analyses (3-27% lower; question 3) and exact p-values for main analyses (8-22% lower; question 4).

In 2011 the Journal of Physiology and the British Journal of Pharmacology jointly published editorial advice on best practice standards for statistical reporting and data presentation [ 13 ]. These recommendations were reiterated in the Journals’ Instructions to Authors. Our cross-sectional analysis shows there was no substantial improvement in statistical reporting and data presentation in the four years after publication of this editorial advice.

Our results confirm that the quality of statistical reporting is generally poor. We found that ∼80% of papers that plotted error bars used standard errors of the mean. In line with this, a systematic review of 703 papers published in key physiology journals revealed 77% of papers plotted bar graphs with standard errors of the mean [ 16 ]. Similarly, one of the authors (MH) audited all 2015 papers published in the Journal of Neurophysiology and found that, of papers with error bars, 65% used standard errors of the mean and ∼13% did not define their error bars. That audit also revealed ∼42% of papers did not report exact p-values and ∼57% of papers with p-values between 0.05 and 0.1 interpreted these p-values as trends or statistically significant [ 12 ]. Our current study found that ∼93% of papers included non-exact p-values and ∼60% of papers with p-values between 0.05 and 0.1 reported these with spin. It is unfortunate that authors adopt practices that distort the interpretation of results and mislead readers into viewing results more favorably. This problem was recently highlighted by a systematic review on the prevalence of spin in the biomedical literature [ 17 ]: spin was present in 35% of randomized controlled trials with significant primary outcomes, 60% of randomized controlled trials with non-significant primary outcomes, 84% of non-randomized trials and 86% of observational studies. Overall, these results highlight the sheer magnitude of the problem: poor statistical reporting and questionable interpretation of results are common practice for many scientists.

Our findings also broadly agree with other observational data on the ineffectiveness of statistical reporting guidelines in biomedical and clinical research. For example, the CONSORT guidelines for the reporting of randomized controlled trials are widely supported and mandated by key medical journals, but the quality of statistical reporting and data presentation in randomized trial reports remains inadequate [ 18 – 20 ]. A scoping audit of papers published by American Physiological Society journals in 1996 showed most papers mistakenly reported standard errors of the mean as estimates of variability, not as estimates of uncertainty [ 21 ]. Consequently, in 2004 the Society published editorial guidelines to improve statistical reporting practices [ 22 ]. These guidelines instructed authors to report variability using standard deviations, and report uncertainty about scientific importance using confidence intervals. However, the authors of the guidelines audited papers published before and after their implementation and found no improvement in the proportion of papers reporting standard errors of the mean, standard deviations, confidence intervals, and exact p-values [ 10 ]. Likewise, in 1999 and 2001 the American Psychological Association published guidelines instructing authors to report effect sizes and confidence intervals [ 23 , 24 ]. Once again, an audit of papers published before and after the guidelines were implemented found no improvement in the proportion of figures with error bars defined as standard errors of the mean (43-59%) or worse, with error bars that were not defined (29-34%) [ 25 ].

One example where editorial instructions improved reporting practices occurred in public health. In the mid-1980s the American Journal of Public Health had an influential editor who advocated and enforced the use of confidence intervals rather than p-values. An audit of papers published before and during the tenure of this editor found that the reliance on p-values to interpret findings dropped from 63% to 5% and the reporting of confidence intervals increased from 10% to 54% [ 26 ]. However, few authors referred to confidence intervals when interpreting results. In psychology, when editors of Memory & Cognition and the Journal of Consulting and Clinical Psychology enforced the use of confidence intervals and effect sizes, the use of these statistics increased to some extent, even though long-term use was not maintained [ 27 ]. These examples provide evidence that editors with training in statistical interpretation may enforce editorial instructions more successfully, even if author understanding does not necessarily improve.

Why are reporting practices not improving? The pressure to publish may be partly to blame. Statistically significant findings that are visually and numerically clean are easier to publish. Thus, it should come as no surprise that p-values between 0.05 and 0.1 are interpreted as trends or statistically significant, and that researchers use standard errors of the mean to plot and report results. There is also a cultural component to these practices. The process of natural selection ensures that practices associated with higher publication rates are transmitted from one generation of successful researchers to the next [ 28 ]; unfortunately, these include poor reporting practices. As was recently highlighted by Goodman [ 29 ], conventions die hard, even if they contribute to irreproducible research. In that article, citing a government report on creating change within a system, Goodman highlights that "culture will trump rules, standards and control strategies every single time". Thus, researchers will often opt for reporting practices that make their papers look like those of others in their field, whether or not they are conscious that these practices are inadequate and not in line with published reporting guidelines. A final contributing factor is that many researchers continue to misunderstand key statistical concepts, such as measures of variability and uncertainty, inferences made from independent and repeated-measures study designs, and how error bars relate to statistical significance [ 30 ]. This partly explains the resistance to statistical innovations and robust reporting practices [ 27 ].

The recent reproducibility crisis has seen all levels of the scientific community implement new strategies to improve how science is conducted and reported. For example, journals have introduced article series to promote awareness [ 9 , 31 ] and adopted more stringent reporting guidelines [ 32 , 33 ]. Whole disciplines have also taken steps to tackle these issues. For example, the Academy of Medical Sciences partnered with the Biotechnology and Biological Sciences Research Council, the Medical Research Council and the Wellcome Trust to host a symposium on improving reproducibility and reliability of biomedical research [ 34 ]. Funding bodies have also acted. For example, the NIH launched training modules to educate investigators on topics such as bias, blinding and experimental design [ 35 ], and the Wellcome Trust published guidelines on research integrity and good research practice [ 36 ]. Other initiatives include the Open Science Framework, which facilitates open collaboration [ 37 ], and the Transparency and Openness Promotion guidelines, which were developed to improve reproducibility in research and have been endorsed by many key journals [ 38 ]. To improve research practices, these initiatives aim to raise awareness of the issues, educate researchers and provide tools to implement the various recommendations. While the enduring success of these initiatives remains to be determined, we remain hopeful for the future. There is considerable momentum throughout science, and many leaders from various disciplines have stepped up to lead the way.

In summary, reporting practices have not improved despite published editorial advice. Journals and other members of the scientific community continue to advocate and implement strategies for change, but these have only had limited success. Stronger incentives, better education and widespread enforcement are needed for enduring improvements in reporting practices to occur.

Supporting information

S1 File. Random paper selection.

Python code and PubMed search results used to randomly select papers for the audit. See the included README.txt file for a full description.

https://doi.org/10.1371/journal.pone.0202121.s001

S2 File. Scoring information sheets.

Scoring criteria and details of questions 1-10.

https://doi.org/10.1371/journal.pone.0202121.s002

S3 File. Data and code.

Comma-separated-values (CSV) file of raw scores for questions 1-10 and the Python files used to analyse the data. See the included README.txt file for a full description.

https://doi.org/10.1371/journal.pone.0202121.s003

S4 File. Audit results across raters.

Comparison of audit results across raters indicates the scoring criteria were applied uniformly across raters.

https://doi.org/10.1371/journal.pone.0202121.s004

S5 File. Audit results across years and journals.

Comparison of audit results for each year and journal. The British Journal of Pharmacology consistently had lower reporting rates of p-values for main analyses, and exact p-values for main analyses.

https://doi.org/10.1371/journal.pone.0202121.s005

Acknowledgments

We thank Dr Gordon Drummond for reviewing a draft of the manuscript.

References

  • 24. American Psychological Association. Publication manual of the American Psychological Association. American Psychological Association; 2001. Available from: http://www.apa.org/pubs/books/4200061.aspx .
  • 25. Cumming G, Fidler F, Leonard M, Kalinowski P, Christiansen A, Kleinig A, et al. Statistical reform in psychology: is anything changing? 2007. Available from: https://www.jstor.org/stable/40064724 .
  • 31. Nature Special. Challenges in irreproducible research; 2014. Available from: https://www.nature.com/collections/prbfkwmwvz/ .
  • 34. The Academy of Medical Sciences. Reproducibility and reliability of biomedical research; 2016. Available from: https://acmedsci.ac.uk/policy/policy-projects/reproducibility-and-reliability-of-biomedical-research .
  • 35. National Institutes of Health. Clearinghouse for training modules to enhance data reproducibility; 2014. Available from: https://www.nigms.nih.gov/training/pages/clearinghouse-for-training-modules-to-enhance-data-reproducibility.aspx .
  • 36. Wellcome Trust. Research practice; 2005. Available from: https://wellcome.ac.uk/what-we-do/our-work/research-practice .


Characteristics of Good and Bad Research Questions

The figure below gives some examples of good and "not-so-good" research questions.

[Figure: Good and bad research questions]



Choosing the right title for your research paper can be the deciding factor in whether it gets published or not. At a time when ‘clickbait’ headlines are rampant, deciding on a title for your research paper can be stressful.

Well-established guidelines for the titles of academic writing can help narrow your title options down until you find the perfect balance. But first, why are titles so important?

What’s in a Name?

“That which we call a rose/ By any other name would smell as sweet” is a famous line from Romeo and Juliet. William Shakespeare wrote his famous play in the 16th century, but it’s still quoted today. The most popular interpretation of that line is that names don’t matter. But they do. In fact, their respective names are ultimately what caused Romeo and Juliet’s tragic end. 

When it comes to written work, the name (the title) is almost always the first thing people read. If something in it sparks an interest, they will continue reading. If not, they will move on to the next thing.

As more researchers use social media to promote their articles, keywords and an attention-grabbing title increase the chances of going viral.

Conversely, an exaggerated, bombastic title could put off traditional readers, as well as journal editors. Essentially, you want a catchy but refined title. How do authors find that balance?

The Basic Rules of Research Titles

Luckily, there are rules academic writers can follow that will guide them in choosing how to title their paper. What are the dos and don'ts of research titles?

●      Use action words. Active verbs will help make the title ‘pop.’

●      Use fewer than 15 words in your title. Academic research titles are generally 13-15 words, or about 100 characters.

●      Use correct grammar and capitalization. Articles, prepositions, and conjunctions aren’t capitalized; the first word of a subtitle is. Follow APA style guidelines.

●      Use keywords from research. If possible, sprinkle a few keywords from your article in the title to help with SEO.

●      Use journal guidelines. Learn the guidelines of the journal you’re submitting your paper to, and make sure you are following instructions.

Don’t Do This

●      Use abbreviations. Abbreviations, roman numerals, and acronyms should be avoided in titles.

●      Use filler phrases such as “study of,” “results of,” etc.

●      Use periods, semicolons, or exclamation marks. Colons are used if you include a subtitle.

●      Use full scientific names. Instead of using complex scientific names, shorten them or avoid them.

●      Use chemical formulas. Replace chemical formulas in the title with their common name.

Your research title should be accessible and easily understood by everyone, even those outside the academic community. What else makes a dynamic title for your research paper?
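
As a rough, hypothetical illustration of the length and punctuation rules above (the thresholds are the ones listed; the checker itself is not an established tool):

```python
# Hypothetical checker based on the title dos and don'ts listed above.
def check_title(title):
    """Return warnings for a draft research title."""
    warnings = []
    if len(title.split()) > 15:
        warnings.append("More than 15 words; aim for roughly 13-15.")
    if len(title) > 100:
        warnings.append("More than 100 characters.")
    for mark in (".", ";", "!"):
        if mark in title:
            warnings.append("Avoid '{0}'; only colons (before a subtitle) are recommended.".format(mark))
    for filler in ("study of", "results of"):
        if filler in title.lower():
            warnings.append("Drop the filler phrase '{0}'.".format(filler))
    return warnings


print(check_title("A Study of the Results of Citation Errors in Biomedical Publications!"))
```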

How to Choose a Title for Your Paper

The first title you give your paper likely won’t be the last. Give your research paper a working title that reflects the nature of the topic. The working title will help you keep your writing focused on the topic at hand.

A good research title has three components:

  • What is the purpose of the research?
  • What tone is the paper taking?
  • What research methods were used?

The answers to those three questions will anchor your paper and will inform your title.

You can also seek suggestions about the title from your direct peers and mentor.

The main factor in an academic research paper title is that it accurately but briefly depicts the scope and subject of the research.

Do You Need a Subtitle?

Research papers often have long titles divided by a colon. Is the secondary part, the subtitle, necessary? That depends on your main title. Is it enough to catch the attention of readers while giving enough context to the topic of the article?

Subtitles are useful for providing more details about your paper to pique interest, but they aren’t necessary. 

After all your hard work on the research paper, the title may be the only thing anyone reads. Choose your title wisely.


World J Mens Health. 2023 Jul;41(3). PMCID: PMC10307651

Citation Errors in Scientific Research and Publications: Causes, Consequences, and Remedies

Ashok Agarwal

1 Global Andrology Forum, American Center for Reproductive Medicine, Moreland Hills, OH, USA.

2 Cleveland Clinic, Cleveland, OH, USA.

Mohamed Arafa

3 Department of Urology, Hamad Medical Corporation, Doha, Qatar.

4 Department of Andrology, Cairo University, Cairo, Egypt.

5 Department of Urology, Weill Cornell Medical-Qatar, Doha, Qatar.

Tomer Avidor-Reiss

6 Department of Biological Sciences, University of Toledo, Toledo, OH, USA.

7 Department of Urology, College of Medicine and Life Sciences, University of Toledo, Toledo, OH, USA.

Taha Abo-Almagd Abdel-Meguid Hamoda

8 Department of Urology, King Abdulaziz University, Jeddah, Saudi Arabia.

9 Department of Urology, Faculty of Medicine, Minia University, Minia, Egypt.

10 Department of Urology, Lilavati Hospital and Research Centre, Mumbai, India.

INTRODUCTION

Scientific research depends on the gathering of existing knowledge by collecting data from previous research and then building upon the collected data to design new research projects with the goal of answering unanswered scientific questions [ 1 ]. Therefore, proper citation of previously published studies is an essential and integral part of conducting medical research. Citations are used to establish the current state of knowledge in the subject being studied, identify gaps in the literature, and explain and debate the results of ongoing research [ 1 ]. This process requires proper identification and validation of the integrity of citations. Although it is recommended that the entire research paper be fully reviewed before being cited [ 2 , 3 ], it is a common experience that this is often not done, and it is reported that up to 80% of authors fail to read the full text of the research paper they are citing [ 3 ]. This omission can perpetuate significant errors within an article in the literature and mislead readers of the research being reported [ 4 , 5 ].

Recently, there has been heightened focus on research ethics to detect fraudulent research [ 6 ], and many research oversight bodies have been founded and several guidelines have been published. These bodies include the International Committee of Medical Journal Editors (ICMJE), the World Association of Medical Editors (WAME), and the Committee on Publication Ethics (COPE). Most of these guidelines have been adopted by scientific journals and are stated as prerequisites for accepting manuscripts for publication [ 7 ]. However, although some authors have discussed citation errors in scientific publications [ 5 , 7 , 8 ], there are no guidelines or tools available to rectify these errors.

The Global Andrology Forum (GAF) is an online global research group which has published extensively on andrological topics [ 9 ]. It is a standard practice in the GAF to perform an intensive internal review of all citations in our manuscripts before submission to the journals. Recently, while internally reviewing one of our manuscripts being prepared for submission, the GAF reviewers identified errors in approximately 20% of citations. The manuscript had 145 references that were cited 172 times in the text. The most common error was incorrect citation information (n=9), followed by unjustified extrapolation of the conclusion of the cited work (n=6), factual errors (n=5), incorrect interpretation of results (n=5), citing a secondary source (n=4), citing a wrong reference (n=2), ignoring more suitable reference (n=2), and citing an unreliable source (n=1). The extent and importance of the problem of citation errors became evident to us, prompting us to highlight the need for routine review and audit of all citations, using the full-text of cited papers rather than their abstracts, before a manuscript is approved for submission for publication.

COMMON REASONS WHY AUTHORS FAIL TO REVIEW THE ENTIRE PAPER BEFORE CITATION

Though it is recommended that the primary or original article be thoroughly reviewed before it is cited, a common experience is that this is often not done. In fact, it has been estimated that only 20% of authors read the original paper that is being cited [ 4 ]. This practice of omitting the primary source and relying on secondary sources can result in negative consequences with misinterpretation of the cited information or unjustified extrapolation of conclusions, leading to the perpetuation and propagation of significant errors and potential misinformation [ 5 , 10 ]. We hereby highlight some of the common reasons why many authors do not read the entire full text of a paper before citing it.

1. Author-related factors

  • - Too much time and effort are required to check the full paper for each citation.
  • - Authors underestimate the importance of best citation practice.
  • - Authors think that citations in the introduction section are not important and hence approach them casually.
  • - Junior authors may make faulty citations that are not detected and corrected by senior authors.
  • - Authors may be biased toward papers from colleagues, mentors, or well-known authors, and thus ignore more appropriate papers from other authors.
  • - Selection bias, where newer sources are ignored, and older popular references are repeatedly cited.
  • - “Citation Metrics” influence, where authors are more concerned about the “number” of their publications rather than their quality.
  • - Self-citation ( e.g ., citing irrelevant previous self-publication).
  • - Unnecessarily redundant citations (needlessly including several references for the same information).
  • - Intentional or unintentional distortion of the cited findings or conclusions to support or endorse the authors’ findings or conclusions.

2. Article-related factors

  • - The full text of a paper is inaccessible.
  • - The abstract is thought to be sufficient for the cited findings/interpretation.
  • - Relying on familiar articles or narrative reviews and their reference lists, and neglecting unfamiliar but more recent sources.
  • - The secondary source is thought to be reliable enough.

3. Journal-related factors

  • - Influence of journal editors or reviewers, suggesting specific citations.
  • - Authors may try to satisfy a journal editor by including more citations from the target journal.
  • - Authors rely on the publishing journal to correct the style and accuracy of references. Journals usually review the reference style but do not verify the information.

4. Guidelines-related factors

  • - Lack of (or scarce) clear guidelines on “Best Citation Practice.”
  • - Lack of training in “Best Citation Practices”, while extensive training and guidance are provided for “Literature Search and Data Extraction.”
  • - Lack of automated software to help authors, reviewers, and journals to check citation accuracy.

COMMON CITATION ERRORS AND THEIR PROPOSED REMEDIES

Several types of citation errors may be encountered during the citation process. Here we discuss the different categories of common citation errors and provide their proposed remedies.

1. Non-citing error

Often, a paper makes a general claim but fails to provide a supporting citation. This may happen because the authors are very familiar with the subject and take a statement for granted as generally known or accepted knowledge. Alternatively, it may be a case of simple oversight on the authors’ part.

These types of errors can be overcome by carefully reviewing the manuscript and ensuring that all claims, whether they are major or minor, are supported by at least one appropriate citation.

2. Factual error

This type of error may be an incorrect description of the findings of a paper, such as the mechanism of action that was elucidated or a function of a molecule, cell, or organ that was postulated. Alternatively, it can be a numerical error, such as incorrectly citing the prevalence of a condition or a disease. Another related error is an incorrect interpretation arising from an unjustified extrapolation of a paper’s conclusion.

These error types can be overcome by careful reading and analysis of the full text of the article.

3. Selective citation

This type of error has many subtypes. For example, authors may cite their own papers or those of their close colleagues over others because they are more familiar with these studies. Another error is ignoring more suitable citations, such as more recent papers, due to a lack of updated knowledge of the subject. Authors may also select smaller studies over more extensive ones because they fit their hypothesis better.

These types of errors can be overcome by systematic literature review and searching the literature using objective means such as word searches of paper databases [ 11 ].

4. Incorrect source type

It is common to find citations of secondary literature such as reviews and books without citing the primary research paper that reported the original finding.

This error can be overcome by citing the review paper (to demonstrate that the original idea gained acceptance) alongside the original source.

5. Insufficient support

As research in a particular area evolves, some older findings fall out of favor while others gain support.

It is therefore vital to substantiate claims by citing, along with the original paper, recent research and review papers to indicate that the claim has general support.

6. Wrong citation

This type of error occurs when a wrong reference is added to the cited information.

This type of error can be overcome by carefully reviewing the full-text article to ensure that all cited information is supported by the appropriate reference.

7. Incorrect technical details

This type of error involves inaccurate details of author names, journal names, dates, and page numbers of the cited paper.

These errors can be easily overcome by systematically organizing all paper citations and using automated reference management software packages such as EndNote, RefWorks, and Zotero.
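
Reference managers catch most of these slips. As a further illustration, the bibliographic details attached to a DOI can also be cross-checked against CrossRef's public REST API; the sketch below assumes the reference record includes a DOI and that the requests package is available.

```python
# Illustrative sketch: cross-check a reference's title, journal and year against
# CrossRef's public REST API (https://api.crossref.org). Assumes a DOI is available.
import requests


def crossref_metadata(doi):
    """Fetch title, journal and publication year for a DOI from CrossRef."""
    resp = requests.get("https://api.crossref.org/works/{0}".format(doi), timeout=30)
    resp.raise_for_status()
    msg = resp.json()["message"]
    return {
        "title": (msg.get("title") or [None])[0],
        "journal": (msg.get("container-title") or [None])[0],
        "year": (msg.get("issued", {}).get("date-parts") or [[None]])[0][0],
    }


# Example: the statistical reporting audit cited earlier on this page
print(crossref_metadata("10.1371/journal.pone.0202121"))
```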

BEST PRACTICES FOR CITATIONS

Below are everyday situations requiring citations when writing original research, review, or editorial papers. Each situation is described as a pairing of Purpose and Practice. More technical guidelines for proper citation can be found in the book, “Publication Manual of the American Psychological Association (7th edition, 2020) – the official guide to the APA Style” [ 12 ].

  • Purpose : Making a factual claim in a paper.
  • Practice : Every factual claim must be supported by a citation.
  • This point may sound trivial, but it is common to see papers making specific claims and not substantiating them with references.
  • Purpose : Citing a specific paper to support a claim.
  • Practice : Download the cited paper and mark the relevant section in the paper. Have the marked paper available for your co-authors to validate the accuracy of the citation and ask them to validate each source independently.
  • Purpose : Citing a scientific discovery.
  • Practice : Cite the original paper that made the discovery. Also, cite additional papers showing that the finding is reproducible. If the papers were written more than ten years ago, cite a recent review to demonstrate the discovery is still relevant and accepted in the scientific community.
  • Purpose : Citing an original research paper versus citing a review paper.
  • Practice : Citing the original paper (primary source) is the best practice. An original research paper is cited using a simple citation that does not require an explanation. However, citing a review paper (secondary source) needs disclosure of the fact that a review paper is being cited. For example, you can cite “(Smith et al., 1970).” Alternatively, if you cannot find the primary source, you need to identify the primary source in this way “(Smith et al., 1970, as cited in Cohen et al., 2020).”
  • Purpose : Citing a numerical value such as the percentage of couples with infertility.
  • Practice : Cite several of the most recent original papers to provide a range of numbers or an average. Include information on the study’s size, location, and timing so the audience can assess the quality of the studies.
  • Purpose : Citing an original research paper finding while disagreeing with the original interpretation.
  • Practice : Cite the specific paper and indicate the figure or table containing the controversial data. For example, you can cite “(Smith et al., 1970, Fig 1).” Indicate clearly what the original interpretation of the data is and what your own interpretation is.
  • Purpose : Expressing an opinion based on a claim in a paper.
  • Practice : Clearly indicate that the opinion you express is your own, and then cite the paper.
  • Purpose : Citing an interpretation or opinion of a claim in a paper.
  • Practice : Clearly indicate this is an interpretation or opinion based on a published claim, and then cite the paper you are referring to.

While writing multi-author papers, we have repeatedly observed citation errors, including a large proportion of authors who tend to cite references merely based on abstracts found during a quick PubMed search. Another common erroneous citation practice is to blindly trust the information referenced by other authors in secondary sources and simply accept and adopt the information in their articles, without checking the original primary source.

These practices, and the other errors listed previously, can lead to incorrect and misleading citations. Abstracts often do not paint the complete picture and may lack adequate information to judge the validity of the citation. Furthermore, citations in secondary sources might be incorrect because the authors may have: (1) made an incorrect interpretation of the cited article because they did not read the entire original article; (2) cited a wrong article in support of their claim; (3) drawn an inaccurate conclusion from the cited article; and/or (4) presented a biased view of someone’s research or opinions, narrating it inaccurately to serve their own purpose or point of view. Importantly, repeated rewording and reiteration in secondary sources (repeated paraphrasing) can eventually distort the original information, much like the broken telephone game or transmission chain experiments.

All the citation errors discussed here can lead to the proliferation of inaccuracies and half-truths, or even completely false information, in the scientific literature. These inaccuracies and errors, even though largely unintentional, harm the integrity of the scientific literature. We must reject the notion that these are just minor errors, harmless to a paper’s main message, and therefore do not matter and need not be pursued. No error is too small to bother with, and there should be no room for error in any aspect of the work required to build an article of the highest quality and reliability.

RECOMMENDATIONS

  • - It is essential to seek access to a full paper and review it carefully before citing it.
  • - In cases when access to an entire article is unavailable for any reason, relying on a mere abstract is not the best practice. In all such circumstances, the ideal option is to find the full article by requesting it through an institutional inter-library loan, or requesting it from the author, or asking a colleague who has access to the necessary resources.
  • - If an article of interest is in a language other than English, the author should not exclude it automatically. The author should seek help translating the article so it can be read carefully to judge if it is suitable as a reference.
  • - It is important not to exclude articles from the search string without making all necessary attempts to find and read them. This process may be laborious and may delay the manuscript’s writing by a few days or even weeks, but, in the end, having reviewed and cited all important published information shows the thoroughness of the literature review, which raises the quality of the reported findings.
  • - A policy of verification of citations by another author is also critical. An experienced senior author should adjudicate any conflicting results at this stage.
  • - There is a need to develop clear and specific guidelines on “Best Citation Practices” and to train researchers to follow them correctly and to understand the implications of citation errors in a larger context.

CONCLUSIONS

Good research requires a lot of hard work, patience, determination, and accuracy. We cannot have a high-quality paper if the foundation of our arguments is contaminated with unverified or inaccurate information. Authors should not rely on abstracts or secondary sources for citations. Clear guidelines dedicated to “Best Citation Practices” are needed to improve the accuracy and quality of scientific literature.

Acknowledgements

Authors are grateful to Parviz Kavoussi, MD (Austin, USA), Manaf Al-Hashimi, MD (Abu Dhabi, UAE) and Damayanthi Durairajanayagam, PhD (Selangor, Malaysia) for their review and editing of this manuscript.

Conflict of Interest: The authors have nothing to disclose.

Funding: None.

Author Contribution: All authors have contributed to the writing of the editorial.

Enago Academy

Incorrect vs. Correct Research Titles


When you search for a research study on a particular topic, you probably notice that articles with interesting, descriptive titles draw a lot of attention. Research papers with non-optimized titles, on the other hand, do not get noticed easily, even when their content is interesting. An erroneous research title can also be misleading: it can hamper the discoverability of your article and cause difficulties when citing similar work.

Correcting Incorrect Titles

  • Describe the purpose of your study.
  • Keep it short yet informative.
  • Focus on the key factor of your study.
  • Avoid using acronyms in the title.
  • Eliminate unnecessary filler words.
  • Use jargon carefully to reflect the specificity of your study.

This smartshort gives an example of an incorrect research title and shows how to correct it. Learn more about how to write good research titles.


