How to Write a Great Hypothesis

Hypothesis Definition, Format, Examples, and Tips

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."


Amy Morin, LCSW, is a psychotherapist and international bestselling author. Her books, including "13 Things Mentally Strong People Don't Do," have been translated into more than 40 languages. Her TEDx talk,  "The Secret of Becoming Mentally Strong," is one of the most viewed talks of all time.


  • The Scientific Method
  • Hypothesis Format
  • Falsifiability of a Hypothesis
  • Operationalization
  • Hypothesis Types
  • Hypothesis Examples
  • Collecting Data

A hypothesis is a tentative statement about the relationship between two or more variables. It is a specific, testable prediction about what you expect to happen in a study. It is a preliminary answer to your question that helps guide the research process.

Consider a study designed to examine the relationship between sleep deprivation and test performance. The hypothesis might be: "This study is designed to assess the hypothesis that sleep-deprived people will perform worse on a test than individuals who are not sleep-deprived."

At a Glance

A hypothesis is crucial to scientific research because it offers a clear direction for what the researchers are looking to find. This allows them to design experiments to test their predictions and add to our scientific knowledge about the world. This article explores how a hypothesis is used in psychology research, how to write a good hypothesis, and the different types of hypotheses you might use.

The Hypothesis in the Scientific Method

In the scientific method, whether it involves research in psychology, biology, or some other area, a hypothesis represents what the researchers think will happen in an experiment. The scientific method involves the following steps:

  • Forming a question
  • Performing background research
  • Creating a hypothesis
  • Designing an experiment
  • Collecting data
  • Analyzing the results
  • Drawing conclusions
  • Communicating the results

The hypothesis is a prediction, but it involves more than a guess. Most of the time, the hypothesis begins with a question which is then explored through background research. At this point, researchers then begin to develop a testable hypothesis.

Unless you are creating an exploratory study, your hypothesis should always explain what you  expect  to happen.

In a study exploring the effects of a particular drug, the hypothesis might be that researchers expect the drug to have some type of effect on the symptoms of a specific illness. In psychology, the hypothesis might focus on how a certain aspect of the environment might influence a particular behavior.

Remember, a hypothesis does not have to be correct. While the hypothesis predicts what the researchers expect to see, the goal of the research is to determine whether this guess is right or wrong. When conducting an experiment, researchers might explore numerous factors to determine which ones might contribute to the ultimate outcome.

In many cases, researchers may find that the results of an experiment  do not  support the original hypothesis. When writing up these results, the researchers might suggest other options that should be explored in future studies.

In many cases, researchers might draw a hypothesis from a specific theory or build on previous research. For example, prior research has shown that stress can impact the immune system. So a researcher might hypothesize: "People with high-stress levels will be more likely to contract a common cold after being exposed to the virus than people who have low-stress levels."

In other instances, researchers might look at commonly held beliefs or folk wisdom. "Birds of a feather flock together" is one example of a folk adage that a psychologist might try to investigate. The researcher might pose a specific hypothesis that "People tend to select romantic partners who are similar to them in interests and educational level."

Elements of a Good Hypothesis

So how do you write a good hypothesis? When trying to come up with a hypothesis for your research or experiments, ask yourself the following questions:

  • Is your hypothesis based on your research on a topic?
  • Can your hypothesis be tested?
  • Does your hypothesis include independent and dependent variables?

Before you come up with a specific hypothesis, spend some time doing background research. Once you have completed a literature review, start thinking about potential questions you still have. Pay attention to the discussion section in the  journal articles you read . Many authors will suggest questions that still need to be explored.

How to Formulate a Good Hypothesis

To form a hypothesis, you should take these steps:

  • Collect as many observations about a topic or problem as you can.
  • Evaluate these observations and look for possible causes of the problem.
  • Create a list of possible explanations that you might want to explore.
  • After you have developed some possible hypotheses, think of ways that you could confirm or disprove each hypothesis through experimentation. This is known as falsifiability.

In the scientific method, falsifiability is an important part of any valid hypothesis. In order to test a claim scientifically, it must be possible that the claim could be proven false.

Students sometimes confuse the idea of falsifiability with the idea that it means that something is false, which is not the case. What falsifiability means is that  if  something was false, then it is possible to demonstrate that it is false.

One of the hallmarks of pseudoscience is that it makes claims that cannot be refuted or proven false.

The Importance of Operational Definitions

A variable is a factor or element that can be changed and manipulated in ways that are observable and measurable. However, the researcher must also define how the variable will be manipulated and measured in the study.

Operational definitions are specific definitions for all relevant factors in a study. This process helps make vague or ambiguous concepts detailed and measurable.

For example, a researcher might operationally define the variable "test anxiety" as the results of a self-report measure of anxiety experienced during an exam. A "study habits" variable might be defined by the amount of studying that actually occurs, as measured by time.

These precise descriptions are important because many things can be measured in various ways. Clearly defining these variables and how they are measured helps ensure that other researchers can replicate your results.

Replicability

One of the basic principles of any type of scientific research is that the results must be replicable.

Replication means repeating an experiment in the same way to produce the same results. By clearly detailing the specifics of how the variables were measured and manipulated, other researchers can better understand the results and repeat the study if needed.

Some variables are more difficult than others to define. For example, how would you operationally define a variable such as aggression ? For obvious ethical reasons, researchers cannot create a situation in which a person behaves aggressively toward others.

To measure this variable, the researcher must devise a measurement that assesses aggressive behavior without harming others. The researcher might utilize a simulated task to measure aggressiveness in this situation.

Hypothesis Checklist

  • Does your hypothesis focus on something that you can actually test?
  • Does your hypothesis include both an independent and dependent variable?
  • Can you manipulate the variables?
  • Can your hypothesis be tested without violating ethical standards?

Hypothesis Types

The hypothesis you use will depend on what you are investigating and hoping to find. Some of the main types of hypotheses that you might use include:

  • Simple hypothesis: This type of hypothesis suggests there is a relationship between one independent variable and one dependent variable.
  • Complex hypothesis: This type suggests a relationship between three or more variables, such as two independent variables and one dependent variable.
  • Null hypothesis: This hypothesis suggests no relationship exists between two or more variables.
  • Alternative hypothesis: This hypothesis states the opposite of the null hypothesis.
  • Statistical hypothesis: This hypothesis uses statistical analysis to evaluate a representative population sample and then generalizes the findings to the larger group.
  • Logical hypothesis: This hypothesis assumes a relationship between variables without collecting data or evidence.

A hypothesis often follows a basic format of "If {this happens} then {this will happen}." One way to structure your hypothesis is to describe what will happen to the dependent variable if you change the independent variable.

The basic format might be: "If {these changes are made to a certain independent variable}, then we will observe {a change in a specific dependent variable}."

A few examples of simple hypotheses:

  • "Students who eat breakfast will perform better on a math exam than students who do not eat breakfast."
  • "Students who experience test anxiety before an English exam will get lower scores than students who do not experience test anxiety."​
  • "Motorists who talk on the phone while driving will be more likely to make errors on a driving course than those who do not talk on the phone."
  • "Children who receive a new reading intervention will have higher reading scores than students who do not receive the intervention."

Examples of a complex hypothesis include:

  • "People with high-sugar diets and sedentary activity levels are more likely to develop depression."
  • "Younger people who are regularly exposed to green, outdoor areas have better subjective well-being than older adults who have limited exposure to green spaces."

Examples of a null hypothesis include:

  • "There is no difference in anxiety levels between people who take St. John's wort supplements and those who do not."
  • "There is no difference in scores on a memory recall task between children and adults."
  • "There is no difference in aggression levels between children who play first-person shooter games and those who do not."

Examples of an alternative hypothesis:

  • "People who take St. John's wort supplements will have less anxiety than those who do not."
  • "Adults will perform better on a memory task than children."
  • "Children who play first-person shooter games will show higher levels of aggression than children who do not." 

Collecting Data on Your Hypothesis

Once a researcher has formed a testable hypothesis, the next step is to select a research design and start collecting data. The research method depends largely on exactly what they are studying. There are two basic types of research methods: descriptive research and experimental research.

Descriptive Research Methods

Descriptive research methods such as case studies, naturalistic observations, and surveys are often used when conducting an experiment is difficult or impossible. These methods are best used to describe different aspects of a behavior or psychological phenomenon.

Once a researcher has collected data using descriptive methods, a  correlational study  can examine how the variables are related. This research method might be used to investigate a hypothesis that is difficult to test experimentally.
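
To make that correlational step concrete, here is a minimal sketch in Python (not from the original article): it takes made-up survey measurements of two variables and estimates how strongly they are related with a Pearson correlation. The variable names and numbers are purely illustrative assumptions.

    # A rough sketch: after descriptive data collection, a correlational
    # analysis checks how two measured variables relate.
    # The survey numbers below are invented for illustration.
    from scipy import stats

    hours_of_sleep = [5.0, 6.5, 7.0, 7.5, 8.0, 8.5, 6.0, 9.0]   # hypothetical survey responses
    exam_scores = [62, 70, 74, 71, 80, 85, 68, 88]              # hypothetical matching exam scores

    r, p_value = stats.pearsonr(hours_of_sleep, exam_scores)
    print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")  # strength and direction of the association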

Experimental Research Methods

Experimental methods  are used to demonstrate causal relationships between variables. In an experiment, the researcher systematically manipulates a variable of interest (known as the independent variable) and measures the effect on another variable (known as the dependent variable).

Unlike correlational studies, which can only be used to determine if there is a relationship between two variables, experimental methods can be used to determine the actual nature of the relationship—whether changes in one variable actually  cause  another to change.

The hypothesis is a critical part of any scientific exploration. It represents what researchers expect to find in a study or experiment. In situations where the hypothesis is unsupported by the research, the research still has value. Such research helps us better understand how different aspects of the natural world relate to one another. It also helps us develop new hypotheses that can then be tested in the future.


Grad Coach

What Is A Research (Scientific) Hypothesis? A plain-language explainer + examples

By:  Derek Jansen (MBA)  | Reviewed By: Dr Eunice Rautenbach | June 2020

If you’re new to the world of research, or it’s your first time writing a dissertation or thesis, you’re probably noticing that the words “research hypothesis” and “scientific hypothesis” are used quite a bit, and you’re wondering what they mean in a research context .

“Hypothesis” is one of those words that people use loosely, thinking they understand what it means. However, it has a very specific meaning within academic research. So, it’s important to understand the exact meaning before you start hypothesizing. 

Research Hypothesis 101

  • What is a hypothesis ?
  • What is a research hypothesis (scientific hypothesis)?
  • Requirements for a research hypothesis
  • Definition of a research hypothesis
  • The null hypothesis

What is a hypothesis?

Let’s start with the general definition of a hypothesis (not a research hypothesis or scientific hypothesis), according to the Cambridge Dictionary:

Hypothesis: an idea or explanation for something that is based on known facts but has not yet been proved.

In other words, it’s a statement that provides an explanation for why or how something works, based on facts (or some reasonable assumptions), but that has not yet been specifically tested . For example, a hypothesis might look something like this:

Hypothesis: sleep impacts academic performance.

This statement predicts that academic performance will be influenced by the amount and/or quality of sleep a student engages in – sounds reasonable, right? It’s based on reasonable assumptions , underpinned by what we currently know about sleep and health (from the existing literature). So, loosely speaking, we could call it a hypothesis, at least by the dictionary definition.

But that’s not good enough…

Unfortunately, that’s not quite sophisticated enough to describe a research hypothesis (also sometimes called a scientific hypothesis), and it wouldn’t be acceptable in a dissertation, thesis or research paper . In the world of academic research, a statement needs a few more criteria to constitute a true research hypothesis .

What is a research hypothesis?

A research hypothesis (also called a scientific hypothesis) is a statement about the expected outcome of a study (for example, a dissertation or thesis). To constitute a quality hypothesis, the statement needs to have three attributes – specificity, clarity and testability.

Let’s take a look at these more closely.


Hypothesis Essential #1: Specificity & Clarity

A good research hypothesis needs to be extremely clear and articulate about both what's being assessed (who or what variables are involved) and the expected outcome (for example, a difference between groups, a relationship between variables, etc.).

Let’s stick with our sleepy students example and look at how this statement could be more specific and clear.

Hypothesis: Students who sleep at least 8 hours per night will, on average, achieve higher grades in standardised tests than students who sleep less than 8 hours a night.

As you can see, the statement is very specific as it identifies the variables involved (sleep hours and test grades), the parties involved (two groups of students), as well as the predicted relationship type (a positive relationship). There’s no ambiguity or uncertainty about who or what is involved in the statement, and the expected outcome is clear.

Contrast that to the original hypothesis we looked at – “Sleep impacts academic performance” – and you can see the difference. “Sleep” and “academic performance” are both comparatively vague , and there’s no indication of what the expected relationship direction is (more sleep or less sleep). As you can see, specificity and clarity are key.

A good research hypothesis needs to be very clear about what’s being assessed and very specific about the expected outcome.

Hypothesis Essential #2: Testability (Provability)

A statement must be testable to qualify as a research hypothesis. In other words, there needs to be a way to prove (or disprove) the statement. If it’s not testable, it’s not a hypothesis – simple as that.

For example, consider the hypothesis we mentioned earlier:

Hypothesis: Students who sleep at least 8 hours per night will, on average, achieve higher grades in standardised tests than students who sleep less than 8 hours a night.  

We could test this statement by undertaking a quantitative study involving two groups of students, one that gets 8 or more hours of sleep per night for a fixed period, and one that gets less. We could then compare the standardised test results for both groups to see if there’s a statistically significant difference. 
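
As a rough illustration of that comparison, the sketch below (my own, with invented grade data) runs an independent-samples t-test on two hypothetical groups of students. The group sizes, scores, and the 0.05 significance threshold are assumptions for illustration, not part of the original example.

    # A minimal sketch of the quantitative comparison described above,
    # using invented standardised-test grades for the two sleep groups.
    from scipy import stats

    grades_8h_or_more = [72, 78, 81, 69, 75, 80, 77, 74]   # hypothetical: slept >= 8 hours
    grades_under_8h = [65, 70, 68, 72, 63, 69, 71, 66]     # hypothetical: slept < 8 hours

    t_stat, p_value = stats.ttest_ind(grades_8h_or_more, grades_under_8h)

    alpha = 0.05  # conventional significance threshold (an assumption, not from the text)
    if p_value < alpha:
        print(f"p = {p_value:.3f}: statistically significant difference between the groups")
    else:
        print(f"p = {p_value:.3f}: no statistically significant difference detected")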

Again, if you compare this to the original hypothesis we looked at – “Sleep impacts academic performance” – you can see that it would be quite difficult to test that statement, primarily because it isn’t specific enough. How much sleep? By who? What type of academic performance?

So, remember the mantra – if you can’t test it, it’s not a hypothesis 🙂

A good research hypothesis must be testable. In other words, you must be able to collect observable data in a scientifically rigorous fashion to test it.

Defining A Research Hypothesis

You’re still with us? Great! Let’s recap and pin down a clear definition of a hypothesis.

A research hypothesis (or scientific hypothesis) is a statement about an expected relationship between variables, or explanation of an occurrence, that is clear, specific and testable.

So, when you write up hypotheses for your dissertation or thesis, make sure that they meet all these criteria. If you do, you’ll not only have rock-solid hypotheses but you’ll also ensure a clear focus for your entire research project.

What about the null hypothesis?

You may have also heard the terms null hypothesis , alternative hypothesis, or H-zero thrown around. At a simple level, the null hypothesis is the counter-proposal to the original hypothesis.

For example, if the hypothesis predicts that there is a relationship between two variables (for example, sleep and academic performance), the null hypothesis would predict that there is no relationship between those variables.

At a more technical level, the null hypothesis proposes that no statistical significance exists in a set of given observations and that any differences are due to chance alone.

And there you have it – hypotheses in a nutshell. 

If you have any questions, be sure to leave a comment below and we’ll do our best to help you. If you need hands-on help developing and testing your hypotheses, consider our private coaching service , where we hold your hand through the research journey.


16 Comments

Lynnet Chikwaikwai

Very useful information. I benefit more from getting more information in this regard.

Dr. WuodArek

Very great insight,educative and informative. Please give meet deep critics on many research data of public international Law like human rights, environment, natural resources, law of the sea etc

Afshin

In a book I read a distinction is made between null, research, and alternative hypothesis. As far as I understand, alternative and research hypotheses are the same. Can you please elaborate? Best Afshin

GANDI Benjamin

This is a self explanatory, easy going site. I will recommend this to my friends and colleagues.

Lucile Dossou-Yovo

Very good definition. How can I cite your definition in my thesis? Thank you. Is nul hypothesis compulsory in a research?

Pereria

It’s a counter-proposal to be proven as a rejection

Egya Salihu

Please what is the difference between alternate hypothesis and research hypothesis?

Mulugeta Tefera

It is a very good explanation. However, it limits hypotheses to statistically tasteable ideas. What about for qualitative researches or other researches that involve quantitative data that don’t need statistical tests?

Derek Jansen

In qualitative research, one typically uses propositions, not hypotheses.

Samia

could you please elaborate it more

Patricia Nyawir

I’ve benefited greatly from these notes, thank you.

Hopeson Khondiwa

This is very helpful

Dr. Andarge

well articulated ideas are presented here, thank you for being reliable sources of information

TAUNO

Excellent. Thanks for being clear and sound about the research methodology and hypothesis (quantitative research)

I have only a simple question regarding the null hypothesis. – Is the null hypothesis (Ho) known as the reversible hypothesis of the alternative hypothesis (H1? – How to test it in academic research?

Tesfaye Negesa Urge

this is very important note help me much more



What is a scientific hypothesis?

It's the initial building block in the scientific method.

A girl looks at plants in a test tube for a science experiment. What's her scientific hypothesis?

  • Hypothesis basics
  • What makes a hypothesis testable
  • Types of hypotheses
  • Hypothesis versus theory
  • Additional resources
  • Bibliography

A scientific hypothesis is a tentative, testable explanation for a phenomenon in the natural world. It's the initial building block in the scientific method . Many describe it as an "educated guess" based on prior knowledge and observation. While this is true, a hypothesis is more informed than a guess. While an "educated guess" suggests a random prediction based on a person's expertise, developing a hypothesis requires active observation and background research. 

The basic idea of a hypothesis is that there is no predetermined outcome. For a solution to be termed a scientific hypothesis, it has to be an idea that can be supported or refuted through carefully crafted experimentation or observation. This concept, called falsifiability and testability, was advanced in the mid-20th century by Austrian-British philosopher Karl Popper in his famous book "The Logic of Scientific Discovery" (Routledge, 1959).

A key function of a hypothesis is to derive predictions about the results of future experiments and then perform those experiments to see whether they support the predictions.

A hypothesis is usually written in the form of an if-then statement, which gives a possibility (if) and explains what may happen because of the possibility (then). The statement could also include "may," according to California State University, Bakersfield .

Here are some examples of hypothesis statements:

  • If garlic repels fleas, then a dog that is given garlic every day will not get fleas.
  • If sugar causes cavities, then people who eat a lot of candy may be more prone to cavities.
  • If ultraviolet light can damage the eyes, then maybe this light can cause blindness.

A useful hypothesis should be testable and falsifiable. That means that it should be possible to prove it wrong. A theory that can't be proved wrong is nonscientific, according to Karl Popper's 1963 book " Conjectures and Refutations ."

An example of an untestable statement is, "Dogs are better than cats." That's because the definition of "better" is vague and subjective. However, an untestable statement can be reworded to make it testable. For example, the previous statement could be changed to this: "Owning a dog is associated with higher levels of physical fitness than owning a cat." With this statement, the researcher can take measures of physical fitness from dog and cat owners and compare the two.

Types of scientific hypotheses

Elementary-age students study alternative energy using homemade windmills during public school science class.

In an experiment, researchers generally state their hypotheses in two ways. The null hypothesis predicts that there will be no relationship between the variables tested, or no difference between the experimental groups. The alternative hypothesis predicts the opposite: that there will be a difference between the experimental groups. This is usually the hypothesis scientists are most interested in, according to the University of Miami .

For example, a null hypothesis might state, "There will be no difference in the rate of muscle growth between people who take a protein supplement and people who don't." The alternative hypothesis would state, "There will be a difference in the rate of muscle growth between people who take a protein supplement and people who don't."

If the results of the experiment show a relationship between the variables, then the null hypothesis has been rejected in favor of the alternative hypothesis, according to the book " Research Methods in Psychology " (​​BCcampus, 2015). 

There are other ways to describe an alternative hypothesis. The alternative hypothesis above does not specify a direction of the effect, only that there will be a difference between the two groups. That type of prediction is called a two-tailed hypothesis. If a hypothesis specifies a certain direction — for example, that people who take a protein supplement will gain more muscle than people who don't — it is called a one-tailed hypothesis, according to William M. K. Trochim , a professor of Policy Analysis and Management at Cornell University.
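
A minimal sketch of the two-tailed vs. one-tailed distinction, reusing the protein-supplement example with invented numbers. It assumes a reasonably recent SciPy (one that exposes the `alternative` argument on `ttest_ind`); the data and group labels are hypothetical.

    # Sketch of the two-tailed vs. one-tailed distinction with invented data.
    from scipy import stats

    growth_supplement = [1.8, 2.1, 2.4, 1.9, 2.3, 2.0]      # hypothetical kg of muscle gained
    growth_no_supplement = [1.5, 1.7, 1.6, 1.9, 1.4, 1.6]   # hypothetical kg of muscle gained

    # Two-tailed: "there will be a difference" (direction unspecified)
    _, p_two_tailed = stats.ttest_ind(growth_supplement, growth_no_supplement,
                                      alternative="two-sided")

    # One-tailed: "the supplement group will gain MORE muscle"
    _, p_one_tailed = stats.ttest_ind(growth_supplement, growth_no_supplement,
                                      alternative="greater")

    print(f"two-tailed p = {p_two_tailed:.3f}, one-tailed p = {p_one_tailed:.3f}")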

Sometimes, errors take place during an experiment. These errors can happen in one of two ways. A type I error is when the null hypothesis is rejected when it is true. This is also known as a false positive. A type II error occurs when the null hypothesis is not rejected when it is false. This is also known as a false negative, according to the University of California, Berkeley . 
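
The simulation below (my own illustration, not from the article) makes these two error types tangible: when the null hypothesis really is true, every rejection is a type I error and should occur at roughly the chosen alpha level; when a real effect exists, every failure to reject is a type II error. The sample sizes, effect size, and number of trials are arbitrary choices.

    # A rough simulation of the two error types (invented parameters).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_trials, alpha = 2000, 0.05

    # Type I errors: both groups come from the SAME distribution (null is true),
    # so every rejection is a false positive.
    false_positives = sum(
        stats.ttest_ind(rng.normal(0, 1, 30), rng.normal(0, 1, 30)).pvalue < alpha
        for _ in range(n_trials)
    )

    # Type II errors: the groups genuinely differ (null is false),
    # so every failure to reject is a false negative.
    false_negatives = sum(
        stats.ttest_ind(rng.normal(0.3, 1, 30), rng.normal(0, 1, 30)).pvalue >= alpha
        for _ in range(n_trials)
    )

    print(f"Type I error rate  ~ {false_positives / n_trials:.2%}")   # expect roughly alpha
    print(f"Type II error rate ~ {false_negatives / n_trials:.2%}")   # depends on effect size and n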

A hypothesis can be rejected or modified, but it can never be proved correct 100% of the time. For example, a scientist can form a hypothesis stating that if a certain type of tomato has a gene for red pigment, that type of tomato will be red. During research, the scientist then finds that each tomato of this type is red. Though the findings confirm the hypothesis, there may be a tomato of that type somewhere in the world that isn't red. Thus, the hypothesis is supported by the evidence, but it cannot be proved true with 100% certainty.

Scientific theory vs. scientific hypothesis

The best hypotheses are simple. They deal with a relatively narrow set of phenomena. But theories are broader; they generally combine multiple hypotheses into a general explanation for a wide range of phenomena, according to the University of California, Berkeley . For example, a hypothesis might state, "If animals adapt to suit their environments, then birds that live on islands with lots of seeds to eat will have differently shaped beaks than birds that live on islands with lots of insects to eat." After testing many hypotheses like these, Charles Darwin formulated an overarching theory: the theory of evolution by natural selection.

"Theories are the ways that we make sense of what we observe in the natural world," Tanner said. "Theories are structures of ideas that explain and interpret facts." 

  • Read more about writing a hypothesis, from the American Medical Writers Association.
  • Find out why a hypothesis isn't always necessary in science, from The American Biology Teacher.
  • Learn about null and alternative hypotheses, from Prof. Essa on YouTube .

Encyclopedia Britannica. Scientific Hypothesis. Jan. 13, 2022. https://www.britannica.com/science/scientific-hypothesis

Karl Popper, "The Logic of Scientific Discovery," Routledge, 1959.

California State University, Bakersfield, "Formatting a testable hypothesis." https://www.csub.edu/~ddodenhoff/Bio100/Bio100sp04/formattingahypothesis.htm  

Karl Popper, "Conjectures and Refutations," Routledge, 1963.

Price, P., Jhangiani, R., & Chiang, I., "Research Methods of Psychology — 2nd Canadian Edition," BCcampus, 2015.‌

University of Miami, "The Scientific Method" http://www.bio.miami.edu/dana/161/evolution/161app1_scimethod.pdf  

William M.K. Trochim, "Research Methods Knowledge Base," https://conjointly.com/kb/hypotheses-explained/  

University of California, Berkeley, "Multiple Hypothesis Testing and False Discovery Rate" https://www.stat.berkeley.edu/~hhuang/STAT141/Lecture-FDR.pdf  

University of California, Berkeley, "Science at multiple levels" https://undsci.berkeley.edu/article/0_0_0/howscienceworks_19


Alina Bradford


Research Hypothesis In Psychology: Types, & Examples

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Learn about our Editorial Process

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


A research hypothesis, in its plural form “hypotheses,” is a specific, testable prediction about the anticipated results of a study, established at its outset. It is a key component of the scientific method .

Hypotheses connect theory to data and guide the research process towards expanding scientific understanding.

Some key points about hypotheses:

  • A hypothesis expresses an expected pattern or relationship. It connects the variables under investigation.
  • It is stated in clear, precise terms before any data collection or analysis occurs. This makes the hypothesis testable.
  • A hypothesis must be falsifiable. It should be possible, even if unlikely in practice, to collect data that disconfirms rather than supports the hypothesis.
  • Hypotheses guide research. Scientists design studies to explicitly evaluate hypotheses about how nature works.
  • For a hypothesis to be valid, it must be testable against empirical evidence. The evidence can then confirm or disprove the testable predictions.
  • Hypotheses are informed by background knowledge and observation, but go beyond what is already known to propose an explanation of how or why something occurs.

Predictions typically arise from a thorough knowledge of the research literature, curiosity about real-world problems or implications, and integrating this to advance theory. They build on existing literature while providing new insight.

Types of Research Hypotheses

Alternative Hypothesis

The research hypothesis is often called the alternative or experimental hypothesis in experimental research.

It typically suggests a potential relationship between two key variables: the independent variable, which the researcher manipulates, and the dependent variable, which is measured based on those changes.

The alternative hypothesis states a relationship exists between the two variables being studied (one variable affects the other).


An experimental hypothesis predicts what change(s) will occur in the dependent variable when the independent variable is manipulated.

It states that the results are not due to chance and are significant in supporting the theory being investigated.

The alternative hypothesis can be directional, indicating a specific direction of the effect, or non-directional, suggesting a difference without specifying its nature. It’s what researchers aim to support or demonstrate through their study.

Null Hypothesis

The null hypothesis states no relationship exists between the two variables being studied (one variable does not affect the other). There will be no changes in the dependent variable due to manipulating the independent variable.

It states results are due to chance and are not significant in supporting the idea being investigated.

The null hypothesis, positing no effect or relationship, is a foundational contrast to the research hypothesis in scientific inquiry. It establishes a baseline for statistical testing, promoting objectivity by initiating research from a neutral stance.

Many statistical methods are tailored to test the null hypothesis, determining the likelihood of observed results if no true effect exists.

This dual-hypothesis approach provides clarity, ensuring that research intentions are explicit, and fosters consistency across scientific studies, enhancing the standardization and interpretability of research outcomes.

Nondirectional Hypothesis

A non-directional hypothesis, also known as a two-tailed hypothesis, predicts that there is a difference or relationship between two variables but does not specify the direction of this relationship.

It merely indicates that a change or effect will occur without predicting which group will have higher or lower values.

For example, “There is a difference in performance between Group A and Group B” is a non-directional hypothesis.

Directional Hypothesis

A directional (one-tailed) hypothesis predicts the nature of the effect of the independent variable on the dependent variable. It predicts in which direction the change will take place. (i.e., greater, smaller, less, more)

It specifies whether one variable is greater, lesser, or different from another, rather than just indicating that there’s a difference without specifying its nature.

For example, “Exercise increases weight loss” is a directional hypothesis.


Falsifiability

The Falsification Principle, proposed by Karl Popper, is a way of demarcating science from non-science. It suggests that for a theory or hypothesis to be considered scientific, it must be testable and refutable.

Falsifiability emphasizes that scientific claims shouldn’t just be confirmable but should also have the potential to be proven wrong.

It means that there should exist some potential evidence or experiment that could prove the proposition false.

However many confirming instances exist for a theory, it only takes one counter observation to falsify it. For example, the hypothesis that “all swans are white,” can be falsified by observing a black swan.

For Popper, science should attempt to disprove a theory rather than attempt to continually provide evidence to support a research hypothesis.

Can a Hypothesis be Proven?

Hypotheses make probabilistic predictions. They state the expected outcome if a particular relationship exists. However, a study result supporting a hypothesis does not definitively prove it is true.

All studies have limitations. There may be unknown confounding factors or issues that limit the certainty of conclusions. Additional studies may yield different results.

In science, hypotheses can realistically only be supported with some degree of confidence, not proven. The process of science is to incrementally accumulate evidence for and against hypothesized relationships in an ongoing pursuit of better models and explanations that best fit the empirical data. But hypotheses remain open to revision and rejection if that is where the evidence leads.
  • Disproving a hypothesis is definitive. Solid disconfirmatory evidence will falsify a hypothesis and require altering or discarding it based on the evidence.
  • However, confirming evidence is always open to revision. Other explanations may account for the same results, and additional or contradictory evidence may emerge over time.

We can never 100% prove the alternative hypothesis. Instead, we see if we can disprove, or reject the null hypothesis.

If we reject the null hypothesis, this doesn’t mean that our alternative hypothesis is correct but does support the alternative/experimental hypothesis.

Upon analysis of the results, an alternative hypothesis can be rejected or supported, but it can never be proven to be correct. We must avoid any reference to results proving a theory as this implies 100% certainty, and there is always a chance that evidence may exist which could refute a theory.

How to Write a Hypothesis

  • Identify variables. The researcher manipulates the independent variable, and the dependent variable is the measured outcome.
  • Operationalize the variables being investigated. Operationalization of a hypothesis refers to the process of making the variables physically measurable or testable, e.g. if you are about to study aggression, you might count the number of punches given by participants.
  • Decide on a direction for your prediction. If there is evidence in the literature to support a specific effect of the independent variable on the dependent variable, write a directional (one-tailed) hypothesis. If there are limited or ambiguous findings in the literature regarding the effect of the independent variable on the dependent variable, write a non-directional (two-tailed) hypothesis.
  • Make it Testable: Ensure your hypothesis can be tested through experimentation or observation. It should be possible to prove it false (principle of falsifiability).
  • Clear & concise language. A strong hypothesis is concise (typically one to two sentences long), and formulated using clear and straightforward language, ensuring it's easily understood and testable.

Consider a hypothesis many teachers might subscribe to: students work better on Monday morning than on Friday afternoon (IV=Day, DV= Standard of work).

Now, if we decide to study this by giving the same group of students a lesson on a Monday morning and a Friday afternoon and then measuring their immediate recall of the material covered in each session, we would end up with the following:

  • The alternative hypothesis states that students will recall significantly more information on a Monday morning than on a Friday afternoon.
  • The null hypothesis states that there will be no significant difference in the amount recalled on a Monday morning compared to a Friday afternoon. Any difference will be due to chance or confounding factors.
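
As a sketch of how these two hypotheses might then be evaluated (with invented recall scores), a paired-samples t-test fits here because the same students are measured in both sessions. The data and the 0.05 threshold are assumptions for illustration only.

    # Sketch with invented recall scores. The same students sit both sessions,
    # so a paired-samples (related) t-test is appropriate.
    from scipy import stats

    monday_recall = [14, 16, 13, 17, 15, 18, 12, 16]   # hypothetical items recalled, Monday a.m.
    friday_recall = [11, 14, 12, 13, 13, 15, 10, 14]   # same students, Friday p.m.

    t_stat, p_value = stats.ttest_rel(monday_recall, friday_recall)

    if p_value < 0.05:
        print("Reject the null hypothesis: recall differs significantly between sessions")
    else:
        print("Fail to reject the null hypothesis: any difference may be due to chance")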

More Examples

  • Memory : Participants exposed to classical music during study sessions will recall more items from a list than those who studied in silence.
  • Social Psychology : Individuals who frequently engage in social media use will report higher levels of perceived social isolation compared to those who use it infrequently.
  • Developmental Psychology : Children who engage in regular imaginative play have better problem-solving skills than those who don’t.
  • Clinical Psychology : Cognitive-behavioral therapy will be more effective in reducing symptoms of anxiety over a 6-month period compared to traditional talk therapy.
  • Cognitive Psychology : Individuals who multitask between various electronic devices will have shorter attention spans on focused tasks than those who single-task.
  • Health Psychology : Patients who practice mindfulness meditation will experience lower levels of chronic pain compared to those who don’t meditate.
  • Organizational Psychology : Employees in open-plan offices will report higher levels of stress than those in private offices.
  • Behavioral Psychology : Rats rewarded with food after pressing a lever will press it more frequently than rats who receive no reward.



Scientific Hypothesis, Model, Theory, and Law

Understanding the Difference Between Basic Scientific Terms


Words have precise meanings in science. For example, "theory," "law," and "hypothesis" don't all mean the same thing. Outside of science, you might say something is "just a theory," meaning it's a supposition that may or may not be true. In science, however, a theory is an explanation that generally is accepted to be true. Here's a closer look at these important, commonly misused terms.

A hypothesis is an educated guess, based on observation. It's a prediction of cause and effect. Usually, a hypothesis can be supported or refuted through experimentation or more observation. A hypothesis can be disproven but not proven to be true.

Example: If you see no difference in the cleaning ability of various laundry detergents, you might hypothesize that cleaning effectiveness is not affected by which detergent you use. This hypothesis can be disproven if you observe a stain is removed by one detergent and not another. On the other hand, you cannot prove the hypothesis. Even if you never see a difference in the cleanliness of your clothes after trying 1,000 detergents, there might be one more you haven't tried that could be different.

Scientists often construct models to help explain complex concepts. These can be physical models like a model volcano or atom  or conceptual models like predictive weather algorithms. A model doesn't contain all the details of the real deal, but it should include observations known to be valid.

Example: The  Bohr model shows electrons orbiting the atomic nucleus, much the same way as the way planets revolve around the sun. In reality, the movement of electrons is complicated but the model makes it clear that protons and neutrons form a nucleus and electrons tend to move around outside the nucleus.

A scientific theory summarizes a hypothesis or group of hypotheses that have been supported with repeated testing. A theory is valid as long as there is no evidence to dispute it. Therefore, theories can be disproven. Basically, if evidence accumulates to support a hypothesis, then the hypothesis can become accepted as a good explanation of a phenomenon. One definition of a theory is to say that it's an accepted hypothesis.

Example: It is known that on June 30, 1908, in Tunguska, Siberia, there was an explosion equivalent to the detonation of about 15 million tons of TNT. Many hypotheses have been proposed for what caused the explosion. It was theorized that the explosion was caused by a natural extraterrestrial phenomenon , and was not caused by man. Is this theory a fact? No. The event is a recorded fact. Is this theory, generally accepted to be true, based on evidence to-date? Yes. Can this theory be shown to be false and be discarded? Yes.

A scientific law generalizes a body of observations. At the time it's made, no exceptions have been found to a law. Scientific laws describe things, but they do not explain them. One way to tell a law and a theory apart is to ask if the description gives you the means to explain "why." The word "law" is used less and less in science, as many laws are only true under limited circumstances.

Example: Consider Newton's Law of Gravity . Newton could use this law to predict the behavior of a dropped object but he couldn't explain why it happened.

As you can see, there is no "proof" or absolute "truth" in science. The closest we get are facts, which are indisputable observations. Note, however, if you define proof as arriving at a logical conclusion, based on the evidence, then there is "proof" in science. Some work under the definition that to prove something implies it can never be wrong, which is different. If you're asked to define the terms hypothesis, theory, and law, keep in mind the definitions of proof and of these words can vary slightly depending on the scientific discipline. What's important is to realize they don't all mean the same thing and cannot be used interchangeably.


Geektonight

What is Hypothesis? Definition, Meaning, Characteristics, Sources

  • Post last modified: 10 January 2022
  • Reading time: 18 mins read
  • Post category: Research Methodology


What is Hypothesis?

Hypothesis is a prediction of the outcome of a study. Hypotheses are drawn from theories and research questions or from direct observations. In fact, a research problem can be formulated as a hypothesis. To test the hypothesis we need to formulate it in terms that can actually be analysed with statistical tools.

As an example, if we want to explore whether using a specific teaching method at school will result in better school marks (research question), the hypothesis could be that the mean school marks of students being taught with that specific teaching method will be higher than of those being taught using other methods.

In this example, we stated a hypothesis about the expected differences between groups. Other hypotheses may refer to correlations between variables.

Table of Content

  • 1 What is Hypothesis?
  • 2 Hypothesis Definition
  • 3 Meaning of Hypothesis
  • 4.1 Conceptual Clarity
  • 4.2 Need of empirical referents
  • 4.3 Hypothesis should be specific
  • 4.4 Hypothesis should be within the ambit of the available research techniques
  • 4.5 Hypothesis should be consistent with the theory
  • 4.6 Hypothesis should be concerned with observable facts and empirical events
  • 4.7 Hypothesis should be simple
  • 5.1 Observation
  • 5.2 Analogies
  • 5.4 State of Knowledge
  • 5.5 Culture
  • 5.6 Continuity of Research
  • 6.1 Null Hypothesis
  • 6.2 Alternative Hypothesis

Thus, to formulate a hypothesis, we need to refer to the descriptive statistics (such as the mean final marks), and specify a set of conditions about these statistics (such as a difference between the means, or in a different example, a positive or negative correlation). The hypothesis we formulate applies to the population of interest.

The null hypothesis makes a statement that no difference exists (see Pyrczak, 1995, pp. 75-84).
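
A rough sketch (with invented marks) of what this looks like for the teaching-method example above: compute the descriptive statistics (the group means) and test the null hypothesis that no difference exists between them. All numbers are hypothetical.

    # Rough sketch (invented marks) for the teaching-method example:
    # compute the group means, then test the null hypothesis of no difference.
    import numpy as np
    from scipy import stats

    marks_new_method = np.array([68, 74, 71, 77, 70, 73, 75, 69])      # hypothetical marks
    marks_other_methods = np.array([64, 66, 70, 65, 68, 63, 67, 66])   # hypothetical marks

    print("Mean (new method):   ", marks_new_method.mean())
    print("Mean (other methods):", marks_other_methods.mean())

    t_stat, p_value = stats.ttest_ind(marks_new_method, marks_other_methods)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # a small p-value argues against the null hypothesis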

Hypothesis Definition

A hypothesis is ‘a guess or supposition as to the existence of some fact or law which will serve to explain a connection of facts already known to exist.’ – J. E. Creighton & H. R. Smart

Hypothesis is ‘a proposition not known to be definitely true or false, examined for the sake of determining the consequences which would follow from its truth.’ – Max Black

Hypothesis is ‘a proposition which can be put to a test to determine validity and is useful for further research.’ – W. J. Goode and P. K. Hatt

A hypothesis is a proposition, condition or principle which is assumed, perhaps without belief, in order to draw out its logical consequences and by this method to test its accord with facts which are known or may be determined. – Webster’s New International Dictionary of the English Language (1956)

Meaning of Hypothesis

From the above mentioned definitions of hypothesis, its meaning can be explained in the following ways.

  • At the primary level, a hypothesis is the possible and probable explanation of the sequence of happenings or data.
  • Sometimes, hypothesis may emerge from an imagination, common sense or a sudden event.
  • Hypothesis can be a probable answer to the research problem undertaken for study.
  • Hypothesis may not always be true. It can get disproven. In other words, hypothesis need not always be a true proposition.
  • Hypothesis, in a sense, is an attempt to present the interrelations that exist in the available data or information.
  • Hypothesis is not an individual opinion or community thought. Instead, it is a philosophical means which is to be used for research purpose. Hypothesis is not to be considered as the ultimate objective; rather it is to be taken as the means of explaining scientifically the prevailing situation.

The concept of hypothesis can further be explained with the help of some examples. Lord Keynes, in his theory of national income determination, made a hypothesis about the consumption function. He stated that the consumption expenditure of an individual or an economy as a whole is dependent on the level of income and changes in a certain proportion.

Later, this proposition was proved in the statistical research carried out by Prof. Simon Kuznets. Malthus, while studying the population, formulated a hypothesis that population increases faster than the supply of food grains. Population studies of several countries revealed that this hypothesis is true.

Validation of Malthus's hypothesis turned it into a theory, and when it was tested in many other countries it became the famous Malthus's Law of Population. It thus emerges that when a hypothesis is tested and proven, it becomes a theory. The theory, when found true at different times and in different places, becomes the law. Having understood the concept of hypothesis, we can now formulate a few hypotheses in the areas of commerce and economics.

  • Population growth moderates with the rise in per capita income.
  • Sales growth is positively linked with the availability of credit.
  • Commerce education increases the employability of the graduate students.
  • High rates of direct taxes prompt people to evade taxes.
  • Good working conditions improve the productivity of employees.
  • Advertising is more effective at promoting sales than any other scheme.
  • Higher Debt-Equity Ratio increases the probability of insolvency.
  • Economic reforms in India have made the public sector banks more efficient and competent.
  • Foreign direct investment in India has moved into those sectors which offer a higher rate of profit.
  • There is no significant association between credit rating and investment of funds.
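To show how one of the example hypotheses listed above (say, "sales growth is positively linked with the availability of credit") could be put to a statistical test, here is a minimal sketch using Python with SciPy; the yearly figures are invented purely for illustration:

```python
from scipy import stats

# Hypothetical yearly observations (illustrative figures only)
credit_availability = [100, 120, 140, 160, 180, 200, 220, 240]  # credit extended
sales_growth = [2.1, 2.8, 3.0, 3.9, 4.2, 4.8, 5.1, 5.9]         # sales growth (%)

# H0: there is no (linear) association between credit availability and sales growth
# H1: sales growth is positively associated with credit availability
r, p_value = stats.pearsonr(credit_availability, sales_growth)

print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")
# A positive r with a small p-value would support the directional hypothesis
# (the reported p-value is two-sided; for a one-sided test it can be halved when
# r is positive); otherwise the data do not contradict the null hypothesis.
```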

Characteristics of Hypothesis

Not all the hypotheses are good and useful from the point of view of research. It is only a few hypotheses satisfying certain criteria that are good, useful and directive in the research work undertaken. The characteristics of such a useful hypothesis can be listed as below:

  • Conceptual clarity
  • Need of empirical referents
  • Hypothesis should be specific
  • Hypothesis should be within the ambit of the available research techniques
  • Hypothesis should be consistent with the theory
  • Hypothesis should be concerned with observable facts and empirical events
  • Hypothesis should be simple

Each of these characteristics is discussed in turn below.

The concepts used while framing a hypothesis should be crystal clear and unambiguous. Such concepts must be clearly defined so that they become lucid and acceptable to everyone. How the newly developed concepts are interrelated, and how they are linked with the old ones, should be very clear, so that the hypothesis framed on their basis carries the same clarity.

A hypothesis embodying unclear and ambiguous concepts can to a great extent undermine the successful completion of the research work.

A hypothesis can be useful in the research work undertaken only when it has links with some empirical referents. Hypotheses based on moral values and ideals are useless, as they cannot be tested. Similarly, hypotheses containing opinions such as good and bad, or expectations with respect to something, are not testable and therefore useless.

For example, 'current account deficit can be lowered if people change their attitude towards gold' is a hypothesis encompassing an expectation. In such a hypothesis, the attitude towards gold cannot be clearly described, and therefore a hypothesis which embodies such an unclear notion cannot be tested and proved or disproved. In short, the hypothesis should be linked with some testable referents.

For the successful conduct of research, it is necessary that the hypothesis is specific and presented in a precise manner. A hypothesis which is general, too ambitious or grandiose in scope should not be framed, as such a hypothesis cannot easily be put to the test. A hypothesis is to be based on concepts which are precise and empirical in nature. A hypothesis should give a clear idea about the indicators which are to be used.

For example, a hypothesis that economic power is increasingly getting concentrated in a few hands in India should enable us to define the concept of economic power. It should be explicated in terms of measurable indicators like income, wealth, etc. Such specificity in the formulation of a hypothesis ensures that the research is practicable and significant.

While framing the hypothesis, the researcher should be aware of the available research techniques and should see that the hypothesis framed is testable on the basis of them. In other words, a hypothesis should be researchable, and for this it is important that due thought has been given to the methods and techniques which can be used to measure the concepts and variables embodied in the hypothesis.

It does not, however, mean that hypotheses which are not testable with the available techniques of research are not to be made. If the problem is very significant and the hypothesis framed therefore becomes ambitious and complex, its testing becomes possible only with the development of new research techniques, or the hypothesis itself leads to the development of such techniques.

A hypothesis must be related to the existing theory or should have a theoretical orientation. The growth of knowledge takes place in the sequence of facts, hypothesis, theory and law or principles. It means the hypothesis should have a correspondence with the existing facts and theory.

If the hypothesis is related to some theory, the research work will enable us to support, modify or refute the existing theory. Theoretical orientation of the hypothesis ensures that it becomes scientifically useful. According to Prof. Goode and Prof. Hatt, research work can contribute to the existing knowledge only when the hypothesis is related with some theory.

This enables us to explain the observed facts and situations and also verify the framed hypothesis. In the words of Prof. Cohen and Prof. Nagel, “hypothesis must be formulated in such a manner that deduction can be made from it and that consequently a decision can be reached as to whether it does or does not explain the facts considered.”

If the research work based on a hypothesis is to be successful, it is necessary that the latter is as simple and easy as possible. An ambition of finding out something new may lead the researcher to frame an unrealistic and unclear hypothesis. Such a temptation is to be avoided. Framing a simple, easy and testable hypothesis requires that the researcher is well acquainted with the related concepts.

Sources of Hypothesis

Hypotheses can be derived from various sources. Some of the sources are given below:

  • Observation
  • Analogies
  • Theory
  • State of knowledge
  • Culture
  • Continuity of research

Hypotheses can be derived from observation, for example from the observation of price behavior in a market. The relationship between the price of and the demand for an article may thus be hypothesized.

Analogies are another source of useful hypotheses. Julian Huxley has pointed out that casual observations in nature, or in the framework of another science, may be a fertile source of hypotheses. For example, the hypothesis that similar human types or activities may be found in similar geophysical regions comes from plant ecology.

Theory is one of the main sources of hypotheses. It gives direction to research by stating what is known; logical deductions from theory lead to new hypotheses. For example, profit/wealth maximization is considered the goal of private enterprises, and from this assumption various hypotheses are derived.

An important source of hypotheses is the state of knowledge in any particular science. Where formal theories exist, hypotheses can be deduced from them, and their rejection in turn calls the theory into question; where theories are scarce, hypotheses are generated from conceptual frameworks.

Another source of hypotheses is the culture in which the researcher was nurtured. Western culture, for example, induced the emergence of sociology as an academic discipline. Over the past decade, a large part of the hypotheses on American society examined by researchers were connected with violence; this interest is related to the considerable increase in the level of violence in America.

The continuity of research in a field itself constitutes an important source of hypotheses. The rejection of some hypotheses leads to the formulation of new ones capable of explaining dependent variables in subsequent research on the same subject.

Null and Alternative Hypothesis

Null Hypothesis

Hypotheses that are proposed with the intent of being rejected are called null hypotheses . This requires that we hypothesize the opposite of what is desired to be proved. For example, if we want to show that sales and advertisement expenditure are related, we formulate the null hypothesis that they are not related.

Similarly, if we want to conclude that the new sales training programme is effective, we formulate the null hypothesis that the new training programme is not effective, and if we want to prove that the average wages of skilled workers in town 1 is greater than that of town 2, we formulate the null hypotheses that there is no difference in the average wages of the skilled workers in both the towns.

Since we hypothesize that sales and advertisement are not related, that the new training programme is not effective, and that the average wages of skilled workers in both towns are equal, we call such hypotheses null hypotheses and denote them by H0.

Alternative Hypothesis

Rejection of the null hypothesis leads to the acceptance of the alternative hypothesis . Rejection of the null hypothesis indicates that the relationship between variables (e.g., sales and advertisement expenditure), the difference between means (e.g., wages of skilled workers in town 1 and town 2) or the difference between proportions is statistically significant, while acceptance of the null hypothesis indicates that these differences are due to chance.

As already mentioned, the alternative hypothesis specifies the values or relations which the researcher believes to hold true. The alternative hypothesis can cover a whole range of values rather than a single point. Alternative hypotheses are denoted by H1.
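As a concrete illustration of the wages example above, the following is a minimal sketch of how H0 could be tested against a one-sided alternative. It uses Python with SciPy, and the wage figures are invented purely for illustration (the `alternative` keyword assumes SciPy 1.6 or later):

```python
from scipy import stats

# Hypothetical weekly wages of skilled workers (illustrative figures only)
town_1 = [520, 540, 555, 530, 560, 545, 535, 550]
town_2 = [510, 525, 515, 530, 520, 505, 518, 522]

# H0: average wages of skilled workers in the two towns are equal
# H1: average wages in town 1 are greater than in town 2
t_stat, p_value = stats.ttest_ind(town_1, town_2, alternative="greater")

print(f"t = {t_stat:.2f}, one-sided p = {p_value:.4f}")
# A small p-value leads to rejection of H0 in favour of H1;
# a large p-value means the data do not contradict H0.
```

Note that rejecting H0 here supports the directional claim about town 1, while failing to reject it does not prove H0 true; it only means the sample gives no evidence against it.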




Hypothesis is a testable statement that explains what is happening or observed. It proposes the relation between the various participating variables. A hypothesis is sometimes loosely referred to as a theory, thesis, guess, assumption, or suggestion, although strictly speaking it is only a candidate explanation that has yet to be well tested. Hypothesis creates a structure that guides the search for knowledge.

In this article, we will learn what is hypothesis, its characteristics, types, and examples. We will also learn how hypothesis helps in scientific research.


What is Hypothesis?

A hypothesis is a proposed idea or explanation, supported by limited evidence, that is meant to lead to further study. It is essentially an educated guess or suggested answer to a problem that can be checked through study and experiment. In scientific work, hypotheses are formulated to predict what will happen in experiments or observations. They are not certainties, but ideas that can be supported or refuted on the basis of real-world evidence. A good hypothesis is clear, testable, and capable of being shown wrong if the evidence does not support it.

Hypothesis Meaning

A hypothesis is a proposed, testable statement offered to explain something that happens or is observed.
  • It is built on what we already know and have observed, and it forms the basis for scientific research.
  • A clear hypothesis states what we expect to happen in an experiment or study.
  • It is a testable claim that can be shown to be right or wrong using real-world evidence and careful checking.
  • It often takes an "if-then" form, expressing the expected cause-and-effect relationship between the variables being studied.

Characteristics of Hypothesis

Here are some key characteristics of a hypothesis:

  • Testable: A hypothesis should be framed so that it can be tested through experiments or observation. It should state a clear connection between variables.
  • Specific: It needs to be focused and precise, addressing a particular aspect of, or relationship between, the variables in a study.
  • Falsifiable: A good hypothesis must be capable of being shown wrong; there must be some possible evidence or observation that would contradict it.
  • Logical and Rational: It should be based on what is currently known or observed, offering a reasonable explanation that fits with existing knowledge.
  • Predictive: A hypothesis often predicts the outcome of an experiment or observation, giving a guide to what should be seen if it is correct.
  • Concise: It should be short and clear, stating the suggested relationship or explanation simply and without unnecessary complication.
  • Grounded in Research: A hypothesis is usually derived from previous studies, theories or observations, and reflects a sound understanding of what is already known in the area.
  • Flexible: A hypothesis guides the research, but it may need to be revised when new information emerges.
  • Relevant: It should relate directly to the question or problem being studied, helping to direct the focus of the research.
  • Empirical: Hypotheses arise from observation and can be tested using methods based on real-world evidence.

Sources of Hypothesis

Hypotheses can come from different places based on what you’re studying and the kind of research. Here are some common sources from which hypotheses may originate:

  • Existing Theories: Hypotheses often come from established scientific theories, which may suggest relationships between variables or phenomena that researchers can investigate further.
  • Observation and Experience: Watching something happen, or personal experience, can suggest hypotheses. Noticing unusual or recurring events in everyday life and in experiments can prompt testable explanations.
  • Previous Research: Earlier studies and findings can help generate new hypotheses. Researchers may seek to extend or challenge existing results, framing hypotheses that build on earlier work.
  • Literature Review: Reviewing the books and research in a field can suggest hypotheses. Gaps or inconsistencies in previous studies may lead researchers to frame hypotheses that address them.
  • Problem Statement or Research Question: Hypotheses often arise from the questions or problems a study sets out to address. Clarifying what needs to be investigated helps create hypotheses that tackle particular parts of the issue.
  • Analogies or Comparisons: Drawing comparisons between similar phenomena, or borrowing connections from related fields, can suggest hypotheses; insights from one discipline may generate new hypotheses in another.
  • Hunches and Speculation: Sometimes researchers start from an intuition or informed speculation. Though unsupported at first, such hunches can be a starting point for deeper investigation.
  • Technology and Innovations: New technology or tools can suggest hypotheses by making it possible to examine things that were previously difficult to study.
  • Personal Interest and Curiosity: A researcher's curiosity and personal interest in a topic can give rise to hypotheses based on their own enthusiasm for the subject.

Types of Hypothesis

Here are some common types of hypotheses:

  • Simple Hypothesis
  • Complex Hypothesis
  • Directional Hypothesis
  • Non-directional Hypothesis
  • Null Hypothesis (H0)
  • Alternative Hypothesis (H1 or Ha)
  • Statistical Hypothesis
  • Research Hypothesis
  • Associative Hypothesis
  • Causal Hypothesis

Each of these is described below.

A Simple Hypothesis proposes a relationship between two variables. It states that a connection or difference exists, but it does not say which way the relationship goes.

A Complex Hypothesis predicts what will happen when more than two variables are involved, and considers how those variables interact and may be linked together.

A Directional Hypothesis states the expected direction of the relationship between variables; for example, it predicts that one variable will increase or decrease another.

A Non-directional Hypothesis does not specify the direction of the relationship between variables; it only says that a connection exists, without telling which way it goes.

The Null Hypothesis (H0) states that there is no connection or difference between the variables; any observed effect is attributed to chance or random variation in the data.

The Alternative Hypothesis (H1 or Ha) is the counterpart of the null hypothesis and states that there is a significant connection or difference between the variables. Researchers seek to reject the null hypothesis in favour of the alternative.

A Statistical Hypothesis is used in statistical testing and makes a claim about a population or its parameters, to be assessed using sample data.

A Research Hypothesis is derived from the research question and states the relationship expected between variables or factors. It guides the study and determines where to look more closely.

An Associative Hypothesis proposes that a link or connection exists between variables without claiming that one causes the other: when one variable changes, the other is found to change with it.

A Causal Hypothesis, by contrast, claims that one variable causes a change in another; there is a cause-and-effect relationship between the variables involved.
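To make the difference between a null hypothesis and an associative alternative concrete, here is a minimal sketch in Python using SciPy. The contingency table below is invented purely for illustration (it loosely mirrors the music-preference example later in this article):

```python
from scipy import stats

# Hypothetical contingency table (illustrative counts only):
# rows = two groups of listeners, columns = preferred music genre
observed = [
    [30, 20, 10],  # group 1
    [20, 25, 15],  # group 2
]

# H0 (null): group and preferred genre are independent (no association)
# H1 (associative alternative): group and preferred genre are associated
chi2, p_value, dof, expected = stats.chi2_contingency(observed)

print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
# A small p-value suggests rejecting H0 in favour of an association;
# an association alone says nothing about which variable, if either,
# causes the other (that would be a causal hypothesis).
```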

Hypothesis Examples

Following are the examples of hypotheses based on their types:

Simple Hypothesis Example

  • Studying more can help you do better on tests.
  • Getting more sun makes people have higher amounts of vitamin D.

Complex Hypothesis Example

  • Wealth, together with access to education and healthcare, greatly affects how many years people live.
  • A new medicine's success depends on the dose used, the age of the person taking it and their genes.

Directional Hypothesis Example

  • Drinking more sweet drinks is linked to a higher body weight score.
  • Too much stress makes people less productive at work.

Non-directional Hypothesis Example

  • Drinking caffeine can affect how well you sleep.
  • People may prefer different kinds of music depending on their gender.

Null Hypothesis Example

  • The average test scores of Group A and Group B do not differ.
  • There is no connection between using a certain fertilizer and how much the crops grow.

Alternative Hypothesis Example

  • Patients on Diet A have significantly different cholesterol levels from those following Diet B.
  • Exposure to a certain type of light changes how plants grow compared with normal sunlight.

Statistical Hypothesis Example

  • The average IQ score of children in a certain school district is 100.
  • The average time taken to finish a job using Method A is the same as with Method B.

Research Hypothesis Example

  • Attending early learning classes helps children do better in school when they are older.
  • Using specific ways of communicating affects how much customers engage with marketing activities.

Associative Hypothesis Example

  • Regular exercise is associated with a lower chance of heart disease.
  • More years of schooling are associated with higher earnings.

Causal Hypothesis Example

  • Playing violent video games makes teens more likely to act aggressively.
  • Poorer air quality directly affects respiratory health in city populations.

Functions of Hypothesis

Hypotheses have many important jobs in the process of scientific research. Here are the key functions of hypotheses:

  • Guiding Research: Hypotheses give a clear and precise direction to research. They act as guides, stating the predicted connections or outcomes that researchers want to study.
  • Formulating Research Questions: Hypotheses often grow out of research questions. They help turn broad questions into particular, checkable statements and keep the study focused.
  • Setting Clear Objectives: Hypotheses set the goals of a study by stating which connections between variables are to be examined. They define the targets that researchers try to reach with their studies.
  • Testing Predictions: Hypotheses predict what will happen in experiments or observations. By testing in a planned way, researchers can check whether what they observe matches those predictions.
  • Providing Structure: Hypotheses give structure to the research process by organising thoughts and ideas. They help researchers reason about connections between variables and design experiments accordingly.
  • Focusing Investigations: By stating the expected links or outcomes clearly, hypotheses help researchers concentrate on specific parts of their research question, making the work more efficient.
  • Facilitating Communication: Hypotheses help researchers communicate effectively. Clearly formulated hypotheses let researchers explain their plans, methods and expected results to colleagues and wider audiences.
  • Generating Testable Statements: A good hypothesis can be examined carefully or tested through experiments. This ensures that hypotheses contribute to the body of empirical scientific knowledge.
  • Promoting Objectivity: Hypotheses give a clear rationale for a study and help guide the process while reducing personal bias. They require researchers to use facts and data as evidence for or against their proposed answers.
  • Driving Scientific Progress: Making, testing and revising hypotheses is a cycle. Whether a hypothesis is supported or refuted, the information gained helps knowledge grow in that area.

How Do Hypotheses Help in Scientific Research?

Researchers use hypotheses to set down the ideas that direct how an experiment will take place. The following are the steps involved in the scientific method:

  • Initiating Investigations: Hypotheses are the beginning of science research. They come from watching, knowing what’s already known or asking questions. This makes scientists make certain explanations that need to be checked with tests.
  • Formulating Research Questions: Ideas usually come from bigger questions in study. They help scientists make these questions more exact and testable, guiding the study’s main point.
  • Setting Clear Objectives: Hypotheses set the goals of a study by stating what we think will happen between different things. They set the goals that scientists want to reach by doing their studies.
  • Designing Experiments and Studies: Assumptions help plan experiments and watchful studies. They assist scientists in knowing what factors to measure, the techniques they will use and gather data for a proposed reason.
  • Testing Predictions: Ideas guess what will happen in experiments or observations. By checking these guesses carefully, scientists can see if the seen results match up with what was predicted in each hypothesis.
  • Analysis and Interpretation of Data: Hypotheses give us a way to study and make sense of information. Researchers look at what they found and see if it matches the guesses made in their theories. They decide if the proof backs up or disagrees with these suggested reasons why things are happening as expected.
  • Encouraging Objectivity: Hypotheses help make things fair by making sure scientists use facts and information to either agree or disagree with their suggested reasons. They lessen personal preferences by needing proof from experience.
  • Iterative Process: People either agree or disagree with guesses, but they still help the ongoing process of science. Findings from testing ideas make us ask new questions, improve those ideas and do more tests. It keeps going on in the work of science to keep learning things.


Summary – Hypothesis

A hypothesis is a testable statement serving as an initial explanation for phenomena, based on observations, theories, or existing knowledge. It acts as a guiding light for scientific research, proposing potential relationships between variables that can be empirically tested through experiments and observations. The hypothesis must be specific, testable, falsifiable, and grounded in prior research or observation, laying out a predictive, if-then scenario that details a cause-and-effect relationship. It originates from various sources including existing theories, observations, previous research, and even personal curiosity, leading to different types, such as simple, complex, directional, non-directional, null, and alternative hypotheses, each serving distinct roles in research methodology. The hypothesis not only guides the research process by shaping objectives and designing experiments but also facilitates objective analysis and interpretation of data, ultimately driving scientific progress through a cycle of testing, validation, and refinement.

FAQs on Hypothesis

What is a Hypothesis?

A hypothesis is a possible explanation or prediction that can be checked through research and experiments.

What are Components of a Hypothesis?

The components of a Hypothesis are Independent Variable, Dependent Variable, Relationship between Variables, Directionality etc.

What makes a Good Hypothesis?

Testability, falsifiability, clarity and precision, and relevance are some of the qualities that make a good hypothesis.

Can a Hypothesis be Proven True?

You cannot prove conclusively that most hypotheses are true because it’s generally impossible to examine all possible cases for exceptions that would disprove them.

How are Hypotheses Tested?

Hypothesis testing is used to assess the plausibility of a hypothesis by using sample data

Can Hypotheses change during Research?

Yes, you can change or improve your ideas based on new information discovered during the research process.

What is the Role of a Hypothesis in Scientific Research?

Hypotheses are used to support scientific research and bring about advancements in knowledge.


Definition of hypothesis noun from the Oxford Advanced American Dictionary

  • formulate/advance a theory/hypothesis
  • build/construct/create/develop a simple/theoretical/mathematical model
  • develop/establish/provide/use a theoretical/conceptual framework/an algorithm
  • advance/argue/develop the thesis that…
  • explore an idea/a concept/a hypothesis
  • make a prediction/an inference
  • base a prediction/your calculations on something
  • investigate/evaluate/accept/challenge/reject a theory/hypothesis/model
  • design an experiment/a questionnaire/a study/a test
  • do research/an experiment/an analysis
  • make observations/calculations
  • take/record measurements
  • carry out/conduct/perform an experiment/a test/a longitudinal study/observations/clinical trials
  • run an experiment/a simulation/clinical trials
  • repeat an experiment/a test/an analysis
  • replicate a study/the results/the findings
  • observe/study/examine/investigate/assess a pattern/a process/a behavior
  • fund/support the research/project/study
  • seek/provide/get/secure funding for research
  • collect/gather/extract data/information
  • yield data/evidence/similar findings/the same results
  • analyze/examine the data/soil samples/a specimen
  • consider/compare/interpret the results/findings
  • fit the data/model
  • confirm/support/verify a prediction/a hypothesis/the results/the findings
  • prove a conjecture/hypothesis/theorem
  • draw/make/reach the same conclusions
  • read/review the records/literature
  • describe/report an experiment/a study
  • present/publish/summarize the results/findings
  • present/publish/read/review/cite a paper in a scientific journal


  • 2 [uncountable] guesses and ideas that are not based on certain knowledge (synonym: speculation): It would be pointless to engage in hypothesis before we have the facts.



Theories of Meaning

The term “theory of meaning” has figured, in one way or another, in a great number of philosophical disputes over the last century. Unfortunately, this term has also been used to mean a great number of different things. In this entry, the focus is on two sorts of “theory of meaning”. The first sort of theory—a semantic theory—is a theory which assigns semantic contents to expressions of a language. The second sort of theory—a foundational theory of meaning—is a theory which states the facts in virtue of which expressions have the semantic contents that they have. Following a brief introduction, these two kinds of theory are discussed in turn.

1. Two Kinds of Theory of Meaning

2.1.1 the theory of reference, 2.1.2 theories of reference vs. semantic theories, 2.1.3 the relationship between content and reference, 2.1.4 character and content, context and circumstance, 2.1.5 possible worlds semantics, 2.1.6 russellian semantics, 2.1.7 fregean semantics, 2.2.1 davidsonian semantics, 2.2.2 internalist semantics, 2.2.3 inferentialist semantics, 2.2.4 dynamic semantics, 2.2.5 expressivist semantics, 2.3.1 how much context-sensitivity, 2.3.2 how many indices, 2.3.3 what are propositions, anyway, 3.1.1 the gricean program, 3.1.2 meaning, belief, and convention, 3.1.3 mental representation-based theories, 3.2.1 causal origin, 3.2.2 truth-maximization and the principle of charity, 3.2.3 reference magnetism, 3.2.4 regularities in use, 3.2.5 social norms, other internet resources, related entries.

In “General Semantics”, David Lewis wrote

I distinguish two topics: first, the description of possible languages or grammars as abstract semantic systems whereby symbols are associated with aspects of the world; and, second, the description of the psychological and sociological facts whereby a particular one of these abstract semantic systems is the one used by a person or population. Only confusion comes of mixing these two topics. (Lewis 1970: 19)

Lewis was right. Even if philosophers have not consistently kept these two questions separate, there clearly is a distinction between the questions “What is the meaning of this or that symbol (for a particular person or group)?” and “In virtue of what facts about that person or group does the symbol have that meaning?”

Corresponding to these two questions are two different sorts of theory of meaning. One sort of theory of meaning—a semantic theory —is a specification of the meanings of the words and sentences of some symbol system. Semantic theories thus answer the question, “What is the meaning of this or that expression?” A distinct sort of theory—a foundational theory of meaning —tries to explain what about some person or group gives the symbols of their language the meanings that they have. To be sure, the shape of a correct semantic theory places constraints on the correct foundational theory of meaning, and vice versa; but that does not change the fact that semantic theories and foundational theories are simply different sorts of theories, designed to answer different questions.

To see the distinction between semantic theories and foundational theories of meaning, it may help to consider an analogous one. Imagine an anthropologist specializing in table manners sent out to observe a distant tribe. One task the anthropologist clearly might undertake is to simply describe the table manners of that tribe—to describe the different categories into which members of the tribe place actions at the table, and to say which sorts of actions fall into which categories. This would be analogous to the task of the philosopher of language interested in semantics; her job is say what different sorts of meanings expressions of a given language have, and which expressions have which meanings.

But our anthropologist might also become interested in the nature of manners; he might wonder how, in general, one set of rules of table manners comes to be the system of etiquette governing a particular group. Since presumably the fact that a group obeys one system of etiquette rather than another is traceable to something about that group, the anthropologist might put his new question by asking,

In virtue of what facts about a person or group does that person or group come to be governed by a particular system of etiquette, rather than another?

Our anthropologist would then have embarked upon the analogue of the construction of a foundational theory of meaning: he would then be interested, not in which etiquette-related properties particular action types have in a certain group, but rather the question of how action-types can, in any group, come to acquire properties of this sort. [ 1 ] Our anthropologist might well be interested in both sorts of questions about table manners; but they are, pretty clearly, different questions. Just so, semantic theories and foundational theories of meaning are, pretty clearly, different sorts of theories.

The term “theory of meaning” has, in the recent history of philosophy, been used to stand for both semantic theories and foundational theories of meaning. As this has obvious potential to mislead, in what follows I’ll avoid the term which this article is meant to define and stick instead to the more specific “semantic theory” and “foundational theory of meaning”. “Theory of meaning” simpliciter can be understood as ambiguous between these two interpretations.

Before turning to discussion of these two sorts of theories, it is worth noting that one prominent tradition in the philosophy of language denies that there are facts about the meanings of linguistic expressions. (See, for example, Quine 1960 and Kripke 1982; for critical discussion, see Soames 1997.) If this sort of skepticism about meaning is correct, then there is neither a true semantic theory nor a true foundational theory of meaning to be found, since the relevant sort of facts simply are not around to be described or analyzed. Discussion of these skeptical arguments is beyond the scope of this entry, so in what follows I’ll simply assume that skepticism about meaning is false.

2. Semantic Theories

The task of explaining the main approaches to semantic theory in contemporary philosophy of language might seem to face an in-principle stumbling block. Given that no two languages have the same semantics—no two languages are comprised of just the same words, with just the same meanings—it may seem hard to see how we can say anything about different views about semantics in general, as opposed to views about the semantics of this or that language. This problem has a relatively straightforward solution. While it is of course correct that the semantics for English is one thing and the semantics for French something else, most assume that the various natural languages should all have semantic theories of (in a sense to be explained) the same form. The aim of what follows will, accordingly, be to introduce the reader to the main approaches to natural language semantics—the main views about the right form for a semantics for a natural language to take—rather than to provide a detailed examination of the various views about the semantics of some particular expression. (For an overview, see the entry on word meaning . For discussion of issues involving particular expression types, see the entries on names , quantifiers and quantification , descriptions , propositional attitude reports , and natural kinds .)

One caveat before we get started: before a semantic theorist sets off to explain the meanings of the expressions of some language, she needs a clear idea of what she is supposed to explain the meaning of . This might not seem to present much of a problem; aren’t the bearers of meaning just the sentences of the relevant language, and their parts? This is correct as far as it goes. But the task of explaining what the semantically significant parts of a sentence are, and how those parts combine to form the sentence, is as complex as semantics itself, and has important consequences for semantic theory. Indeed, most disputes about the right semantic treatment of some class of expressions are intertwined with questions about the syntactic form of sentences in which those expressions figure. Unfortunately, discussion of theories of this sort, which attempt to explain the syntax, or logical form, of natural language sentences, is well beyond the scope of this entry. As a result, figures like Richard Montague, whose work on syntax and its connection to semantics has been central to the development of semantic theory over the past few decades, are passed over in what follows. (Montague’s essays are collected in Montague 1974.) For an excellent introduction to the connections between syntax and semantics, see Heim & Kratzer (1998); for an overview of the relations between philosophy of language and several branches of linguistics, see Moss (2012).

There are a wide variety of approaches to natural language semantics. My strategy in what follows will be to begin by explaining one prominent family of approaches to semantics which developed over the course of the twentieth century and is still prominently represented in contemporary work in semantics, both in linguistics and in philosophy. For lack of a better term, let's call these types of semantic theories classical semantic theories . (As in discussions of classical logic, the appellation “classical” is not meant to suggest that theories to which this label is applied are to be preferred over others.) Classical semantic theories agree that sentences are (typically) true or false, and that whether they are true or false depends on what information they encode or express. This “information” is often called “the proposition expressed by the sentence”. The job of a semantic theory, according to the classical theorist, is at least in large part to explain how the meanings of the parts of the sentence, along with the context in which the sentence is used, combine to determine which proposition the sentence expresses in that context (and hence also the truth conditions of the sentence, as used in that context).

Classical semantic theories are discussed in §2.1. In §§2.1.1–4 the theoretical framework common to classical semantic theories is explained; in §§2.1.5–7 the differences between three main versions of classical semantic theories are explained. In §2.2 there is a discussion of the alternatives to classical semantic theories. In §2.3 a few general concluding questions are discussed; these are questions semantic theorists face which are largely, though not completely, orthogonal to one’s view about the form which a semantic theory ought to take.

2.1 Classical Semantic Theories

The easiest way to understand the various sorts of classical semantic theories is by beginning with another sort of theory: a theory of reference.

A theory of reference is a theory which pairs expressions with the contribution those expressions make to the determination of the truth-values of sentences in which they occur. (Though later we will see that this view of the reference of an expression must be restricted in certain ways.)

This construal of the theory of reference is traceable to Gottlob Frege ’s attempt to formulate a logic sufficient for the formalization of mathematical inferences (see especially Frege 1879 and 1892.) The construction of a theory of reference of this kind is best illustrated by beginning with the example of proper names. Consider the following sentences:

  • (1) Barack Obama was the 44th president of the United States.
  • (2) John McCain was the 44th president of the United States.

(1) is true, and (2) is false. Obviously, this difference in truth-value is traceable to some difference between the expressions “Barack Obama” and “John McCain”. What about these expressions explains the difference in truth-value between these sentences? It is very plausible that it is the fact that “Barack Obama” stands for the man who was in fact the 44th president of the United States, whereas “John McCain” stands for a man who was not. This suggests that the reference of a proper name—its contribution to the determination of truth conditions of sentences in which it occurs—is the object for which that name stands. (While this is plausible, it is not uncontroversial that the purpose of a name is to refer to an individual; see Graff Fara (2015) and Jeshion (2015) for arguments on opposite sides of this issue.)

Given this starting point, it is a short step to some conclusions about the reference of other sorts of expressions. Consider the following pair of sentences:

  • (3) Barack Obama is a Democrat.
  • (4) Barack Obama is a Republican.

Again, the first of these is true, whereas the second is false. We already know that the reference of “Barack Obama” is the man for which the name stands; so, given that reference is power to affect truth-value, we know that the reference of predicates like “is a Democrat” and “is a Republican” must be something which combines with an object to yield a truth-value. Accordingly, it is natural to think of the reference of predicates of this sort as functions from objects to truth-values. The reference of “is a Democrat” is that function which returns the truth-value “true” when given as input an object which is a member of the Democratic party (and the truth-value “false” otherwise), whereas the reference of “is a Republican” is a function which returns the truth-value “true” when given as input an object which is a member of the Republican party (and the truth-value “false” otherwise). This is what explains the fact that (3) is true and (4) false: Obama is a member of the Democratic party, and is not a member of the Republican party.

Matters get more complicated, and more controversial, as we extend this sort of theory of reference to cover more and more of the types of expressions we find in natural languages like English. (For an introduction, see Heim and Kratzer (1998).) But the above is enough to give a rough idea of how one might proceed. For example, some predicates, like “loves” combine with two names to form a sentence, rather than one. So the reference of two-place predicates of this sort must be something which combines with a pair of objects to determine a truth-value—perhaps, that function from ordered pairs of objects to truth-values which returns the truth-value “true” when given as input a pair of objects whose first member loves the second member, and “false” otherwise.
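The idea that the reference of a predicate is a function from objects (or pairs of objects) to truth-values can be pictured with a small piece of code. The following Python fragment is only a toy model sketched for this entry (the individuals, party memberships and the "loves" pairs are stipulated for the example), not part of any standard semantics library:

```python
# Toy model of a theory of reference:
# names refer to objects; predicates refer to functions from objects to truth-values.

# Reference of the names (objects stipulated for the example)
obama = "Barack Obama"
mccain = "John McCain"

democrats = {"Barack Obama"}
republicans = {"John McCain"}

# One-place predicates: functions from an object to a truth-value
def is_a_democrat(x):
    return x in democrats

def is_a_republican(x):
    return x in republicans

# A two-place predicate: a function from an ordered pair of objects to a truth-value
loving_pairs = {("Barack Obama", "Michelle Obama")}  # stipulated for the example

def loves(x, y):
    return (x, y) in loving_pairs

# The truth-value of a simple sentence is computed by applying the reference of
# the predicate to the reference of the name(s):
print(is_a_democrat(obama))            # "Barack Obama is a Democrat"   -> True
print(is_a_republican(obama))          # "Barack Obama is a Republican" -> False
print(loves(obama, "Michelle Obama"))  # "Obama loves Michelle Obama"   -> True
```

On this toy picture, substituting co-referring expressions can never change a computed truth-value, which is exactly why the examples discussed below put pressure on the idea that reference exhausts meaning.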

So let’s suppose that we have a theory of reference for a language, in the above sense. Would we then have a satisfactory semantic theory for the language?

Some plausible arguments indicate that we would not. To adopt an example from Quine (1970 [1986], pp. 8–9), let’s assume that the set of animals with hearts (which Quine, for convenience, calls “cordates” – not to be confused with “chordates”) is the same as the set of animals with kidneys (which Quine calls “renates”). Now, consider the pair of sentences:

  • (5) All cordates are cordates.
  • (6) All cordates are renates.

Given our assumption, both sentences are true. Moreover, from the point of view of the theory of reference, (5) and (6) are just the same: they differ only in the substitution of “renates” for “cordates”, and these expressions have the same reference (because they stand for the same function from objects to truth-values).

All the same, there is clearly an intuitive difference in meaning between (5) and (6); the sentences seem, in some sense, to say different things. The first seems to express the trivial, boring thought that every creature with a heart is a creature with a heart, whereas the second expresses the non-trivial, potentially informative claim that every creature with a heart also has a kidney. This suggests that there is an important difference between (5) and (6) which our theory of reference simply fails to capture.

Examples of the same sort can be generated using pairs of expressions of other types which share a reference, but intuitively differ in meaning; for example, "Clark Kent" and "Superman", or (an example famously discussed by Frege (1892 [1960])) "the Morning Star" and "the Evening Star".

This might seem a rather weak argument for the incompleteness of the theory of reference, resting as it does on intuitions about the relative informativeness of sentences like (5) and (6). But this argument can be strengthened by embedding sentences like (5) and (6) in more complex sentences, as follows:

  • (7) John believes that all cordates are cordates .
  • (8) John believes that all cordates are renates .

(7) and (8) differ only with respect to the italic expressions and, as we noted above, these expressions have the same reference. Despite this, it seems clear that (7) and (8) could differ in truth-value: someone could know that all cordates have a heart without having any opinion on the question of whether all cordates have a kidney. But that means that the references of expressions don’t even do the job for which they were introduced: they don’t explain the contribution that expressions make to the determination of the truth-value of all sentences in which they occur. (One might, of course, still think that the reference of an expression explains its contribution to the determination of the truth-value of a suitably delimited class of simple sentences in which the expression occurs.) If we are to be able to explain, in terms of the properties of the expressions that make them up, how (7) and (8) can differ in truth-value, then expressions must have some other sort of value, some sort of meaning, which goes beyond reference.

(7) and (8) are called belief ascriptions , for the obvious reason that they ascribe a belief to a subject. Belief ascriptions are one sort of propositional attitude ascription —other types include ascriptions of knowledge, desire, or judgment. As will become clear in what follows, propositional attitude ascriptions have been very important in recent debates in semantics. One of the reasons why they have been important is exemplified by (7) and (8). Because these sentences can differ in truth-value despite the fact that they differ only with respect to the italic words, and these words both share a reference and occupy the same place in the structure of the two sentences, we say that (7) and (8) contain a non-extensional context : roughly, a “location” in the sentence which is such that substitution of terms which share a reference in that location can change truth-value. (They’re called “non-extensional contexts” because “extension” is another term for “reference”.)

We can give a similar argument for the incompleteness of the theory of reference based on the substitution of whole sentences. A theory of reference assigns to subsentential expressions values which explain their contribution to the truth-values of sentences; but to those sentences, it only assigns “true” or “false”. But consider a pair of sentences like

  • (9) Mary believes that Barack Obama was the president of the United States.
  • (10) Mary believes that John Key was the prime minister of New Zealand.

Because both of the italic sentences are true, (9) and (10) are a pair of sentences which differ only with respect to substitution of expressions (namely, the italic sentences) with the same reference. Nonetheless, (9) and (10) could plainly differ in truth-value.

This seems to show that a semantic theory should assign some value to sentences other than a truth-value. Another route to this conclusion is the apparent truth of claims of the following sort:

  • There are three things that John believes about Indiana, and they are all false.
  • There are many necessary truths which are not a priori, and my favorite sentence expresses one of them.
  • To get an A you must believe everything I say.

Sentences like these seem to show that there are things which are the objects of mental states like belief, the bearers of truth and falsity as well as modal properties like necessity and possibility and epistemic properties like a prioricity and posterioricity, and the things expressed by sentences. What are these things? The theory of reference provides no answer.

These entities are often called propositions . Friends of propositions aim both to provide a theory of these entities, and, in so doing, also to solve the two problems for the theory of reference discussed above: (i) the lack of an explanation for the fact that (5) is trivial while (6) is not, and (ii) the fact (exemplified by (7)/(8) and (9)/(10) ) that sentences which differ only in the substitution of expressions with the same reference can differ in truth-value.

A theory of propositions thus does not abandon the theory of reference, as sketched above, but simply says that there is more to a semantic theory than the theory of reference. Subsentential expressions have, in addition to a reference, a content . The contents of sentences—what sentences express—are known as propositions .

The natural next question is: What sorts of things are contents? Below I’ll discuss some of the leading answers to this question. But in advance of laying out any theory about what contents are, we can say some general things about the role that contents are meant to play.

First, what is the relationship between content and reference? Let’s examine this question in connection with sentences; here it amounts to the question of the relationship between the proposition a sentence expresses and the sentence’s truth-value. One point brought out by the example of (9) and (10) is that two sentences can express different propositions while having the same truth-value. After all, the beliefs ascribed to Mary by these sentences are different; so if propositions are the objects of belief, the propositions corresponding to the italic sentences must be different. Nonetheless, both sentences are true.

Is the reverse possible? Can two sentences express the same proposition, but differ in truth-value? It seems not, as can be illustrated again by the role of propositions as the objects of belief. Suppose that you and I believe the exact same thing—both of us believe the world to be just the same way. Can my belief be true, and yours false? Intuitively, it seems not; it seems incoherent to say that we both believe the world to be the same way, but that I get things right and you get them wrong. (Though see the discussion of relativism in §2.3.2 below for a dissenting view.) So it seems that if two sentences express the same proposition, they must have the same truth value.

In general, then, it seems plausible that two sentences with the same content—i.e., which express the same proposition—must always have the same reference, though two expressions with the same reference can differ in content. This is the view stated by the Fregean slogan that sense determines reference (“sense” being the conventional translation of Frege’s Sinn , which was his word for what we are calling “content”).

If this holds for sentences, does it also hold for subsentential expressions? It seems that it must. Suppose for reductio that two subsentential expressions, e and e* , have the same content but differ in reference. It seems plausible that two sentences which differ only by the substitution of expressions with the same content must have the same content. (While plausible, this principle is not uncontroversial; see entry on compositionality .) But if this is true, then sentences which differ only in the substitution of e and e* would have the same content. But such a pair of sentences could differ in truth-value, since, for any pair of expressions which differ in reference, there is some pair of sentences which differ only by the substitution of those expressions and differ in truth-value. So if there could be a pair of expressions like e and e* , which differ in their reference but not in their content, there could be a pair of sentences which have the same content—which express the same proposition—but differ in truth-value. But this is what we argued above to be impossible; hence there could be no pair of expressions like e and e* , and content must determine reference for subsentential expressions as well as sentences.

This result—that content determines reference—explains one thing we should, plausibly, want a semantic theory to do: it should assign to each expression some value—a content—which determines a reference for that expression.

However, there is an obvious problem with the idea that we can assign a content, in this sense, to all of the expressions of a language like English: many expressions, like “I” or “here”, have a different reference when uttered by different speakers in different situations. So we plainly cannot assign to “I” a single content which determines a reference for the expression, since the expression has a different reference in different situations. These “situations” are typically called contexts of utterance , or just contexts , and expressions whose reference depends on the context are called indexicals (see entry) or context-dependent expressions .

The obvious existence of such expressions shows that a semantic theory must do more than simply assign contents to every expression of the language. Expressions like “I” must also be associated with rules which determine the content of the expression, given a context of utterance. These rules, which are (or determine) functions from contexts to contents, are called characters . (The terminology here, as well as the view of the relationship between context, content, and reference, is due to Kaplan (1989).) So the character of “I” must be some function from contexts to contents which, in a context in which I am the speaker, delivers a content which determines me as reference; in a context in which Barack Obama is the speaker, delivers a content which determines Barack Obama as reference; and so on. (See figure 1.)

Figure 1. [An extended description of figure 1 is in the supplement.]
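
For readers who find a concrete model helpful, here is a minimal sketch, in Python, of the character/content/reference picture just described. Everything in it (the Context fields, the function name character_of_I, the toy contexts) is invented for illustration; it is not Kaplan's formalism, only a way of displaying the functional structure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Context:
    """A context of utterance: who is speaking, where, and when."""
    speaker: str
    place: str
    time: int

def character_of_I(context):
    """The character of "I", modeled as a function from contexts to contents.
    A content is represented, very crudely, as a zero-argument function that
    returns the reference it determines."""
    speaker = context.speaker
    return lambda: speaker

c1 = Context(speaker="the author", place="South Bend", time=2020)
c2 = Context(speaker="Barack Obama", place="Chicago", time=2020)

print(character_of_I(c1)())  # 'the author': in c1, "I" refers to the speaker of c1
print(character_of_I(c2)())  # 'Barack Obama': same character, different content and reference
```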

Here we face another potentially misleading ambiguity in “meaning”. What is the real meaning of an expression—its character, or its content (in the relevant context)? The best answer here is a pluralist one. Expressions have characters which, given a context, determine a content. We can talk about either character or content, and both are important. The important thing is to be clear on the distinction, and to see the reasons for thinking that expressions have both a character and (relative to a context) a content.

How many indexical expressions are there? There are some obvious candidates—“I”, “here”, “now”, etc.—but beyond the obvious candidates, it is very much a matter of dispute; for discussion, see §2.3.1 below.

But there is a kind of argument which seems to show that almost every expression is an indexical. Consider an expression which does not seem to be context-sensitive, like “the second-largest city in the United States”. This does not seem to be context-sensitive, because it seems to refer to the same city—Los Angeles—whether uttered by me, you, or some other English speaker. But now consider a sentence like

  • (11) 100 years ago, the second-largest city in the United States was Chicago.

This sentence is true. But for it to be true, “the second-largest city in the United States” would have to, in (11), refer to Chicago. But then it seems like this expression must be an indexical—its reference must depend on the context of utterance. In (11), the thought goes, the phrase “one hundred years ago” shifts the context: in (11), “the second-largest city in the United States” refers to that city that it would have referred to if uttered one hundred years ago.

However, this can’t be quite right, as is shown by examples like this one:

  • (12) In 100 years, I will not exist.

Let’s suppose that this sentence, as uttered by me, is true. Then, if what we said about (11) was right, it seems that “I” must, in (12), refer to whoever it would refer to if it were uttered 100 years in the future. So the one thing we know is that (assuming that (12) is true), it does not refer to me—after all, I won’t be around to utter anything. But, plainly, the “I” in (12) does refer to me when this sentence is uttered by me—after all, it is a claim about me. What’s going on here?

What examples like (12) are often taken to show is that the reference of an expression must be relativized, not just to a context of utterance, but also to a circumstance of evaluation —roughly, the possible state of the world relevant to the determination of the truth or falsity of the sentence. In the case of many simple sentences, context and circumstance coincide; details aside, they both just are the state of the world at the time of the utterance, with a designated speaker and place. But sentences like (12) show that they can come apart. Phrases like “In 100 years” shift the circumstance of evaluation—they change the state of the world relevant to the evaluation of the truth or falsity of the sentence—but don’t change the context of utterance. That’s why when I utter (12), “I” refers to me—despite the fact that I won’t exist to utter it in 100 years time.

Figure 2. [An extended description of figure 2 is in the supplement.]

This is sometimes called the need for double-indexing semantics —the two indices being contexts of utterance and circumstances of evaluation. (See figure 2.)
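
The double-indexing idea can likewise be displayed with a small sketch. Again, the details (the toy WORLD history, the names i_exist and in_100_years) are invented assumptions rather than part of any worked-out semantics; the point is only that the context fixes the reference of “I” while the temporal operator shifts the circumstance of evaluation.

```python
from dataclasses import dataclass

# A toy "history": which people exist at which times (invented data).
WORLD = {2020: {"the author", "Barack Obama"},
         2120: {"someone not yet born"}}

@dataclass(frozen=True)
class Context:
    speaker: str
    time: int

@dataclass(frozen=True)
class Circumstance:
    time: int

def i_exist(context):
    """Content of 'I exist' relative to a context: a function from circumstances
    of evaluation to truth-values. 'I' gets its reference from the context alone,
    and keeps it whatever circumstance we evaluate at."""
    speaker = context.speaker
    return lambda circ: speaker in WORLD.get(circ.time, set())

def in_100_years(content):
    """'In 100 years, S': shifts the circumstance of evaluation forward by 100
    years; the context of utterance is left untouched."""
    return lambda circ: content(Circumstance(time=circ.time + 100))

ctx = Context(speaker="the author", time=2020)
now = Circumstance(time=2020)

print(i_exist(ctx)(now))                 # True: evaluated at the current circumstance
print(in_100_years(i_exist(ctx))(now))   # False: so "In 100 years, I will not exist" is true,
                                         # even though "I" still refers to the speaker of ctx
```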

The classic explanation of a double-indexing semantics is Kaplan (1989); another important early discussion is Kamp (1971). For a different interpretation of the framework, see David Lewis (1980). For a classic discussion of some of the philosophical issues raised by indexicals, see Perry (1979).

Double-indexing explains how we can regard the reference of “the second-largest city in the United States” in (11) to be Chicago, without taking “the second-largest city in the United States” to be an indexical like “I”. On this view, “the second-largest city in the United States” does not vary in content depending on the context of utterance; rather, the content of this phrase is such that it determines a different reference with respect to different circumstances of evaluation. In particular, it has Los Angeles as its reference with respect to the present state of the actual world, and has Chicago as its reference with respect to the state of the actual world 100 years ago. [2] Because “the second-largest city in the United States” refers to different things with respect to different circumstances, it is not a rigid designator (see entry)—these being expressions which (relative to a context of utterance) refer to the same object with respect to every circumstance of evaluation at which that object exists, and never refer to anything else with respect to another circumstance of evaluation. (The term “rigid designator” is due to Kripke [1972].)

(Note that this particular example assumes the highly controversial view that circumstances of evaluation include, not just possible worlds, but also times. For a discussion of different views about the nature of circumstances of evaluation and their motivations, see §2.3.2 below.)

So we know that expressions are associated with characters, which are functions from contexts to contents; and we know that contents are things which, for each circumstance of evaluation, determine a reference. We can now raise a central question of (classical) semantic theories: what sorts of things are contents? The foregoing suggests a pleasingly minimalist answer to this question: perhaps, since contents are things which together with circumstances of evaluation determine a reference, contents just are functions from circumstances of evaluation to a reference.

This view sounds abstract but is, in a way, quite intuitive. The idea is that the meaning of an expression is not what the expression stands for in the relevant circumstance, but rather a rule which tells you what the expression would stand for if the world were a certain way. So, on this view, the content of an expression like “the tallest man in the world” is not simply the man who happens to be tallest, but rather a function from ways the world might be to men—namely, that function which, for any way the world might be, returns as a referent the tallest man in that world (if there is one, and nothing otherwise). This fits nicely with the intuitive idea that to understand such an expression one needn’t know what the expression actually refers to—after all, one can understand “the tallest man” without knowing who the tallest man is—but must know how to tell what the expression would refer to, given certain information about the world (namely, the heights of all the men in it).

These functions, or rules, are called (following Carnap (1947)) intensions. Possible worlds semantics is the view that contents are intensions (and hence that characters are functions from contexts to intensions, i.e. functions from contexts to functions from circumstances of evaluation to a reference). (See figure 3.)

Figure 3. [An extended description of figure 3 is in the supplement.]

For discussion of the application of the framework of possible world semantics to natural language, see David Lewis (1970). The intension of a sentence—i.e., the proposition that sentence expresses, on the present view—will then be a function from worlds to truth-values. In particular, it will be that function which returns the truth-value true for every world with respect to which that sentence is true, and false otherwise. The intension of a simple predicate like “is red” will be a function from worlds to the function from objects to truth-values which, for each world, returns the truth-value true if the thing in question is red, and returns the truth-value false otherwise. In effect, possible worlds semantics takes the meanings of expressions to be functions from worlds to the values which would be assigned by a theory of reference to those expressions at the relevant world: in that sense, intensions are a kind of “extra layer” on top of the theory of reference.
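
A toy model may make this “extra layer” vivid. The following Python sketch, with a two-world universe and a single predicate invented for illustration, treats the intension of “is red” as a function from worlds to extensions and the intension of a sentence as a function from worlds to truth-values.

```python
# Two toy possible worlds, each specifying which objects are red (invented data).
worlds = {
    "w_actual": {"the fire truck": True, "the lawn": False},
    "w_other":  {"the fire truck": False, "the lawn": False},
}

def is_red(world):
    """Intension of the predicate "is red": a function from worlds to the
    function from objects to truth-values (i.e., to an extension)."""
    return lambda obj: worlds[world].get(obj, False)

def the_fire_truck_is_red(world):
    """Intension of the sentence "the fire truck is red": a function from
    worlds to truth-values."""
    return is_red(world)("the fire truck")

print(the_fire_truck_is_red("w_actual"))  # True: its reference (truth-value) at this world
print(the_fire_truck_is_red("w_other"))   # False: a different reference at another world
```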

This extra layer promises to solve the problem posed by non-extensional contexts, as illustrated by the example of “cordate” and “renate” in (7) and (8) . Our worry was that, since these expressions have the same reference, if meaning just is reference, then it seems that any pair of sentences which differ only in the substitution of these expressions must have the same truth-value. But (7) and (8) are such a pair of sentences, and needn’t have the same truth-value. The proponent of possible worlds semantics solves this problem by identifying the meaning of these expressions with their intension rather than their reference, and by pointing out that “cordate” and “renate”, while they share a reference, seem to have different intensions. After all, even if in our world every creature with a heart is a creature with a kidney (and vice versa), it seems that the world could have been such that some creatures had a heart but not a kidney. Since with respect to that circumstance of evaluation the terms will differ in reference, their intensions—which are just functions from circumstances of evaluations to referents—must also differ. Hence possible worlds semantics leaves room for (7) and (8) to differ in truth value, as they manifestly can.

The central problem facing possible worlds semantics, however, concerns sentences of the same form as (7) and (8) : sentences which ascribe propositional attitudes, like beliefs, to subjects. To see this problem, we can begin by asking: according to possible worlds semantics, what does it take for a pair of sentences to have the same content (i.e., express the same proposition)? Since contents are intensions, and intensions are functions from circumstances of evaluation to referents, it seems that two sentences have the same content, according to possible worlds semantics, if they have the same truth-value with respect to every circumstance of evaluation. In other words, two sentences express the same proposition if and only if it is impossible for them to differ in truth-value.

The problem is that there are sentences which have the same truth-value in every circumstance of evaluation, but seem to differ in meaning. Consider, for example

  • (13) \(2+2=4\).
  • (14) There are infinitely many prime numbers.

(13) and (14) are both, like other truths of mathematics, necessary truths. Hence (13) and (14) have the same intension and, according to possible worlds semantics, must have the same content.
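
In terms of the toy model above, the difficulty is that any two necessary truths receive the very same intension, namely the constant function from worlds to the truth-value true. A brief sketch (purely illustrative):

```python
worlds = ["w1", "w2", "w3"]  # stand-ins for all the possible worlds

# Intensions of (13) and (14): both sentences are true at every possible world,
# so both get the constant function returning True.
two_plus_two_is_four   = lambda w: True
infinitely_many_primes = lambda w: True

# On possible worlds semantics, sameness of content is sameness of truth-value
# at every circumstance of evaluation:
same_content = all(two_plus_two_is_four(w) == infinitely_many_primes(w) for w in worlds)
print(same_content)  # True: the theory counts (13) and (14) as having the same content
```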

But this is highly counterintuitive. (13) and (14) certainly seem to say different things. The problem (just as with (5) and (6) ) can be sharpened by embedding these sentences in propositional attitude ascriptions:

  • (15) John believes that \(\mathit{2+2=4}\).
  • (16) John believes that there are infinitely many prime numbers.

As we have just seen, the proponent of possible worlds semantics must take the italic sentences, (13) and (14), to have the same content; hence, it seems, the proponent of possible worlds semantics must take (15) and (16) to be a pair of sentences which differ only in the substitution of expressions with the same content. But then it seems that the proponent of possible worlds semantics must take this pair of sentences to express the same proposition, and have the same truth-value; but (15) and (16) (like (7) and (8) ) seem to differ in truth-value, and hence seem not to express the same proposition. For an influential extension of this argument, see Soames (1988).

For attempts to reply to the argument from within the framework of possible worlds semantics, see among other places Stalnaker (1984) and Yalcin (2018); for discussion of a related approach to semantics which aims to avoid these problems, see situations in natural language semantics (see entry) . Another option is to invoke impossible as well as possible worlds; one might then treat propositions as sets of worlds, which may or may not be possible. If there is an impossible world in which there are only finitely many primes but in which 2+2=4, that would promise to give us the resources to distinguish between the set of worlds in which (13) is true and the set of worlds in which (14) is true, and hence to explain the difference in truth-value between (15) and (16) . For an overview of issues involving impossible worlds, see Nolan (2013).

What we need, then, is an approach to semantics which can explain how sentences like (13) and (14) , and hence also (15) and (16) , can express different propositions. That is, we need a view of propositions which makes room for the possibility that a pair of sentences can be true in just the same circumstances but nonetheless have genuinely different contents.

A natural thought is that (13) and (14) have different contents because they are about different things; for example, (14) makes a general claim about the set of prime numbers whereas (13) is about the relationship between the numbers 2 and 4. One might want our semantic theory to be sensitive to such differences: to count two sentences as expressing different propositions if they have different subject matters, in this sense. One way to secure this result is to think of the contents of subsentential expressions as components of the proposition expressed by the sentence as a whole. Differences in the contents of subsentential expressions would then be sufficient for differences in the content of the sentence as a whole; so, for example, since (14) but not (13) contains an expression which refers to prime numbers, these sentences will express different propositions.

Proponents of this sort of view think of propositions as structured : as having constituents which include the meanings of the expressions which make up the sentence expressing the relevant proposition. (See, for more discussion, the entry on structured propositions .) One important question for views of this sort is: what does it mean for an abstract object, like a proposition, to be structured, and have constituents? But this question would take us too far afield into metaphysics (see §2.3.3 below for a brief discussion). The fundamental semantic question for proponents of this sort of structured proposition view is: what sorts of things are the constituents of propositions?

The answer to this question given by a proponent of Russellian propositions is: objects, properties, relations, and functions. (The view is called “Russellianism” because of its resemblance to the view of content defended in Chapter IV of Russell 1903.) So described, Russellianism is a general view about what sorts of things the constituents of propositions are, and does not carry a commitment to any views about the contents of particular types of expressions. However, most Russellians also endorse a particular view about the contents of proper names which is known as Millianism : the view that the meaning of a simple proper name is the object (if any) for which it stands.

Russellianism has much to be said for it. It not only solves the problems with possible worlds semantics discussed above, but fits well with the intuitive idea that the function of names is to single out objects, and the function of predicates is to (what else?) predicate properties of those objects.

However, Millian-Russellian semantic theories also face some problems. Some of these are metaphysical in nature, and are based on the premise that propositions which have objects among their constituents cannot exist in circumstances in which those objects do not exist. (For discussion, see the entry on singular propositions .) Of the semantic objections to Millian-Russellian semantics, two are especially important.

The first of these problems involves the existence of empty names : names which have no referent. It is a commonplace that there are such names; an example is “Vulcan”, the name introduced for the planet between Mercury and the sun which was causing perturbations in the orbit of Mercury. Because the Millian-Russellian says that the content of a name is its referent, the Millian-Russellian seems forced into saying that empty names lack a content. But this is surprising; it seems that we can use empty names in sentences to express propositions and form beliefs about the world. The Millian-Russellian owes some explanation of how this is possible, if such names genuinely lack a content. An excellent discussion of this problem from a Millian point of view is provided in Braun (1993).

Perhaps the most important problem facing Millian-Russellian views, though, is Frege’s puzzle. Consider the sentences

  • (17) Clark Kent is Clark Kent.
  • (18) Clark Kent is Superman.

According to the Millian-Russellian, (17) and (18) differ only in the substitution of expressions which have the same content: after all, “Clark Kent” and “Superman” are proper names which refer to the same object, and the Millian-Russellian holds that the content of a proper name is the object to which that name refers. But this is a surprising result. These sentences seem to differ in meaning, because (17) seems to express a trivial, obvious claim, whereas (18) seems to express a non-trivial, potentially informative claim.

This sort of objection to Millian-Russellian views can (as above) be strengthened by embedding the intuitively different sentences in propositional attitude ascriptions, as follows:

  • (19) Lois believes that Clark Kent is Clark Kent.
  • (20) Lois believes that Clark Kent is Superman.

The problem posed by (19) and (20) for Russellian semantics is analogous to the problem posed by (15) and (16) for possible worlds semantics. Here, as there, we have a pair of belief ascriptions which seem as though they could differ in truth-value despite the fact that these sentences differ only with respect to expressions counted as synonymous by the relevant semantic theory.

Russellians have offered a variety of responses to Frege’s puzzle. Many Russellians think that our intuition that sentences like (19) and (20) can differ in truth-value is based on a mistake. This mistake might be explained at least partly in terms of a confusion between the proposition semantically expressed by a sentence in a context and the propositions speakers would typically use that sentence to pragmatically convey (Salmon 1986; Soames 2002), or in terms of the fact that a single proposition may be believed under several “propositional guises” (again, see Salmon 1986), or in terms of a failure to integrate pieces of information stored using distinct mental representations (Braun & Saul 2002). [ 3 ] Alternatively, a Russellian might try to make room for (19) and (20) to genuinely differ in truth-value by giving up the idea that sentences which differ only in the substitution of proper names with the same content must express the same proposition (Taschek 1995, Fine 2007).

However, these are not the only responses to Frege’s puzzle. Just as the Russellian responded to the problem posed by (15) and (16) by holding that two sentences with the same intension can differ in meaning, one might respond to the problem posed by (19) and (20) by holding that two names which refer to the same object can differ in meaning, thus making room for (19) and (20) to differ in truth-value. This is to endorse a Fregean response to Frege’s puzzle, and to abandon the Russellian approach to semantics (or, at least, to abandon Millian-Russellian semantics).

Fregeans, like Russellians, think of the proposition expressed by a sentence as a structured entity with constituents which are the contents of the expressions making up the sentence. But Fregeans, unlike Russellians, do not think of these propositional constituents as the objects, properties, and relations for which these expressions stand; instead, Fregeans think of the contents as modes of presentation, or ways of thinking about, objects, properties, and relations. The standard term for these modes of presentation is sense . (As with “intension”, “sense” is sometimes also used as a synonym for “content”. But, as with “intension”, it avoids confusion to restrict “sense” for “content, as construed by Fregean semantics”. It is then controversial whether there are such things as senses, and whether they are the contents of expressions.) Frege explained his view of senses with an analogy:

The reference of a proper name is the object itself which we designate by its means; the idea, which we have in that case, is wholly subjective; in between lies the sense, which is indeed no longer subjective like the idea, but is yet not the object itself. The following analogy will perhaps clarify these relationships. Somebody observes the Moon through a telescope. I compare the Moon itself to the reference; it is the object of the observation, mediated by the real image projected by the object glass in the interior of the telescope, and by the retinal image of the observer. The former I compare to the sense, the latter is like the idea or experience. The optical image in the telescope is indeed one-sided and dependent upon the standpoint of observation; but it is still objective, inasmuch as it can be used by several observers. At any rate it could be arranged for several to use it simultaneously. But each one would have his own retinal image. (Frege 1892 [1960])

Senses are then objective, in that more than one person can express thoughts with a given sense, and correspond many-one to objects. Thus, just as Russellian propositions correspond many-one to intensions, Fregean propositions correspond many-one to Russellian propositions. This is sometimes expressed by the claim that Fregean contents are more fine-grained than Russellian contents (or intensions).

Indeed, we can think of our three classical semantic theories, along with the theory of reference, as related by this kind of many-one relation, as illustrated by the chart below:


Figure 4. [An extended description of figure 4 is in the supplement.]

The principal argument for Fregean semantics (which also motivated Frege himself) is the neat solution the view offers to Frege’s puzzle: the view says that, in cases like (19) and (20) in which there seems to be a difference in content, there really is a difference in content: the names share a reference, but differ in their sense, because they differ in their mode of presentation of their shared reference.

The principal challenge for Fregeanism is the challenge of giving a non-metaphorical explanation of the nature of sense. This is a problem for the Fregean in a way that it is not for the possible worlds semanticist or the Russellian since the Fregean, unlike these two, introduces a new class of entities to serve as meanings of expressions rather than merely appropriating an already recognized sort of entity—like a function, or an object, property, or relation—to serve this purpose. [ 4 ]

A first step toward answering this challenge is provided by a criterion for telling when two expressions differ in meaning, which might be stated as follows. In his 1906 paper, “A Brief Survey of My Logical Doctrines”, Frege seems to endorse the following criterion:

Frege’s criterion of difference for senses: Two sentences S and S* differ in sense if and only if some rational agent who understood both could, on reflection, judge that S is true without judging that S* is true.

One worry about this formulation concerns the apparent existence of pairs of sentences, like “If Obama exists, then Obama=Obama” and “If McCain exists, McCain=McCain” which are such that any rational person who understands both will take both to be true. These sentences seem intuitively to differ in content—but this is ruled out by the criterion above. One idea for getting around this problem would be to state our criterion of difference for senses of expressions in terms of differences which result from substituting one expression for another:

Two expressions e and e* differ in sense if and only if there is a pair of sentences, S and S*, which (i) differ only in the substitution of e for e* and (ii) are such that some rational agent who understood both could, on reflection, judge that S is true without judging that S* is true.

This version of the criterion has Frege’s formulation as a special case, since sentences are, of course, expressions; and it solves the problem with obvious truths, since it seems that substitution of sentences of this sort can change the truth value of a propositional attitude ascription. Furthermore, the criterion delivers the wanted result that coreferential names like “Superman” and “Clark Kent” differ in sense, since a rational, reflective agent like Lois Lane could think that (17) is true while withholding assent from (18) .

But even if this tells us when names differ in sense, it does not quite tell us what the sense of a name is . Here is one initially plausible way of explaining what the sense of a name is. We know that, whatever the content of a name is, it must be something which determines as a reference the object for which the name stands; and we know that, if Fregeanism is true, this must be something other than the object itself. A natural thought, then, is that the content of a name—its sense—is some condition which the referent of the name uniquely satisfies. Coreferential names can differ in sense because there is always more than one condition which a given object uniquely satisfies. (For example, Superman/Clark Kent uniquely satisfies both the condition of being the superhero Lois most admires, and the newspaperman she least admires.) Given this view, it is natural to then hold that names have the same meanings as definite descriptions —phrases of the form “the so-and-so”. After all, phrases of this sort seem to be designed to pick out the unique object, if any, which satisfies the condition following the “the”. (For more discussion, see entry on descriptions .) This Fregean view of names is called Fregean descriptivism .
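
A crude sketch may help fix ideas here. In the following Python fragment (whose tiny domain and ranking attributes are invented for the purpose), the sense of a name is modeled as a condition and its reference as the unique satisfier of that condition; two different conditions can determine one and the same referent.

```python
# A tiny invented domain, with invented attributes recording Lois's rankings.
domain = {
    "Clark Kent":  {"superhero_rank": 1,    "newsman_rank": 3},
    "Lois Lane":   {"superhero_rank": None, "newsman_rank": 1},
    "Jimmy Olsen": {"superhero_rank": None, "newsman_rank": 2},
}

# Two different conditions ("senses"), each uniquely satisfied by the same person.
hero_lois_most_admires    = lambda x: domain[x]["superhero_rank"] == 1
newsman_she_least_admires = lambda x: domain[x]["newsman_rank"] == 3

def reference_of(sense):
    """On the descriptivist picture, the referent is the unique satisfier of the sense."""
    satisfiers = [x for x in domain if sense(x)]
    return satisfiers[0] if len(satisfiers) == 1 else None

print(reference_of(hero_lois_most_admires))     # Clark Kent
print(reference_of(newsman_she_least_admires))  # Clark Kent: same reference, different sense
```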

However, as Saul Kripke argued in Naming and Necessity , Fregean descriptivism faces some serious problems. Here is one of the arguments he gave against the view, which is called the modal argument . Consider a name like “Aristotle”, and suppose for purposes of exposition that the sense I associate with that name is the sense of the definite description “the greatest philosopher of antiquity”. Now consider the following pair of sentences:

  • (21) Necessarily, if Aristotle exists, then Aristotle is Aristotle.
  • (22) Necessarily, if Aristotle exists, then Aristotle is the greatest philosopher of antiquity.

If Fregean descriptivism is true, and “the greatest philosopher of antiquity” is indeed the description I associate with the name “Aristotle”, then it seems that (21) and (22) must be a pair of sentences which differ only via the substitution of expressions (the italic ones) with the same content. If this is right, then (21) and (22) must express the same proposition, and have the same truth-value. But this seems to be a mistake; while (21) appears to be true (Aristotle could hardly have failed to be himself), (22) appears to be false (perhaps Aristotle could have been a shoemaker rather than a philosopher; or perhaps if Plato had worked a bit harder, he rather than Aristotle could have been the greatest philosopher of antiquity).

An important precursor to Kripke’s arguments against Fregean descriptivism is Marcus (1961), which argues that names are “tags” for objects rather than abbreviated descriptions. Fregean descriptivists have given various replies to Kripke’s modal and other arguments; see especially Plantinga (1978), Dummett (1981), and Sosa (2001). For rejoinders to these Fregean replies, see Soames (1998, 2002) and Caplan (2005). For a defense of a view of descriptions which promises a reply to the modal argument, see Rothschild (2007). For a brief sketch of Kripke’s other arguments against Fregean descriptivism, see names, §2.4 .

Kripke’s arguments provide a strong reason for Fregeans to deny Fregean descriptivism, and hold instead that the senses of proper names are not the senses of any definite description associated with those names by speakers. The main problem for this sort of non-descriptive Fregeanism is to explain what the sense of a name might be such that it can determine the reference of the name, if it is not a condition uniquely satisfied by the reference of the name. Non-descriptive Fregean views were defended in McDowell (1977) and Evans (1981). The most sophisticated and well-developed version of the view is a kind of blend of Fregean semantics and possible worlds semantics. This is the epistemic two-dimensionalist approach to semantics which has been developed by David Chalmers. See Chalmers (2004,2006).

Three other problems for Fregean semantics are worth mentioning. The first is the problem of whether the Fregean can give an adequate treatment of indexical expressions. A classic argument that the Fregean cannot is given in Perry (1977); for a Fregean reply, see Evans (1981).

The second calls into question the Fregean’s claim to have provided a plausible solution to Frege’s puzzle. The Fregean resolves instances of Frege’s puzzle by positing differences in sense to explain apparent differences in truth-value. But this sort of solution, if pursued generally, seems to lead to the surprising result that no two expressions can have the same content. For consider a pair of expressions which really do seem to have the same content, like “catsup” and “ketchup”. (The example, as well as the argument to follow, is borrowed from Salmon 1990.) Now consider Bob, a confused condiment user, who thinks that the tasty red substance standardly labeled “catsup” is distinct from the tasty red substance standardly labeled “ketchup”, and consider the following pair of sentences:

  • (23) Bob believes that catsup is catsup.
  • (24) Bob believes that catsup is ketchup.

(23) and (24) seem quite a bit like (19) and (20) : these each seem to be pairs of sentences which differ in truth-value, despite differing only in the substitution of the italic expressions. So, for consistency, it seems that the Fregean should explain the apparent difference in truth-value between (23) and (24) in just the way he explains the apparent difference in truth-value between (19) and (20): by positing a difference in meaning between the italic expressions. But, first, it is hard to see how expressions like “catsup” and “ketchup” could differ in meaning; and, second, it seems that an example of this sort could be generated for any alleged pair of synonymous expressions. (A closely related series of examples is developed in much more detail in Kripke 1979.)

The example of “catsup” and “ketchup” is related to a third worry for the Fregean, which is the reverse of the Fregean’s complaint about Russellian semantics: a plausible case can be made that Frege’s criterion of difference for sense slices contents too finely, and draws distinctions in content where there are none. One way of developing this sort of argument involves (again) propositional attitude ascriptions. It seems plausible that if I utter a sentence like “Hammurabi thought that Hesperus was visible only in the morning”, what I say is true if and only if one of Hammurabi’s thoughts has the same content as does the sentence “Hesperus was visible only in the morning”, as used by me. On a Russellian view, this places a reasonable constraint on the truth of the ascription; it requires only that Hammurabi believe of a certain object that it instantiates the property of being visible in the morning. But on a Fregean view, this sort of view of attitude ascriptions would require that Hammurabi thought of the planet Venus under the same mode of presentation as I attach to the term “Hesperus”. This seems implausible, since it seems that I can truly report Hammurabi’s beliefs without knowing anything about the mode of presentation under which he thought of the planets. (For a recent attempt to develop a Fregean semantics for propositional attitude ascriptions which avoids this sort of problem by integrating aspects of a Russellian semantics, see Chalmers (2011).)

2.2 Alternatives to Classical Semantic Theories

Classical semantic theories, however, are not the only game in town. This section lays out the basics of five alternatives to classical semantic theorizing.

One kind of challenge to classical semantics attacks the idea that the job of a semantic theory is to systematically pair expressions with the entities which are their meanings. Wittgenstein was parodying just this idea when he wrote

You say: the point isn’t the word, but its meaning, and you think of the meaning as a thing of the same kind as the word, though also different from the word. Here the word, there the meaning. The money, and the cow that you can buy with it. (Wittgenstein 1953, §120)

While Wittgenstein himself did not think that systematic theorizing about semantics was possible, this anti-theoretical stance has not been shared by all subsequent philosophers who share his aversion to “meanings as entities”. A case in point is Donald Davidson. Davidson thought that semantic theory should take the form of a theory of truth for the language of the sort which Alfred Tarski showed us how to construct (see Tarski 1944 and entry on Tarski’s truth definitions ).

For our purposes, it will be convenient to think of a Tarskian truth theory as a variant on the sorts of theories of reference introduced in §2.1.1 . Recall that theories of reference of this sort specified, for each proper name in the language, the object to which that name refers, and for every simple predicate in the language, the set of things which satisfy that predicate. If we then consider a sentence which combines a proper name with such a predicate, like

Amelia sings

the theory tells us what it would take for that sentence to be true: it tells us that this sentence is true if and only if the object to which “Amelia” refers is a member of the set of things which satisfy the predicate “sings”—i.e., the set of things which sing. So we can think of a full theory of reference for the language as implying, for each sentence of this sort, a T-sentence of the form

“Amelia sings” is T (in the language) if and only if Amelia sings.

Suppose now that we expand our theory of reference so that it implies a T-sentence of this sort for every sentence of the language, rather than just for simple sentences which result from combining a name and a monadic predicate. We would then have a Tarskian truth theory for our language. Tarski’s idea was that such a theory would define a truth predicate (“ T ”) for the language; Davidson, by contrast, thought that we find in Tarskian truth theories “the sophisticated and powerful foundation of a competent theory of meaning” (Davidson 1967).
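
To see how such a theory issues T-sentences, here is a toy fragment in Python. The two-word language, the reference assignments, and the function names are all invented for illustration; nothing here is Tarski's or Davidson's own apparatus. The fragment merely displays how a theory of reference for names and monadic predicates yields, for each name-predicate sentence, a statement of the form of a T-sentence.

```python
# A toy theory of reference: a referent for each name, a set of satisfiers
# for each one-place predicate (all invented data).
reference_of_name = {"Amelia": "Amelia Earhart"}
satisfiers_of = {"sings": {"Amelia Earhart"},
                 "flies": {"Amelia Earhart"}}

def is_true(name, predicate):
    """A name-predicate sentence is true iff the referent of the name is in
    the set of things which satisfy the predicate."""
    return reference_of_name[name] in satisfiers_of[predicate]

def t_sentence(name, predicate):
    """Read off, for each such sentence, a statement of the form of a T-sentence."""
    return f'"{name} {predicate}" is T iff {name} {predicate}'

print(t_sentence("Amelia", "sings"))  # "Amelia sings" is T iff Amelia sings
print(is_true("Amelia", "sings"))     # True, given the toy reference assignments
```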

This claim is puzzling: why should a theory which issues T-sentences, but makes no explicit claims about meaning or content, count as a semantic theory? Davidson’s answer was that knowledge of such a theory would be sufficient to understand the language. If Davidson were right about this, then he would have a plausible argument that a semantic theory could take this form. After all, it is plausible that someone who understands a language knows the meanings of the expressions in the language; so, if knowledge of a Tarskian truth theory for the language were sufficient to understand the language, then knowledge of what that theory says would be sufficient to know all the facts about the meanings of expressions in the language, in which case it seems that the theory would state all the facts about the meanings of expressions in the language.

One advantage of this sort of approach to semantics is its parsimony: it makes no use of the intensions, Russellian propositions, or Fregean senses assigned to expressions by the propositional semantic theories discussed above. Of course, as we saw above, these entities were introduced to provide a satisfactory semantic treatment of various sorts of linguistic constructions, and one might well wonder whether it is possible to provide a Tarskian truth theory of the sort sketched above for a natural language without making use of intensions, Russellian propositions, or Fregean senses. The Davidsonian program obviously requires that we be able to do this, but it is still very much a matter of controversy whether a truth theory of this sort can be constructed. Discussion of this point is beyond the scope of this entry; one good way into this debate is through the debate about whether the Davidsonian program can provide an adequate treatment of propositional attitude ascriptions. See the discussion of the paratactic account and interpreted logical forms in the entry on propositional attitude reports . (For Davidson’s initial treatment of attitude ascriptions, see Davidson 1968; for further discussion see, among other places, Burge 1986; Schiffer 1987; Lepore and Loewer 1989; Larson and Ludlow 1993; Soames 2002.)

Let’s set this aside, and assume that a Tarskian truth theory of the relevant sort can be constructed, and ask whether, given this supposition, this sort of theory would provide an adequate semantics. There are two fundamental reasons for thinking that it would not, both of which are ultimately due to Foster (1976). Larson and Segal (1995) call these the extension problem and the information problem .

The extension problem stems from the fact that it is not enough for a semantic theory whose theorems are T-sentences to yield true theorems; the T-sentence

“Snow is white” is T in English iff grass is green.

is true, but tells us hardly anything about the meaning of “Snow is white”. Rather, we want a semantic theory to entail, for each sentence of the object language, exactly one interpretive T-sentence: a T-sentence such that the sentence used on its right-hand side gives the meaning of the sentence mentioned on its left-hand side. Our theory must entail at least one such T-sentence for each sentence in the object language because the aim is to give the meaning of each sentence in the language; and it must entail no more than one because, if the theory had as theorems more than one T-sentence for a single sentence S of the object language, an agent who knew all the theorems of the theory would not yet understand S , since such an agent would not know which of the T-sentences which mention S was interpretive.

The problem is that it seems that any theory which implies at least one T-sentence for every sentence of the language will also imply more than one T-sentence for every sentence in the language. For any sentences p, q, if the theory entails a T-sentence

S is T in L iff p,

then, since p is logically equivalent to \(p \wedge \neg(q \wedge \neg q)\), the theory will also entail the T-sentence

S is T in L iff \(p \wedge \neg(q \wedge \neg q)\),

which, if the first is interpretive, won’t be. But then the theory will entail at least one non-interpretive T-sentence, and someone who knows the theory will not know which of the relevant sentences is interpretive and which not; such a person therefore would not understand the language.
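
The logical point can be checked mechanically: for every assignment of truth-values to p and q, p and \(p \wedge \neg(q \wedge \neg q)\) agree in truth-value, so any theory closed under logical equivalence which entails the first T-sentence entails the second. A short check (illustrative only):

```python
from itertools import product

# For every assignment of truth-values to p and q, p agrees in truth-value with
# p & not(q & not q); so the two T-sentences state the same truth condition.
equivalent = all(p == (p and not (q and not q))
                 for p, q in product([True, False], repeat=2))
print(equivalent)  # True
```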

The information problem is that, even if our semantic theory entails all and only interpretive T-sentences, it is not the case that knowledge of what is said by these theorems would suffice for understanding the object language. For, it seems, I can know what is said by a series of interpretive T-sentences without knowing that they are interpretive. I may, for example, know what is said by the interpretive T-sentence

“Londres est jolie” is T in French iff London is pretty

but still not know the meaning of the sentence mentioned on the left-hand side of the T-sentence. The truth of what is said by this sentence, after all, is compatible with the sentence used on the right-hand side being materially equivalent to, but different in meaning from, the sentence mentioned on the left. This seems to indicate that knowing what is said by a truth theory of the relevant kind is not, after all, sufficient for understanding a language. (For replies to these criticisms, see Davidson (1976), Larson and Segal (1995) and Kölbel (2001); for criticism of these replies, see Soames (1992) and Ray (2014). For a reply to the latter, see Kirk-Giannini and Lepore (2017).)

The Davidsonian, on one reading, diagnoses the mistake of classical semantics in its commitment to a layer of content which goes beyond a theory of reference. A different alternative to classical semantics departs even more radically from that tradition, by denying that mind-world reference relations should play any role in semantic theorizing.

This view is sometimes called “internalist semantics” by contrast with views which locate the semantic properties of expressions in their relation to elements of the external world. This internalist approach to semantics is associated with the work of Noam Chomsky (see especially Chomsky 2000).

It is easy to say what this approach to semantics denies. The internalist denies an assumption common to all of the approaches discussed so far: the assumption that in giving the content of an expression, we are primarily specifying something about that expression’s relation to things in the world which that expression might be used to say things about. According to the internalist, expressions as such don’t bear any semantically interesting relations to things in the world; names don’t, for example, refer to the objects with which one might take them to be associated; predicates don’t have extensions; sentences don’t have truth conditions. On this sort of view, we can use sentences to say true or false things about the world, and can use names to refer to things; but this is just one thing we can do with names and sentences, and is not a claim about the meanings of those expressions.

So what are meanings, on this view? The most developed answer to this question is given in Pietroski (2018), according to which “meanings are instructions for how to build concepts of a special sort” (2018: 36). By “concepts”, Pietroski means mental representations of a certain kind. So the meaning of an expression is an instruction to form a certain sort of mental representation.

On this kind of view, while concepts may have extensions, expressions of natural languages do not. So this approach rejects not just the details but the foundation of the classical approach to semantics described above.

One way to motivate an approach of this kind focuses on the ubiquity of the phenomenon of polysemy in natural languages. As Pietroski says,

We can use “line” to speak of Euclidean lines, fishing lines, telephone lines, waiting lines, lines in faces, lines of thought, etc. We can use “door” to access a concept of certain impenetrable objects, or a concept of certain spaces that can be occupied by such objects. (2018: 5)

The defender of the view that expressions have meanings which determine extensions seems forced to say that “line” and “door” are homophonous expressions, like “bank”. But that seems implausible; when one uses the expressions “fishing line” and “line of thought” one seems to be using “line” in recognizably the same sense. (This is a point of contrast with standard examples of homophony, as when one uses “bank” once to refer to a financial institution and then later to refer to the side of a river.) The internalist, by contrast, is not forced into treating these as cases of homophony; he can say that the meaning of “line” is an instruction to fetch one of a family of concepts.

For defenses and developments of internalist approaches to semantics, see McGilvray (1998), Chomsky (2000), and Pietroski (2003, 2005, 2018).

Internalist semantics can be understood as denying the classical semantic assumption that a semantic theory should assign truth conditions to sentences. Another alternative to classical semantics does not deny that assumption, but does deny that truth conditions should play the fundamental role in semantics that classical semantics gives to them.

This alternative is inferentialist semantics. The difference between classical and inferentialist semantics is nicely put by Robert Brandom:

The standard way [of classical semantics] is to assume that one has a prior grip on the notion of truth, and use it to explain what good inference consists in. … [I]nferentialist pragmatism reverses this order of explanation … It starts with a practical distinction between good and bad inferences, understood as a distinction between appropriate and inappropriate doings, and goes on to understand talk about truth as talk about what is preserved by the good moves. (Brandom 2000: 12)

The classical semanticist begins with certain language-world representational relations, and uses these to explain the truth conditions of sentences; we can then go on to use these truth conditions to explain the difference between good and bad inferences. The inferentialist, by contrast, begins with the distinction between good and bad inferences, and tries to explain the representational relations which the classical semanticist takes as (comparatively) basic in inferentialist terms. (I say “comparatively basic” because the classical semanticist might go on to provide a reductive explanation of these representational relations, and the inferentialist might go on to provide a reductive explanation of the distinction between good and bad inferences.)

As Brandom also emphasizes, the divergence between the classical and inferentialist approaches to semantics arguably brings with it a divergence on two other fundamental topics.

The first is the relative explanatory priority of the semantic properties of sentences, on the one hand, and subsentential expressions, on the other. It is natural for the classical semanticist to think that the representational relations between subsentential expressions and their semantic contents can be explained independently of the representational properties of sentences (i.e., their truth conditions); the latter can thus be explained in terms of the former. For the inferentialist, on the other hand, the semantic properties of sentences must come first, because inferential relations hold between sentences but not between subsentential expressions. (One cannot, for example, infer one name from another.) So the inferentialist will not explain the semantic properties of, for example, singular terms in terms of representational relations between those singular terms and items in the world; rather, she will explain what is distinctive of singular terms in terms of their role in certain kinds of inferences. (To see how this strategy might work, see Brandom 2000: Ch. 4.)

The second is the relative explanatory priority of the semantic properties of individual sentences, on the one hand, and the semantic relations between sentences on the other. The classical semanticist can, so to speak, explain the meanings of sentences one by one; there is no difficulty in explaining the meaning of a sentence without mentioning other sentences. By contrast, according to the inferentialist,

if the conceptual content expressed by each sentence or word is understood as essentially consisting in its inferential relations (broadly construed) or articulated by its inferential relations (narrowly construed), then one must grasp many such contents in order to grasp any. (Brandom 2000: 29)

This is sometimes called a holist approach to semantics. For discussions of the pros and cons of this kind of view, see the entry on meaning holism .

For book length defenses of inferentialism, see Brandom (1994) and Brandom (2000). Important precursors include Wittgenstein (1953) and Sellars (1968); see also the entries on Ludwig Wittgenstein and Wilfrid Sellars . For a classic objection to inferentialism, see Prior (1960). For a discussion of a prominent approach within the inferentialist tradition, see the entry on proof-theoretic semantics .

In laying out the various versions of classical semantics, we said a lot about sentences. By comparison, we said hardly anything about conversations, or discourses. This is no accident; classical approaches to semantics typically think of properties of conversations or discourses as explicable in terms of explanatorily prior semantic properties of sentences (even if classical semanticists do often take the semantic contents of sentences to be sensitive to features of the discourse in which they occur).

Dynamic semantics is, to a first approximation, an approach to semantics which reverses these explanatory priorities. (The sorts of classical theories sketched above are, by contrast, called “static” semantic theories.) On a dynamic approach, a semantic theory does not aim primarily to deliver a pairing of sentences with propositions which then determine the sentence’s truth conditions. Rather, according to these dynamic approaches to semantics, the semantic values of sentences are “context change potentials”—roughly, instructions for updating the context, or discourse.

In a dynamic context, many of the questions posed above about how best to understand the nature of semantic contents show up instead as questions about how best to understand the nature of contexts and context change potentials. It is controversial not just whether a dynamic or static approach to semantics is likely to be more fruitful, but also what exactly the distinction between dynamic and static systems comes to. (For discussion of the latter question, see Rothschild & Yalcin 2016.)

The relationship between dynamic semantics and classical semantics is different than the relationship between the latter and the other alternatives to classical semantics that I’ve discussed. The other alternatives to classical semantics reject some core feature of classical semantics—for example, the assignments of entities as meanings, or the idea that meaning centrally involves word-world relations. By contrast, dynamic semantics can be thought of as a kind of extension or generalization of classical semantics, which can employ modified versions of much of the same theoretical machinery.

Foundational works in this tradition include Irene Heim’s file change semantics (1982) and Hans Kamp’s discourse representation theory (see entry). For more details on different versions of this alternative to classical semantics, see the entry on dynamic semantics. For critical discussion of the motivations for dynamic semantics, see Karen Lewis (2014). For discussion of the extent to which dynamic and static approaches are really in competition, see Stojnić (2019).

A final alternative to classical semantics differs from those discussed in the preceding four subsections in two (related) respects.

The first is that, unlike the other non-classical approaches, expressivist semantics was originally not motivated by linguistic considerations. Rather, it was developed in response to specifically metaethical considerations. A number of philosophers held metaethical views which made it hard for them to see how a classical semantic treatment of sentences about ethics could be correct, and so developed expressivism as an alternative treatment of these parts of language.

This leads to a second difference between expressivism and our other four alternatives to classical semantics. The latter are all “global alternatives”, in the sense that they propose non-classical approaches to the semantics of a natural language as a whole. By contrast, expressivists typically agree that classical semantics (or one of the other non-expressivist alternatives to it discussed in §§2.2.1–4) is correct for many parts of language; they just think that special features of some parts of language require expressivist treatment.

One can think of many traditional versions of expressivism, which were motivated by metaethical concerns, as involving two basic ideas. First, we can explain the meaning of a sentence by saying what mental state that sentence expresses. Second, the mental state expressed by a sentence about ethics is different in kind from the mental state expressed by a “factual” sentence.

Two follow-up questions suggest themselves. One is about what “expresses” means here; for one answer, see Gibbard (1990). A second is about what the relevant difference in mental states consists in. On many views, the mental states expressed by non-ethical sentences are beliefs, whereas the mental states expressed by ethical sentences are not. Different versions of expressivism propose different candidates for the mental states which are expressed by ethical sentences. Prominent candidates include exclamations (Ayer 1936), commands (Hare 1952), and plans (Gibbard 1990, 2003).

A classic problem for expressivist theories of the kind just sketched comes from interactions between ethical and non-ethical bits of language. This problem has come to be known as the Frege-Geach problem, because a very influential version of the problem was posed by Geach (1960, 1965). (A version of the problem is also independently presented in Searle 1962.) In one of its versions, the problem comes in two parts. First, whatever mental state expressivists take ethical sentences to express will typically not be expressed by complex sentences which embed the relevant ethical sentence. So even if

Lying is wrong.

expresses the mental state of planning not to lie, the same sentence when embedded in the conditional

If lying is wrong, then what Jane did was wrong.

does not. After all, one can endorse this conditional without endorsing a plan not to lie. So it seems that the expressivist must say that “lying is wrong” means something different when it occurs alone than it does when it occurs in the antecedent of a conditional. The problem, though, is that if one takes that view it is hard to see how it could follow from the above two sentences, as it surely does, that

What Jane did was wrong.

For a discussion of solutions to this problem, and an influential critique of expressivism, see Schroeder (2008).

Much recent work on expressivism is both less focused on the special case of ethics, and more motivated by purely linguistic considerations, than has often been the case traditionally. Examples include discussions of expressivism about epistemic modality in Yalcin (2007), about knowledge ascriptions in Moss (2013), and about vagueness in MacFarlane (2016).

2.3 General Questions Facing Semantic Theories

As mentioned above, the aim of §2 of this entry is to discuss issues about the form which a semantic theory should take which are at a higher level of abstraction than issues about the correct semantic treatment of particular expression-types. (Also as mentioned above, some of these may be found in the entries on conditionals, descriptions, names, propositional attitude reports, and tense and aspect.) But there are some general issues in semantics which, while more general than questions about how, for example, the semantics of adverbs should go, are largely (though not wholly) orthogonal to the question of which of the frameworks for semantic theorizing laid out in §2.1–2 should be adopted. The present subsection introduces a few of these.

§2.1.4 introduced the idea that some expressions might be context-sensitive, or indexical. Within a propositional semantics, we’d say that these expressions have different contents relative to distinct contexts; but the phenomenon of context-sensitivity is one which any semantic theory must recognize. A very general question which is both highly important and orthogonal to the above distinctions between types of semantic theories is: How much context-sensitivity is there in natural languages?

Virtually everyone recognizes a sort of core group of indexicals, including “I”, “here”, and “now”. Most also think of demonstratives, like (some uses of) “this” and “that”, as indexicals. But whether and how this list should be extended is a matter of controversy. Some popular candidates for inclusion are:

  • devices of quantification
  • gradable adjectives
  • alethic modals, including counterfactual conditionals
  • “knows” and epistemic modals
  • propositional attitude ascriptions
  • “good” and other moral terms

Many philosophers and linguists think that one or more of these categories of expressions are indexicals. Indeed, some think that virtually every natural language expression is context-sensitive.

Questions about context-sensitivity are important, not just for semantics, but for many areas of philosophy. And that is because some of the terms thought to be context-sensitive are terms which play a central role in describing the subject matter of other areas of philosophy.

Perhaps the most prominent example here is the role that the view that “knows” is an indexical has played in recent epistemology. This view is often called “contextualism about knowledge”; and in general, the view that some term F is an indexical is often called “contextualism about F ”. Contextualism about knowledge is of interest in part because it promises to provide a kind of middle ground between two opposing epistemological positions: the skeptical view that we know hardly anything about our surroundings, and the dogmatist view that we can know that we are not in various Cartesian skeptical scenarios. (So, for example, the dogmatist holds that I can know that I am not a brain in a vat which is, for whatever reason, being made to have the series of experiences subjectively indistinguishable from the experiences I actually have.) Both of these positions can seem unappealing—skepticism because it does seem that I can occasionally know, e.g., that I am sitting down, and dogmatism because it’s hard to see how I can rule out the possibility that I am in a skeptical scenario subjectively indistinguishable from my actual situation.

But the disjunction of these positions can seem, not just unappealing, but inevitable; for the proposition that I am sitting entails that I am not a brain in a vat, and it’s hard to see—presuming that I know that this entailment holds—how I could know the former without thereby being in a position to know the latter. The contextualist about “knows” aims to provide the answer: the extension of “knows” depends on features of the context of utterance. Perhaps—to take one among many possible contextualist views—a pair of a subject and a proposition p will be in the extension of “knows” relative to a context C only if that subject is able to rule out every possibility which is both (i) inconsistent with p and (ii) salient in C. The idea is that “I know that I am sitting down” can be true in a normal setting, simply because the possibility that I am a brain in a vat is not normally salient; but typically “I know that I am not a brain in a vat” will be false, since discussion of skeptical scenarios makes them salient, and (if the skeptical scenario is well-designed) I will lack the evidence needed to rule them out. See for discussion, among many other places, the entry on epistemic contextualism, Cohen (1986), DeRose (1992), and David Lewis (1996).
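
Stated schematically (this is just a rendering of the toy contextualist clause described above, not any particular contextualist's official semantics; the notation, including ext and the variable q ranging over possibilities, is introduced here only for illustration):

  \[ \langle s, p \rangle \in \mathrm{ext}_{C}(\text{“knows”}) \;\text{ only if }\; \forall q\,\big[\,(q \text{ is inconsistent with } p \,\wedge\, q \text{ is salient in } C) \rightarrow s \text{ can rule out } q\,\big] \]

Since which possibilities are salient varies from context to context, the extension of “knows” varies with it; this is the sense in which the term is being treated as context-sensitive.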

Having introduced one important contextualist thesis, let’s return to the general question which faces the semantic theorist, which is: How do we tell when an expression is context-sensitive? Contextualism about knowledge, after all, can hardly get off the ground unless “knows” really is a context-sensitive expression. “I” and “here” wear their context-sensitivity on their sleeves; but “knows” does not. What sort of argument would suffice to show that an expression is an indexical?

Philosophers and linguists disagree about the right answers to these questions. The difficulty of coming up with a suitable diagnostic is illustrated by considering one intuitively plausible test, defended in Chapter 7 of Cappelen & Lepore (2005). This test says that an expression is an indexical iff it characteristically blocks disquotational reports of what a speaker said in cases in which the original speech and the disquotational report are uttered in contexts which differ with respect to the relevant contextual parameter. (Or, more cautiously, that this test provides evidence that a given expression is, or is not, context-sensitive.)

This test clearly counts obvious indexicals as such. Consider “I”. Suppose that Mary utters

I am hungry.

One sort of disquotational report of Mary’s speech would use the very sentence Mary uttered in the complement of a “says” ascription. So suppose that Sam attempts such a disquotational report of what Mary said, and utters

Mary said that I am hungry.

The report is obviously false; Mary said that Mary is hungry, not that Sam is. The falsity of Sam’s report suggests that “I am hungry” has a different content out of Mary’s mouth than out of Sam’s; and this, in turn, suggests that “I” has a different content when uttered by Mary than when uttered by Sam. Hence, it suggests that “I” is an indexical.

It isn’t just that this test gives the right result in many cases; it’s also that the test fits nicely with the plausible view that an utterance of a sentence of the form “ A said that S ” in a context C is true iff the content of S in C is the same as the content of what the referent of “ A ” said (on the relevant occasion).

The interesting uses of this test are not uses which show that “I” is an indexical; we already knew that. The interesting use of this test, as Cappelen and Lepore argue, is to show that many of the expressions which have been taken to be indexicals—like the ones on the list given above—are not context-sensitive. For we can apparently employ disquotational reports of the relevant sort to report utterances using quantifiers, gradable adjectives, modals, “knows”, etc. This test thus apparently shows that no expressions beyond the obvious ones—“I”, “here”, “now”, etc.—are genuinely context-sensitive.

But, as Hawthorne (2006) argues, naive applications of this test seem to lead to unacceptable results. Terms for relative directions, like “left”, seem to be almost as obviously context-sensitive as “I”; the direction picked out by simple uses of “left” depends on the orientation of the speaker of the context. But we can typically use “left” in disquotational “says” reports of the relevant sort. Suppose, for example, that Mary says

The coffee machine is to the left.

Sam can later truly report Mary’s speech by saying

Mary said that the coffee machine was to the left.

despite the fact that Sam’s orientation in the context of the ascription differs from Mary’s orientation in the context of the reported utterance. Hence our test seems to lead to the absurd result that “left” is not context-sensitive.

One interpretation of this puzzling fact is that our test using disquotational “says” ascriptions is a bit harder to apply than one might have thought. For, to apply it, one needs to be sure that the context of the ascription really does differ from the context of the original utterance in the value of the relevant contextual parameter . And in the case of disquotational reports using “left”, one might think that examples like the above show that the relevant contextual parameter is sometimes not the orientation of the speaker, but rather the orientation of the subject of the ascription at the time of the relevant utterance.

This is but one criterion for context-sensitivity. But discussion of this criterion brings out the fact that the reliability of an application of a test for context-sensitivity will in general not be independent of the space of views one might take about the contextual parameters to which a given expression is sensitive. For an illuminating discussion of ways in which we might revise tests for context-sensitivity using disquotational reports which are sensitive to the above data, see Cappelen & Hawthorne (2009). For a critical survey of other proposed tests for context-sensitivity, see Cappelen & Lepore (2005: Part I).

This is just an introduction to one central issue concerning the relationship between context and semantic content. A sampling of other influential works on this topic includes Sperber and Wilson (1995), Carston (2002), Recanati (2004, 2010), Bezuidenhout (2002), and the essays in Stanley (2007).

§2.1.5 introduced the idea of an expression determining a reference, relative to a context, with respect to a particular circumstance of evaluation. But that discussion left the notion of a circumstance of evaluation rather underspecified. One might want to know more about what, exactly, these circumstances of evaluation involve—and hence about what sorts of things the reference of an expression can (once we’ve fixed a context) vary with respect to.

One way to focus this question is to stay at the level of sentences, and imagine that we have fixed on a sentence S , with a certain character, and context C . If sentences express propositions relative to contexts, then S will express some proposition P relative to C . If the determination of reference in general depends not just on character and context, but also on circumstance, then we know that P might have different truth-values relative to different circumstances of evaluation. Our question is: exactly what must we specify in order to determine P ’s truth-value?

Let’s say that an index is the sort of thing which, for some proposition P , we must at least sometimes specify in order to determine P’s truth-value. Given this usage, we can think of circumstances of evaluation—the things which play the theoretical role outlined in §2.1.5 —as made up of indices.

The most uncontroversial candidate for an index is a world, because most advocates of a propositional semantics think that propositions can have different truth-values with respect to different possible worlds. The main question is whether circumstances of evaluation need contain any indices other than a possible world.

The most popular candidate for a second index is a time. The view that propositions can have different truth-values with respect to different times—and hence that we need a time index—is often called “temporalism”. The negation of temporalism is eternalism.

The motivations for temporalism are both metaphysical and semantic. On the metaphysical side, A-theorists about time (see the entry on time ) think that corresponding to predicates like “is a child” are A-series properties which a thing can have at one time, and lack at another time. (Hence, on this view, the property corresponding to “is a child” is not a property like being a child in 2014 , since that is a property which a thing has permanently if at all, and hence is a B-series rather than A-series property.) But then it looks like the proposition expressed by “Violet is a child”—which predicates this A-series property of Violet—should have different truth-values with respect to different times. And this is enough to motivate the view that we should have an index for a time.
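
Using the notation of circumstances of evaluation, the contrast can be put schematically as follows; this is only an illustration of the shape of the two views, where P is the proposition that Violet is a child, P′ the proposition that Violet is a child in 2014, and w the actual world:

  \[ \text{Temporalism: } \mathrm{true}(P, \langle w, 2014 \rangle) \;\wedge\; \neg\,\mathrm{true}(P, \langle w, 2054 \rangle) \]
  \[ \text{Eternalism: } \mathrm{true}(P', \langle w \rangle) \text{ or } \neg\,\mathrm{true}(P', \langle w \rangle), \text{ with no time index needed} \]

The dispute is thus over whether the time belongs in the circumstance of evaluation or in the content itself.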

On the semantic side, as Kaplan (1989) notes, friends of the idea that tenses are best modeled as operators have good reason to include a time index in circumstances of evaluation. After all, operators operate on contents, so if there are temporal operators, they will only be able to affect truth-values if those contents can have different truth-values with respect to different times.

A central challenge for the view that propositions can change truth-value over time is whether the proponent of this view can make sense of retention of propositional attitudes over time. For suppose that I believe in 2014 that Violet is a child. Intuitively, I might hold fixed all of my beliefs about Violet for the next 40 years, without its being true, in 2054, that I have the obviously false belief that Violet is still a child. But the temporalist, who thinks of the proposition that Violet is a child as something which incorporates no reference to a time and changes truth-value over time, seems stuck with this result. Problems of this sort for temporalism are developed in Richard (1981); for a response see Sullivan (2014).

Motivations for eternalism are also both metaphysical and semantic. Those attracted to B-theories of time will take propositions to have their truth-values eternally, which makes inclusion of a time index superfluous. And those who think that tenses are best modeled in terms of quantification over times rather than using tense operators will, similarly, see no use for a time index. For a defense of the quantificational over the operator analysis of tense, see King (2003).

Is there a case to be made for including any indices other than a world and a time? There is; and this has spurred much of the recent interest in relativist semantic theories. Relativist semantic theories hold that our indices should include not just a world and (perhaps) a time, but also a context of assessment . Just as propositions can have different truth values with respect to different worlds, so, on this view, they can vary in their truth depending upon features of the conversational setting in which they are considered. (Though this way of putting things assumes that the relativist should be a “truth relativist” rather than a “content relativist”. See for discussion Weatherson and Egan 2011: § 2.3.)

The motivations for this sort of view can be illustrated by a type of example whose importance is emphasized in Egan et al. (2005). Suppose that, at the beginning of a murder investigation, I say

The murderer might have been on campus at midnight.

It looks like the proposition expressed by this sentence will be true, roughly, if we don’t know anything which rules out the murderer having been on campus at midnight. But now suppose that more information comes in, some of which rules out the murderer having been on campus at midnight. At this point, it seems, I could truly say

What I said was false—the murderer couldn’t have been on campus at midnight.

But this is puzzling. It is not puzzling that the sentence “The murderer might have been on campus at midnight” could be true when uttered in the first context but false when uttered in the second context; that fact could be accommodated by any number of contextualist treatments of epistemic modals, which would dissolve the puzzle by saying that the sentence expresses different propositions relative to the two contexts. The puzzle is that the truth of the second sentence seems to imply that the proposition expressed by the first—which we agreed was true relative to that context—is false relative to the second context. Here we don’t have (or don’t just have) sentences varying in truth-value depending on context; we seem to have propositions varying in truth-value depending on context. The relativist about epistemic modals takes appearance here to be reality, and holds that, in addition to worlds (and maybe times), propositions can sometimes differ in their truth-value relative to contexts of assessment (roughly, the context in which the proposition is being considered). (Note that it is not essential to the case that the two contexts of assessment are at different times; much the same intuitions can be generated by considering cases of “eavesdropping”, in which one party overhears the utterance of some other group which lacks some of its evidence.)

Relativist treatments of various expressions have also been motivated by certain apparent facts about disagreement. Lasersohn (2005) considers the example of predicates of personal taste. He points out that we’re often inclined to think that, if our tastes differ sufficiently, my utterance of “That soup is tasty” can be true even while your utterance of “That soup is not tasty” is also true. As above, this fact by itself is not especially surprising, and might seem to cry out for a contextualist treatment of “tasty”. But the puzzling thing is that, despite the fact that we think that each of us is uttering sentences which express true propositions, we are clearly disagreeing with each other. (You might say, after overhearing me, “No, that soup is not tasty”.)

The contrast here with indexicals is apparently quite sharp. If I say “I’m hungry”, and you’re not hungry, you’d never reply to my utterance by saying “No, I’m not hungry”—precisely because it’s obvious that we would not be disagreeing. So again we have a puzzle: a puzzle about how each of our “soup” sentences could express true propositions, despite those propositions contradicting each other. Relativism suggests an answer: these propositions are only true or false relative to individuals. The one I express is true relative to me, and its negation is true relative to you; they’re contradictory in the sense that it is impossible for both to be true relative to the same individual (at the same time).
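
One schematic way to picture the relativist proposal (again, an illustration of the view's shape rather than any author's official formulation) is to add a judge or assessor index to the circumstance of evaluation. Writing P for the proposition that that soup is tasty, and a_me and a_you for the two assessors:

  \[ \mathrm{true}(P, \langle w, a_{\mathrm{me}} \rangle) \quad\text{and}\quad \mathrm{true}(\lnot P, \langle w, a_{\mathrm{you}} \rangle) \]

The two propositions still count as contradictory in the sense noted above: there is no single index \( \langle w, a \rangle \) at which both P and its negation are true.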

It’s very controversial whether any of these relativist arguments are convincing. For more discussion, see the discussion of “new relativism” in the entry on relativism . For an explication of relativism and its application to various kinds of discourse, see MacFarlane (2014). For an extended critique of relativism, see Cappelen & Hawthorne (2009).

Most philosophers believe in propositions, and hence think that semantics should be done according to one of the three broad categories of propositionalist approaches sketched above: possible worlds semantics, Russellianism, or Fregeanism. But it is notable that of these three views, only one—possible worlds semantics—actually tells us what propositions are. (Even in that case, of course, one might ask what possible worlds are, and hence what propositions are sets of. See the entry on possible worlds.) Russellian and Fregean views make claims about what sorts of things are the constituents of propositions—but don’t tell us what the structured propositions so constituted are.

There are really two questions here. One is the question: what does it mean to say that x is a constituent of a proposition? The language of constituency suggests parthood; but there’s some reason to think that x ’s being a constituent of a proposition isn’t a matter of x ’s being a part of that proposition. This is perhaps clearest on a Russellian view, according to which ordinary physical objects can be constituents of propositions. The problem is that a thing can be a constituent of a proposition without every part of that thing being a constituent of that proposition; a proposition with me as a constituent, it seems, need not also have every single molecule that now composes me as a constituent. But that fact is inconsistent with the idea that constituency is parthood and the plausible assumption that parthood is transitive. For discussion of this and other problems, see Gilmore (2014), Keller (2013), and Merricks (2015).

Hence the proponent of structured propositions owes some account of what “structure” and “constituent” talk amounts to in this domain. And they can hardly take these notions as primitive, since it would then be very unclear what explanatory value the claim that propositions are structured could have.

The second, in some ways more fundamental, question, is: What sort of thing are propositions? To what metaphysical category do they belong? The simplest and most straightforward answer to this question is: “They belong to the sui generis category of propositions”. (This is the view of Plantinga (1974) and Merricks (2015).)

But recently many philosophers have sought to give different answers to this question, by trying to explain how propositions could be members of some other ontological category in which we have independent reason to believe. Recent work of this sort can be divided into three main families of views.

According to the first, propositions are a kind of fact. This view was, on some interpretations, advocated by Russell (1903) and Wittgenstein (1922). The most prominent current defender of this view is Jeffrey King. On his version of the view, propositions (at least the propositions expressed by sentences) are meta-linguistic facts about sentences. At a first pass, and ignoring some important subtleties, the proposition expressed by the sentence “Amelia talks” will be the fact that there is some language L , some expression x , some expression y , and some syntactic relation R such that R ( x , y ), x has Amelia as its semantic value, y has the property of talking as its semantic value, and R encodes ascription. In some respects, this view is not so far from—though much more thoroughly developed than—Wittgenstein’s view in the Tractatus that “a proposition is a propositional sign in its projective relation to the world” (3.12). See for development and defense of this view King (2007, 2014).
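
Put schematically (and still ignoring the subtleties just mentioned), the fact King identifies with the proposition that Amelia talks might be displayed as follows; the formula simply renders the prose above in quantificational notation:

  \[ \exists L\, \exists x\, \exists y\, \exists R\; \big[\, x \text{ and } y \text{ are expressions of } L \;\wedge\; R(x, y) \;\wedge\; x \text{ has Amelia as its semantic value} \;\wedge\; y \text{ has the property of talking as its semantic value} \;\wedge\; R \text{ encodes ascription} \,\big] \]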

According to the second sort of view, propositions are a kind of property. Versions of this view vary both according to which properties they take propositions to be, and what they take propositions to be properties of. This view is most closely associated with David Lewis (1979) and Chisholm (1981), who took the objects of propositional attitudes to be properties which the bearer of the attitude ascribes to him- or herself. Other versions of the view are defended by van Inwagen (2004) and Gilmore (forthcoming), who take propositions to be 0-place relations, and Richard (2013) and Speaks (2014), who take propositions to be monadic properties of certain sorts.

According to the third sort of view, propositions are entities which are, or owe their existence to, the mental acts of subjects. While their views differ in many ways, both Hanks (2007, 2011) and Soames (2010, 2014) think of propositions as acts of predication. In the simplest case—a monadic predication—the proposition will be the act of predicating a property of an object.

Of course, not all views fit into these three categories. An important view which fits into none of them is defended in Moltmann (2013).

Different theorists differ, not just in their views about what propositions are, but also in their views about what a theory of propositions should explain. The representational properties of propositions are a case in point. Hanks, King, and Soames take one of the primary tasks of a theory of propositions to be the explanation of the representational properties of propositions. Others, like McGlone (2012) and Merricks (2015), hold that a proposition’s having certain representational properties is a primitive matter. Still others, like Richard and Speaks, deny that propositions have representational properties in any interesting sense. See for further discussion of these issues the entry on structured propositions .

3. Foundational Theories of Meaning

We now turn to our second sort of “theory of meaning”: foundational theories of meaning, which are attempts to specify the facts in virtue of which expressions of natural languages come to have the semantic properties that they have.

The question which foundational theories of meaning try to answer is a common sort of question in philosophy. In the philosophy of action (see entry) we ask what the facts are in virtue of which a given piece of behavior is an intentional action; in questions about personal identity (see entry) we ask what the facts are in virtue of which x and y are the same person; in ethics we ask what the facts are in virtue of which a given action is morally right or wrong. But, even if they are common enough, it is not obvious what the constraints are on answers to these sorts of questions, or when we should expect questions of this sort to have interesting answers.

Accordingly, one sort of approach to foundational theories of meaning is simply to deny that there is any true foundational theory of meaning. One might be quite willing to endorse one of the semantic theories outlined above while also holding that facts about the meanings of expressions are primitive, in the sense that there is no systematic story to be told about the facts in virtue of which expressions have the meanings that they have. (See, for example, Johnston 1988.)

There is another reason why one might be pessimistic about the prospects of foundational theories of meaning. While plainly distinct from semantics, the attempt to provide foundational theories is clearly in one sense answerable to semantic theorizing, since without a clear view of the facts about the semantic contents of expressions we won't have a clear view of the facts for which we are trying to provide an explanation. One might, then, be skeptical about the prospects of foundational theories of meaning not because of a general primitivist view of semantic facts, but just because one holds that natural language semantics is not yet advanced enough for us to have a clear grip on the semantic facts which foundational theories of meaning aim to analyze. (See for discussion Yalcin (2014).)

Many philosophers have, however, attempted to provide foundational theories of meaning. This section sets aside pessimism about the prospects for such theories and lays out the main attempts to give a systematic account of the facts about language users in virtue of which their words have the semantic properties that they do. It is useful to separate these theories into two camps.

According to the first sort of view, linguistic expressions inherit their contents from some other sort of bearer of content. So, for example, one might say that linguistic expressions inherit their contents from the contents of certain mental states with which they are associated. I’ll call views of this type mentalist theories. Mentalist theories are discussed in §3.1 , and non-mentalist theories in §3.2 .

3.1 Mentalist Theories

All mentalist theories of meaning have in common that they analyze one sort of representation—linguistic representation—in terms of another sort of representation—mental representation. For philosophers who are interested in explaining content, or representation, in non-representational terms, then, mentalist theories can only be a first step in the task of giving an ultimate explanation of the foundations of linguistic representation. The second, and more fundamental, explanation would then come at the level of a theory of mental content. (For an overview of theories of this sort, see the entry on mental representation and the essays in Stich and Warfield 1994.) Indeed, the popularity of mentalist theories of linguistic meaning, along with the conviction that content should be explicable in non-representational terms, is an important reason why so much attention has been focused on theories of mental representation over the last few decades.

Since mentalists aim to explain the nature of meaning in terms of the mental states of language users, mentalist theories may be divided according to which mental states they take to be relevant to the determination of meaning. The most well-worked out views on this topic are the Gricean view, which explains meaning in terms of the communicative intentions of language users, and the view that the meanings of expressions are fixed by conventions which pair sentences with certain beliefs. We will discuss these in turn, followed by a brief discussion of a third alternative available to the mentalist.

Paul Grice developed an analysis of meaning which can be thought of as the conjunction of two claims: (1) facts about what expressions mean are to be explained, or analyzed, in terms of facts about what speakers mean by utterances of them, and (2) facts about what speakers mean by their utterances can be explained in terms of their intentions. These two theses comprise the “Gricean program” for reducing meaning to the contents of the intentions of speakers.

To understand Grice’s view of meaning, it is important first to be clear on the distinction between the meaning, or content, of linguistic expressions —which is what semantic theories like those discussed in §2 aim to describe—and what speakers mean by utterances employing those expressions. This distinction can be illustrated by example. (See entry on pragmatics for more discussion.) Suppose that in response to a question about the weather in the city where I live, I say “Well, South Bend is not exactly Hawaii”. The meaning of this sentence is fairly clear: it expresses the (true) proposition that South Bend, Indiana is not identical to Hawaii. But what I mean by uttering this sentence is something more than this triviality: I mean by the utterance that the weather in South Bend is not nearly as good as that in Hawaii. And this example utterance is in an important respect very typical: usually the propositions which speakers mean to convey by their utterances include propositions other than the one expressed by the sentence used in the context. When we ask “What did you mean by that?” we are usually not asking for the meaning of the sentence uttered.

The idea behind stage (1) of Grice’s theory of meaning is that of these two phenomena, speaker-meaning is the more fundamental: sentences and other expressions mean what they do because of what speakers mean by their utterances of those sentences. (For more details about how Grice thought that sentence-meaning could be explained in terms of speaker-meaning, see the discussion of resultant procedures in the entry on Paul Grice .) One powerful way to substantiate the claim that speaker-meaning is explanatorily prior to expression-meaning would be to show that facts about speaker-meaning may be given an analysis which makes no use of facts about what expressions mean; and this is just what stage (2) of Grice’s analysis, to which we now turn, aims to provide.

Grice thought that speaker-meaning could be analyzed in terms of the communicative intentions of speakers—in particular, their intentions to cause beliefs in their audience.

The simplest version of this idea would hold that meaning p by an utterance is just a matter of intending that one’s audience come to believe p . But this can’t be quite right. Suppose I turn to you and say, “You’re standing on my foot”. I intend that you hear the words I am saying; so I intend that you believe that I have said, “You’re standing on my foot”. But I do not mean by my utterance that I have said, “You’re standing on my foot”. That is my utterance—what I mean by it is the proposition that you are standing on my foot, or that you should get off of my foot. I do not mean by my utterance that I am uttering a certain sentence.

This sort of example indicates that speaker meaning can’t just be a matter of intending to cause a certain belief—it must be intending to cause a certain belief in a certain way. But what, in addition to intending to cause the belief, is required for meaning that p ? Grice’s idea was that one must not only intend to cause the audience to form a belief, but also intend that they do so on the basis of their recognition of the speaker’s intention. This condition is not met in the above example: I don’t expect you to believe that I have uttered a certain sentence on the basis of your recognition of my intention that you do so; after all, you’d believe this whether or not I wanted you to. This is all to the good.

This Gricean analysis of speaker-meaning can be formulated as follows: [ 5 ]

  • [G] a means p by uttering x iff a intends in uttering x that
      1. his audience come to believe p,
      2. his audience recognize this intention, and
      3. (1) occur on the basis of (2).

However, even if [G] can be given a fairly plausible motivation, and fits many cases rather well, it is also open to some convincing counterexamples. Three such types of cases are: (i) cases in which the speaker means p by an utterance despite knowing that the audience already believes p , as in cases of reminding or confession; (ii) cases in which a speaker means p by an utterance, such as the conclusion of an argument, which the speaker intends an audience to believe on the basis of evidence rather than recognition of speaker intention; and (iii) cases in which there is no intended audience at all, as in uses of language in thought. These cases call into question whether there is any connection between speaker-meaning and intended effects stable enough to ground an analysis of the sort that Grice envisaged; it is still a matter of much controversy whether an explanation of speaker meaning descended from [G] can succeed.

For developments of the Gricean program, see—in addition to the classic essays in Grice (1989)—Schiffer (1972), Neale (1992), and Davis (2002). For an extended criticism, see Schiffer (1987).

An important alternative to the Gricean analysis, which shares the Gricean’s commitment to a mentalist analysis of meaning in terms of the contents of mental states, is the analysis of meaning in terms of the beliefs rather than the intentions of speakers.

It is intuitively plausible that such an analysis should be possible. After all, there clearly are regularities which connect utterances and the beliefs of speakers; roughly, it seems that, for the most part, speakers seriously utter a sentence which (in the context) means p only if they also believe p. One might then try to analyze meaning directly in terms of the beliefs of language users, by saying that what it is for a sentence S to express some proposition p is for it to be the case that, typically, members of the community would not utter S unless they believed p. However, we can imagine a community in which there is some action which everyone would only perform were they to believe some proposition p, but which is such that no member of the community knows that any other member of the community acts according to a rule of this sort. It is plausible that in such a community, the action-type in question would not express the proposition p, or indeed have any meaning at all.

Because of cases like this, it seems that regularities in meaning and belief are not sufficient to ground an analysis of meaning. For this reason, many proponents of a mentalist analysis of meaning in terms of belief have sought instead to analyze meaning in terms of conventions governing such regularities. Roughly, a regularity is a matter of convention when the regularity obtains because there is something akin to an agreement among a group of people to keep the regularity in place. So, applied to our present example, the idea would be (again roughly) that for a sentence S to express a proposition p in some group is for there to be something like an agreement in that group to maintain some sort of regularity between utterances of S and agents’ believing p . This seems to be what is lacking in the example described in the previous paragraph.

There are different ways to make this rough idea precise (see the entry on convention). According to one important view, a sentence S expresses the proposition p if and only if the following three conditions are satisfied: (1) speakers typically utter S only if they believe p and typically come to believe p upon hearing S, (2) members of the community believe that (1) is true, and (3) the fact that members of the community believe that (1) is true, and believe that other members of the community believe that (1) is true, gives them a good reason to go on acting so as to make (1) true. (This is a simplified version of the theory defended in David Lewis 1975.)

For critical discussion of this sort of analysis of meaning, see Burge 1975, Hawthorne 1990, Laurence 1996, and Schiffer 2006.

The two sorts of mentalist theories sketched above both try to explain meaning in terms of the relationship between linguistic expressions and propositional attitudes of users of the relevant language. But this is not the only sort of theory available to a theorist who wants to analyze meaning in terms of broadly mental representation. A common view in the philosophy of mind and cognitive science is that the propositional attitudes of subjects are underwritten by an internal language of thought, comprised of mental representations. (See entry on the computational theory of mind .) One might try to explain linguistic meaning directly in terms of the contents of mental representations, perhaps by thinking of language processing as pairing linguistic expressions with mental representations; one could then think of the meaning of the relevant expression for that individual as being inherited from the content of the mental representation with which it is paired.

While this view has, historically, not enjoyed as much attention as the mentalist theories discussed in the preceding two subsections, it is a natural view for anyone who endorses the widely held thesis that semantic competence is to be explained by some sort of internal representation of the semantic facts. If we need to posit such internal representations anyway, it is natural to think that the meaning of an expression for an individual can be explained in terms of that individual's representation of its meaning. For discussion of this sort of theory, see Laurence (1996).

Just as proponents of Gricean and convention-based theories typically view their theories as only the first stage in an analysis of meaning—because they analyze meaning in terms of another sort of mental representation—so proponents of mental representation-based theories will typically seek to provide an independent analysis of contents of mental representations. For an overview of attempts to provide the latter sort of theory, see the entry on mental representation and the essays in Stich and Warfield (1994).

3.2 Non-Mentalist Theories

As noted above, not all foundational theories of meaning attempt to explain meaning in terms of mental content. One might be inclined to pursue a non-mentalist foundational theory of meaning for a number of reasons; for example, one might be skeptical about the mentalist theories on offer; one might think that mental representation should be analyzed in terms of linguistic representation, rather than the other way around; or one might think that representation should be analyzable in non-representational terms, and doubt whether there is any true explanation of mental representation suitable to accompany a mentalist reduction of meaning to mental representation.

To give a non-mentalist foundational theory of meaning is to say which aspects of the use of an expression determine meaning—and do so without taking that expression to simply inherit its content from some more fundamental bearer of content. In what follows I’ll briefly discuss some of the aspects of the use of expressions which proponents of non-mentalist theories have taken to explain their meanings.

In Naming and Necessity, Kripke suggested that the reference of a name could be explained in terms of the history of use of that name, rather than by descriptions associated with that name by its users. In the standard case, Kripke thought, the right explanation of the reference of a name could be divided into an explanation of the name’s introduction as a name for this or that—an event of “baptism”—and its successful transmission from one speaker to another.

One approach to the theory of meaning is to extend Kripke’s remarks in two ways: first, by suggesting that they might serve as an account of meaning, as well as reference; [ 6 ] and second, by extending them to parts of speech other than names. (See, for discussion, Devitt 1981.) In this way, we might aim to explain the meanings of expressions in terms of their causal origin.

While causal theories don’t take expressions to simply inherit their contents from mental states, it is plausible that they should still give mental states an important role to play in explaining meaning. For example, it is plausible that introducing a term involves intending that it stand for some object or property, and that transmission of a term from one speaker to another involves the latter intending to use it in the same way as the former.

There are two standard problems for causal theories of this sort (whether they are elaborated in a mentalist or a non-mentalist way). The first is the problem of extending the theory from the case of names to other sorts of vocabulary for which the theory seems less natural. Examples which have seemed to many to be problematic are empty names and non-referring theoretical terms, logical vocabulary, and predicates which, because their content does not seem closely related to the properties represented in perceptual experience, are not intuitively linked to any initial act of “baptism”. The second problem, which is sometimes called the “qua problem”, is the problem of explaining which of the many causes of a term’s introduction should determine its content. Suppose that the term “water” was introduced in the presence of a body of H2O. What made it a term for this substance, rather than for liquid in general, or colorless liquid, or colorless liquid in the region of the term’s introduction? The proponent of a causal theory owes some answer to this question; see for discussion Devitt and Sterelny (1987).

For a classic discussion of the prospects of causal theories, see Evans (1973). For a recent theory which makes causal origin part but not all of the story, see Dickie (2015).

Causal theories aim to explain meaning in terms of the relations between expressions and the objects and properties they represent. A very different sort of foundational theory of meaning which maintains this emphasis on the relations between expressions and the world gives a central role to a principle of charity which holds that (modulo some qualifications) the right assignment of meanings to the expressions of a subject’s language is that assignment of meanings which maximizes the truth of the subject’s utterances.

An influential proponent of this sort of view was Donald Davidson, who stated the motivation for the view as follows:

A central source of trouble is the way beliefs and meanings conspire to account for utterances. A speaker who holds a sentence to be true on an occasion does so in part because of what he means, or would mean, by an utterance of that sentence, and in part because of what he believes. If all we have to go on is the fact of honest utterance, we cannot infer the belief without knowing the meaning, and have no chance of inferring the meaning without the belief. (Davidson 1974a: 314; see also Davidson 1973)

Davidson’s idea was that attempts to state the facts in virtue of which expressions have a certain meaning for a subject face a kind of dilemma: if we had an independent account of what it is for an agent to have a belief with a certain content, we could ascend from there to an account of what it is for a sentence to have a meaning; if we had an independent account of what it is for a sentence to have a meaning, we could ascend from there to an account of what it is for an agent to have a belief with a certain content; but in fact neither sort of independent account is available, because many assignments of beliefs and meanings are consistent with the subject’s linguistic behavior. Davidson’s solution to this dilemma is that we must define belief and meaning together, in terms of an independent third fact: the fact that the beliefs of an agent, and the meanings of her words, are whatever they must be in order to maximize the truth of her beliefs and utterances.

By tying meaning and belief to truth, this sort of foundational theory of meaning implies that it is impossible for anyone who speaks a meaningful language to be radically mistaken about the nature of the world; and this implies that certain levels of radical disagreement between a pair of speakers or communities will also be impossible (since the beliefs of each community must be, by and large, true). This is a consequence of the view which Davidson embraced (see Davidson 1974b); but one might also reasonably think that radical disagreement, as well as radical error, are possible, and hence that any theory, like Davidson’s, which implies that they are impossible must be mistaken.

A different sort of worry about a theory of this sort is that the requirement that we maximize the truth of the utterances of subjects hardly seems sufficient to determine the meanings of the expressions of their language. It seems plausible, offhand, that there will be many different interpretations of a subject’s language which will be tied on the measure of truth-maximization; one way to see the force of this sort of worry is to recall the point, familiar from our discussion of possible worlds semantics in §2.1.5 above, that a pair of sentences can be true in exactly the same circumstances and yet differ in meaning. One worry is thus that a theory of Davidson’s sort will entail an implausible indeterminacy of meaning. For Davidson’s fullest attempt to answer this sort of worry, see Chapter 3 of Davidson (2005).

A different sort of theory emerges from a further objection to the sort of theory discussed in the previous section. This objection is based on Hilary Putnam’s (1980, 1981) model-theoretic argument. This argument aimed to show that there are very many different assignments of reference to subsentential expressions of our language which make all of our utterances true. (For details on how the argument works, see the entry on Skolem’s paradox, especially §3.4.) Putnam’s argument therefore leaves us with a choice between two options: either we must accept that there are no facts of the matter about what any of our expressions refer to, or we must deny that reference is determined solely by a principle of truth-maximization.

Most philosophers take the second option. Doing so, however, doesn’t mean that something like the principle of charity can’t still be part of our foundational theory of meaning.

David Lewis (1983, 1984) gave a version of this kind of response, which he credits to Merrill (1980), and which has since been quite influential. His idea was that the assignment of contents to expressions of our language is fixed, not just by the constraint that the right interpretation will maximize the truth of our utterances, but by picking the interpretation which does best at jointly satisfying the constraints of truth-maximization and the constraint that the referents of our terms should, as much as possible, be “the ones that respect the objective joints in nature” (1984: 227).

Such entities are often said to be more “eligible” to be the referents of expressions than others. An approach to the foundations of meaning based on the twin principles of charity + eligibility has some claim to being the most widely held view today. See Sider (2011) for an influential extension of the Lewisian strategy.
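
One crude way to picture the resulting proposal (a schematic gloss introduced here for illustration, not Lewis's own formulation) is as the selection of the interpretation that does best on two scores at once:

  \[ I^{*} \;=\; \operatorname*{arg\,max}_{I}\; \big[\, \mathrm{fit}(I) + \mathrm{eligibility}(I) \,\big] \]

where fit(I) measures how well interpretation I makes the subject's utterances and beliefs come out true, and eligibility(I) measures the extent to which the referents I assigns respect the objective joints in nature. How the two scores are to be weighed against each other is itself a substantive question for views of this kind.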

Lewis’ solution to Putnam’s problem comes with a non-trivial metaphysical price tag: recognition of an objective graded distinction between more and less natural properties. Some have found the price too much to pay, and have sought other approaches to the foundational theory of meaning. But even if we recognize in our metaphysics a distinction between properties which are “joint-carving” and those which are not, we might still doubt whether this distinction can remedy the sorts of indeterminacy problems which plague foundational theories based solely on the principle of charity. For doubts along these lines, see Hawthorne (2007).

A different way to develop a non-mentalist foundational theory of meaning focuses less on relations between subsentential expressions or sentences and bits of non-linguistic reality and more on the regularities which govern our use of language. Views of this sort have been defended by a number of authors; this section focuses on the version of the view developed in Horwich (1998, 2005).

Horwich’s core idea is that our acceptance of sentences is governed by certain laws, and, in the case of non-ambiguous expressions, there is a single “acceptance regularity” which explains all of our uses of the expression. The type of acceptance regularity which is relevant will vary depending on the sort of expression whose meaning is being explained. For example, our use of a perceptual term like “red” might be best explained by the following acceptance regularity:

The disposition to accept “that is red” in response to the sort of visual experience normally provoked by a red surface.

whereas, in the case of a logical term like “and”, the acceptance regularity will involve dispositions to accept inferences involving pairs of sentences rather than dispositions to respond to particular sorts of experiences:

The disposition to accept the two-way argument schema “p, q // p and q”.

As these examples illustrate, it is plausible that a strength of a view like Horwich’s is its ability to handle expressions of different categories.

Like its competitors, Horwich’s theory is also open to some objections. One might worry that his use of the sentential attitude of acceptance entails a lapse into mentalism, if acceptance either just is, or is analyzed in terms of, beliefs. There is also a worry—which affects other “use” or “conceptual role” or “functional role” theories of meaning—that Horwich’s account implies the existence of differences in meaning which do not exist; it seems, for example, that two people’s use of some term might be explained by distinct basic acceptance regularities without their meaning different things by that term. Schiffer (2000) discusses the example of “dog”, and the differences between the basic acceptance regularities which govern the use of the term for the blind, the biologically unsophisticated, and people acquainted only with certain sorts of dogs. [ 7 ]

This last concern about Horwich’s theory stems from the fact that the theory is, at its core, an individualist theory: it explains the meaning of an expression for an individual in terms of properties of that individual’s use of the term. A quite different sort of use theory of meaning turns from the laws which explain an individual’s use of a word to the norms which, in a society, govern the use of the relevant terms. Like the other views discussed here, the view that meaning is a product of social norms of this sort has a long history; it is particularly associated with the work of the later Wittgenstein and his philosophical descendants. (See especially Wittgenstein 1953.)

An important defender of this sort of view is Robert Brandom. On Brandom’s view, a sentence’s meaning is due to the conditions, in a given society, under which it is correct or appropriate to perform various speech acts involving the sentence. To develop a theory of this sort, one must do two things. First, one must show how the meanings of expressions can be explained in terms of these normative statuses—in Brandom’s (slightly nonstandard) terms, one must show how semantics can be explained in terms of pragmatics. Second, one must explain how these normative statuses can be instituted by social practices.

For details, see Brandom (1994), in which the view is developed at great length; for a critical discussion of Brandom’s attempt to carry out the second task above, see Rosen (1997). For discussion of the role (or lack thereof) of normativity in a foundational theory of meaning, see Hattiangadi (2007), Gluer and Wikforss (2009), and the entry on meaning normativity.

  • Ayer, Alfred Jules, 1936, Language, Truth, and Logic , London: Victor Gollancz.
  • Beaney, Michael (ed.), 1997, The Frege Reader , Oxford: Basil Blackwell.
  • Bezuidenhout, Anne, 2002, “Truth-Conditional Pragmatics”, Philosophical Perspectives, 16: 105–134.
  • Brandom, Robert B., 1994, Making It Explicit: Reasoning, Representing, and Discursive Commitment , Cambridge, MA: Harvard University Press.
  • –––, 2000, Articulating Reasons: An Introduction to Inferentialism , Cambridge, MA: Harvard University Press.
  • Braun, David, 1993, “Empty Names”, Noûs , 27(4): 449–469. doi:10.2307/2215787
  • Braun, David and Jennifer Saul, 2002, “Simple Sentences, Substitutions, and Mistaken Evaluations”, Philosophical Studies , 111(1): 1–41. doi:10.1023/A:1021287328280
  • Burge, Tyler, 1975, “On Knowledge and Convention”, The Philosophical Review , 84(2): 249–255. doi:10.2307/2183970
  • –––, 1986, “On Davidson’s ‘Saying That’”, in Truth and Interpretation: Perspectives on the Philosophy of Donald Davidson , Ernest Lepore (ed.), Oxford: Blackwell Publishing, 190–210.
  • Burgess, Alexis and Brett Sherman (eds.), 2014, Metasemantics: New Essays on the Foundations of Meaning , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199669592.001.0001
  • Caplan, Ben, 2005, “Against Widescopism”, Philosophical Studies , 125(2): 167–190. doi:10.1007/s11098-004-7814-1
  • Cappelen, Herman and John Hawthorne, 2009, Relativism and Monadic Truth , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199560554.001.0001
  • Cappelen, Herman and Ernie Lepore, 2005, Insensitive Semantics: A Defense of Semantic Minimalism and Speech Act Pluralism , Malden, MA: Blackwell Publishing. doi:10.1002/9780470755792
  • Carnap, Rudolf, 1947, Meaning and Necessity: A Study in Semantics and Modal Logic , Chicago: University of Chicago Press.
  • Carston, Robyn, 2002, Thoughts and Utterances: The Pragmatics of Explicit Communication , Malden, MA: Blackwell Publishing. doi:10.1002/9780470754603
  • Chalmers, David J., 2004, “Epistemic Two-Dimensional Semantics”, Philosophical Studies , 118(1/2): 153–226. doi:10.1023/B:PHIL.0000019546.17135.e0
  • –––, 2006, “The Foundations of Two-Dimensional Semantics”, in Two-Dimensional Semantics , Manuel Garcia-Carpintero and Josep Macià (eds.), Oxford: Clarendon Press, 55–140.
  • –––, 2011, “Propositions and Attitude Ascriptions: A Fregean Account”, Noûs , 45(4): 595–639. doi:10.1111/j.1468-0068.2010.00788.x
  • Chisholm, Roderick M., 1981, The First Person: An Essay on Reference and Intentionality , Minneapolis, MN: University of Minnesota Press.
  • Chomsky, Noam, 2000, New Horizons in the Study of Language and Mind , Cambridge: Cambridge University Press. doi:10.1017/CBO9780511811937
  • Cohen, Stewart, 1986, “Knowledge and Context”, The Journal of Philosophy , 83(10): 574–583. doi:10.2307/2026434
  • Davidson, Donald, 1967, “Truth and Meaning”, Synthese , 17(1): 304–323; reprinted in Davidson 1984: 17–36. doi:10.1007/BF00485035
  • –––, 1968, “On Saying That”, Synthese , 19(1–2): 130–146. doi:10.1007/BF00568054
  • –––, 1973, “Radical Interpretation”, Dialectica , 27(3–4): 313–328. doi:10.1111/j.1746-8361.1973.tb00623.x
  • –––, 1974a, “Belief and the Basis of Meaning”, Synthese , 27(3–4): 309–323. doi:10.1007/BF00484597
  • –––, 1974b, “On the Very Idea of a Conceptual Scheme”, Proceedings and Addresses of the American Philosophical Association , 47: 5–20; reprinted in Davidson 1984: 183–198. doi:10.2307/3129898
  • –––, 1976, “Reply to Foster”, in Evans and McDowell (eds.) 1976: 33–41.
  • –––, 1984, Inquiries into Truth and Interpretation , Oxford: Oxford University Press.
  • –––, 2005, Truth and Predication , Cambridge, MA: Harvard University Press.
  • Davis, Wayne A., 2002, Meaning, Expression and Thought , Cambridge: Cambridge University Press. doi:10.1017/CBO9780511498763
  • DeRose, Keith, 1992, “Contextualism and Knowledge Attributions”, Philosophy and Phenomenological Research , 52(4): 913–929. doi:10.2307/2107917
  • Devitt, Michael, 1981, Designation , New York: Columbia University Press. [ Devitt 1981 available online ]
  • Devitt, Michael and Kim Sterelny, 1987, Language and Reality: An Introduction to the Philosophy of Language , Cambridge, MA: MIT Press.
  • Dickie, Imogen, 2015, Fixing Reference , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780198755616.001.0001
  • Dummett, Michael A.E., 1981, The Interpretation of Frege’s Philosophy , Cambridge, MA: Harvard University Press.
  • Egan, Andy, John Hawthorne, and Brian Weatherson, 2005, “Epistemic Modals in Context”, in Preyer and Peter 2005: 131–170.
  • Egan, Andy and Brian Weatherson (eds.), 2011, Epistemic Modality , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199591596.001.0001
  • Evans, Gareth, 1973, “The Causal Theory of Names”, Proceedings of the Aristotelian Society (Supplement), 47 (1): 187–208.
  • –––, 1981, “Understanding Demonstratives”, in Meaning and Understanding , Herman Parret and Jacques Bouveresse (eds.), New York: Walter de Gruyter, 280–304; reprinted in his 1996 Collected Papers , Oxford: Clarendon Press, 291–304.
  • Evans, Gareth and John McDowell (eds.), 1976, Truth and Meaning: Essays in Semantics , Oxford: Clarendon Press.
  • Fine, Kit, 2007, Semantic Relationism , New York: Blackwell Publishing.
  • Foster, J.A., 1976, “Meaning and Truth Theory”, in Evans and McDowell (eds.) 1976: 1–32.
  • Frege, Gottlob, 1879 [1997], Begriffsschrift , Halle: Louis Nebert; translated and reprinted in Beaney 1997: 47–79.
  • –––, 1892 [1960], “Über Sinn und Bedeutung” (On Sense and Reference), Zeitschrift für Philosophie und philosophische Kritik , 100: 25–50. Translated and reprinted in Translations from the Philosophical Writings of Gottlob Frege , Peter Geach and Max Black (eds.), Oxford: Basil Blackwell, 1960, 56–78.
  • –––, 1906 [1997], “Kurze Übersicht meiner logischen Lehren”, unpublished. Translated as “A Brief Survey of My Logical Doctrines”, in Beaney 1997: 299–300.
  • Geach, P. T., 1960, “Ascriptivism”, The Philosophical Review , 69(2): 221–225. doi:10.2307/2183506
  • –––, 1965, “Assertion”, The Philosophical Review , 74(4): 449–465. doi:10.2307/2183123
  • Gibbard, Allan, 1990, Wise Choices, Apt Feelings: A Theory of Normative Judgment , Cambridge, MA: Harvard University Press.
  • –––, 2003, Thinking How to Live , Cambridge, MA: Harvard University Press.
  • Gilmore, Cody, 2014, “Parts of Propositions”, in Mereology and Location , Shieva Kleinschmidt (ed.), Oxford: Oxford University Press, 156–208. doi:10.1093/acprof:oso/9780199593828.003.0009
  • –––, forthcoming, “Why 0-adic Relations Have Truth Conditions”, in Tillman forthcoming.
  • Gluer, Kathrin and Åsa Wikforss, 2009, “Against Content Normativity”, Mind , 118(469): 31–70. doi:10.1093/mind/fzn154
  • Graff Fara, Delia, 2015, “Names Are Predicates”, Philosophical Review , 124(1): 59–117. doi:10.1215/00318108-2812660
  • Grice, H. P., 1957, “Meaning”, The Philosophical Review , 66(3): 377–388; reprinted in Grice 1989: 213–223. doi:10.2307/2182440
  • –––, 1969, “Utterer’s Meaning and Intention”, The Philosophical Review , 78(2): 147–177; reprinted in Grice 1989: 86–116. doi:10.2307/2184179
  • –––, 1989, Studies in the Way of Words , Cambridge, MA: Harvard University Press.
  • Hanks, Peter W., 2007, “The Content–Force Distinction”, Philosophical Studies , 134(2): 141–164. doi:10.1007/s11098-007-9080-5
  • –––, 2011, “Structured Propositions as Types”, Mind , 120(477): 11–52. doi:10.1093/mind/fzr011
  • Hare, R. M., 1952, The Language of Morals , Oxford: Oxford University Press. doi:10.1093/0198810776.001.0001
  • Hattiangadi, Anandi, 2007, Oughts and Thoughts: Scepticism and the Normativity of Meaning , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199219025.001.0001
  • Hawthorne, John, 1990, “A Note on ‘Languages and Language’”, Australasian Journal of Philosophy, 68(1): 116–118. doi:10.1080/00048409012340233
  • –––, 2006, “Testing for Context-Dependence”, Philosophy and Phenomenological Research, 73(2): 443–450. doi:10.1111/j.1933-1592.2006.tb00627.x
  • –––, 2007, “Craziness and Metasemantics”, Philosophical Review, 116(3): 427–440. doi:10.1215/00318108-2007-004
  • Heim, Irene, 1982, The Semantics of Definite and Indefinite Noun Phrases, Ph.D. thesis, Department of Linguistics, University of Massachusetts at Amherst.
  • Heim, Irene and Angelika Kratzer, 1998, Semantics in Generative Grammar , (Blackwell Textbooks in Linguistics 13), Malden, MA: Blackwell.
  • Horwich, Paul, 1998, Meaning , Oxford: Oxford University Press. doi:10.1093/019823824X.001.0001
  • –––, 2005, Reflections on Meaning , Oxford: Clarendon Press. doi:10.1093/019925124X.001.0001
  • Jeshion, Robin, 2015, “Referentialism and Predicativism About Proper Names”, Erkenntnis , 80(S2): 363–404. doi:10.1007/s10670-014-9700-3
  • Johnston, Mark, 1988, “The End of the Theory of Meaning”, Mind & Language , 3(1): 28–42. doi:10.1111/j.1468-0017.1988.tb00131.x
  • Kamp, Hans, 1971, “Formal Properties of ‘Now’”, Theoria , 37(3): 227–273. doi:10.1111/j.1755-2567.1971.tb00071.x
  • –––, 1981, “A Theory of Truth and Semantic Representation”, in Formal Methods in the Study of Language, Jeroen A. G. Groenendijk, Theo M. V. Janssen, and M. B. J. Stokhof (eds.), Amsterdam: Mathematisch Centrum; reprinted in Formal Semantics: The Essential Readings, Paul Portner and Barbara H. Partee (eds.), Oxford: Blackwell Publishers, 189–222. doi:10.1002/9780470758335.ch8
  • Kaplan, David, 1989, “Demonstratives”, in Themes from Kaplan , Joseph Almog, John Perry, and Howard Wettstein (eds.), New York: Oxford University Press, 481–563.
  • Keller, Lorraine, 2013, “The Metaphysics of Propositional Constituency”, Canadian Journal of Philosophy , 43(5–6): 655–678. doi:10.1080/00455091.2013.870735
  • King, Jeffrey C., 2003, “Tense, Modality, and Semantic Values”, Philosophical Perspectives , 17(1): 195–246. doi:10.1111/j.1520-8583.2003.00009.x
  • –––, 2007, The Nature and Structure of Content , Oxford: Oxford University Press.
  • –––, 2014, “Naturalized Propositions”, in King, Soames, and Speaks (eds.) 2014: 47–70.
  • King, Jeffrey C., Scott Soames, and Jeff Speaks (eds.), 2014, New Thinking about Propositions , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199693764.001.0001
  • Kirk-Giannini, Cameron Domenico and Ernie Lepore, 2017, “De Ray: On the Boundaries of the Davidsonian Semantic Programme”, Mind , 126(503): 697–714. doi:10.1093/mind/fzv186
  • Kolbel, Max, 2001, “Two Dogmas of Davidsonian Semantics”, The Journal of Philosophy , 98(12): 613–635. doi:10.2307/3649462
  • Kripke, Saul A., 1972, Naming and Necessity , Cambridge, MA: Harvard University Press.
  • –––, 1979, “A Puzzle about Belief”, in Meaning and Use , Avishai Margalit (ed.), (Synthese Language Library 3), Dordrecht: Springer Netherlands, 239–283. doi:10.1007/978-1-4020-4104-4_20
  • –––, 1982, Wittgenstein on Rules and Private Language: An Elementary Exposition , Cambridge, MA: Harvard University Press.
  • Larson, Richard K. and Peter Ludlow, 1993, “Interpreted Logical Forms”, Synthese , 95(3): 305–355. doi:10.1007/BF01063877
  • Larson, Richard K. and Gabriel Segal, 1995, Knowledge of Meaning: An Introduction to Semantic Theory, Cambridge, MA: MIT Press.
  • Lasersohn, Peter, 2005, “Context Dependence, Disagreement, and Predicates of Personal Taste”, Linguistics and Philosophy, 28(6): 643–686. doi:10.1007/s10988-005-0596-x
  • Laurence, Stephen, 1996, “A Chomskian Alternative to Convention-Based Semantics”, Mind , 105(418): 269–301. doi:10.1093/mind/105.418.269
  • Lepore, Ernest and Kirk Ludwig, 2007, Donald Davidson’s Truth-Theoretic Semantics , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199290932.001.0001
  • Lepore, Ernest and Barry Loewer, 1989, “You Can Say That Again”, Midwest Studies in Philosophy , 14: 338–356. doi:10.1111/j.1475-4975.1989.tb00196.x
  • Lewis, David, 1970, “General Semantics”, Synthese , 22(1–2): 18–67. doi:10.1007/BF00413598
  • –––, 1975, “Languages and Language”, in Language, Mind, and Knowledge , Keith Gunderson (ed.), Minneapolis: University of Minnesota Press.
  • –––, 1979, “Attitudes De Dicto and De Se ”, The Philosophical Review , 88(4): 513–543. doi:10.2307/2184843
  • –––, 1980, “Index, Context, and Content”, in Philosophy and Grammar , Stig Kanger and Sven Ōhman (eds.) (Synthese Library 143), Dordrecht: Springer Netherlands, 79–100. doi:10.1007/978-94-009-9012-8_6
  • –––, 1983, “New Work for a Theory of Universals”, Australasian Journal of Philosophy , 61(4): 343–377. doi:10.1080/00048408312341131
  • –––, 1984, “Putnam’s Paradox”, Australasian Journal of Philosophy , 62(3): 221–236. doi:10.1080/00048408412340013
  • –––, 1996, “Elusive Knowledge”, Australasian Journal of Philosophy , 74(4): 549–567. doi:10.1080/00048409612347521
  • Lewis, Karen S., 2014, “Do We Need Dynamic Semantics?”, in Burgess and Sherman 2014: 231–258. doi:10.1093/acprof:oso/9780199669592.003.0010
  • MacFarlane, John, 2014, Assessment Sensitivity: Relative Truth and Its Applications , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199682751.001.0001
  • –––, 2016, “I – Vagueness as Indecision”, Aristotelian Society Supplementary Volume, 90(1): 255–283. doi:10.1093/arisup/akw013
  • Marcus, Ruth Barcan, 1961, “Modalities and Intensional Languages”, Synthese , 13(4): 303–322. doi:10.1007/BF00486629
  • McDowell, John, 1977, “On the Sense and Reference of a Proper Name”, Mind , 86(342): 159–185. doi:10.1093/mind/LXXXVI.342.159
  • McGilvray, James, 1998, “Meanings Are Syntactically Individuated and Found in the Head”, Mind & Language , 13(2): 225–280. doi:10.1111/1468-0017.00076
  • McGlone, Michael, 2012, “Propositional Structure and Truth Conditions”, Philosophical Studies , 157(2): 211–225. doi:10.1007/s11098-010-9633-x
  • McKeown-Green, Arthur Jonathan, 2002, The Primacy of Public Language , PhD Dissertation, Princeton University.
  • Merricks, Trenton, 2015, Propositions , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780198732563.001.0001
  • Merrill, G. H., 1980, “The Model-Theoretic Argument against Realism”, Philosophy of Science , 47(1): 69–81. doi:10.1086/288910
  • Moltmann, Friederike, 2013, “Propositions, Attitudinal Objects, and the Distinction between Actions and Products”, Canadian Journal of Philosophy , 43(5–6): 679–701. doi:10.1080/00455091.2014.892770
  • Montague, Richard, 1974, Formal Philosophy: The Selected Papers of Richard Montague , R. Thomason (ed.), New Haven, CT: Yale University Press.
  • Moss, Sarah, 2012, “The Role of Linguistics in the Philosophy of Language”, in Routledge Companion to the Philosophy of Language , Delia Graff Fara and Gillian Russell (eds.), Routledge.
  • –––, 2013, “Epistemology Formalized”, Philosophical Review , 122(1): 1–43. doi:10.1215/00318108-1728705
  • Neale, Stephen, 1992, “Paul Grice and the Philosophy of Language”, Linguistics and Philosophy , 15(5): 509–559. doi:10.1007/BF00630629
  • Nolan, Daniel P., 2013, “Impossible Worlds”, Philosophy Compass , 8(4): 360–372. doi:10.1111/phc3.12027
  • Perry, John, 1977, “Frege on Demonstratives”, The Philosophical Review , 86(4): 474–497. doi:10.2307/2184564
  • –––, 1979, “The Problem of the Essential Indexical”, Noûs , 13(1): 3–21. doi:10.2307/2214792
  • Pietroski, Paul M., 2003, “The Character of Natural Language Semantics”, in Epistemology of Language , Alex Barber (ed.), Oxford: Oxford University Press, 217–256.
  • –––, 2005, “Meaning Before Truth”, in Preyer and Peter 2005: 255–302.
  • –––, 2018, Conjoining Meanings: Semantics Without Truth Values , Oxford: Oxford University Press. doi:10.1093/oso/9780198812722.001.0001
  • Plantinga, Alvin, 1974, The Nature of Necessity, Oxford: Clarendon Press.
  • –––, 1978, “The Boethian Compromise”, American Philosophical Quarterly , 15(2): 129–138.
  • Preyer, Gerhard and Georg Peter (eds.), 2005, Contextualism in Philosophy: Knowledge, Meaning, and Truth , Oxford: Clarendon Press.
  • Prior, A. N., 1960, “The Runabout Inference-Ticket”, Analysis , 21(2): 38–39. doi:10.1093/analys/21.2.38
  • Putnam, Hilary, 1980, “Models and Reality”, Journal of Symbolic Logic , 45(3): 464–482. doi:10.2307/2273415
  • –––, 1981, Reason, Truth and History , Cambridge: Cambridge University Press. doi:10.1017/CBO9780511625398
  • Quine, W.V.O., 1960, Word and Object , Cambridge, MA: MIT Press.
  • –––, 1970 [1986], Philosophy of Logic , New Jersey: Prentice Hall; second edition, 1986. (Page reference is to the second edition.)
  • Ray, Greg, 2014, “Meaning and Truth”, Mind , 123(489): 79–100. doi:10.1093/mind/fzu026
  • Recanati, François, 2004, Literal Meaning , Cambridge: Cambridge University Press. doi:10.1017/CBO9780511615382
  • –––, 2010, Truth-Conditional Pragmatics , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199226993.001.0001
  • Richard, Mark, 1981, “Temporalism and Eternalism”, Philosophical Studies , 39(1): 1–13. doi:10.1007/BF00354808
  • –––, 2013, “What Are Propositions?”, Canadian Journal of Philosophy , 43(5–6): 702–719. doi:10.1080/00455091.2013.870738
  • Rosen, Gideon, 1997, “Who Makes the Rules Around Here?”, Philosophy and Phenomenological Research , 57(1): 163–171. doi:10.2307/2953786
  • Rothschild, Daniel, 2007, “Presuppositions and Scope”, Journal of Philosophy, 104(2): 71–106. doi:10.5840/jphil2007104233
  • Rothschild, Daniel and Seth Yalcin, 2016, “Three Notions of Dynamicness in Language”, Linguistics and Philosophy , 39(4): 333–355. doi:10.1007/s10988-016-9188-1
  • Russell, Bertrand, 1903, The Principles of Mathematics , Cambridge: Cambridge University Press.
  • Salmon, Nathan U., 1986, Frege’s Puzzle , Atascadero, CA: Ridgeview Publishing Company.
  • –––, 1990, “A Millian Heir Rejects the Wages of Sinn ”, in Propositional Attitudes: The Role of Content in Logic, Language, and Mind , C. Anthony Anderson and Joseph Owens (eds.), (CSLI Lecture Notes 20), Stanford, CA: CSLI Publications.
  • Schiffer, Stephen, 1972, Meaning, Oxford: Oxford University Press.
  • –––, 1987, Remnants of Meaning, Cambridge, MA: MIT Press.
  • –––, 2000, “Review: Horwich on Meaning”, Philosophical Quarterly, 50(201): 527–536.
  • –––, 2006, “Two Perspectives on Knowledge of Language”, Philosophical Issues, 16: 275–287. doi:10.1111/j.1533-6077.2006.00114.x
  • Schroeder, Mark, 2008, Being For: Evaluating the Semantic Program of Expressivism, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199534654.001.0001
  • Searle, John R., 1962, “Meaning and Speech Acts”, The Philosophical Review, 71(4): 423–432. doi:10.2307/2183455
  • Sellars, Wilfrid, 1968, Science and Metaphysics: Variations on Kantian Themes , New York: Humanities Press.
  • Sider, Theodore, 2011, Writing the Book of the World , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199697908.001.0001
  • Soames, Scott, 1988, “Direct Reference, Propositional Attitudes, and Semantic Content”, in Propositions and Attitudes , Nathan Salmon and Scott Soames (eds.), Oxford: Oxford University Press, 197–239.
  • –––, 1992, “Truth, Meaning, and Understanding”, Philosophical Studies , 65(1–2): 17–35. doi:10.1007/BF00571314
  • –––, 1997, “Skepticism about Meaning: Indeterminacy, Normativity, and the Rule-Following Paradox”, Canadian Journal of Philosophy, Supplementary Volume 23: 211–249. doi:10.1080/00455091.1997.10715967
  • –––, 1998, “The Modal Argument: Wide Scope and Rigidified Descriptions”, Noûs , 32(1): 1–22. doi:10.1111/0029-4624.00084
  • –––, 2002, Beyond Rigidity: The Unfinished Semantic Agenda of Naming and Necessity , Oxford: Oxford University Press. doi:10.1093/0195145283.001.0001
  • –––, 2010, Philosophy of Language , Princeton, NJ: Princeton University Press.
  • –––, 2014. “Cognitive Propositions”, in King, Soames, and Speaks 2014: 91–125.
  • Sosa, David, 2001, “Rigidity in the Scope of Russell’s Theory”, Noûs , 35(1): 1–38. doi:10.1111/0029-4624.00286
  • Speaks, Jeff, 2014, “Propositions Are Properties of Everything or Nothing”, in King, Soames, and Speaks 2014: 71–90.
  • Sperber, Dan and Deirdre Wilson, 1995, Relevance: Communication and Cognition, second edition, Oxford: Blackwell.
  • Stalnaker, Robert C., 1984, Inquiry , Cambridge, MA: MIT Press.
  • Stanley, Jason, 2007, Language in Context: Selected Essays , Oxford: Clarendon Press.
  • Stich, Stephen P. and Ted A. Warfield (eds.), 1994, Mental Representation: A Reader , Cambridge, MA: Blackwell.
  • Stojnić, Una, 2019, “Content in a Dynamic Context”, Noûs , 53(2): 394–432. doi:10.1111/nous.12220
  • Sullivan, Meghan, 2014, “Change We Can Believe In (and Assert)”, Noûs, 48(3): 474–495. doi:10.1111/j.1468-0068.2012.00874.x
  • Tarski, Alfred, 1944, “The Semantic Conception of Truth: And the Foundations of Semantics”, Philosophy and Phenomenological Research , 4(3): 341–376. doi:10.2307/2102968
  • Taschek, William W., 1995, “Belief, Substitution, and Logical Structure”, Noûs , 29(1): 71–95. doi:10.2307/2215727
  • Tillman, Chris (ed.), forthcoming, Routledge Handbook of Propositions , Abingdon, UK: Routledge.
  • van Inwagen, Peter, 2004, “A Theory of Properties”, in Oxford Studies in Metaphysics, volume 1, Dean W. Zimmerman (ed.), Oxford: Clarendon Press, 107–138.
  • Weatherson, Brian and Andy Egan, 2011, “Introduction: Epistemic Modals and Epistemic Modality”, in Egan and Weatherson 2011: 1–18. doi:10.1093/acprof:oso/9780199591596.003.0001
  • Wilson, Deirdre and Dan Sperber, 2012, Meaning and Relevance , Cambridge: Cambridge University Press. doi:10.1017/CBO9781139028370
  • Wittgenstein, Ludwig, 1922, Tractatus Logico-Philosophicus, C. K. Ogden (trans.), London: Routledge & Kegan Paul. Originally published as “Logisch-Philosophische Abhandlung”, in Annalen der Naturphilosophie, XIV (3/4), 1921.
  • –––, 1953, Philosophical Investigations, G.E.M. Anscombe (trans.), New York: Macmillan.
  • Yalcin, Seth, 2007, “Epistemic Modals”, Mind , 116(464): 983–1026. doi:10.1093/mind/fzm983
  • –––, 2014, “Semantics and Metasemantics in the Context of Generative Grammar”, in Burgess and Sherman 2014: 17–54. doi:10.1093/acprof:oso/9780199669592.003.0002
  • –––, 2018, “Belief as Question-Sensitive”, Philosophy and Phenomenological Research , 97(1): 23–47. doi:10.1111/phpr.12330

action | compositionality | conditionals | contextualism, epistemic | convention | descriptions | discourse representation theory | Frege, Gottlob | Grice, Paul | indexicals | meaning: normativity of | meaning: of words | meaning holism | mental representation | mind: computational theory of | names | natural kinds | paradox: Skolem’s | personal identity | possible worlds | pragmatics | propositional attitude reports | propositions: singular | propositions: structured | quantifiers and quantification | relativism | rigid designators | Sellars, Wilfrid | semantics: dynamic | semantics: proof-theoretic | semantics: two-dimensional | situations: in natural language semantics | Tarski, Alfred: truth definitions | tense and aspect | time | Wittgenstein, Ludwig

Copyright © 2019 by Jeff Speaks <jspeaks@nd.edu>
