
"If the world were perfect, it wouldn't be." - Yogi BerraYogi Berra, a legendary American baseball player and philosopher, famously stated this intriguing quote that invites us to consider the imperfection of our world. At first glance, it may seem paradoxical or even pessimistic, but delving deeper into its meaning reveals profound philosophical insights. Berra's words encapsulate the idea that perfection is not attainable or desirable, and that the imperfections of our world are what make it truly remarkable.In a straightforward sense, this quote suggests that a perfect world is an impossibility. If everything were flawless and without blemish, there would be no room for growth, development, or change. Perfection implies a static state where nothing can be improved or enhanced. And in such a world, life would lack the variety and challenges that propel us forward.To fully understand the significance of Berra's quote, let us introduce a concept known as "The Beauty of Imperfection." This concept challenges the conventional belief that perfection is the ultimate goal. It emphasizes that perfection can lead to stagnation and monotony, robbing life of its richness and vibrancy. Instead, embracing imperfections allows us to appreciate the unique and diverse aspects of our world.Consider the natural world, for instance. The intricacies of the flora and fauna, the delicate balance of ecosystems, the changing seasons, and the unpredictability of weather patterns all contribute to the charm and splendor of our planet. If everything were perfect and predictable, nature would lose its awe-inspiring and mysterious appeal. The imperfections in nature make it captivating and engender a sense of wonder that propels us to explore and learn.The concept of embracing imperfection also applies to human relationships. It is through our flaws and imperfections that we connect with others on a deeper level. When we share vulnerability, empathy grows, fostering meaningful connections. In a perfect world, relationships would lack authenticity and depth, as there would be no need for understanding, forgiveness, or growth.Furthermore, it is in the pursuit of overcoming imperfections that we demonstrate resilience, creativity, and innovation. Unattainable perfection drives us to constantly seek improvement, to find innovative solutions to problems, and to push boundaries. The imperfections in technology, for example, provide the impetus for advancements and breakthroughs, fueling progress in various fields.However, it is important to note that embracing imperfection does not mean accepting mediocrity or ignoring the pursuit of excellence. Instead, it encourages us to recognize that mistakes, failures, and shortcomings are essential components of progress and personal growth. It is through our imperfections that we learn and evolve, both individually and collectively.In conclusion, Yogi Berra's thought-provoking quote, "If the world were perfect, it wouldn't be," sheds light on an intricate aspect of our existence. It reminds us of the beauty found in imperfections and challenges the notion that perfection should be our ultimate goal. Embracing imperfection encourages growth, fosters meaningful relationships, fuels creativity, and propels progress. By celebrating the imperfect world we inhabit, we can find inspiration, fulfillment, and a greater appreciation for the complexities of life.

9 Newtonian Worldview


Back in chapter 4, we deduced a couple of theorems from the laws of scientific change. One such theorem is the mosaic split theorem, whereby the acceptance of two incompatible theories leads to a split in a mosaic. A very noteworthy split happened to the Aristotelian-Medieval mosaic around 1700, when theories of both the Cartesian and the Newtonian worldviews equally satisfied the expectations of Aristotelians. For a period of around 40 years, between 1700 and 1740, two incompatible sets of theories were accepted by two very different communities. We covered the Cartesian worldview, which was accepted on the Continent, in chapter 8. In this chapter, we will cover the Newtonian worldview.

The Newtonian mosaic was first accepted in Britain ca. 1700. Continental Europe accepted the Newtonian mosaic around 1740, following the confirmation of a novel prediction concerning the shape of the Earth that favoured the Newtonian theory of gravity. The once-split Cartesian and Newtonian mosaics merged, leaving the Newtonian worldview accepted across Europe until about 1920.

One thing we must bear in mind is that the Newtonian mosaic of 1700 looked quite different from the Newtonian mosaic of, say, 1900; a lot can happen to a mosaic over two centuries. Recall that the theories and methods of a mosaic do not change all at once, but rather in a piecemeal fashion. We nevertheless suggest that the mosaic of 1700 exemplifies the same worldview as that of 1900 because, generally speaking, both mosaics bore similar underlying metaphysical assumptions – principles to be elaborated on throughout this chapter.

That said, we can still understand and appreciate the key elements of the Newtonian mosaic at some particular time. In our case, we’re going to provide a snapshot of the mosaic ca. 1765. Its key elements at that time included revealed and natural theology, natural astrology, Newtonian physics and Keplerian astronomy, vitalist physiology, phlogiston chemistry, the theory of preformation, Linnaean biology, associationist psychology, history, mathematics (including calculus) as well as the hypothetico-deductive method.

image

Let’s start with the most obvious elements of the Newtonian mosaic – Newtonian physics and cosmology.

Newtonian Physics and Cosmology

In 1687, Isaac Newton first published one of the most studied texts in the history and philosophy of science, Philosophiæ Naturalis Principia Mathematica, or the Principia for short. It is in this text that Newton first described the physical laws that are part and parcel of every first-year physics course, including his three laws of motion, his law of universal gravitation, and the laws of planetary motion. Of course, it would take several decades of debate and discussion for the community of the time to fully accept Newtonian physics. Nevertheless, by the 1760s, Newtonian cosmology and physics were accepted across Europe.

As we did in chapter 8, here we’re going to cover not only the individual theories of the Newtonian mosaic, but also the metaphysical elements underlying these theories. Since any metaphysical element is best understood when paired with its opposite elements (e.g. hylomorphism vs. mechanicism, pluralism vs. dualism, etc.), we will also be introducing those elements of the Aristotelian-Medieval and Cartesian worldviews which the Newtonian worldview opposed.

Recall from chapter 7 that, in their accepted cosmology, Aristotelians separated the universe into two regions – terrestrial and celestial. They believed that, in the terrestrial region, there are four elements – earth, water, air, and fire – that move linearly either towards or away from the centre of the universe. The celestial region, on the other hand, is composed of a single element – aether – which moves circularly around the centre of the universe. Since Aristotelians believed that terrestrial and celestial objects behave differently, we say that Aristotelians accepted the metaphysical principle of heterogeneity, that the terrestrial and celestial regions were fundamentally different.

Additionally, Aristotelians posited that the celestial region was organized in a series of concentric spheres – something like a Matryoshka, or Russian nesting doll – with each planet nested in a spherical shell. The outermost sphere was considered the sphere of the stars, which was believed to be the physical boundary of the universe. According to Aristotelians, there is nothing beyond that sphere, not even empty space. Thus, they also accepted that the universe is finite.

image

Cartesians rejected the Aristotelian idea of the heterogeneity of the two regions as well as their idea of a finite universe. First, let’s recall one of the central tenets of the Cartesian worldview: the principal attribute of all matter is extension. For Cartesians, it makes no difference whether that is the tangible matter of the Earth or the invisible matter of a stellar vortex – it must always be extended, i.e. occupy space. Since all matter, both terrestrial and celestial, is just an extended substance, the same set of physical laws applies anywhere in the universe. That is, Cartesians accepted the homogeneity of the laws of nature, that all regions of the universe obey the same laws.

image

Additionally, if extension is merely an attribute of matter, i.e. if space cannot exist independently of matter, then a question emerges: what would a Cartesian imagine existing beyond any boundary? Surely, they would never imagine an edge to the universe followed by empty space and nothingness – that would violate their belief in plenism. Instead, beyond every seeming boundary is simply more extended matter, be it spatial matter or the matter of other planetary systems. Descartes would say that the universe extends ‘indefinitely’, meaning potentially infinitely: he could imagine (though not be certain of) a lack of boundaries to the universe, and he reserved the true idea of infiniteness (rather than indefiniteness) for God. So, we would say that Cartesians accepted an infinite universe, that the universe has no physical boundaries and is infinite in space.

image

This is how Descartes himself imagined a fragment of our infinite universe:

image

The drawing shows a number of stars (in yellow) with their respective stellar vortices, as well as a comet (in light blue) wandering from one vortex to another.

What about the Newtonian attitude towards the scope of the laws of nature and the boundaries of the universe? Let’s start with the Newtonian view on the boundaries of the universe. While Cartesians accepted that space was an attribute of matter, i.e. that it is inseparable from matter, Newtonians accepted quite the opposite: that space can and does exist independently of matter. For Newtonians, space is like a stage on which the entire material universe is built. But space can also exist without that material universe. This idea is known as the conception of absolute space: space is independent of material objects; it is an empty receptacle in which physical processes take place.

Since Newtonians accepted the existence of absolute space, it remained possible, from the Newtonian point of view, for this absolute space to exist beyond any perceived boundary of the universe. Effectively, such boundaries would not even exist in the Newtonian worldview; space is essentially a giant void filled with solar system after solar system. If space is such a void, then the universe must be infinite. Therefore, we say that Newtonians accepted the metaphysical idea of an infinite universe.

What about the laws of nature in the Newtonian worldview? Newton introduced three laws of motion as well as the law of universal gravitation to describe physical processes. Let’s go over these laws and see what they suggest about homogeneity or heterogeneity. First, consider Newton’s second law, which states:

The acceleration (a) of a body is directly proportional to the net force (F) acting on the body and is inversely proportional to the mass (m) of the body:

a = F / m

The law is also often stated as F = ma. To understand what this law says, imagine we used it to describe an arrow being launched from a bow. A Newtonian would say that the acceleration of the arrow upon leaving the bow depends on the mass of the arrow as well as the force of the bowstring pushing the arrow.
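To see the arithmetic at work, here is a minimal sketch in Python. The numbers are hypothetical illustrative values (a 30-gram arrow and a 100-newton bowstring force), not figures from the text:

```python
# Newton's second law: a = F / m
force = 100.0  # net force of the bowstring on the arrow, in newtons (assumed)
mass = 0.030   # mass of the arrow, in kilograms (assumed)

acceleration = force / mass
print(f"Acceleration of the arrow: {acceleration:.0f} m/s^2")  # ~3333 m/s^2
```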

Now, what would happen to this arrow if there were no additional forces applied to it after being launched? For Newtonians, the answer is given by Newton’s first law, or the law of inertia, which states that:

If an object experiences no net force, then the velocity of the object is constant: it is either at rest or moves in a straight line with constant speed.

So, after being launched but while remaining subject to no additional forces, the arrow would just keep moving in a straight line because of inertia. In other words, an object will remain at rest or in constant motion until some external force acts upon it. In reality, however, projectiles are never subject to inertia alone: they are also subject to gravity. Newton accounted for the falling of objects by positing a force of mutual gravitational attraction between every pair of objects. His law of universal gravitation states:

Any two bodies attract each other with a force (F) proportional to the product of their masses (m₁ and m₂) and inversely proportional to the square of the distance (r) between them:

F = G · m₁m₂ / r², where G is the gravitational constant of proportionality.

To apply Newton’s theory to the flying arrow, a Newtonian would need to know the distance between the arrow and the centre of the Earth, as well as the masses of the arrow and the Earth. Knowing these values, they could calculate the force of gravity between the arrow and the Earth. Although gravity is a force of mutual attraction, and the two forces are equal in magnitude, the effect of that force on the Earth is immeasurably tiny compared to its effect on the arrow. This is because the mass of the Earth is vastly greater than that of the arrow.
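A rough sketch of that calculation, using modern values for the gravitational constant and the Earth’s mass and radius, and again assuming a hypothetical 30-gram arrow:

```python
G = 6.674e-11       # gravitational constant, N m^2 / kg^2
M_EARTH = 5.972e24  # mass of the Earth, kg
R_EARTH = 6.371e6   # mean radius of the Earth, m (arrow-to-centre distance)
m_arrow = 0.030     # mass of the arrow, kg (assumed)

# Law of universal gravitation: F = G * m1 * m2 / r^2
force = G * M_EARTH * m_arrow / R_EARTH**2
print(f"Gravitational force on the arrow: {force:.3f} N")  # ~0.295 N

# The attraction is mutual and equal in magnitude, but the resulting
# accelerations (a = F/m) differ enormously because the masses do:
print(f"Arrow's acceleration toward Earth: {force / m_arrow:.2f} m/s^2")   # ~9.82
print(f"Earth's acceleration toward arrow: {force / M_EARTH:.1e} m/s^2")  # ~4.9e-26
```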

Finally, we have Newton’s third law, which states that:

If one body exerts a force on a second body, then the latter simultaneously exerts a force on the former, and the two forces are equal and opposite.

In other words, the third law is the law of equal and opposite reactions. So, every time an object interacts with another object – either by colliding with it, or by exerting some attractive force on it – the other object will experience the same force, but in the opposite direction. Thus, as the bowstring exerts its propulsive force on the arrow, the arrow exerts an equal and opposite force on the bowstring.

The two main factors a Newtonian would have to consider in determining a flying arrow’s trajectory are inertia and gravity. Were the arrow launched at a forty-five-degree angle, the Newtonian would explain the forward and upward motion of the arrow, after leaving the bow, as due to its inertia. They would explain the fact that the arrow does not move in a straight line at a forty-five-degree angle to the ground as due to the action of the force of gravity, which bends the arrow’s trajectory by pulling it towards the surface of the Earth. The resulting motion would be due to both inertia and gravitational force. The angle of ascent would decrease and turn into an angle of descent, and the arrow’s trajectory would be a parabola.
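A short sketch can make the interplay of the two factors concrete. Under assumed values (a 60 m/s launch at forty-five degrees, g ≈ 9.81 m/s², air resistance ignored), inertia supplies uniform horizontal motion while gravity steadily bends the path downward, tracing a parabola:

```python
import math

v0 = 60.0                 # launch speed in m/s (assumed)
angle = math.radians(45)  # launch angle
g = 9.81                  # gravitational acceleration at Earth's surface, m/s^2

vx = v0 * math.cos(angle)  # horizontal velocity: constant, due to inertia alone
vy = v0 * math.sin(angle)  # initial vertical velocity: steadily reduced by gravity

flight_time = 2 * vy / g   # time until the arrow returns to launch height
for i in range(7):
    t = flight_time * i / 6
    x = vx * t                   # inertia: uniform forward motion
    y = vy * t - 0.5 * g * t**2  # upward inertia bent down by gravity -> parabola
    print(f"t = {t:4.1f} s   x = {x:6.1f} m   y = {y:6.1f} m")
```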

In a thought experiment in his Treatise of the System of the World, Newton imagined a fantastically powerful cannon on top of an imaginary mountain so high that the force of air resistance on a cannonball would be negligible. If he fired a cannonball from this super cannon, it would hurtle forward due to inertia, but also fall toward Earth due to the force of gravity, eventually crashing into Earth’s surface. The faster the cannonball left the cannon barrel, the further it would travel before crashing to Earth. Newton realized that if a cannonball were fired fast enough, its fall due to the force of gravity would proceed at the same rate as the Earth curved away beneath it. Rather than crashing to Earth, it would continue to circle the globe forever, just as the Moon circles the Earth in its orbit. The orbital path it would follow would be circular or, more generally, elliptical: an oval shape with an off-centre Earth. If the cannonball were fired faster than a certain critical velocity, called the escape velocity, a Newtonian could calculate that it would escape from the Earth’s gravitational pull entirely, hurtling away into outer space, never to return.
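For a sense of the numbers involved, here is a sketch computing the speed a cannonball would need for a circular orbit just above the Earth’s surface, and the corresponding escape velocity. The derivations use only Newton’s laws; the constants are modern values, and air resistance is ignored, as in the thought experiment:

```python
import math

G = 6.674e-11       # gravitational constant, N m^2 / kg^2
M_EARTH = 5.972e24  # mass of the Earth, kg
R_EARTH = 6.371e6   # radius of the Earth, m

# Circular orbital speed: gravity supplies exactly the centripetal force,
# so G*M*m/r^2 = m*v^2/r, giving v = sqrt(G*M/r).
v_orbit = math.sqrt(G * M_EARTH / R_EARTH)

# Escape velocity: kinetic energy equals gravitational binding energy,
# so (1/2)*m*v^2 = G*M*m/r, giving v = sqrt(2*G*M/r).
v_escape = math.sqrt(2 * G * M_EARTH / R_EARTH)

print(f"Orbital speed at the surface: {v_orbit / 1000:.1f} km/s")   # ~7.9 km/s
print(f"Escape velocity:              {v_escape / 1000:.1f} km/s")  # ~11.2 km/s
```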

The conclusion we can draw from these examples is that the same laws that govern a projectile here on Earth must also govern a projectile in the heavens, as well as the motion of the planets and the stars. In other words, Newtonians accepted that the same laws of nature applied in the terrestrial regions of the universe as in the celestial. For this reason, they abandoned the distinction between the two regions that characterized the Aristotelian-Medieval worldview and instead accepted the principle of homogeneity of the laws of nature. That is, in addition to the idea of an infinite universe, they also accepted that all regions of the universe obey the same laws.

image

By this point in the chapter, it is hopefully evident that the underlying assumptions of the Newtonian worldview are vastly different from those of the Aristotelian-Medieval worldview. It might also seem as though Newtonians shared many assumptions with Cartesians – which they did. But the rival Cartesian and Newtonian communities also saw stark contrasts in the basic characteristics of their worldviews. In some ways, Cartesians shared more with Aristotelians than with Newtonians.

Let us consider the idea of absolute space. Neither Aristotelians nor Cartesians would ever accept such an idea, for it conflicted with some basic assumptions of their worldviews. First, recall the Aristotelian law of violent motion: if force (F) is greater than resistance (R), then the object will move with a velocity (V) proportional to F/R; if resistance (R) is greater than force (F), then the object will stay put. According to this law, should a moving object experience no resistance, the formula calls for us to divide by zero, leaving the object with an infinite velocity. It’s tough to imagine what infinite velocity would look like in the real world, but we might imagine it as something like instant teleportation or being in two or more places at once. Aristotelians recognized the absurdity of an infinite velocity, and accordingly denied that it was even possible in the first place. It followed from the impossibility of an infinite velocity that some resistance is necessary for any motion. For our purposes, what this means is that, for Aristotelians, the universe is always filled with something that creates resistance. Thus, implicit in the Aristotelian-Medieval worldview was the idea of plenism, that there can be no empty space, i.e. no space devoid of matter.
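To make the Aristotelian worry concrete, here is a tiny sketch of the law of violent motion. The force and resistance values are arbitrary illustrations; the point is that as resistance shrinks toward zero, the computed velocity grows without bound, and at exactly zero the division is undefined:

```python
def violent_motion_velocity(force, resistance):
    """Aristotelian law of violent motion: V is proportional to F/R
    (constant of proportionality taken as 1 for illustration).
    If resistance is at least as great as force, the object stays put."""
    if resistance >= force:
        return 0.0
    return force / resistance

F = 10.0
for R in [5.0, 1.0, 0.1, 0.001]:
    print(f"R = {R:>6}: V = {violent_motion_velocity(F, R):>8.0f}")
# As R -> 0, V -> infinity; at R = 0 the formula divides by zero.
# Aristotelians took this absurdity to rule out motion through a void,
# supporting plenism: the universe is always filled with resisting matter.
```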

image

Cartesians, as we know from chapter 8, also accepted plenism, though they justified it in a very different way. Since extension, according to Cartesians, is the principal attribute of matter, and since no attribute can exist without a substance, extension too cannot exist on its own. Extension, according to Cartesians, is always attached to something extended, i.e. to material things. Thus, there is no space without matter. The idea of plenism was one of the essential metaphysical assumptions of the Cartesian worldview.

In contrast, Newtonians rejected plenism. Recall that, in the Principia, Newton introduced and defended the idea of absolute space – the idea of space as independent from material objects. This implies vacuism, which is, quite simply, the exact opposite of plenism: it says that there can be space absolutely devoid of matter, i.e. that there can be a vacuum.

image

That said, why might Newton have introduced the idea of absolute space to begin with? Our historical hypothesis is that, at the time Newton was writing the Principia, scientists across Europe were conducting experiments that seemed to suggest the existence of a vacuum. These included barometric experiments conducted by Evangelista Torricelli and Blaise Pascal. Because the idea of vacuism contradicted the then-accepted Aristotelian-Medieval idea of plenism, it could only, at best, be seen as a pursued theory at the time. Newton, however, seems to have taken the results of these experiments seriously and developed a physical theory that could account for the possibility, or even actuality, of empty space.

Let’s focus on one such experiment in more detail: the Magdeburg hemispheres. After the barometric experiments of Torricelli and Pascal, Otto von Guericke, mayor of the German town of Magdeburg, invented a device that could pump the air out of a sealed space – effectively, he claimed, creating a vacuum. Von Guericke’s device consisted of two semi-spherical shells, or hemispheres, which, when placed together and emptied of air, would remain sealed by both the vacuum within and the air pressure without. So powerful was the seal that, reportedly, two teams of horses could not pry the device apart. Were the universe a plenum, it would have been impossible to create a vacuum inside the device, and the horses would easily have pulled the halves apart. Since this was not the case, it seemed that pumping air out of the device left a vacuum inside of it. The pressure exerted by the air outside the two halves of the device was not balanced by the pressure of any matter within, making the hemispheres extremely difficult to pull apart.

It is because of experiments like those of von Guericke that Newton seems to have been inspired to base his physics on the idea of absolute space. Effectively, he built his theory upon a new assumption, that space is not an attribute of matter, but rather an independent void or a receptacle that can, but need not, hold any kind of matter. It is furthermore the reason that vacuism was an important element of the Newtonian worldview.

There are other important metaphysical elements that separate the Cartesian worldview from the Newtonian. Recall that Cartesians accepted the principle of action by contact – under which material particles can only interact by colliding with one another. Also recall that Descartes’ first law of motion – that every material body maintains its state of motion or rest unless a collision with another body changes that state – follows from action by contact. Newton, as well, had a first law of motion. Phrased more along the lines of Descartes’ first law, Newton’s first law of motion says that every material body maintains its state of motion or rest unless it is caused to change its state by some force. The key difference between the two first laws is that the Newtonian worldview allows changes to result from the influence of forces, while in the Cartesian worldview changes can only result from the actual contact of material bodies. The clearest example of a force is probably the force of gravity. It is entirely possible in the Newtonian worldview for two objects, like the Moon and the Earth, to be gravitationally attracted to one another without any intermediary objects, like the bits of matter of a vortex. Essentially, this means that in place of action by contact, Newtonians accepted the principle of action at a distance – that material objects can influence each other at a distance through empty space.

image

There is an important clarification to make about action at a distance. Accepting the possibility of action at a distance does not necessitate that all objects interact at a distance. For instance, were a Newtonian to observe a football (or soccer, if that’s what you think the sport is called) player dribbling a ball down a field with their feet, they would not assume that there is some kind of contactless force keeping the ball with the moving player. Rather, they would explain that the ball is being moved by the player’s feet contacting the ball and pushing it down the field. Newtonians continued to accept that many objects interact by contact. But they also accepted the idea that objects can influence one another across empty space through forces.

In addition to action by contact, we also know that Cartesians accepted the principle of mechanicism – that all material objects are extended substances composed of bits of interacting matter. So, for Cartesians, all seeming instances of action at a distance, like the revolution of the Moon around the Earth or of the Earth around the Sun, must actually be the result of colliding particles – the matter of the terrestrial and solar vortices, in these cases. Effectively, the absence of forces in the Cartesian worldview, and the fact that the source of all motion is external to any particular piece of matter, i.e. caused by its collision with another piece of matter, mean that all matter is inert. In other words, material things do not have any capacity to influence other things without actually touching them.

Newtonians conceived of matter in a new way. While Cartesian mechanical matter was inert, Newtonian matter was active and dynamic. It not only occupied space and maintained its state of rest or motion unless compelled to change it by an external force, but it also had an active capacity to exert force on other bodies from a distance. Newton’s law of gravitation tells us that any two objects are gravitationally attracted to one another. Effectively, Newtonians replaced the Cartesian conception of mechanicism with dynamism, the idea of matter as an extended substance interacting through forces. Thus, they saw all matter as not only occupying some space, i.e. as being extended, but also as having some active capacity.

image

These new ideas were actually troubling to Newton himself. Newton disliked that his theory of universal gravitation suggested that objects have a mysterious, almost magical, ability to interact from a distance. Action at a distance and dynamic matter were, for Newton, occult ideas. Mathematically, the law of gravity worked, and it allowed one to predict and explain a wide range of terrestrial and celestial phenomena. But because Newton was educated in a mechanistic tradition, he initially believed that proper explanations should be mechanistic, i.e. should involve physical contact. In fact, he searched for, but ultimately failed to provide, a mechanical explanation for gravity. In other words, it seems as though Newton himself likely accepted mechanicism. However, because of his failure to provide such a mechanical explanation, and the many successes of his theory, the Newtonian community eventually accepted the notion of the force of gravity as acting at a distance. Furthermore, the strong implication that action at a distance remained the most reasonable explanation for the motion of the planets and the stars led the Newtonian community to accept that matter must be dynamic.

We bring up the conflicting perspectives of Newton and the Newtonians to emphasize the importance of distinguishing between individual and communal beliefs when studying the history of science. On the one hand, we can write fascinating intellectual biographies of great scientists such as Newton. But when we write such histories – histories of individuals – we risk misrepresenting the accepted worldviews of the communities in which those great individuals worked. In such a case, we trade a proper reconstruction of the belief system of the time for a focus on the discoveries and inventions of individuals – what we would call newly pursued theories. If we write our histories from the perspective of the community, we can understand these individuals in their proper context. We can better realize not only how novel their ideas were, but also what the response of the community was and at what point, if ever, their proposed theories became accepted. In sum, it is only by distinguishing the history of the individual from the history of the community that we can realize that Newton’s personal views on matter and motion did not necessarily align with those of the Newtonians; he most likely accepted mechanicism, while Newtonians clearly accepted dynamism.

The dynamic conception of matter underlies more than just the cosmology and physics of the Newtonian mosaic. We see dynamism implicit in the theories of other fields as well. For instance, Newtonians rejected the Cartesian idea of corkscrew particles to explain magnetism, accepting the idea of a magnetic force in its place. In chemistry, Newtonians accepted that the chemicals they believed to exist – like mercury, lead, silver, and gold – combine and react more or less effectively because of something they called a chemical affinity. Chemical affinity was interpreted as an active capacity inherent in different chemical substances which caused some to combine with others, in a way that had clear parallels to the Newtonian conception of gravity. For example, following numerous experiments and observations, they concluded that mercury would combine better with gold than with silver, and they explained this in terms of mercury’s strong chemical affinity to gold. Even in physiology, Newtonians posited and accepted the existence of a vital force that brought organisms to life.

In the Cartesian worldview, the accepted physiological theories were mechanistic . That is, Cartesians saw human bodies – indeed, all living organisms – as complex machines of interconnected and moving parts. Though they were uncertain how the mind commands the body to operate, they were confident that all biological processes acted mechanistically through actual contact, similar to a clock with its various gears and cogs.

The Newtonian response to mechanistic physiology was known as vitalism. In the first few decades of the eighteenth century, physicians found themselves asking what properties are essential to life. Mechanicists would probably answer that living organisms were carefully organized combinations of bits of extended matter, much like the carefully organized gears, wheels, and pendulum of a clock. But by the mid-to-late eighteenth century, the medical community began observing phenomena that were anomalous for mechanistic physiology. One observation concerned an animal’s ability to preserve its body heat even when the circulation of its blood was stopped. Mechanicists posited that heat is generated by circulating blood, and they could not provide a satisfactory explanation for why heat continued to be generated in the absence of this mechanical cause. Another observation concerned the temperature of a dog’s nose. It was noted that a dog’s nose is filled with blood and should thus be warm like the rest of its body, and yet most often a dog’s nose is as cold as the air around the dog. Why did the temperature of the rest of the dog’s body not cool to the temperature of the surrounding air? It seemed that mechanicists could not produce a satisfactory answer to such questions. Vitalists, on the other hand, posited that there was some additional force inherent in living things which, in this case, regulates an animal’s body heat. By the late 1700s, vitalist physiology and the idea of a vital force had replaced mechanistic physiology as the accepted physiological theory of the time. In essence, vitalism suggested that living matter is organized by an inherent vital force.

Newtonians saw vital forces as the living principles responsible for maintaining health and curing illness. Physicians generally characterized an organism’s vital force by two properties, sensibility and contractility. Sensibility involved what the different parts of your body could feel. It included both voluntary properties that allowed you to use your senses to interact with your environment, and involuntary properties, like feeling hunger or maintaining a sense of balance. Contractility involved how the different parts of your body moved. It was sometimes an involuntary property that ensured the beating of your heart or the digestion of food, and sometimes a voluntary property involved with things like locomotion. In essence, vitalists accepted that organisms would lack both sensibility and contractility in the absence of a vital force.

Accordingly, vitalists suggested that illness and disease derived from damage to one of these vital properties. For instance, vitalists believed proper digestion was a contractile property directed by a vital force. Were a person to catch the flu, vitalists suggested that the vital force directing proper digestion was somehow interrupted. In effect, that person’s digestive system would not function properly, as demonstrated by flu-like symptoms such as vomiting. Alternatively, consider a sensible property guided by a vital force, like hunger or thirst. Falling ill manifests itself not only in changes in existing sensible properties, like a loss of appetite, but sometimes also in the addition of new, unwanted sensations, like itching or tingling.

The treatment of illness, for vitalists, was about administering medicine that activates the vital forces of the body in such a way as to accelerate healing through the proper functioning of these vital properties. Treatment did not always involve the straightforward activation of one of these properties. For instance, a physician would avoid giving medicine that heightens contractility to a person suffering from convulsions – a symptom, the involuntary contraction of the muscles, associated with an unwanted increase in the contractile property.

The vitalist conceptions of illness and treatment were in sharp contrast with those of both Aristotelians and Cartesians. As explained in earlier chapters, for Aristotelians disease was a result of an imbalance of bodily fluids or humors. Consequently, treatment involved the rebalancing of these humors. For Cartesians, the ideas of disease and curing remained largely the same as those of Aristotelians, despite the fact that in Cartesian physiology, humors received a purely mechanistic interpretation. Conversely, vitalists at the time of the Newtonian worldview didn’t believe curing was about the mechanical balancing of humors; it was about restoring the vital forces that help maintain a properly functioning body.

Similarly to the force of gravity, the vital force was seen as a property of a material body, as something that doesn’t exist independently of the body. Importantly, Newtonians did not see the vital force as a separate immaterial substance. This is true of the dynamic conception of matter in general: the behaviour of material objects is guided by forces that are inherent in matter itself.

Although Cartesians and Newtonians had different conceptions of matter – mechanical and dynamic respectively – they agreed that matter and mind can exist independently of each other. Both Cartesians and Newtonians accepted dualism, the idea that there are two independent substances: matter and mind.

In addition to purely material and purely spiritual entities, both parties would agree that there are also entities that are both material and spiritual. Specifically, Cartesians and Newtonians would agree that human beings are the only citizens of two worlds. However, some alternatives to this view were pursued at the time. For instance, some philosophers believed that animals and plants were composed not only of matter, but also of mind. They believed this because they saw all living organisms as having inherent organizing principles – minds, souls, or spirits – which are essentially non-material. Others denied that humans are composed of any mind, or spiritual substance, at all; they saw humans, along with animals, plants, and rocks, as entirely material, while only angels and God were composed of a mental, spiritual substance. However, these alternative views, albeit pursued, were never accepted. The position implicit in the Newtonian worldview was that only humans are composed of both mind and matter.

This dualistic position was very much in accord with another important puzzle piece of the Newtonian mosaic – theology. Different Newtonian communities accepted different theologies. In Europe alone, there were a number of different mosaics: Catholic Newtonian, Orthodox Newtonian, Lutheran Newtonian, Anglican Newtonian, etc. Yet the theologies accepted in all of these mosaics assumed that the spiritual world can exist independently of the material world and that matter and mind are separate substances. So, it is not surprising that dualism was accepted in all of these mosaics.

Theology, or the study of God and the relations between God, humankind, and the universe, held an important place in the Newtonian worldview. Theologians and natural philosophers alike were concerned with revealing the attributes of God as well as finding proofs of his existence. Nowadays, these theological questions strike us as non-scientific. But in the eighteenth and nineteenth centuries, they remained legitimate topics of scientific study.

Just like Aristotelians and Cartesians before them, Newtonians accepted that there were two distinct branches of theology: revealed theology and natural theology. Revealed theology was concerned with inferring God’s existence and attributes – what he can and cannot do – exclusively from his acts of self-revelation. Most commonly for Newtonians, revealed theology meant that God revealed knowledge about himself and the natural world through a holy text like the Bible. But revelation also occurred in the form of a supernatural entity like a saint, an angel, or God himself speaking to a mortal person, or through a genuine miracle like the curing of some untreatable illness.

It wasn’t uncommon for a natural philosopher of the time to practice revealed theology. Newton himself interpreted many passages from the Bible as evidence of various prophecies. For instance, he believed that a passage from the Book of Revelation indicated that the reign of the Catholic Church would last only 1260 years. But he was never certain in which year the reign of the Catholic Church had actually begun, and so he came up with multiple dates to mark the fulfilment of the prophecy of 1260 years. Nevertheless, Newton’s belief in this prophecy stemmed from his reading of the Bible, i.e. from his practice of and belief in revealed theology.

In contrast, natural theology was the branch of theology concerned with inferring God’s existence and attributes by means of reason unaided by God’s acts of self-revelation. Philosophers were practicing natural theology when they made arguments about God with reason and logic. Descartes’ ontological argument for the existence of God from chapter 8 is an example of a theory in natural theology. Others would practice natural theology by studying God through the natural world around them. In any case, what characterizes natural theology is that conclusions regarding God, his attributes, and works were drawn without any reference to a holy text.

Let’s consider one formulation of the famous argument from design for God’s existence. The argument goes like this. On the one hand, the universe seems like a great big machine, a dynamic system of interacting parts. It is, in a sense, analogous to human artefacts; it is akin to a very complex clock, where all the bits and pieces work in perfect harmony. On the other hand, we know that artefacts, including machines, have a designer. Therefore, so the argument goes, the universe also has a designer, i.e. God:

image

In essence, the argument from design assumes an analogy between the universe as a whole and a human artefact, such as a steam engine, a mercury thermometer, or a marine chronometer. Since such artefacts are the product of design by a higher entity (i.e. humans), then perhaps the universe itself, with the planets and stars moving about in the heavens, is the product of design by some even higher entity – God. The argument fits under the category of natural theology because it is based on a certain understanding of nature, i.e. the idea that the universe is a dynamic system of interacting parts operating through collisions and forces.

The argument from design was far from perfect. The eighteenth-century philosopher David Hume remained unconvinced by it and pointed out some of its major problems. First, Hume rejected the premise that there is an analogy between the universe as a machine and artefacts, making the entire argument unsound. He noted that the reason we claim artefacts have a designer is that we have experienced humanity designing artefacts from initial concept to final product. But when it comes to the universe as a whole, Hume reasoned, we have never experienced such a conceptual stage. That is, no one has ever seen an all-powerful being create a universe; we’re merely living in an already operational “machine”. So, while artefacts clearly have a designer, this doesn’t imply that the universe has a designer.

Second, Hume pointed out that this argument for the existence of God says nothing about what God is like. Even if we were to accept the argument, the only conclusion that would logically follow from it is that the universe has some designer. Importantly, it wouldn’t imply that this designer is necessarily the omnipotent, omniscient, and omnibenevolent God of the Christian religion. There is nothing in the argument to preclude an imperfect God from designing the universe, or even multiple gods from designing it. Yet to accept an imperfect God, or the existence of multiple gods, would be incompatible with the then-accepted Christian beliefs concerning an all-perfect God.

Regardless of Hume’s criticism, Newtonians accepted some form of the argument from design for the existence of God. More generally, Newtonians accepted both revealed theology, or the study of God through his acts of self-revelation, and natural theology, or the study of God by examining the universe he created.

While theology was an essential part of the Newtonian mosaic, astrology, the study of celestial influences upon terrestrial events, suffered a different fate. Newtonians understood that the stars and the planets exerted some kind of influence upon the Earth. However, in the Newtonian worldview, the astrological topics that had been accepted in the Aristotelian worldview were either gradually encompassed by other fields of natural science or rejected altogether.

Traditionally, astrology was divided into two branches – judicial astrology and natural astrology. Judicial astrology was the branch of astrology concerned with celestial influences upon human affairs. For instance, consulting the heavens to advise a monarch on when to go to war, or when to conceive an heir, fell within the domain of judicial astrology. Judicial astrology could also be involved in something as innocent as reading a horoscope to decide when to ask for a crush’s hand in marriage. In all of these examples, it was suggested that the influence of the heavens extended beyond the material world to the human mind.

The other branch of astrology was natural astrology, which was concerned with celestial influences upon natural things. For instance, positing a link between the rising and setting of the Moon and the ebb and flow of the tide fell under natural astrology. Similarly, any study of light and heat coming from the Sun and affecting the Earth was considered part of natural astrology. The measuring of time and the forecasting of weather by studying planetary positions would also pertain to natural astrology. Medical prognostications using natal horoscopes would equally belong to this field.

A question arises: why weren’t medical prognostications using natal horoscopes a part of judicial astrology? Didn’t they concern heavenly influences upon humans? To answer this question, we need to recall the distinction between the mind and the body. You would be right to recall that physicians required knowledge of the heavens – specifically of a patient’s natal horoscope – in order to properly rebalance a patient’s humors. So, it might seem as though physicians were studying celestial influences over human affairs and therefore practicing judicial astrology. But, more precisely, physicians would simply monitor and make changes to a patient’s body; they would not be concerned with any celestial influences over a patient’s mind. Medical prognostication did not fall under judicial astrology because physicians accepted that the celestial realm can and does influence the material world by bringing humors in and out of balance. They did not accept, as judicial astrologers claimed, that the heavens could determine what would normally be determined by a person’s mind or by an even more powerful agent, God.

Furthermore, the idea of judicial astrology was in conflict with the Christian belief in free will. According to one of the fundamental Christian dogmas, humans have an ability to act spontaneously and make decisions that are not predetermined by prior events. This goes against the key idea of judicial astrology: if celestial events do, in fact, determine the actions of humans, then in what sense can humans be said to possess free will? If a state of the human mind is determined by the positions of the stars and planets, then the very notion of human free will becomes questionable. It is not surprising, therefore, that the practice of judicial astrology was considered heresy and was banned. As such, judicial astrology was never an accepted part of the Aristotelian-Medieval, Cartesian, or Newtonian mosaics.

Natural astrology, on the other hand, was an accepted element of the Aristotelian-Medieval worldview and many of its topics even persisted through the Cartesian and Newtonian worldviews. While in the Aristotelian-Medieval mosaic natural astrology was a separate element, it ceased to be so in the Cartesian and Newtonian mosaics; only some of its topics survived and were subsumed under other fields, such as astronomy, geology, meteorology, or physics. There were other topics of natural astrology that were simply rejected. Consider the following three questions:

How are tides influenced by celestial objects such as the Sun and the Moon?

How is the weather on Earth influenced by celestial phenomena?

How is human health influenced by the arrangement of celestial objects?

In the Aristotelian-Medieval worldview, all three of these questions were accepted as legitimate topics of study; they all pertained to the domain of natural astrology. In particular, Aristotelians accepted that there was a certain link between the Moon and the tides. They believed that the positions of planets influenced weather on Earth. For instance, a major flood might be explained by a conjunction of planets in the constellation of Aquarius. Finally, Aristotelian physicians believed that the positions of the planets affect the balance of humors in the body, and thus human health.

Of these three topics, only two survived in the Cartesian and Newtonian mosaics. Thus, both Cartesians and Newtonians accepted that the position of the Moon plays an important role in the ebb and flow of the tides. While they would not agree as to what actual mechanism causes this ebb and flow, they would all accept that a “celestial” influence upon the tides exists. Similarly, they would both agree that the weather on Earth might be affected by the Sun. However, the question of the celestial influence upon the human body was rejected in the Cartesian and Newtonian worldviews.

In short, while some traditional topics of natural astrology survived in the Cartesian and Newtonian worldviews, they didn’t do so under the label of natural astrology. Instead, they were absorbed by other fields of science.

Newtonian Method

How did Newtonians evaluate the theories that would become a part of their worldview? If we were to ask Newtonians themselves, especially in the eighteenth century, their explicit answer would be in accord with the empiricist-inductivist methodology of Locke and Newton. As mentioned in chapter 3, the empiricist-inductivist methodology prescribed that a theory is acceptable if it merely inductively generalizes from empirical results without postulating any hypothetical entities. However, the actual expectations – the method, not the methodology – of the Newtonian community were different. When we study the actual eighteenth-century transitions in accepted theories, it becomes apparent that the scientists of the time were willing to accept theories that postulated unobservable entities. Recall, for instance, the fluid theory of electricity, which postulated the existence of an electric fluid; the theory of preformation, which postulated invisibly small homunculi in men’s semen; or Newton’s theory, which postulated the existence of absolute space, absolute time, and the force of gravity. In other words, the actual expectations of the community of the time, i.e. their methods, were different from their explicitly proclaimed methodological rules.

So how did Newtonians actually evaluate their theories? In fact, the same way as Cartesians: all theories in the Newtonian mosaic had to satisfy the requirements of the hypothetico-deductive (HD) method in order to be accepted. Indeed, the employment of this method also followed the third law of scientific change: just as in the case of the Cartesian mosaic, the HD method was a logical consequence of some of the key metaphysical principles underlying the Newtonian worldview. These metaphysical principles were the same in both mosaics, but they were arrived at differently in the Cartesian and Newtonian mosaics.

In previous chapters, we explained how the HD method became employed in the Cartesian mosaic because it followed from their belief that the principal attribute of matter is extension. Newtonians, on the other hand, had a slightly different understanding of matter. They believed in a dynamic conception of matter: that matter is an extended substance interacting through forces. We can draw two conclusions from the Newtonian belief in dynamic matter. First, the secondary qualities of matter, like taste, smell, and colour, result from the combination and dynamic interaction of material parts. Since these secondary qualities were taken as the products of a more fundamental inner mechanism, albeit one that allows for the influence of forces, Newtonians accepted the principle of complexity. Second, any phenomenon can be produced by an infinite number of different combinations of particles interacting through collisions and forces. Accordingly, it is possible for many different, equally precise explanations to be given for any phenomenon after the fact. Thus, Newtonians also accepted the principle that post-hoc explanations should be distrusted and that novel, otherwise unexpected predictions should be valued. If these conclusions seem similar to the conclusions Cartesians drew from their belief that matter is extension, it’s because they are. The fact that Newtonians also accepted forces doesn’t seem to have influenced their employment of the HD method. As per the third law of scientific change, the HD method became employed because it is a deductive consequence of the Newtonians’ belief in complexity and their mistrust of post-hoc explanations.

Let’s summarize the many metaphysical conceptions we’ve uncovered in this chapter. While Aristotelians believed in the heterogeneity of the laws of nature and that the universe was finite, Cartesians and Newtonians alike believed the laws of nature to be homogeneous and the universe to be infinite. Newtonians also shared the Cartesian belief in dualism, that there are two substances in the world: mind and matter. At the same time, the metaphysical assumptions of the Newtonian worldview contrasted in many ways with those of both the Aristotelian-Medieval and the Cartesian worldviews. Newtonians replaced the idea of plenism with that of vacuism – that there can be empty space; they expanded their understanding of change and motion from action by contact to allow for the possibility of action at a distance; and they modified their conception of matter from mechanical and inert to active and dynamic.

image

One point that we want to emphasize about all of these metaphysical elements insofar as the Newtonian worldview is concerned is that they weren’t always explicitly discussed or taught in the eighteenth and nineteenth centuries. Rather, some of these assumptions are implicit elements of the worldview; they are ideas that, had we had a conversation with a Newtonian, we would expect them to agree with, for they all follow from their accepted theories. For instance, the reason Newtonians would accept the conception of dynamism is that their explicit acceptance of the law of gravity implies that matter can interact through forces.

Before concluding this chapter, there’s another important point to re-emphasize about these historical chapters in general: all mosaics change in a piecemeal fashion. What this means is that the Newtonian worldview of, say, 1760, looked vastly different from the Newtonian worldview of 1900. While we’ve tried to describe the theories accepted by Newtonians and the metaphysical principles that characterized their worldview around the second half of the eighteenth century, this may create a false impression that the Newtonian mosaic did not change much after that. It actually did.

One notable change within the Newtonian worldview was a shift in how matter was conceived: a shift from dynamism to a belief in particles and waves. Up until around the 1820s, Newtonians conceived of matter merely in terms of particles. These particles could interact via collisions and forces – hence the dynamic conception of matter – but they were always understood in a literally corpuscular sense. After the 1820s, however, Newtonians observed matter behaving in a non-corpuscular way; some matter had been observed acting more like a wave in a fluid medium than like a particle. Recall the discussion of Fresnel’s wave theory of light from chapter 3. The idea of dynamic matter consisting exclusively of particles interacting through collisions and forces came to be replaced by the idea of dynamic matter consisting of both particles and waves interacting through collisions and forces. It is possible that some Newtonians sought to replace the idea of a force with that of a wave, so that all forces pushing and pulling dynamic matter about could be interpreted in terms of waves in a subtle fluid medium sometimes called the luminiferous ether. But not all forces at the time of the Newtonian worldview were explained in terms of waves, so the most that we can say is that Newtonians accepted that both particles and waves interacted through collisions and forces.

Did the Newtonian conception of matter in terms of particles and waves persist in the twentieth century? That will be a topic of our next chapter, on the Contemporary worldview.

Introduction to History and Philosophy of Science Copyright © by Barseghyan, Hakob; Overgaard, Nicholas; and Rupik, Gregory is licensed under a Creative Commons Attribution 4.0 International License , except where otherwise noted.



Understanding Science

  • Testing ideas with evidence is at the heart of the process of science.
  • Scientific testing involves figuring out what we would expect to observe if an idea were correct and comparing that expectation to what we actually observe.

Misconception:  Science proves ideas.

Misconception:  Science can only disprove ideas.

Correction: Science neither proves nor disproves. It accepts or rejects ideas based on supporting and refuting evidence, but may revise those conclusions if warranted by new evidence or perspectives.

Testing scientific ideas

Testing ideas about childbed fever.

As a simple example of how scientific testing works, consider the case of Ignaz Semmelweis, who worked as a doctor on a maternity ward in the 1800s. In his ward, an unusually high percentage of new mothers died of what was then called childbed fever. Semmelweis considered many possible explanations for this high death rate. Two of the many ideas that he considered were (1) that the fever was caused by mothers giving birth lying on their backs (as opposed to on their sides) and (2) that the fever was caused by doctors’ unclean hands (the doctors often performed autopsies immediately before examining women in labor). He tested these ideas by considering what expectations each idea generated. If it were true that childbed fever were caused by giving birth on one’s back, then changing procedures so that women labored on their sides should lead to lower rates of childbed fever. Semmelweis tried changing the position of labor, but the incidence of fever did not decrease; the actual observations did not match the expected results. If, however, childbed fever were caused by doctors’ unclean hands, having doctors wash their hands thoroughly with a strong disinfecting agent before attending to women in labor should lead to lower rates of childbed fever. When Semmelweis tried this, rates of fever plummeted; the actual observations matched the expected results, supporting the second explanation.
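The expectation-vs-observation logic described here can be captured in a few lines of code. The sketch below (in Python) encodes the comparison; the fever rates are hypothetical illustrative values, not Semmelweis’s actual figures:

```python
# A minimal sketch of the expectation-vs-observation logic described above.
# The fever rates are hypothetical illustrative values, not historical data.

def test_idea(name, rate_before, rate_after):
    """If an idea is correct, we expect the fever rate to drop after the
    corresponding intervention; compare that expectation to observation."""
    if rate_after < rate_before:
        verdict = "observation matches expectation -> idea supported"
    else:
        verdict = "observation contradicts expectation -> idea undermined"
    print(f"{name}: {rate_before:.0%} -> {rate_after:.0%} ({verdict})")

test_idea("labor on sides instead of backs", 0.10, 0.10)
test_idea("doctors wash hands before exams", 0.10, 0.02)
```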

Testing in the tropics

Let’s take a look at another, very different, example of scientific testing: investigating the origins of coral atolls in the tropics. Consider the atoll Eniwetok (Anewetak) in the Marshall Islands — an oceanic ring of exposed coral surrounding a central lagoon. From the 1800s up until today, scientists have been trying to learn what supports atoll structures beneath the water’s surface and exactly how atolls form. Coral only grows near the surface of the ocean where light penetrates, so Eniwetok could have formed in several ways:

Hypothesis 1: The coral that makes up Eniwetok might have grown up around a volcanic island as the island slowly subsided beneath the surface. So long as the island sank slowly enough, the coral could keep growing upward toward the light, eventually forming a thick tower of coral atop the submerged volcano.

Hypothesis 2: The coral that makes up Eniwetok might have grown in a ring atop an underwater mountain already near the surface. The key to this hypothesis is the idea that underwater mountains don’t sink; instead the remains of dead sea animals (shells, etc.) accumulate on underwater mountains, potentially assisted by tectonic uplifting. Eventually, the top of the mountain/debris pile would reach the depth at which coral grow, and the atoll would form.

Which is a better explanation for Eniwetok? Did the atoll grow atop a sinking volcano, forming an underwater coral tower, or was the mountain instead built up until it neared the surface where coral were eventually able to grow? Which of these explanations is best supported by the evidence? We can’t perform an experiment to find out. Instead, we must figure out what expectations each hypothesis generates, and then collect data from the world to see whether our observations are a better match with one of the two ideas.

If Eniwetok grew atop an underwater mountain, then we would expect the atoll to be made up of a relatively thin layer of coral on top of limestone or basalt. But if it grew upwards around a subsiding island, then we would expect the atoll to be made up of many hundreds of feet of coral on top of volcanic rock. When geologists drilled into Eniwetok in 1951 as part of a survey preparing for nuclear weapons tests, the drill bored through more than 4000 feet (1219 meters) of coral before hitting volcanic basalt! The actual observation contradicted the underwater mountain explanation and matched the subsiding island explanation, supporting that idea. Of course, many other lines of evidence also shed light on the origins of coral atolls, but the surprising depth of coral on Eniwetok was particularly convincing to many geologists.

Scientists test hypotheses and theories. Both are scientific explanations for what we observe in the natural world, but theories deal with a much wider range of phenomena than do hypotheses.

June 18, 2020

The Truth about Scientific Models

They don’t necessarily try to predict what will happen—but they can help us understand possible futures

By Sabine Hossenfelder

As COVID-19 claimed victims at the start of the pandemic, scientific models made headlines. We needed such models to make informed decisions. But how can we tell whether they can be trusted? The philosophy of science, it seems, has become a matter of life or death. Whether we are talking about traffic noise from a new highway, climate change or a pandemic, scientists rely on models, which are simplified, mathematical representations of the real world. Models are approximations and omit details, but a good model will robustly output the quantities it was developed for.

Models do not always predict the future. This does not make them unscientific, but it makes them a target for science skeptics. I cannot even blame the skeptics, because scientists frequently praise correct predictions to prove a model’s worth. It isn’t originally their idea. Many eminent philosophers of science, including Karl Popper and Imre Lakatos, opined that correct predictions are a way of telling science from pseudoscience.

But correct predictions alone don’t make for a good scientific model. And the opposite is also true: a model can be good science without ever making predictions. Indeed, the models that matter most for political discourse are those that do not make predictions. Instead they produce “projections” or “scenarios” that, in contrast to predictions, are forecasts that depend on the course of action we will take. That is, after all, the reason we consult models: so we can decide what to do. But because we cannot predict political decisions themselves, the actual future trend is necessarily unpredictable.

This has become one of the major difficulties in explaining pandemic models. Dire predictions in March 2020 for COVID’s global death toll did not come true. But they were projections for the case in which we took no measures; they were not predictions.

Political decisions are not the only reason a model may make merely contingent projections rather than definite predictions. Trends of global warming, for example, depend on the frequency and severity of volcanic eruptions, which themselves cannot currently be predicted. They also depend on technological progress, which itself depends on economic prosperity, which in turn depends on, among many other things, whether society is in the grasp of a pandemic. Sometimes asking for predictions is really asking for too much.
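
To see why such projections are conditional rather than predictive, consider a deliberately toy sketch: the same growth model run under two assumed courses of action. Every rate and count below is made up for illustration.

```python
# A toy "projection, not prediction": one model, two assumed courses
# of action. All numbers are invented for illustration.
import math

def projected_cases(day: int, growth_rate: float, initial: float = 100.0) -> float:
    # Bare-bones exponential growth; real epidemic models are far richer.
    return initial * math.exp(growth_rate * day)

scenarios = [
    ("no measures (assumed r = 0.20/day)", 0.20),
    ("strong measures (assumed r = 0.05/day)", 0.05),
]
for label, rate in scenarios:
    print(f"{label}: day 30 -> {projected_cases(30, rate):,.0f} cases")
# Neither output is a forecast of what *will* happen; each is
# conditional on which course of action is actually taken.
```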

Predictions are also not enough to make for good science. Recall how each time a natural catastrophe happens, it turns out to have been “predicted” in a movie or a book. Given that most natural catastrophes are predictable to the extent that “eventually something like this will happen,” this is hardly surprising. But these are not predictions; they are scientifically meaningless prophecies because they are not based on a model whose methodology can be reproduced, and no one has tested whether the prophecies were better than random guesses.

Thus, predictions are neither necessary for a good scientific model nor sufficient to judge one. But why, then, were the philosophers so adamant that good science needs to make predictions? It’s not that they were wrong. It’s just that they were trying to address a different problem than what we are facing now.

Scientists tell good models from bad ones by using statistical methods that are hard to communicate without equations. These methods depend on the type of model, the amount of data and the field of research. In short, it’s difficult. The rough answer is that a good scientific model accurately explains a lot of data with few assumptions. The fewer the assumptions and the better the fit to data, the better the model.
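
One way to make “explains a lot of data with few assumptions” quantitative is an information criterion such as AIC, which rewards goodness of fit but penalizes every extra parameter. The sketch below is illustrative only: it uses synthetic, truly linear data and compares a lean polynomial fit against an assumption-heavy one.

```python
# A minimal sketch of "fit vs. number of assumptions" using AIC;
# the data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, x.size)  # truly linear data

def aic(y, y_hat, k):
    # Gaussian-error AIC: n*log(RSS/n) + 2k, where k counts parameters.
    n = y.size
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + 2 * k

for degree in (1, 9):  # a lean model vs. an assumption-heavy one
    coeffs = np.polyfit(x, y, degree)
    y_hat = np.polyval(coeffs, x)
    print(f"degree {degree}: AIC = {aic(y, y_hat, degree + 1):.1f}")
# The degree-9 polynomial fits the noise slightly better but pays a
# penalty for its extra parameters, so the simpler model scores lower
# (better) AIC.
```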

But the philosophers were not concerned with quantifying explanatory power. They were looking for a way to tell good science from bad science without having to dissect scientific details. And although correct predictions may not tell you whether a model is good science, they increase trust in the scientists’ conclusions because predictions prevent scientists from adding assumptions after they have seen the data. Thus, asking for predictions is a good rule of thumb, but it is a crude and error-prone criterion. And fundamentally it makes no sense. A model either accurately describes nature or doesn’t. At which moment in time a scientist made a calculation is irrelevant for the model’s relation to nature.

A confusion closely related to the idea that good science must make predictions is the belief that scientists should not update a model when new data come in. This, too, can be traced back to Popper & Co., who thought it was bad scientific practice. But of course, a good scientist updates their model when they get new data! This is the essence of the scientific method: when you learn something new, revise. In practice, this usually means recalibrating model parameters with new data, which is why we saw regular updates of COVID case projections. What a scientist is not supposed to do is add so many assumptions that their model can fit any data. That would be a model with no explanatory power.
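
As a hedged illustration of what “recalibrating model parameters” means in practice, the sketch below refits the same two-parameter growth model each time more synthetic observations arrive. The model structure (the assumptions) never changes; only the parameter estimates do.

```python
# A sketch of recalibrating parameters as new data come in. The model
# form stays fixed (exponential growth); all data here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(20.0)
true_a, true_r = 50.0, 0.15
y = true_a * np.exp(true_r * t) * rng.lognormal(0.0, 0.1, t.size)

for n in (8, 14, 20):  # refit after 8, 14, then all 20 days of data
    # log(y) = log(a) + r*t, so a straight-line fit recovers (a, r).
    r_hat, log_a_hat = np.polyfit(t[:n], np.log(y[:n]), 1)
    print(f"first {n} days: a = {np.exp(log_a_hat):.1f}, r = {r_hat:.3f}")
# Updating estimates like this is ordinary practice, not cheating;
# adding new *assumptions* until the model fits anything would be.
```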

Understanding the role of predictions in science also matters for climate models. These models have correctly predicted many observed trends, from the increase of surface temperature, to stratospheric cooling, to sea ice melting. This fact is often used by scientists against climate change deniers. But the deniers then come back with some papers that made wrong predictions. In response, the scientists point out the wrong predictions were few and far between. The deniers counter there may have been all kinds of reasons for the skewed number of papers that have nothing to do with scientific merit. Now we are counting heads and quibbling about the ethics of scientific publishing rather than talking science. What went wrong? Predictions are the wrong argument.

A better answer to deniers is that climate models explain loads of data with few assumptions. The computationally simplest explanation for our observations is that the trends are caused by human carbon dioxide emission. It’s the hypothesis that has the most explanatory power.

In summary, to judge a scientific model, do not ask for predictions. Ask instead to what degree the data are explained by the model and how many assumptions were necessary for this. And most of all, do not judge a model by whether you like what it tells you.

Sabine Hossenfelder is a physicist and research fellow at the Frankfurt Institute for Advanced Studies in Germany. She currently works on dark matter and the foundations of quantum mechanics.

Understanding Hypotheses and Predictions

Hypotheses and predictions are different components of the scientific method. The scientific method is a systematic process that helps minimize bias in research and begins by developing good research questions.

Research Questions

Descriptive research questions are based on observations made in previous research or in passing. This type of research question often quantifies these observations. For example, while out bird watching, you notice that a certain species of sparrow made all its nests with the same material: grasses. A descriptive research question would be “On average, how much grass is used to build sparrow nests?”

Descriptive research questions lead to causal questions. This type of research question seeks to understand why we observe certain trends or patterns. If we return to our observation about sparrow nests, a causal question would be “Why are the nests of sparrows made with grasses rather than twigs?”

In simple terms, a hypothesis is the answer to your causal question. A hypothesis should be based on a strong rationale that is usually supported by background research. From the question about sparrow nests, you might hypothesize, “Sparrows use grasses in their nests rather than twigs because grasses are the more abundant material in their habitat.” This abundance hypothesis might be supported by your prior knowledge about the availability of nest building materials (i.e. grasses are more abundant than twigs).

On the other hand, a prediction is the outcome you would observe if your hypothesis were correct. Predictions are often written in the form of “if, and, then” statements, as in, “if my hypothesis is true, and I were to do this test, then this is what I will observe.” Following our sparrow example, you could predict that, “If sparrows use grass because it is more abundant, and I compare areas that have more twigs than grasses available, then, in those areas, nests should be made out of twigs.” A more refined prediction might alter the wording so as not to repeat the hypothesis verbatim: “If sparrows choose nesting materials based on their abundance, then when twigs are more abundant, sparrows will use those in their nests.”

As you can see, the terms hypothesis and prediction are different and distinct even though, sometimes, they are incorrectly used interchangeably.

Let us take a look at another example:

Causal Question:  Why are there fewer asparagus beetles when asparagus is grown next to marigolds?

Hypothesis: Marigolds deter asparagus beetles.

Prediction: If marigolds deter asparagus beetles, and we grow asparagus next to marigolds, then we should find fewer asparagus beetles when asparagus plants are planted with marigolds.

A final note

It is exciting when the outcome of your study or experiment supports your hypothesis. However, it can be equally exciting if this does not happen. There are many reasons why you might get an unexpected result, and you need to think about why it occurred. Maybe there was a problem with your methods; on the flip side, maybe you have just discovered a new line of evidence that can be used to develop another experiment or study.

How to Write a Great Hypothesis

Hypothesis Definition, Format, Examples, and Tips

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

A hypothesis is a tentative statement about the relationship between two or more variables. It is a specific, testable prediction about what you expect to happen in a study. It is a preliminary answer to your question that helps guide the research process.

Consider a study designed to examine the relationship between sleep deprivation and test performance. The hypothesis might be: "This study is designed to assess the hypothesis that sleep-deprived people will perform worse on a test than individuals who are not sleep-deprived."

At a Glance

A hypothesis is crucial to scientific research because it offers a clear direction for what the researchers are looking to find. This allows them to design experiments to test their predictions and add to our scientific knowledge about the world. This article explores how a hypothesis is used in psychology research, how to write a good hypothesis, and the different types of hypotheses you might use.

The Hypothesis in the Scientific Method

In the scientific method , whether it involves research in psychology, biology, or some other area, a hypothesis represents what the researchers think will happen in an experiment. The scientific method involves the following steps:

  • Forming a question
  • Performing background research
  • Creating a hypothesis
  • Designing an experiment
  • Collecting data
  • Analyzing the results
  • Drawing conclusions
  • Communicating the results

The hypothesis is a prediction, but it involves more than a guess. Most of the time, the hypothesis begins with a question, which is then explored through background research. Only then do researchers begin to develop a testable hypothesis.

Unless you are creating an exploratory study, your hypothesis should always explain what you expect to happen.

In a study exploring the effects of a particular drug, the hypothesis might be that researchers expect the drug to have some type of effect on the symptoms of a specific illness. In psychology, the hypothesis might focus on how a certain aspect of the environment might influence a particular behavior.

Remember, a hypothesis does not have to be correct. While the hypothesis predicts what the researchers expect to see, the goal of the research is to determine whether this guess is right or wrong. When conducting an experiment, researchers might explore numerous factors to determine which ones might contribute to the ultimate outcome.

In many cases, researchers may find that the results of an experiment  do not  support the original hypothesis. When writing up these results, the researchers might suggest other options that should be explored in future studies.

In many cases, researchers might draw a hypothesis from a specific theory or build on previous research. For example, prior research has shown that stress can impact the immune system. So a researcher might hypothesize: "People with high-stress levels will be more likely to contract a common cold after being exposed to the virus than people who have low-stress levels."

In other instances, researchers might look at commonly held beliefs or folk wisdom. "Birds of a feather flock together" is one example of a folk adage that a psychologist might try to investigate. The researcher might pose a specific hypothesis that "People tend to select romantic partners who are similar to them in interests and educational level."

Elements of a Good Hypothesis

So how do you write a good hypothesis? When trying to come up with a hypothesis for your research or experiments, ask yourself the following questions:

  • Is your hypothesis based on your research on a topic?
  • Can your hypothesis be tested?
  • Does your hypothesis include independent and dependent variables?

Before you come up with a specific hypothesis, spend some time doing background research. Once you have completed a literature review, start thinking about potential questions you still have. Pay attention to the discussion section in the journal articles you read. Many authors will suggest questions that still need to be explored.

How to Formulate a Good Hypothesis

To form a hypothesis, you should take these steps:

  • Collect as many observations about a topic or problem as you can.
  • Evaluate these observations and look for possible causes of the problem.
  • Create a list of possible explanations that you might want to explore.
  • After you have developed some possible hypotheses, think of ways that you could confirm or disprove each hypothesis through experimentation. This is known as falsifiability.

In the scientific method, falsifiability is an important part of any valid hypothesis. In order to test a claim scientifically, it must be possible that the claim could be proven false.

Students sometimes confuse falsifiability with the idea that something is false, which is not the case. Falsifiability means that if something were false, it would be possible to demonstrate that it is false.

One of the hallmarks of pseudoscience is that it makes claims that cannot be refuted or proven false.

The Importance of Operational Definitions

A variable is a factor or element that can be changed and manipulated in ways that are observable and measurable. However, the researcher must also define how the variable will be manipulated and measured in the study.

Operational definitions are specific definitions for all relevant factors in a study. This process helps make vague or ambiguous concepts detailed and measurable.

For example, a researcher might operationally define the variable "test anxiety" as the results of a self-report measure of anxiety experienced during an exam. A "study habits" variable might be defined by the amount of studying that actually occurs, as measured by time.

These precise descriptions are important because many things can be measured in various ways. Clearly defining these variables and how they are measured helps ensure that other researchers can replicate your results.

Replicability

One of the basic principles of any type of scientific research is that the results must be replicable.

Replication means repeating an experiment in the same way to produce the same results. By clearly detailing the specifics of how the variables were measured and manipulated, other researchers can better understand the results and repeat the study if needed.

Some variables are more difficult than others to define. For example, how would you operationally define a variable such as aggression? For obvious ethical reasons, researchers cannot create a situation in which a person behaves aggressively toward others.

To measure this variable, the researcher must devise a measurement that assesses aggressive behavior without harming others. The researcher might utilize a simulated task to measure aggressiveness in this situation.

Hypothesis Checklist

  • Does your hypothesis focus on something that you can actually test?
  • Does your hypothesis include both an independent and dependent variable?
  • Can you manipulate the variables?
  • Can your hypothesis be tested without violating ethical standards?

The hypothesis you use will depend on what you are investigating and hoping to find. Some of the main types of hypotheses that you might use include:

  • Simple hypothesis: This type of hypothesis suggests there is a relationship between one independent variable and one dependent variable.
  • Complex hypothesis: This type suggests a relationship between three or more variables, such as two independent variables and one dependent variable.
  • Null hypothesis: This hypothesis suggests no relationship exists between two or more variables.
  • Alternative hypothesis: This hypothesis states the opposite of the null hypothesis.
  • Statistical hypothesis: This hypothesis uses statistical analysis to evaluate a representative population sample and then generalizes the findings to the larger group.
  • Logical hypothesis: This hypothesis assumes a relationship between variables without collecting data or evidence.

A hypothesis often follows a basic format of "If {this happens} then {this will happen}." One way to structure your hypothesis is to describe what will happen to the dependent variable if you change the independent variable.

The basic format might be: "If {these changes are made to a certain independent variable}, then we will observe {a change in a specific dependent variable}."

A few examples of simple hypotheses:

  • "Students who eat breakfast will perform better on a math exam than students who do not eat breakfast."
  • "Students who experience test anxiety before an English exam will get lower scores than students who do not experience test anxiety."​
  • "Motorists who talk on the phone while driving will be more likely to make errors on a driving course than those who do not talk on the phone."
  • "Children who receive a new reading intervention will have higher reading scores than students who do not receive the intervention."

Examples of a complex hypothesis include:

  • "People with high-sugar diets and sedentary activity levels are more likely to develop depression."
  • "Younger people who are regularly exposed to green, outdoor areas have better subjective well-being than older adults who have limited exposure to green spaces."

Examples of a null hypothesis include:

  • "There is no difference in anxiety levels between people who take St. John's wort supplements and those who do not."
  • "There is no difference in scores on a memory recall task between children and adults."
  • "There is no difference in aggression levels between children who play first-person shooter games and those who do not."

Examples of an alternative hypothesis:

  • "People who take St. John's wort supplements will have less anxiety than those who do not."
  • "Adults will perform better on a memory task than children."
  • "Children who play first-person shooter games will show higher levels of aggression than children who do not." 

Collecting Data on Your Hypothesis

Once a researcher has formed a testable hypothesis, the next step is to select a research design and start collecting data. The research method depends largely on exactly what they are studying. There are two basic types of research methods: descriptive research and experimental research.

Descriptive Research Methods

Descriptive research methods such as case studies, naturalistic observations, and surveys are often used when conducting an experiment is difficult or impossible. These methods are best used to describe different aspects of a behavior or psychological phenomenon.

Once a researcher has collected data using descriptive methods, a correlational study can examine how the variables are related. This research method might be used to investigate a hypothesis that is difficult to test experimentally.

Experimental Research Methods

Experimental methods  are used to demonstrate causal relationships between variables. In an experiment, the researcher systematically manipulates a variable of interest (known as the independent variable) and measures the effect on another variable (known as the dependent variable).

Unlike correlational studies, which can only be used to determine if there is a relationship between two variables, experimental methods can be used to determine the actual nature of the relationship—whether changes in one variable actually  cause  another to change.

The hypothesis is a critical part of any scientific exploration. It represents what researchers expect to find in a study or experiment. In situations where the hypothesis is unsupported by the research, the research still has value. Such research helps us better understand how different aspects of the natural world relate to one another. It also helps us develop new hypotheses that can then be tested in the future.

How to Write a Research Hypothesis: Good & Bad Examples

What is a research hypothesis?

A research hypothesis is an attempt at explaining a phenomenon or the relationships between phenomena/variables in the real world. Hypotheses are sometimes called “educated guesses”, but they are in fact (or let’s say they should be) based on previous observations, existing theories, scientific evidence, and logic. A research hypothesis is also not a prediction—rather, predictions are (or should be) based on clearly formulated hypotheses. For example, “We tested the hypothesis that KLF2 knockout mice would show deficiencies in heart development” is an assumption or prediction, not a hypothesis.

The research hypothesis at the basis of this prediction is “the product of the KLF2 gene is involved in the development of the cardiovascular system in mice”—and this hypothesis is probably (hopefully) based on a clear observation, such as that mice with low levels of Kruppel-like factor 2 (which KLF2 codes for) seem to have heart problems. From this hypothesis, you can derive the idea that a mouse in which this particular gene does not function cannot develop a normal cardiovascular system, and then make the prediction that we started with. 

What is the difference between a hypothesis and a prediction?

You might think that these are very subtle differences, and you will certainly come across many publications that do not contain an actual hypothesis or do not make these distinctions correctly. But considering that the formulation and testing of hypotheses is an integral part of the scientific method, it is good to be aware of the concepts underlying this approach. The two hallmarks of a scientific hypothesis are falsifiability (an evaluation standard that was introduced by the philosopher of science Karl Popper in 1934) and testability: if you cannot use experiments or data to decide whether an idea is true or false, then it is not a hypothesis (or at least a very bad one).

So, in a nutshell, you (1) look at existing evidence/theories, (2) come up with a hypothesis, (3) make a prediction that allows you to (4) design an experiment or data analysis to test it, and (5) come to a conclusion. Of course, not all studies have hypotheses (there is also exploratory or hypothesis-generating research), and you do not necessarily have to state your hypothesis as such in your paper. 

But for the sake of understanding the principles of the scientific method, let’s first take a closer look at the different types of hypotheses that research articles refer to and then give you a step-by-step guide for how to formulate a strong hypothesis for your own paper.

Types of Research Hypotheses

Hypotheses can be simple, which means they describe the relationship between one single independent variable (the one you observe variations in or plan to manipulate) and one single dependent variable (the one you expect to be affected by the variations/manipulation). If there are more variables on either side, you are dealing with a complex hypothesis. You can also distinguish hypotheses according to the kind of relationship between the variables you are interested in (e.g., causal or associative). But apart from these variations, we are usually interested in what is called the “alternative hypothesis” and, in contrast to that, the “null hypothesis”. If you think these two should be listed the other way round, then you are right, logically speaking—the alternative should surely come second. However, since this is the hypothesis we (as researchers) are usually interested in, let’s start from there.

Alternative Hypothesis

If you predict a relationship between two variables in your study, then the research hypothesis that you formulate to describe that relationship is your alternative hypothesis (usually H1 in statistical terms). The goal of your hypothesis testing is thus to demonstrate that there is sufficient evidence that supports the alternative hypothesis, rather than evidence for the possibility that there is no such relationship. The alternative hypothesis is usually the research hypothesis of a study and is based on the literature, previous observations, and widely known theories. 

Null Hypothesis

The hypothesis that describes the other possible outcome, that is, that your variables are not related, is the null hypothesis (H0). Based on your findings, you choose between the two hypotheses—usually that means that if your prediction was correct, you reject the null hypothesis and accept the alternative. Make sure, however, that you are not getting lost at this step of the thinking process: if your prediction is that there will be no difference or change, then you are trying to find support for the null hypothesis and reject H1.

Directional Hypothesis

While the null hypothesis is obviously “static”, the alternative hypothesis can specify a direction for the observed relationship between variables—for example, that mice with higher expression levels of a certain protein are more active than those with lower levels. This is then called a one-tailed hypothesis. 

Another example for a directional one-tailed alternative hypothesis would be that 

H1: Attending private classes before important exams has a positive effect on performance. 

Your null hypothesis would then be that

H0: Attending private classes before important exams has no/a negative effect on performance.

Nondirectional Hypothesis

A nondirectional hypothesis does not specify the direction of the potentially observed effect, only that there is a relationship between the studied variables—this is called a two-tailed hypothesis. For instance, if you are studying a new drug that has shown some effects on pathways involved in a certain condition (e.g., anxiety) in vitro in the lab, but you can’t say for sure whether it will have the same effects in an animal model or maybe induce other/side effects that you can’t predict and potentially increase anxiety levels instead, you could state the two hypotheses like this:

H1: The drug that has so far been tested only in the lab (somehow) affects anxiety levels in an anxiety mouse model.

You then test this nondirectional alternative hypothesis against the null hypothesis:

H0: The drug that has so far been tested only in the lab has no effect on anxiety levels in an anxiety mouse model.
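
The directional/nondirectional distinction shows up directly in statistical testing as the choice between a one-tailed and a two-tailed test. The sketch below runs both on the same synthetic “anxiety score” data; the group means, spreads, and sample sizes are all assumptions for illustration.

```python
# One- vs. two-tailed testing on the same synthetic data.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
control = rng.normal(50, 10, 25)  # assumed anxiety scores, untreated
treated = rng.normal(44, 10, 25)  # drug group, assumed lower mean

# Nondirectional H1: the drug (somehow) affects anxiety levels.
_, p_two = ttest_ind(treated, control, alternative="two-sided")
# Directional H1: the drug lowers anxiety levels.
_, p_one = ttest_ind(treated, control, alternative="less")

print(f"two-tailed p = {p_two:.4f}, one-tailed p = {p_one:.4f}")
# When the observed effect lies in the predicted direction, the
# one-tailed p-value is half the two-tailed one -- which is why the
# direction must be chosen before looking at the data.
```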

How to Write a Hypothesis for a Research Paper

Now that we understand the important distinctions between different kinds of research hypotheses, let’s look at a simple process of how to write a hypothesis.

Writing a Hypothesis Step 1:

Ask a question, based on earlier research. Research always starts with a question, but one that takes into account what is already known about a topic or phenomenon. For example, if you are interested in whether people who have pets are happier than those who don’t, do a literature search and find out what has already been demonstrated. You will probably realize that yes, there is quite a bit of research that shows a relationship between happiness and owning a pet—and even studies that show that owning a dog is more beneficial than owning a cat! Let’s say you are so intrigued by this finding that you wonder:

What is it that makes dog owners even happier than cat owners? 

Let’s move on to Step 2 and find an answer to that question.

Writing a Hypothesis Step 2:

Formulate a strong hypothesis by answering your own question. Again, you don’t want to make things up, take unicorns into account, or repeat/ignore what has already been done. Looking at the dog-vs-cat papers your literature search returned, you see that most studies are based on self-report questionnaires on personality traits, mental health, and life satisfaction. What you don’t find is any data on actual (mental or physical) health measures, and no experiments. You therefore come up with the carefully thought-through hypothesis that it’s maybe the lifestyle of the dog owners, which includes walking their dog several times per day, engaging in fun and healthy activities such as agility competitions, and taking them on trips, that gives them that extra boost in happiness. You could therefore answer your question in the following way:

Dog owners are happier than cat owners because of the dog-related activities they engage in.

Now you have to verify that your hypothesis fulfills the two requirements we introduced at the beginning of this resource article: falsifiability and testability. If it can’t be wrong and can’t be tested, it’s not a hypothesis. We are lucky, however, because yes, we can test whether owning a dog but not engaging in any of those activities leads to lower levels of happiness or well-being than owning a dog and playing and running around with them or taking them on trips.

Writing a Hypothesis Step 3:

Make your predictions and define your variables. We have verified that we can test our hypothesis, but now we have to define all the relevant variables, design our experiment or data analysis, and make precise predictions. You could, for example, decide to study dog owners (not surprising at this point), let them fill in questionnaires about their lifestyle as well as their life satisfaction (as other studies did), and then compare two groups of active and inactive dog owners. Alternatively, if you want to go beyond the data that earlier studies produced and analyzed and directly manipulate the activity level of your dog owners to study the effect of that manipulation, you could invite them to your lab, select groups of participants with similar lifestyles, make them change their lifestyle (e.g., couch-potato dog owners start agility classes, very active ones have to refrain from any fun activities for a certain period of time) and assess their happiness levels before and after the intervention. In both cases, your independent variable would be “level of engagement in fun activities with one’s dog” and your dependent variable would be happiness or well-being.

Tips for Writing a Research Hypothesis

If you understood the distinction between a hypothesis and a prediction we made at the beginning of this article, then you will have no problem formulating your hypotheses and predictions correctly. To refresh your memory: We have to (1) look at existing evidence, (2) come up with a hypothesis, (3) make a prediction, and (4) design an experiment. For example, you could summarize your dog/happiness study like this:

(1) While research suggests that dog owners are happier than cat owners, there are no reports on what factors drive this difference. (2) We hypothesized that it is the fun activities that many dog owners (but very few cat owners) engage in with their pets that increases their happiness levels. (3) We thus predicted that preventing very active dog owners from engaging in such activities for some time and making very inactive dog owners take up such activities would lead to an increase and decrease in their overall self-ratings of happiness, respectively. (4) To test this, we invited dog owners into our lab, assessed their mental and emotional well-being through questionnaires, and then assigned them to an “active” and an “inactive” group, depending on… 

Note that you use “we hypothesize” only for your hypothesis, not for your experimental prediction, and “would” or “if – then” only for your prediction, not your hypothesis. A hypothesis that states that something “would” affect something else sounds as if you don’t have enough confidence to make a clear statement—in which case you can’t expect your readers to believe in your research either. Write in the present tense and don’t use modal verbs that express varying degrees of certainty (such as may, might, or could). Remember that you are not drawing a conclusion: without exaggerating, you are making a clear statement that you then, in a way, try to disprove. And if that happens, that is not something to fear but an important part of the scientific process.

Similarly, don’t use “we hypothesize” when you explain the implications of your research or make predictions in the conclusion section of your manuscript, since these are clearly not hypotheses in the true sense of the word. As we said earlier, you will find that many authors of academic articles do not seem to care too much about these rather subtle distinctions, but thinking very clearly about your own research will not only help you write better but also ensure that even that infamous Reviewer 2 will find fewer reasons to nitpick about your manuscript. 

What Is a Hypothesis? (Science)

If..., Then...

A hypothesis (plural hypotheses) is a proposed explanation for an observation. The definition depends on the subject.

In science, a hypothesis is part of the scientific method. It is a prediction or explanation that is tested by an experiment. Observations and experiments may disprove a scientific hypothesis, but can never entirely prove one.

In the study of logic, a hypothesis is an if-then proposition, typically written in the form, "If X, then Y."

In common usage, a hypothesis is simply a proposed explanation or prediction, which may or may not be tested.

Writing a Hypothesis

Most scientific hypotheses are proposed in the if-then format because it's easy to design an experiment to see whether or not a cause-and-effect relationship exists between the independent variable and the dependent variable. The hypothesis is written as a prediction of the outcome of the experiment.

Null Hypothesis and Alternative Hypothesis

Statistically, it's easier to show there is no relationship between two variables than to support their connection. So, scientists often propose the null hypothesis. The null hypothesis assumes changing the independent variable will have no effect on the dependent variable.

In contrast, the alternative hypothesis suggests changing the independent variable will have an effect on the dependent variable. Designing an experiment to test this hypothesis can be trickier because there are many ways to state an alternative hypothesis.

For example, consider a possible relationship between getting a good night's sleep and getting good grades. The null hypothesis might be stated: "The number of hours of sleep students get is unrelated to their grades" or "There is no correlation between hours of sleep and grades."

An experiment to test this hypothesis might involve collecting data on each student's average hours of sleep and their grades. If students who get eight hours of sleep generally do better than students who get four hours or ten hours of sleep, the null hypothesis might be rejected.

But the alternative hypothesis is harder to propose and test. The most general statement would be: "The amount of sleep students get affects their grades." The hypothesis might also be stated as "If you get more sleep, your grades will improve" or "Students who get nine hours of sleep have better grades than those who get more or less sleep."

In an experiment, you can collect the same data, but the statistical analysis is less likely to give you a high confidence limit.
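
A sketch of what that analysis might look like for the sleep example: the code below generates made-up student data and tests the null hypothesis of no correlation between hours of sleep and grades. The built-in relationship is an assumption of the example, not real data.

```python
# A minimal sketch of testing "no correlation between hours of sleep
# and grades" on made-up data with a built-in positive relationship.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
sleep_hours = rng.uniform(4, 10, 40)                    # 40 students
grades = 55 + 3.5 * sleep_hours + rng.normal(0, 8, 40)  # assumed link

r, p_value = pearsonr(sleep_hours, grades)
print(f"r = {r:.2f}, p = {p_value:.4g}")
# A small p-value argues against the null hypothesis of "no
# correlation" -- but correlation alone cannot show that more sleep
# *causes* better grades.
```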

Usually, a scientist starts out with the null hypothesis. From there, it may be possible to propose and test an alternative hypothesis, to narrow down the relationship between the variables.

Example of a Hypothesis

Examples of a hypothesis include:

  • If you drop a rock and a feather, (then) they will fall at the same rate.
  • Plants need sunlight in order to live. (if sunlight, then life)
  • Eating sugar gives you energy. (if sugar, then energy)

What’s the Real Difference Between Hypothesis and Prediction

Both hypothesis and prediction fall in the realm of guesswork, but with different assumptions. This write-up elaborates on the differences between hypothesis and prediction.

“There is no justifiable prediction about how the hypothesis will hold up in the future; its degree of corroboration simply is a historical statement describing how severely the hypothesis has been tested in the past.” ― Robert Nozick, American author, professor, and philosopher

A lot of people tend to think that a hypothesis is the same as a prediction, but this is not true. They are entirely different terms, though they can be manifested within the same example. Both are used in statistics and in a variety of applications like finance, mathematics, science (widely), sports, and psychology. A hypothesis may be a prediction, but the reverse may not be true.

Also, a prediction may or may not agree with the hypothesis. Confused? Don’t worry, read the hypothesis vs. prediction comparison, provided below with examples, to clear your doubts regarding both these entities.

  • A hypothesis is a kind of guess or proposition regarding a situation.
  • It can be called an intelligent guess or prediction, and it needs to be tested using different methods.
  • Formulating a hypothesis is an important step in experimental design, for it helps to predict things that might take place in the course of research.
  • The strength of the statement is based on how effectively it is supported or refuted by experiments.
  • It is usually written in the ‘If-then-because’ format.
  • For example, ‘If Susan’s mood depends on the weather, then she will be happy today, because it is bright and sunny outside.’ Here, Susan’s mood is the dependent variable, and the weather is the independent variable. Thus, a hypothesis helps establish a relationship.
  • A prediction is also a type of guess; in fact, it is guesswork in the true sense of the word.
  • Unlike a hypothesis, it is not an educated guess based on established facts.
  • While making a prediction for various applications, you have to take into account all the current observations.
  • It can be testable, but often just once. The strength of the statement is therefore based on whether the predicted event occurs or not.
  • It is harder to define, and it contains many variations, which is probably why it is often mistaken for a fictional guess or forecast.
  • For example, ‘He is studying very hard; he might score an A.’ Here, we predict that since the student is working hard, he might score good marks. This is based on an observation and does not establish any relationship.

Factors of Differentiation

♦ Consider a statement, ‘If I add some chili powder, the pasta may become spicy’. This is a hypothesis, and a testable statement. You can carry on adding one pinch of chili powder, or a spoon, or two spoons, and so on. The dish may become spicier or pungent, or there may be no reaction at all. The sum and substance is that the amount of chili powder is the independent variable here, and the pasta dish is the dependent variable, which is expected to change with the addition of chili powder. This statement thus establishes and analyzes the relationship between both variables, and you will get a variety of results when the test is performed multiple times. Your hypothesis may even be overturned tomorrow.

♦ Consider the statement, ‘Robert has longer legs, he may run faster’. This is just a prediction. You may have read somewhere that people with long legs tend to run faster. It may or may not be true. What is important here is ‘Robert’. You are talking only of Robert’s legs, so you will test if he runs faster. If he does, your prediction is true, if he doesn’t, your prediction is false. No more testing.

♦ Consider a statement, ‘If you eat chocolates, you may get acne’. This is a simple hypothesis, based on facts, yet it needs to be proven. It can be tested on a number of people. It may be true, it may be false. The fact is, it defines a relationship between chocolates and acne. The relationship can be analyzed and the results can be recorded. Tomorrow, someone might come up with an alternative hypothesis that chocolate does not cause acne. This will need to be tested again, and so on. A hypothesis is thus something that you think happens for a reason.

♦ Consider a statement, ‘The sky is overcast, it may rain today’. A simple guess, based on the fact that it generally rains if the sky is overcast. It may not even be testable, i.e., the sky can be overcast now and clear the next minute. If it does rain, you have predicted correctly. If it does not, you are wrong. No further analysis or questions.

Both hypothesis and prediction need to be effectively structured so that further analysis of the problem statement is easier. Remember that the key difference between the two is the procedure of proving the statements. Also, you cannot state that one is better than the other; this depends entirely on the application at hand.

Research Hypothesis In Psychology: Types, & Examples

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

A research hypothesis (plural: hypotheses) is a specific, testable prediction about the anticipated results of a study, established at its outset. It is a key component of the scientific method.

Hypotheses connect theory to data and guide the research process towards expanding scientific understanding.

Some key points about hypotheses:

  • A hypothesis expresses an expected pattern or relationship. It connects the variables under investigation.
  • It is stated in clear, precise terms before any data collection or analysis occurs. This makes the hypothesis testable.
  • A hypothesis must be falsifiable. It should be possible, even if unlikely in practice, to collect data that disconfirms rather than supports the hypothesis.
  • Hypotheses guide research. Scientists design studies to explicitly evaluate hypotheses about how nature works.
  • For a hypothesis to be valid, it must be testable against empirical evidence. The evidence can then confirm or disprove the testable predictions.
  • Hypotheses are informed by background knowledge and observation, but go beyond what is already known to propose an explanation of how or why something occurs.

Predictions typically arise from a thorough knowledge of the research literature, curiosity about real-world problems or implications, and integrating this to advance theory. They build on existing literature while providing new insight.

Types of Research Hypotheses

Alternative Hypothesis

The research hypothesis is often called the alternative or experimental hypothesis in experimental research.

It typically suggests a potential relationship between two key variables: the independent variable, which the researcher manipulates, and the dependent variable, which is measured based on those changes.

The alternative hypothesis states a relationship exists between the two variables being studied (one variable affects the other).

An experimental hypothesis predicts what change(s) will occur in the dependent variable when the independent variable is manipulated.

It states that the results are not due to chance and are significant in supporting the theory being investigated.

The alternative hypothesis can be directional, indicating a specific direction of the effect, or non-directional, suggesting a difference without specifying its nature. It’s what researchers aim to support or demonstrate through their study.

Null Hypothesis

The null hypothesis states no relationship exists between the two variables being studied (one variable does not affect the other). There will be no changes in the dependent variable due to manipulating the independent variable.

It states results are due to chance and are not significant in supporting the idea being investigated.

The null hypothesis, positing no effect or relationship, is a foundational contrast to the research hypothesis in scientific inquiry. It establishes a baseline for statistical testing, promoting objectivity by initiating research from a neutral stance.

Many statistical methods are tailored to test the null hypothesis, determining the likelihood of observed results if no true effect exists.

This dual-hypothesis approach provides clarity, ensuring that research intentions are explicit, and fosters consistency across scientific studies, enhancing the standardization and interpretability of research outcomes.
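
In statistical practice, this dual-hypothesis logic is exactly what a significance test encodes. As a minimal sketch, the Python snippet below (with invented data) tests a null hypothesis of "no difference between group means" against the alternative:

```python
from scipy import stats

# Hypothetical scores for a treatment group and a control group.
treatment = [12, 15, 14, 16, 13, 17, 15, 14]
control = [11, 12, 13, 11, 12, 13, 12, 11]

# H0 (null): the group means do not differ.
# H1 (alternative): the group means differ.
t_stat, p_value = stats.ttest_ind(treatment, control)

# The p-value estimates how likely results at least this extreme
# would be if the null hypothesis were true.
alpha = 0.05
if p_value < alpha:
    print(f"p = {p_value:.4f}: reject H0; the data support H1.")
else:
    print(f"p = {p_value:.4f}: fail to reject H0.")
```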

Nondirectional Hypothesis

A non-directional hypothesis, also known as a two-tailed hypothesis, predicts that there is a difference or relationship between two variables but does not specify the direction of this relationship.

It merely indicates that a change or effect will occur without predicting which group will have higher or lower values.

For example, “There is a difference in performance between Group A and Group B” is a non-directional hypothesis.

Directional Hypothesis

A directional (one-tailed) hypothesis predicts the nature of the effect of the independent variable on the dependent variable. It predicts in which direction the change will take place (i.e., greater, smaller, less, more).

It specifies whether one variable is greater, lesser, or different from another, rather than just indicating that there’s a difference without specifying its nature.

For example, “Exercise increases weight loss” is a directional hypothesis.
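
The directional/non-directional distinction maps directly onto one-tailed versus two-tailed statistical tests. Here is a minimal sketch with invented data (the `alternative` parameter assumes SciPy version 1.6 or later):

```python
from scipy import stats

group_a = [5.1, 5.4, 6.0, 5.8, 5.6, 6.2]
group_b = [4.8, 5.0, 5.2, 4.9, 5.1, 5.3]

# Non-directional (two-tailed): a difference exists, direction unspecified.
_, p_two = stats.ttest_ind(group_a, group_b, alternative="two-sided")

# Directional (one-tailed): group A scores higher than group B.
_, p_one = stats.ttest_ind(group_a, group_b, alternative="greater")

print(f"two-tailed p = {p_two:.4f}; one-tailed p = {p_one:.4f}")
# When the observed effect lies in the predicted direction, the
# one-tailed p-value is half the two-tailed p-value.
```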

Falsifiability

The Falsification Principle, proposed by Karl Popper, is a way of demarcating science from non-science. It suggests that for a theory or hypothesis to be considered scientific, it must be testable and refutable.

Falsifiability emphasizes that scientific claims shouldn’t just be confirmable but should also have the potential to be proven wrong.

It means that there should exist some potential evidence or experiment that could prove the proposition false.

However many confirming instances exist for a theory, it takes only one counter-observation to falsify it. For example, the hypothesis that “all swans are white” can be falsified by observing a black swan.

For Popper, science should attempt to disprove a theory rather than attempt to continually provide evidence to support a research hypothesis.

Can a Hypothesis be Proven?

Hypotheses make probabilistic predictions. They state the expected outcome if a particular relationship exists. However, a study result supporting a hypothesis does not definitively prove it is true.

All studies have limitations. There may be unknown confounding factors or issues that limit the certainty of conclusions. Additional studies may yield different results.

In science, hypotheses can realistically only be supported with some degree of confidence, not proven. The process of science is to incrementally accumulate evidence for and against hypothesized relationships in an ongoing pursuit of better models and explanations that best fit the empirical data. But hypotheses remain open to revision and rejection if that is where the evidence leads.
  • Disproving a hypothesis is definitive. Solid disconfirmatory evidence will falsify a hypothesis and require altering or discarding it based on the evidence.
  • However, confirming evidence is always open to revision. Other explanations may account for the same results, and additional or contradictory evidence may emerge over time.

We can never prove the alternative hypothesis with 100% certainty. Instead, we test whether we can disprove, or reject, the null hypothesis.

If we reject the null hypothesis, this does not mean that our alternative hypothesis is correct, but it does lend support to the alternative/experimental hypothesis.

Upon analysis of the results, an alternative hypothesis can be rejected or supported, but it can never be proven to be correct. We must avoid any reference to results proving a theory as this implies 100% certainty, and there is always a chance that evidence may exist which could refute a theory.

How to Write a Hypothesis

  • Identify variables. The researcher manipulates the independent variable, and the dependent variable is the measured outcome.
  • Operationalize the variables being investigated. Operationalization means making the variables physically measurable or testable, e.g., if you are studying aggression, you might count the number of punches given by participants.
  • Decide on a direction for your prediction. If there is evidence in the literature to support a specific effect of the independent variable on the dependent variable, write a directional (one-tailed) hypothesis. If findings in the literature are limited or ambiguous, write a non-directional (two-tailed) hypothesis.
  • Make it testable. Ensure your hypothesis can be tested through experimentation or observation. It should be possible to prove it false (principle of falsifiability).
  • Use clear and concise language. A strong hypothesis is concise (typically one to two sentences long) and formulated in clear, straightforward language, ensuring it is easily understood and testable.

Consider a hypothesis many teachers might subscribe to: students work better on Monday morning than on Friday afternoon (IV = day of the week, DV = standard of work).

Now, if we decide to study this by giving the same group of students a lesson on a Monday morning and a Friday afternoon and then measuring their immediate recall of the material covered in each session, we would end up with the following:

  • The alternative hypothesis states that students will recall significantly more information on a Monday morning than on a Friday afternoon.
  • The null hypothesis states that there will be no significant difference in the amount recalled on a Monday morning compared to a Friday afternoon. Any difference will be due to chance or confounding factors.
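
As a sketch of how such a study might be analyzed, the snippet below simulates recall scores (all numbers invented) and runs a paired, one-tailed t-test: paired because the same students are measured twice, one-tailed because the alternative hypothesis is directional:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated recall scores (out of 20) for the same 30 students;
# all numbers are invented purely to illustrate the analysis.
monday = rng.normal(loc=14, scale=2, size=30)  # Monday morning session
friday = rng.normal(loc=12, scale=2, size=30)  # Friday afternoon session

# Paired design: each student is measured twice.
# H1 (directional): Monday recall > Friday recall.  H0: no difference.
t_stat, p_value = stats.ttest_rel(monday, friday, alternative="greater")

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value lets us reject H0 and support -- not prove --
# the alternative hypothesis.
```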

More Examples

  • Memory : Participants exposed to classical music during study sessions will recall more items from a list than those who studied in silence.
  • Social Psychology : Individuals who frequently engage in social media use will report higher levels of perceived social isolation compared to those who use it infrequently.
  • Developmental Psychology : Children who engage in regular imaginative play have better problem-solving skills than those who don’t.
  • Clinical Psychology : Cognitive-behavioral therapy will be more effective in reducing symptoms of anxiety over a 6-month period compared to traditional talk therapy.
  • Cognitive Psychology : Individuals who multitask between various electronic devices will have shorter attention spans on focused tasks than those who single-task.
  • Health Psychology : Patients who practice mindfulness meditation will experience lower levels of chronic pain compared to those who don’t meditate.
  • Organizational Psychology : Employees in open-plan offices will report higher levels of stress than those in private offices.
  • Behavioral Psychology : Rats rewarded with food after pressing a lever will press it more frequently than rats who receive no reward.

How technological progress is making it likelier than ever that humans will destroy ourselves

The “vulnerable world hypothesis,” explained.

Technological progress has eradicated diseases, helped double life expectancy, reduced starvation and extreme poverty, enabled flight and global communications, and made this generation the richest one in history.

It has also made it easier than ever to cause destruction on a massive scale. And because it’s easier for a few destructive actors to use technology to wreak catastrophic damage, humanity may be in trouble.

This is the argument made by Oxford professor Nick Bostrom, director of the Future of Humanity Institute, in a new working paper, “The Vulnerable World Hypothesis.” The paper explores whether it’s possible for truly destructive technologies to be cheap and simple — and therefore exceptionally difficult to control. Bostrom looks at historical developments to imagine how the proliferation of some of those technologies might have gone differently if they’d been less expensive, and describes some reasons to think such dangerous future technologies might be ahead.

In general, progress has brought about unprecedented prosperity while also making it easier to do harm. But between two kinds of outcomes — gains in well-being and gains in destructive capacity — the beneficial ones have largely won out. We have much better guns than we had in the 1700s, but it is estimated that we have a much lower homicide rate, because prosperity, cultural changes, and better institutions have combined to decrease violence by more than improvements in technology have increased it.

But what if there’s an invention out there — something no scientist has thought of yet — that has catastrophic destructive power, on the scale of the atom bomb, but simpler and less expensive to make? What if it’s something that could be made in somebody’s basement? If there are inventions like that in the future of human progress, then we’re all in a lot of trouble — because it’d only take a few people and resources to cause catastrophic damage.

That’s the problem that Bostrom wrestles with in his new paper. A “vulnerable world,” he argues, is one where “there is some level of technological development at which civilization almost certainly gets devastated by default.” The paper doesn’t prove (and doesn’t try to prove) that we live in such a vulnerable world, but makes a compelling case that the possibility is worth considering.

Progress has largely been highly beneficial. Will it stay that way?

Bostrom is among the most prominent philosophers and researchers in the field of global catastrophic risks and the future of human civilization. He co-founded the Future of Humanity Institute at Oxford and authored Superintelligence, a book about the risks and potential of advanced artificial intelligence. His research is typically concerned with how humanity can solve the problems we’re creating for ourselves and see our way through to a stable future.

When we invent a new technology, we often do so in ignorance of all of its side effects. We first determine whether it works, and we learn later, sometimes much later, what other effects it has. CFCs, for example, made refrigeration cheaper, which was great news for consumers — until we realized CFCs were destroying the ozone layer, and the global community united to ban them.

On other occasions, worries about side effects aren’t borne out. GMOs sounded to many consumers like they could pose health risks, but there’s now a sizable body of research suggesting they are safe.

Bostrom proposes a simplified analogy for new inventions:

One way of looking at human creativity is as a process of pulling balls out of a giant urn. The balls represent possible ideas, discoveries, technological inventions. Over the course of history, we have extracted a great many balls—mostly white (beneficial) but also various shades of grey (moderately harmful ones and mixed blessings). The cumulative effect on the human condition has so far been overwhelmingly positive, and may be much better still in the future. The global population has grown about three orders of magnitude over the last ten thousand years, and in the last two centuries per capita income, standards of living, and life expectancy have also risen. What we haven’t extracted, so far, is a black ball—a technology that invariably or by default destroys the civilization that invents it. The reason is not that we have been particularly careful or wise in our technology policy. We have just been lucky.

That terrifying final claim is the focus of the rest of the paper.
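
Since the urn is at bottom a probabilistic model, a toy Monte Carlo simulation makes the “we have just been lucky” point concrete. The probabilities and draw counts below are invented assumptions for illustration, not figures from Bostrom’s paper:

```python
import random

# Toy version of the urn: each draw (invention) is harmless unless it
# is a black ball. The probability and counts are invented assumptions.
P_BLACK = 0.001        # assumed chance any one invention is a black ball
DRAWS_PER_RUN = 1000   # assumed number of inventions in one simulated history
RUNS = 10_000

survived = sum(
    all(random.random() > P_BLACK for _ in range(DRAWS_PER_RUN))
    for _ in range(RUNS)
)

# Analytically, (1 - 0.001) ** 1000 is about 0.37: with these numbers,
# most simulated histories draw at least one black ball, so having
# survived so far is only weak evidence that the urn contains none.
print(f"Histories with no black ball: {survived / RUNS:.3f}")
```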

A hard look at the history of nuclear weapon development

One might think it unfair to say “we have just been lucky” that no technology we’ve invented has had destructive consequences we didn’t anticipate. After all, we’ve also been careful, and tried to calculate the potential risks of things like nuclear tests before we conducted them.

Bostrom, looking at the history of nuclear weapons development, concludes we weren’t careful enough.

In 1942, it occurred to Edward Teller, one of the Manhattan scientists, that a nuclear explosion would create a temperature unprecedented in Earth’s history, producing conditions similar to those in the center of the sun, and that this could conceivably trigger a self-sustaining thermonuclear reaction in the surrounding air or water. The importance of Teller’s concern was immediately recognized by Robert Oppenheimer, the head of the Los Alamos lab. Oppenheimer notified his superior and ordered further calculations to investigate the possibility. These calculations indicated that atmospheric ignition would not occur. This prediction was confirmed in 1945 by the Trinity test, which involved the detonation of the world’s first nuclear explosive.

That might sound like a reassuring story — we considered the possibility, did a calculation, concluded we didn’t need to worry, and went ahead.

The report that Robert Oppenheimer commissioned, though, sounds fairly shaky, for something that was used as reason to proceed with a dangerous new experiment. It ends: “One may conclude that the arguments of this paper make it unreasonable to expect that the N + N reaction could propagate. An unlimited propagation is even less likely. However, the complexity of the argument and the absence of satisfactory experimental foundation makes further work on the subject highly desirable.” That was our state of understanding of the risk of atmospheric ignition when we proceeded with the first nuclear test.

A few years later, we badly miscalculated in a different risk assessment about nuclear weapons. Bostrom writes:

In 1954, the U.S. carried out another nuclear test, the Castle Bravo test, which was planned as a secret experiment with an early lithium-based thermonuclear bomb design. Lithium, like uranium, has two important isotopes: lithium-6 and lithium-7. Ahead of the test, the nuclear scientists calculated the yield to be 6 megatons (with an uncertainty range of 4-8 megatons). They assumed that only the lithium-6 would contribute to the reaction, but they were wrong. The lithium-7 contributed more energy than the lithium-6, and the bomb detonated with a yield of 15 megaton—more than double of what they had calculated (and equivalent to about 1,000 Hiroshimas). The unexpectedly powerful blast destroyed much of the test equipment. Radioactive fallout poisoned the inhabitants of downwind islands and the crew of a Japanese fishing boat, causing an international incident.

Bostrom concludes that “we may regard it as lucky that it was the Castle Bravo calculation that was incorrect, and not the calculation of whether the Trinity test would ignite the atmosphere.”

Nuclear reactions happen not to ignite the atmosphere. But Bostrom believes that we weren’t sufficiently careful, in advance of the first tests, to be totally certain of this. There were big holes in our understanding of how nuclear weapons worked when we rushed to first test them. It could be that the next time we deploy a new, powerful technology, with big holes in our understanding of how it works, we won’t be so lucky.

Destructive technologies up to this point have been extremely complex. Future ones could be simple.

We haven’t done a great job of managing nuclear nonproliferation. But most countries still don’t have nuclear weapons — and no individuals do — because of how nuclear weapons must be developed. Building nuclear weapons takes years, costs billions of dollars, and requires the expertise of top scientists. As a result, it’s possible to tell when a country is pursuing nuclear weapons.

Bostrom invites us to imagine how things would have gone if nuclear weaponry had required abundant elements, rather than rare ones.

Investigations showed that making an atomic weapon requires several kilograms of plutonium or highly enriched uranium, both of which are very difficult and expensive to produce. However, suppose it had turned out otherwise: that there had been some really easy way to unleash the energy of the atom—say, by sending an electric current through a metal object placed between two sheets of glass.

In that case, the weapon would proliferate as quickly as the knowledge that it was possible. We might react by trying to ban the study of nuclear physics, but it’s hard to ban a whole field of knowledge and it’s not clear the political will would materialize. It’d be even harder to try to ban glass or electric circuitry — probably impossible.

In some respects, we were remarkably fortunate with nuclear weapons. The fact that they rely on extremely rare materials and are so complex and expensive to build makes it far more tractable to keep them from being used than it would be if the materials for them had happened to be abundant.

If future technological discoveries — not in nuclear physics, which we now understand very well, but in other less-understood, speculative fields — are easier to build, Bostrom warns, they may proliferate widely.

Would some people use weapons of mass destruction, if they could?

We might think that the existence of simple destructive weapons shouldn’t, in itself, be enough to worry us. Most people don’t engage in acts of terroristic violence, even though technically it wouldn’t be very hard. Similarly, most people would never use dangerous technologies even if they could be assembled in their garage.

Bostrom observes, though, that it doesn’t take very many people who would act destructively. Even if only one in a million people were interested in using an invention violently, that could lead to disaster: with billions of people in the world, a one-in-a-million rate still means thousands of potential bad actors. And he argues that there will be at least some such people: “Given the diversity of human character and circumstance, for any ever so imprudent, immoral, or self-defeating action, there is some residual fraction of humans who would choose to take that action.”

That means, he argues, that anything as destructive as a nuclear weapon, and straightforward enough that most people could build it with widely available technology, will almost certainly be used, repeatedly, somewhere in the world.

These aren’t the only scenarios of interest. Bostrom also examines technologies that would drive nation-states to war. “A technology that ‘democratizes’ mass destruction is not the only kind of black ball that could be hoisted out of the urn. Another kind would be a technology that strongly incentivizes powerful actors to use their powers to cause mass destruction,” he writes.

Again, he looks to the history of nuclear war for examples. He argues that the most dangerous period in history was the period between the start of the nuclear arms race and the invention of second-strike capabilities such as nuclear submarines. With the introduction of second-strike capabilities, nuclear risk may have decreased.

It is widely believed among nuclear strategists that the development of a reasonably secure second-strike capability by both superpowers by the mid-1960s created the conditions for “strategic stability.” Prior to this period, American war plans reflected a much greater inclination, in any crisis situation, to launch a preemptive nuclear strike against the Soviet Union’s nuclear arsenal. The introduction of nuclear submarine-based ICBMs was thought to be particularly helpful for ensuring second-strike capabilities (and thus “mutually assured destruction”) since it was widely believed to be practically impossible for an aggressor to eliminate the adversary’s boomer [sic] fleet in the initial attack.

In this case, one technology brought us into a dangerous situation, with great powers highly motivated to use their weapons. Another technology — the capacity to retaliate — brought us out of that terrible situation and into a stabler one. If nuclear submarines hadn’t been developed, nuclear weapons might well have been used in the past half-century or so.
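
One way to see why second-strike capability stabilizes the standoff is as a change in payoffs. The sketch below is a deliberately crude illustration with invented numbers, not a model from Bostrom’s paper:

```python
# Crude payoff sketch (all numbers invented) of first-strike incentives.
def best_move(enemy_has_second_strike: bool) -> str:
    if enemy_has_second_strike:
        strike_first = -10  # a first strike still triggers devastating retaliation
        hold_back = -1      # deterrence holds; the standoff costs little
    else:
        strike_first = +5   # a first strike disarms the enemy "successfully"
        hold_back = -20     # waiting risks being disarmed by their first strike
    return "strike first" if strike_first > hold_back else "hold back"

print("No second-strike capability:", best_move(False))  # -> strike first
print("Secure second-strike forces:", best_move(True))   # -> hold back
```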

The solutions for a vulnerable world are unappealing — and perhaps ineffective

Bostrom devotes the second half of the paper to examining our options for preserving stability if there turn out to be dangerous technologies ahead for us.

None of them are appealing.

Halting the progress of technology could save us from confronting any of these problems. Bostrom considers it and discards it as impossible — some countries or actors would continue their research, in secrecy if necessary, and the outrage and backlash associated with a ban on a field of science might draw more attention to the ban.

A limited variant, which Bostrom calls differential technological development, might be more workable: “Retard the development of dangerous and harmful technologies, especially ones that raise the level of existential risk; and accelerate the development of beneficial technologies, especially those that reduce the existential risks posed by nature or by other technologies.”

To the extent we can identify which technologies will be stabilizing (like nuclear submarines) and work to develop them ahead of dangerous ones (like nuclear weapons), we can manage some risks in that fashion. Despite the frightening tone and implications of the paper, Bostrom writes that “[the vulnerable world hypothesis] does not imply that civilization is doomed.” But differential technological development won’t manage every risk, and might fail to be sufficient for many categories of risk.

The other options Bostrom puts forward are less appealing.

If the criminal use of a destructive technology can kill millions of people, then crime prevention becomes essential — and total crime prevention would require a massive surveillance state. If international arms races are likely to be even more dangerous than the nuclear brinksmanship of the Cold War, Bostrom argues we might need a single global government with the power to enforce demands on member states.

For some vulnerabilities, he argues further, we might actually need both:

Extremely effective preventive policing would be required because individuals can engage in hard-to-regulate activities that must nevertheless be effectively regulated, and strong global governance would be required because states may have incentives not to effectively regulate those activities even if they have the capability to do so. In combination, however, ubiquitous-surveillance-powered preventive policing and effective global governance would be sufficient to stabilize most vulnerabilities, making it safe to continue scientific and technological development even if [the vulnerable world hypothesis] is true.

It’s here, where the conversation turns from philosophy to policy, that it seems to me Bostrom’s argument gets weaker.

While he’s aware of the abuses of power that such a universal surveillance state would make possible, his overall take on it is more optimistic than seems warranted; he writes, for example, “If the system works as advertised, many forms of crime could be nearly eliminated, with concomitant reductions in costs of policing, courts, prisons, and other security systems. It might also generate growth in many beneficial cultural practices that are currently inhibited by a lack of social trust.”

But it’s hard to imagine that universal surveillance would in fact produce universal and uniform law enforcement, especially in a country like the US. Surveillance wouldn’t solve prosecutorial discretion or the criminalization of things that shouldn’t be illegal in the first place. Most of the world’s population lives under governments without strong protections for political or religious freedom. Bostrom’s optimism here feels out of touch.

Furthermore, most countries in the world simply do not have the governance capacity to run a surveillance state, and it’s unclear that the U.S. or another superpower has the ability to impose such capacity externally (to say nothing of whether it would be desirable).

If the continued survival of humanity depended on successfully imposing worldwide surveillance, I would expect the effort to lead to disastrous unintended consequences — as efforts at “nation-building” historically have. Even in the places where such a system was successfully imposed, I would expect an overtaxed law enforcement apparatus that engaged in just as much, or more, selective enforcement as it engages in presently.

Economist Robin Hanson, responding to the paper, highlighted Bostrom’s optimism about global governance as a weak point, raising a number of objections. First, “It is fine for Bostrom to seek not-yet-appreciated upsides [of more governance], but we should also seek not-yet-appreciated downsides” — downsides like introducing a single point of failure and reducing healthy competition between political systems and ideas.

Second, Hanson writes, “I worry that ‘bad cases make bad law.’ Legal experts say it is bad to focus on extreme cases when changing law, and similarly it may go badly to focus on very unlikely but extreme-outcome scenarios when reasoning about future-related policy.”

Finally, “existing governance mechanisms do especially badly with extreme scenarios. The history of how the policy world responded badly to extreme nanotech scenarios is a case worth considering.”

Bostrom’s paper is stronger where it focuses on the management of catastrophic risks than where it ventures into these policy issues. The policy questions about risk management are complex enough that the paper can do no more than skim them.

But even though the paper wavers there, it’s overall a compelling — and scary — case that technological progress can make a civilization frighteningly vulnerable, and that it’d be an exceptionally challenging project to make such a world safe.
