
Critical Thinking

Critical thinking is a widely accepted educational goal. Its definition is contested, but the competing definitions can be understood as differing conceptions of the same basic concept: careful thinking directed to a goal. Conceptions differ with respect to the scope of such thinking, the type of goal, the criteria and norms for thinking carefully, and the thinking components on which they focus. Its adoption as an educational goal has been recommended on the basis of respect for students’ autonomy and preparing students for success in life and for democratic citizenship. “Critical thinkers” have the dispositions and abilities that lead them to think critically when appropriate. The abilities can be identified directly; the dispositions indirectly, by considering what factors contribute to or impede exercise of the abilities. Standardized tests have been developed to assess the degree to which a person possesses such dispositions and abilities. Educational intervention has been shown experimentally to improve them, particularly when it includes dialogue, anchored instruction, and mentoring. Controversies have arisen over the generalizability of critical thinking across domains, over alleged bias in critical thinking theories and instruction, and over the relationship of critical thinking to other types of thinking.

  • 2.1 Dewey’s Three Main Examples
  • 2.2 Dewey’s Other Examples
  • 2.3 Further Examples
  • 2.4 Non-Examples
  • 3. The Definition of Critical Thinking
  • 4. Its Value
  • 5. The Process of Thinking Critically
  • 6. Components of the Process
  • 7. Contributory Dispositions and Abilities
  • 8.1 Initiating Dispositions
  • 8.2 Internal Dispositions
  • 9. Critical Thinking Abilities
  • 10. Required Knowledge
  • 11. Educational Methods
  • 12.1 The Generalizability of Critical Thinking
  • 12.2 Bias in Critical Thinking Theory and Pedagogy
  • 12.3 Relationship of Critical Thinking to Other Types of Thinking
  • Other Internet Resources
  • Related Entries

1. History

Use of the term ‘critical thinking’ to describe an educational goal goes back to the American philosopher John Dewey (1910), who more commonly called it ‘reflective thinking’. He defined it as

active, persistent and careful consideration of any belief or supposed form of knowledge in the light of the grounds that support it, and the further conclusions to which it tends. (Dewey 1910: 6; 1933: 9)

and identified a habit of such consideration with a scientific attitude of mind. His lengthy quotations of Francis Bacon, John Locke, and John Stuart Mill indicate that he was not the first person to propose development of a scientific attitude of mind as an educational goal.

In the 1930s, many of the schools that participated in the Eight-Year Study of the Progressive Education Association (Aikin 1942) adopted critical thinking as an educational goal, for whose achievement the study’s Evaluation Staff developed tests (Smith, Tyler, & Evaluation Staff 1942). Glaser (1941) showed experimentally that it was possible to improve the critical thinking of high school students. Bloom’s influential taxonomy of cognitive educational objectives (Bloom et al. 1956) incorporated critical thinking abilities. Ennis (1962) proposed 12 aspects of critical thinking as a basis for research on the teaching and evaluation of critical thinking ability.

Since 1980, an annual international conference in California on critical thinking and educational reform has attracted tens of thousands of educators from all levels of education and from many parts of the world. Also since 1980, the state university system in California has required all undergraduate students to take a critical thinking course. Since 1983, the Association for Informal Logic and Critical Thinking has sponsored sessions in conjunction with the divisional meetings of the American Philosophical Association (APA). In 1987, the APA’s Committee on Pre-College Philosophy commissioned a consensus statement on critical thinking for purposes of educational assessment and instruction (Facione 1990a). Researchers have developed standardized tests of critical thinking abilities and dispositions; for details, see the Supplement on Assessment. Educational jurisdictions around the world now include critical thinking in guidelines for curriculum and assessment.

For details on this history, see the Supplement on History.

2. Examples and Non-Examples

Before considering the definition of critical thinking, it will be helpful to have in mind some examples of critical thinking, as well as some examples of kinds of thinking that would apparently not count as critical thinking.

2.1 Dewey’s Three Main Examples

Dewey (1910: 68–71; 1933: 91–94) takes as paradigms of reflective thinking three class papers of students in which they describe their thinking. The examples range from the everyday to the scientific.

Transit: “The other day, when I was down town on 16th Street, a clock caught my eye. I saw that the hands pointed to 12:20. This suggested that I had an engagement at 124th Street, at one o’clock. I reasoned that as it had taken me an hour to come down on a surface car, I should probably be twenty minutes late if I returned the same way. I might save twenty minutes by a subway express. But was there a station near? If not, I might lose more than twenty minutes in looking for one. Then I thought of the elevated, and I saw there was such a line within two blocks. But where was the station? If it were several blocks above or below the street I was on, I should lose time instead of gaining it. My mind went back to the subway express as quicker than the elevated; furthermore, I remembered that it went nearer than the elevated to the part of 124th Street I wished to reach, so that time would be saved at the end of the journey. I concluded in favor of the subway, and reached my destination by one o’clock.” (Dewey 1910: 68–69; 1933: 91–92)

Ferryboat: “Projecting nearly horizontally from the upper deck of the ferryboat on which I daily cross the river is a long white pole, having a gilded ball at its tip. It suggested a flagpole when I first saw it; its color, shape, and gilded ball agreed with this idea, and these reasons seemed to justify me in this belief. But soon difficulties presented themselves. The pole was nearly horizontal, an unusual position for a flagpole; in the next place, there was no pulley, ring, or cord by which to attach a flag; finally, there were elsewhere on the boat two vertical staffs from which flags were occasionally flown. It seemed probable that the pole was not there for flag-flying.

“I then tried to imagine all possible purposes of the pole, and to consider for which of these it was best suited: (a) Possibly it was an ornament. But as all the ferryboats and even the tugboats carried poles, this hypothesis was rejected. (b) Possibly it was the terminal of a wireless telegraph. But the same considerations made this improbable. Besides, the more natural place for such a terminal would be the highest part of the boat, on top of the pilot house. (c) Its purpose might be to point out the direction in which the boat is moving.

“In support of this conclusion, I discovered that the pole was lower than the pilot house, so that the steersman could easily see it. Moreover, the tip was enough higher than the base, so that, from the pilot’s position, it must appear to project far out in front of the boat. Moreover, the pilot being near the front of the boat, he would need some such guide as to its direction. Tugboats would also need poles for such a purpose. This hypothesis was so much more probable than the others that I accepted it. I formed the conclusion that the pole was set up for the purpose of showing the pilot the direction in which the boat pointed, to enable him to steer correctly.” (Dewey 1910: 69–70; 1933: 92–93)

Bubbles: “In washing tumblers in hot soapsuds and placing them mouth downward on a plate, bubbles appeared on the outside of the mouth of the tumblers and then went inside. Why? The presence of bubbles suggests air, which I note must come from inside the tumbler. I see that the soapy water on the plate prevents escape of the air save as it may be caught in bubbles. But why should air leave the tumbler? There was no substance entering to force it out. It must have expanded. It expands by increase of heat, or by decrease of pressure, or both. Could the air have become heated after the tumbler was taken from the hot suds? Clearly not the air that was already entangled in the water. If heated air was the cause, cold air must have entered in transferring the tumblers from the suds to the plate. I test to see if this supposition is true by taking several more tumblers out. Some I shake so as to make sure of entrapping cold air in them. Some I take out holding mouth downward in order to prevent cold air from entering. Bubbles appear on the outside of every one of the former and on none of the latter. I must be right in my inference. Air from the outside must have been expanded by the heat of the tumbler, which explains the appearance of the bubbles on the outside. But why do they then go inside? Cold contracts. The tumbler cooled and also the air inside it. Tension was removed, and hence bubbles appeared inside. To be sure of this, I test by placing a cup of ice on the tumbler while the bubbles are still forming outside. They soon reverse” (Dewey 1910: 70–71; 1933: 93–94).
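The gas behavior the student appeals to in Bubbles can be made explicit. On the simplifying assumption that the trapped air behaves as an ideal gas at roughly constant pressure, its volume scales with absolute temperature (Charles’s law), which is why heated air expands out of the tumbler and cooled air contracts back in:

```latex
% Charles's law (constant pressure): volume is proportional to absolute temperature
\frac{V_1}{T_1} = \frac{V_2}{T_2}
\qquad\Longrightarrow\qquad
V_2 = V_1 \, \frac{T_2}{T_1}
% Heating (T_2 > T_1) expands the trapped air, pushing bubbles outward;
% cooling (T_2 < T_1) contracts it, drawing bubbles inside the tumbler.
```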

2.2 Dewey’s Other Examples

Dewey (1910, 1933) sprinkles his book with other examples of critical thinking. We will refer to the following.

Weather: A man on a walk notices that it has suddenly become cool, thinks that it is probably going to rain, looks up and sees a dark cloud obscuring the sun, and quickens his steps (1910: 6–10; 1933: 9–13).

Disorder: A man finds his rooms on his return to them in disorder with his belongings thrown about, thinks at first of burglary as an explanation, then thinks of mischievous children as being an alternative explanation, then looks to see whether valuables are missing, and discovers that they are (1910: 82–83; 1933: 166–168).

Typhoid: A physician diagnosing a patient whose conspicuous symptoms suggest typhoid avoids drawing a conclusion until more data are gathered by questioning the patient and by making tests (1910: 85–86; 1933: 170).

Blur: A moving blur catches our eye in the distance, we ask ourselves whether it is a cloud of whirling dust or a tree moving its branches or a man signaling to us, we think of other traits that should be found on each of those possibilities, and we look and see if those traits are found (1910: 102, 108; 1933: 121, 133).

Suction pump: In thinking about the suction pump, the scientist first notes that it will draw water only to a maximum height of 33 feet at sea level and to a lesser maximum height at higher elevations, selects for attention the differing atmospheric pressure at these elevations, sets up experiments in which the air is removed from a vessel containing water (when suction no longer works) and in which the weight of air at various levels is calculated, compares the results of reasoning about the height to which a given weight of air will allow a suction pump to raise water with the observed maximum height at different elevations, and finally assimilates the suction pump to such apparently different phenomena as the siphon and the rising of a balloon (1910: 150–153; 1933: 195–198).
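The 33-foot limit from which the scientist’s reasoning starts can be recovered by a simple hydrostatic calculation. Assuming standard sea-level atmospheric pressure and the density of water, the maximum height to which atmospheric pressure can push water up an evacuated tube is:

```latex
h_{\max} = \frac{P_{\text{atm}}}{\rho g}
         = \frac{101{,}325\ \text{Pa}}{(1000\ \text{kg/m}^3)(9.81\ \text{m/s}^2)}
         \approx 10.3\ \text{m} \approx 34\ \text{ft}
% At higher elevations P_atm is smaller, so h_max is lower --
% exactly the variation in maximum height the scientist observes.
```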

2.3 Further Examples

Diamond: A passenger in a car driving in a diamond lane reserved for vehicles with at least one passenger notices that the diamond marks on the pavement are far apart in some places and close together in others. Why? The driver suggests that the reason may be that the diamond marks are not needed where there is a solid double line separating the diamond lane from the adjoining lane, but are needed when there is a dotted single line permitting crossing into the diamond lane. Further observation confirms that the diamonds are close together when a dotted line separates the diamond lane from its neighbour, but otherwise far apart.

Rash: A woman suddenly develops a very itchy red rash on her throat and upper chest. She recently noticed a mark on the back of her right hand, but was not sure whether the mark was a rash or a scrape. She lies down in bed and thinks about what might be causing the rash and what to do about it. About two weeks before, she began taking blood pressure medication that contained a sulfa drug, and the pharmacist had warned her, in view of a previous allergic reaction to a medication containing a sulfa drug, to be on the alert for an allergic reaction; however, she had been taking the medication for two weeks with no such effect. The day before, she began using a new cream on her neck and upper chest; against the new cream as the cause was the mark on the back of her hand, which had not been exposed to the cream. She began taking probiotics about a month before. She also recently started new eye drops, but she supposed that manufacturers of eye drops would be careful not to include allergy-causing components in the medication. The rash might be a heat rash, since she recently was sweating profusely from her upper body. Since she is about to go away on a short vacation, where she would not have access to her usual physician, she decides to keep taking the probiotics and using the new eye drops but to discontinue the blood pressure medication and to switch back to the old cream for her neck and upper chest. She forms a plan to consult her regular physician on her return about the blood pressure medication.

Candidate: Although Dewey included no examples of thinking directed at appraising the arguments of others, such thinking has come to be considered a kind of critical thinking. We find an example of such thinking in the performance task on the Collegiate Learning Assessment (CLA+), which its sponsoring organization describes as

a performance-based assessment that provides a measure of an institution’s contribution to the development of critical-thinking and written communication skills of its students. (Council for Aid to Education 2017)

A sample task posted on its website requires the test-taker to write a report for public distribution evaluating a fictional candidate’s policy proposals and their supporting arguments, using supplied background documents, with a recommendation on whether to endorse the candidate.

2.4 Non-Examples

Immediate acceptance of an idea that suggests itself as a solution to a problem (e.g., a possible explanation of an event or phenomenon, an action that seems likely to produce a desired result) is “uncritical thinking, the minimum of reflection” (Dewey 1910: 13). On-going suspension of judgment in the light of doubt about a possible solution is not critical thinking (Dewey 1910: 108). Critique driven by a dogmatically held political or religious ideology is not critical thinking; thus Paulo Freire (1968 [1970]) is using the term (e.g., at 1970: 71, 81, 100, 146) in a more politically freighted sense that includes not only reflection but also revolutionary action against oppression. Derivation of a conclusion from given data using an algorithm is not critical thinking.

3. The Definition of Critical Thinking

What is critical thinking? There are many definitions. Ennis (2016) lists 14 philosophically oriented scholarly definitions and three dictionary definitions. Following Rawls (1971), who distinguished his conception of justice from a utilitarian conception but regarded them as rival conceptions of the same concept, Ennis maintains that the 17 definitions are different conceptions of the same concept. Rawls articulated the shared concept of justice as

a characteristic set of principles for assigning basic rights and duties and for determining… the proper distribution of the benefits and burdens of social cooperation. (Rawls 1971: 5)

Bailin et al. (1999b) claim that, if one considers what sorts of thinking an educator would take not to be critical thinking and what sorts to be critical thinking, one can conclude that educators typically understand critical thinking to have at least three features.

  • It is done for the purpose of making up one’s mind about what to believe or do.
  • The person engaging in the thinking is trying to fulfill standards of adequacy and accuracy appropriate to the thinking.
  • The thinking fulfills the relevant standards to some threshold level.

One could sum up the core concept that involves these three features by saying that critical thinking is careful goal-directed thinking. This core concept seems to apply to all the examples of critical thinking described in the previous section. As for the non-examples, their exclusion depends on construing careful thinking as excluding jumping immediately to conclusions, suspending judgment no matter how strong the evidence, reasoning from an unquestioned ideological or religious perspective, and routinely using an algorithm to answer a question.

If the core of critical thinking is careful goal-directed thinking, conceptions of it can vary according to its presumed scope, its presumed goal, one’s criteria and threshold for being careful, and the thinking component on which one focuses. As to its scope, some conceptions (e.g., Dewey 1910, 1933) restrict it to constructive thinking on the basis of one’s own observations and experiments, others (e.g., Ennis 1962; Fisher & Scriven 1997; Johnson 1992) to appraisal of the products of such thinking. Ennis (1991) and Bailin et al. (1999b) take it to cover both construction and appraisal. As to its goal, some conceptions restrict it to forming a judgment (Dewey 1910, 1933; Lipman 1987; Facione 1990a). Others allow for actions as well as beliefs as the end point of a process of critical thinking (Ennis 1991; Bailin et al. 1999b). As to the criteria and threshold for being careful, definitions vary in the term used to indicate that critical thinking satisfies certain norms: “intellectually disciplined” (Scriven & Paul 1987), “reasonable” (Ennis 1991), “skillful” (Lipman 1987), “skilled” (Fisher & Scriven 1997), “careful” (Bailin & Battersby 2009). Some definitions specify these norms, referring variously to “consideration of any belief or supposed form of knowledge in the light of the grounds that support it and the further conclusions to which it tends” (Dewey 1910, 1933); “the methods of logical inquiry and reasoning” (Glaser 1941); “conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication” (Scriven & Paul 1987); the requirement that “it is sensitive to context, relies on criteria, and is self-correcting” (Lipman 1987); “evidential, conceptual, methodological, criteriological, or contextual considerations” (Facione 1990a); and “plus-minus considerations of the product in terms of appropriate standards (or criteria)” (Johnson 1992). 
Stanovich and Stanovich (2010) propose to ground the concept of critical thinking in the concept of rationality, which they understand as combining epistemic rationality (fitting one’s beliefs to the world) and instrumental rationality (optimizing goal fulfillment); a critical thinker, in their view, is someone with “a propensity to override suboptimal responses from the autonomous mind” (2010: 227). These variant specifications of norms for critical thinking are not necessarily incompatible with one another, and in any case presuppose the core notion of thinking carefully. As to the thinking component singled out, some definitions focus on suspension of judgment during the thinking (Dewey 1910; McPeck 1981), others on inquiry while judgment is suspended (Bailin & Battersby 2009, 2021), others on the resulting judgment (Facione 1990a), and still others on responsiveness to reasons (Siegel 1988). Kuhn (2019) takes critical thinking to be more a dialogic practice of advancing and responding to arguments than an individual ability.

In educational contexts, a definition of critical thinking is a “programmatic definition” (Scheffler 1960: 19). It expresses a practical program for achieving an educational goal. For this purpose, a one-sentence formulaic definition is much less useful than articulation of a critical thinking process, with criteria and standards for the kinds of thinking that the process may involve. The real educational goal is recognition, adoption and implementation by students of those criteria and standards. That adoption and implementation in turn consists in acquiring the knowledge, abilities and dispositions of a critical thinker.

Conceptions of critical thinking generally do not include moral integrity as part of the concept. Dewey, for example, took critical thinking to be the ultimate intellectual goal of education, but distinguished it from the development of social cooperation among school children, which he took to be the central moral goal. Ennis (1996, 2011) added to his previous list of critical thinking dispositions a group of dispositions to care about the dignity and worth of every person, which he described as a “correlative” (1996) disposition without which critical thinking would be less valuable and perhaps harmful. An educational program that aimed at developing critical thinking but not the correlative disposition to care about the dignity and worth of every person, he asserted, “would be deficient and perhaps dangerous” (Ennis 1996: 172).

4. Its Value

Dewey thought that education for reflective thinking would be of value to both the individual and society; recognition in educational practice of the kinship to the scientific attitude of children’s native curiosity, fertile imagination and love of experimental inquiry “would make for individual happiness and the reduction of social waste” (Dewey 1910: iii). Schools participating in the Eight-Year Study took development of the habit of reflective thinking and skill in solving problems as a means to leading young people to understand, appreciate and live the democratic way of life characteristic of the United States (Aikin 1942: 17–18, 81). Harvey Siegel (1988: 55–61) has offered four considerations in support of adopting critical thinking as an educational ideal. (1) Respect for persons requires that schools and teachers honour students’ demands for reasons and explanations, deal with students honestly, and recognize the need to confront students’ independent judgment; these requirements concern the manner in which teachers treat students. (2) Education has the task of preparing children to be successful adults, a task that requires development of their self-sufficiency. (3) Education should initiate children into the rational traditions in such fields as history, science and mathematics. (4) Education should prepare children to become democratic citizens, which requires reasoned procedures and critical talents and attitudes. To supplement these considerations, Siegel (1988: 62–90) responds to two objections: the ideology objection that adoption of any educational ideal requires a prior ideological commitment and the indoctrination objection that cultivation of critical thinking cannot escape being a form of indoctrination.

5. The Process of Thinking Critically

Despite the diversity of our 11 examples, one can recognize a common pattern. Dewey analyzed it as consisting of five phases:

  • suggestions, in which the mind leaps forward to a possible solution;
  • an intellectualization of the difficulty or perplexity into a problem to be solved, a question for which the answer must be sought;
  • the use of one suggestion after another as a leading idea, or hypothesis, to initiate and guide observation and other operations in collection of factual material;
  • the mental elaboration of the idea or supposition as an idea or supposition (reasoning, in the sense in which reasoning is a part, not the whole, of inference); and
  • testing the hypothesis by overt or imaginative action. (Dewey 1933: 106–107; italics in original)

The process of reflective thinking consisting of these phases would be preceded by a perplexed, troubled or confused situation and followed by a cleared-up, unified, resolved situation (Dewey 1933: 106). The term ‘phases’ replaced the term ‘steps’ (Dewey 1910: 72), thus removing the earlier suggestion of an invariant sequence. Variants of the above analysis appeared in (Dewey 1916: 177) and (Dewey 1938: 101–119).

The variant formulations indicate the difficulty of giving a single logical analysis of such a varied process. The process of critical thinking may have a spiral pattern, with the problem being redefined in the light of obstacles to solving it as originally formulated. For example, the person in Transit might have concluded that getting to the appointment at the scheduled time was impossible and have reformulated the problem as that of rescheduling the appointment for a mutually convenient time. Further, defining a problem does not always follow after or lead immediately to an idea of a suggested solution. Nor should it do so, as Dewey himself recognized in describing the physician in Typhoid as avoiding any strong preference for this or that conclusion before getting further information (Dewey 1910: 85; 1933: 170). People with a hypothesis in mind, even one to which they have a very weak commitment, have a so-called “confirmation bias” (Nickerson 1998): they are likely to pay attention to evidence that confirms the hypothesis and to ignore evidence that counts against it or for some competing hypothesis. Detectives, intelligence agencies, and investigators of airplane accidents are well advised to gather relevant evidence systematically and to postpone even tentative adoption of an explanatory hypothesis until the collected evidence rules out with the appropriate degree of certainty all but one explanation. Dewey’s analysis of the critical thinking process can be faulted as well for requiring acceptance or rejection of a possible solution to a defined problem, with no allowance for deciding in the light of the available evidence to suspend judgment. Further, given the great variety of kinds of problems for which reflection is appropriate, there is likely to be variation in its component events. 
Perhaps the best way to conceptualize the critical thinking process is as a checklist whose component events can occur in a variety of orders, selectively, and more than once. These component events might include (1) noticing a difficulty, (2) defining the problem, (3) dividing the problem into manageable sub-problems, (4) formulating a variety of possible solutions to the problem or sub-problem, (5) determining what evidence is relevant to deciding among possible solutions to the problem or sub-problem, (6) devising a plan of systematic observation or experiment that will uncover the relevant evidence, (7) carrying out the plan of systematic observation or experimentation, (8) noting the results of the systematic observation or experiment, (9) gathering relevant testimony and information from others, (10) judging the credibility of testimony and information gathered from others, (11) drawing conclusions from gathered evidence and accepted testimony, and (12) accepting a solution that the evidence adequately supports (cf. Hitchcock 2017: 485).

Checklist conceptions of the process of critical thinking are open to the objection that they are too mechanical and procedural to fit the multi-dimensional and emotionally charged issues for which critical thinking is urgently needed (Paul 1984). For such issues, a more dialectical process is advocated, in which competing relevant world views are identified, their implications explored, and some sort of creative synthesis attempted.

6. Components of the Process

If one considers the critical thinking process illustrated by the 11 examples, one can identify distinct kinds of mental acts and mental states that form part of it. To distinguish, label and briefly characterize these components is a useful preliminary to identifying abilities, skills, dispositions, attitudes, habits and the like that contribute causally to thinking critically. Identifying such abilities and habits is in turn a useful preliminary to setting educational goals. Setting the goals is in its turn a useful preliminary to designing strategies for helping learners to achieve the goals and to designing ways of measuring the extent to which learners have done so. Such measures provide both feedback to learners on their achievement and a basis for experimental research on the effectiveness of various strategies for educating people to think critically. Let us begin, then, by distinguishing the kinds of mental acts and mental events that can occur in a critical thinking process.

  • Observing: One notices something in one’s immediate environment (sudden cooling of temperature in Weather, bubbles forming outside a glass and then going inside in Bubbles, a moving blur in the distance in Blur, a rash in Rash). Or one notes the results of an experiment or systematic observation (valuables missing in Disorder, no suction without air pressure in Suction pump).
  • Feeling: One feels puzzled or uncertain about something (how to get to an appointment on time in Transit, why the diamonds vary in spacing in Diamond). One wants to resolve this perplexity. One feels satisfaction once one has worked out an answer (to take the subway express in Transit, diamonds closer when needed as a warning in Diamond).
  • Wondering: One formulates a question to be addressed (why bubbles form outside a tumbler taken from hot water in Bubbles, how suction pumps work in Suction pump, what caused the rash in Rash).
  • Imagining: One thinks of possible answers (bus or subway or elevated in Transit, flagpole or ornament or wireless communication aid or direction indicator in Ferryboat, allergic reaction or heat rash in Rash).
  • Inferring: One works out what would be the case if a possible answer were assumed (valuables missing if there has been a burglary in Disorder, earlier start to the rash if it is an allergic reaction to a sulfa drug in Rash). Or one draws a conclusion once sufficient relevant evidence is gathered (take the subway in Transit, burglary in Disorder, discontinue blood pressure medication and new cream in Rash).
  • Knowledge: One uses stored knowledge of the subject-matter to generate possible answers or to infer what would be expected on the assumption of a particular answer (knowledge of a city’s public transit system in Transit, of the requirements for a flagpole in Ferryboat, of Boyle’s law in Bubbles, of allergic reactions in Rash).
  • Experimenting: One designs and carries out an experiment or a systematic observation to find out whether the results deduced from a possible answer will occur (looking at the location of the flagpole in relation to the pilot’s position in Ferryboat, putting an ice cube on top of a tumbler taken from hot water in Bubbles, measuring the height to which a suction pump will draw water at different elevations in Suction pump, noticing the spacing of diamonds when movement to or from a diamond lane is allowed in Diamond).
  • Consulting: One finds a source of information, gets the information from the source, and makes a judgment on whether to accept it. None of our 11 examples include searching for sources of information. In this respect they are unrepresentative, since most people nowadays have almost instant access to information relevant to answering any question, including many of those illustrated by the examples. However, Candidate includes the activities of extracting information from sources and evaluating its credibility.
  • Identifying and analyzing arguments: One notices an argument and works out its structure and content as a preliminary to evaluating its strength. This activity is central to Candidate. It is an important part of a critical thinking process in which one surveys arguments for various positions on an issue.
  • Judging: One makes a judgment on the basis of accumulated evidence and reasoning, such as the judgment in Ferryboat that the purpose of the pole is to provide direction to the pilot.
  • Deciding: One makes a decision on what to do or on what policy to adopt, as in the decision in Transit to take the subway.

By definition, a person who does something voluntarily is both willing and able to do that thing at that time. Both the willingness and the ability contribute causally to the person’s action, in the sense that the voluntary action would not occur if either (or both) of these were lacking. For example, suppose that one is standing with one’s arms at one’s sides and one voluntarily lifts one’s right arm to an extended horizontal position. One would not do so if one were unable to lift one’s arm, if for example one’s right side was paralyzed as the result of a stroke. Nor would one do so if one were unwilling to lift one’s arm, if for example one were participating in a street demonstration at which a white supremacist was urging the crowd to lift their right arm in a Nazi salute and one were unwilling to express support in this way for the racist Nazi ideology. The same analysis applies to a voluntary mental process of thinking critically. It requires both willingness and ability to think critically, including willingness and ability to perform each of the mental acts that compose the process and to coordinate those acts in a sequence that is directed at resolving the initiating perplexity.

Consider willingness first. We can identify causal contributors to willingness to think critically by considering factors that would cause a person who was able to think critically about an issue nevertheless not to do so (Hamby 2014). For each factor, the opposite condition thus contributes causally to willingness to think critically on a particular occasion. For example, people who habitually jump to conclusions without considering alternatives will not think critically about issues that arise, even if they have the required abilities. The contrary condition of willingness to suspend judgment is thus a causal contributor to thinking critically.

Now consider ability. In contrast to the ability to move one’s arm, which can be completely absent because a stroke has left the arm paralyzed, the ability to think critically is a developed ability, whose absence is not a complete absence of ability to think but absence of ability to think well. We can identify the ability to think well directly, in terms of the norms and standards for good thinking. In general, to be able to do well the thinking activities that can be components of a critical thinking process, one needs to know the concepts and principles that characterize their good performance, to recognize in particular cases that the concepts and principles apply, and to apply them. The knowledge, recognition and application may be procedural rather than declarative. It may be domain-specific rather than widely applicable, and in either case may need subject-matter knowledge, sometimes of a deep kind.

Reflections of the sort illustrated by the previous two paragraphs have led scholars to identify the knowledge, abilities and dispositions of a “critical thinker”, i.e., someone who thinks critically whenever it is appropriate to do so. We turn now to these three types of causal contributors to thinking critically. We start with dispositions, since arguably these are the most powerful contributors to being a critical thinker, can be fostered at an early stage of a child’s development, and are susceptible to general improvement (Glaser 1941: 175).

8. Critical Thinking Dispositions

Educational researchers use the term ‘dispositions’ broadly for the habits of mind and attitudes that contribute causally to being a critical thinker. Some writers (e.g., Paul & Elder 2006; Hamby 2014; Bailin & Battersby 2016a) propose to use the term ‘virtues’ for this dimension of a critical thinker. The virtues in question, although they are virtues of character, concern the person’s ways of thinking rather than the person’s ways of behaving towards others. They are not moral virtues but intellectual virtues, of the sort articulated by Zagzebski (1996) and discussed by Turri, Alfano, and Greco (2017).

On a realist conception, thinking dispositions or intellectual virtues are real properties of thinkers. They are general tendencies, propensities, or inclinations to think in particular ways in particular circumstances, and can be genuinely explanatory (Siegel 1999). Sceptics argue that there is no evidence for a specific mental basis for the habits of mind that contribute to thinking critically, and that it is pedagogically misleading to posit such a basis (Bailin et al. 1999a). Whatever their status, critical thinking dispositions need motivation for their initial formation in a child—motivation that may be external or internal. As children develop, the force of habit will gradually become important in sustaining the disposition (Nieto & Valenzuela 2012). Mere force of habit, however, is unlikely to sustain critical thinking dispositions. Critical thinkers must value and enjoy using their knowledge and abilities to think things through for themselves. They must be committed to, and lovers of, inquiry.

A person may have a critical thinking disposition with respect to only some kinds of issues. For example, one could be open-minded about scientific issues but not about religious issues. Similarly, one could be confident in one’s ability to reason about the theological implications of the existence of evil in the world but not in one’s ability to reason about the best design for a guided ballistic missile.

Facione (1990a: 25) divides “affective dispositions” of critical thinking into approaches to life and living in general and approaches to specific issues, questions or problems. Adapting this distinction, one can usefully divide critical thinking dispositions into initiating dispositions (those that contribute causally to starting to think critically about an issue) and internal dispositions (those that contribute causally to doing a good job of thinking critically once one has started). The two categories are not mutually exclusive. For example, open-mindedness, in the sense of willingness to consider alternative points of view to one’s own, is both an initiating and an internal disposition.

8.1 Initiating Dispositions

Using the strategy of considering factors that would block people with the ability to think critically from doing so, we can identify as initiating dispositions for thinking critically attentiveness, a habit of inquiry, self-confidence, courage, open-mindedness, willingness to suspend judgment, trust in reason, wanting evidence for one’s beliefs, and seeking the truth. We consider briefly what each of these dispositions amounts to, in each case citing sources that acknowledge them.

  • Attentiveness: One will not think critically if one fails to recognize an issue that needs to be thought through. For example, the pedestrian in Weather would not have looked up if he had not noticed that the air was suddenly cooler. To be a critical thinker, then, one needs to be habitually attentive to one’s surroundings, noticing not only what one senses but also sources of perplexity in messages received and in one’s own beliefs and attitudes (Facione 1990a: 25; Facione, Facione, & Giancarlo 2001).
  • Habit of inquiry: Inquiry is effortful, and one needs an internal push to engage in it. For example, the student in Bubbles could easily have stopped at idle wondering about the cause of the bubbles rather than reasoning to a hypothesis, then designing and executing an experiment to test it. Thus willingness to think critically needs mental energy and initiative. What can supply that energy? Love of inquiry, or perhaps just a habit of inquiry. Hamby (2015) has argued that willingness to inquire is the central critical thinking virtue, one that encompasses all the others. It is recognized as a critical thinking disposition by Dewey (1910: 29; 1933: 35), Glaser (1941: 5), Ennis (1987: 12; 1991: 8), Facione (1990a: 25), Bailin et al. (1999b: 294), Halpern (1998: 452), and Facione, Facione, & Giancarlo (2001).
  • Self-confidence: Lack of confidence in one’s abilities can block critical thinking. For example, if the woman in Rash lacked confidence in her ability to figure things out for herself, she might just have assumed that the rash on her chest was the allergic reaction to her medication against which the pharmacist had warned her. Thus willingness to think critically requires confidence in one’s ability to inquire (Facione 1990a: 25; Facione, Facione, & Giancarlo 2001).
  • Courage: Fear of thinking for oneself can stop one from doing it. Thus willingness to think critically requires intellectual courage (Paul & Elder 2006: 16).
  • Open-mindedness: A dogmatic attitude will impede thinking critically. For example, a person who adheres rigidly to a “pro-choice” position on the issue of the legal status of induced abortion is likely to be unwilling to consider seriously the issue of when in its development an unborn child acquires a moral right to life. Thus willingness to think critically requires open-mindedness, in the sense of a willingness to examine questions to which one already accepts an answer but which further evidence or reasoning might cause one to answer differently (Dewey 1933; Facione 1990a; Ennis 1991; Bailin et al. 1999b; Halpern 1998; Facione, Facione, & Giancarlo 2001). Paul (1981) emphasizes open-mindedness about alternative world-views, and recommends a dialectical approach to integrating such views as central to what he calls “strong sense” critical thinking. In three studies, Haran, Ritov, & Mellers (2013) found that actively open-minded thinking, including “the tendency to weigh new evidence against a favored belief, to spend sufficient time on a problem before giving up, and to consider carefully the opinions of others in forming one’s own”, led study participants to acquire information and thus to make accurate estimations.
  • Willingness to suspend judgment: Premature closure on an initial solution will block critical thinking. Thus willingness to think critically requires a willingness to suspend judgment while alternatives are explored (Facione 1990a; Ennis 1991; Halpern 1998).
  • Trust in reason: Since distrust in the processes of reasoned inquiry will dissuade one from engaging in it, trust in them is an initiating critical thinking disposition (Facione 1990a: 25; Bailin et al. 1999b: 294; Facione, Facione, & Giancarlo 2001; Paul & Elder 2006). In reaction to an allegedly exclusive emphasis on reason in critical thinking theory and pedagogy, Thayer-Bacon (2000) argues that intuition, imagination, and emotion have important roles to play in an adequate conception of critical thinking that she calls “constructive thinking”. From her point of view, critical thinking requires trust not only in reason but also in intuition, imagination, and emotion.
  • Seeking the truth: If one does not care about the truth but is content to stick with one’s initial bias on an issue, then one will not think critically about it. Seeking the truth is thus an initiating critical thinking disposition (Bailin et al. 1999b: 294; Facione, Facione, & Giancarlo 2001). A disposition to seek the truth is implicit in more specific critical thinking dispositions, such as trying to be well-informed, considering seriously points of view other than one’s own, looking for alternatives, suspending judgment when the evidence is insufficient, and adopting a position when the evidence supporting it is sufficient.

8.2 Internal Dispositions

Some of the initiating dispositions, such as open-mindedness and willingness to suspend judgment, are also internal critical thinking dispositions, in the sense of mental habits or attitudes that contribute causally to doing a good job of critical thinking once one starts the process. But there are many other internal critical thinking dispositions. Some of them are parasitic on one’s conception of good thinking. For example, it is constitutive of good thinking about an issue to formulate the issue clearly and to maintain focus on it. For this purpose, one needs not only the corresponding ability but also the corresponding disposition. Ennis (1991: 8) describes it as the disposition “to determine and maintain focus on the conclusion or question”, Facione (1990a: 25) as “clarity in stating the question or concern”. Other internal dispositions are motivators to continue or adjust the critical thinking process, such as willingness to persist in a complex task and willingness to abandon nonproductive strategies in an attempt to self-correct (Halpern 1998: 452). For a list of identified internal critical thinking dispositions, see the Supplement on Internal Critical Thinking Dispositions.

9. Critical Thinking Abilities

Some theorists postulate skills, i.e., acquired abilities, as operative in critical thinking. It is not obvious, however, that a good mental act is the exercise of a generic acquired skill. Inferring an expected time of arrival, as in Transit, has some generic components but also uses non-generic subject-matter knowledge. Bailin et al. (1999a) argue against viewing critical thinking skills as generic and discrete, on the ground that skilled performance at a critical thinking task cannot be separated from knowledge of concepts and from domain-specific principles of good thinking. Talk of skills, they concede, is unproblematic if it means merely that a person with critical thinking skills is capable of intelligent performance.

Despite such scepticism, theorists of critical thinking have listed as general contributors to critical thinking what they variously call abilities (Glaser 1941; Ennis 1962, 1991), skills (Facione 1990a; Halpern 1998) or competencies (Fisher & Scriven 1997). Amalgamating these lists would produce an unwieldy assortment of more than 50 possible educational objectives, with only partial overlap among them. It makes sense instead to try to understand the reasons for the multiplicity and diversity, and to make a selection according to one’s own reasons for singling out abilities to be developed in a critical thinking curriculum. Two reasons for diversity among lists of critical thinking abilities are the underlying conception of critical thinking and the envisaged educational level. Appraisal-only conceptions, for example, involve a different suite of abilities than constructive-only conceptions. Some lists, such as those in Glaser (1941), are put forward as educational objectives for secondary school students, whereas others are proposed as objectives for college students (e.g., Facione 1990a).

The abilities described in the remaining paragraphs of this section emerge from reflection on the general abilities needed to do well the thinking activities identified in section 6 as components of the critical thinking process described in section 5. The derivation of each collection of abilities is accompanied by citation of sources that list such abilities and of standardized tests that claim to test them.

Observational abilities: Careful and accurate observation sometimes requires specialist expertise and practice, as in the case of observing birds and observing accident scenes. However, there are general abilities of noticing what one’s senses are picking up from one’s environment and of being able to articulate clearly and accurately to oneself and others what one has observed. It helps in exercising them to be able to recognize and take into account factors that make one’s observation less trustworthy, such as prior framing of the situation, inadequate time, deficient senses, poor observation conditions, and the like. It helps as well to be skilled at taking steps to make one’s observation more trustworthy, such as moving closer to get a better look, measuring something three times and taking the average, and checking what one thinks one is observing with someone else who is in a good position to observe it. It also helps to be skilled at recognizing respects in which one’s report of one’s observation involves inference rather than direct observation, so that one can then consider whether the inference is justified. These abilities come into play as well when one thinks about whether and with what degree of confidence to accept an observation report, for example in the study of history or in a criminal investigation or in assessing news reports. Observational abilities show up in some lists of critical thinking abilities (Ennis 1962: 90; Facione 1990a: 16; Ennis 1991: 9). There are items testing a person’s ability to judge the credibility of observation reports in the Cornell Critical Thinking Tests, Levels X and Z (Ennis & Millman 1971; Ennis, Millman, & Tomko 1985, 2005). The test of Norris and King (1983, 1985, 1990a, 1990b) assesses ability to appraise observation reports.

Emotional abilities: The emotions that drive a critical thinking process are perplexity or puzzlement, a wish to resolve it, and satisfaction at achieving the desired resolution. Children experience these emotions at an early age, without being trained to do so. Education that takes critical thinking as a goal needs only to channel these emotions and to make sure not to stifle them. Collaborative critical thinking benefits from ability to recognize one’s own and others’ emotional commitments and reactions.

Questioning abilities: A critical thinking process needs transformation of an inchoate sense of perplexity into a clear question. Formulating a question well requires not building in questionable assumptions, not prejudging the issue, and using language that in context is unambiguous and precise enough (Ennis 1962: 97; 1991: 9).

Imaginative abilities: Thinking directed at finding the correct causal explanation of a general phenomenon or particular event requires an ability to imagine possible explanations. Thinking about what policy or plan of action to adopt requires generation of options and consideration of possible consequences of each option. Domain knowledge is required for such creative activity, but a general ability to imagine alternatives is helpful and can be nurtured so as to become easier, quicker, more extensive, and deeper (Dewey 1910: 34–39; 1933: 40–47). Facione (1990a) and Halpern (1998) include the ability to imagine alternatives as a critical thinking ability.

Inferential abilities: The ability to draw conclusions from given information, and to recognize with what degree of certainty one’s own or others’ conclusions follow, is universally recognized as a general critical thinking ability. All 11 examples in section 2 of this article include inferences, some from hypotheses or options (as in Transit, Ferryboat and Disorder), others from something observed (as in Weather and Rash). None of these inferences is formally valid. Rather, they are licensed by general, sometimes qualified substantive rules of inference (Toulmin 1958) that rest on domain knowledge—that a bus trip takes about the same time in each direction, that the terminal of a wireless telegraph would be located at the highest possible point, that sudden cooling is often followed by rain, that an allergic reaction to a sulfa drug generally shows up soon after one starts taking it. It is a matter of controversy to what extent the specialized ability to deduce conclusions from premisses using formal rules of inference is needed for critical thinking. Dewey (1933) locates logical forms in setting out the products of reflection rather than in the process of reflection. Ennis (1981a), on the other hand, maintains that a liberally-educated person should have the following abilities: to translate natural-language statements into statements using the standard logical operators, to use appropriately the language of necessary and sufficient conditions, to deal with argument forms and arguments containing symbols, to determine whether in virtue of an argument’s form its conclusion follows necessarily from its premisses, to reason with logically complex propositions, and to apply the rules and procedures of deductive logic. Inferential abilities are recognized as critical thinking abilities by Glaser (1941: 6), Facione (1990a: 9), Ennis (1991: 9), Fisher & Scriven (1997: 99, 111), and Halpern (1998: 452).
Items testing inferential abilities constitute two of the five subtests of the Watson–Glaser Critical Thinking Appraisal (Watson & Glaser 1980a, 1980b, 1994), two of the four sections in the Cornell Critical Thinking Test Level X (Ennis & Millman 1971; Ennis, Millman, & Tomko 1985, 2005), three of the seven sections in the Cornell Critical Thinking Test Level Z (Ennis & Millman 1971; Ennis, Millman, & Tomko 1985, 2005), 11 of the 34 items on Forms A and B of the California Critical Thinking Skills Test (Facione 1990b, 1992), and a high but variable proportion of the 25 selected-response questions in the Collegiate Learning Assessment (Council for Aid to Education 2017).
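To make the contrast above concrete, the following schematic gloss (my own illustration, not drawn from the cited authors) juxtaposes a formally valid inference with a substantive, defeasible rule of inference of the kind Toulmin describes, using the bus-trip warrant from the Transit example:

```latex
% Formally valid (modus ponens): the conclusion follows in virtue of
% the argument's form alone, whatever P and Q stand for.
\frac{P \rightarrow Q \qquad P}{Q}

% Substantive rule of inference (a Toulmin-style warrant), licensed by
% domain knowledge and defeasible rather than formally valid:
%   Datum:   this bus trip into town took 30 minutes.
%   Warrant: a bus trip takes about the same time in each direction.
%   Claim:   so, presumably, the return trip will take about 30 minutes.
```

The qualifier “presumably” marks the difference: the substantive inference can be defeated by further information (road closures, rush-hour traffic) without any formal mistake having been made.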

Experimenting abilities: Knowing how to design and execute an experiment is important not just in scientific research but also in everyday life, as in Rash. Dewey devoted a whole chapter of his How We Think (1910: 145–156; 1933: 190–202) to the superiority of experimentation over observation in advancing knowledge. Experimenting abilities come into play at one remove in appraising reports of scientific studies. Skill in designing and executing experiments includes the acknowledged abilities to appraise evidence (Glaser 1941: 6), to carry out experiments and to apply appropriate statistical inference techniques (Facione 1990a: 9), to judge inductions to an explanatory hypothesis (Ennis 1991: 9), and to recognize the need for an adequately large sample size (Halpern 1998). The Cornell Critical Thinking Test Level Z (Ennis & Millman 1971; Ennis, Millman, & Tomko 1985, 2005) includes four items (out of 52) on experimental design. The Collegiate Learning Assessment (Council for Aid to Education 2017) makes room for appraisal of study design in both its performance task and its selected-response questions.

Consulting abilities: Skill at consulting sources of information comes into play when one seeks information to help resolve a problem, as in Candidate. Ability to find and appraise information includes ability to gather and marshal pertinent information (Glaser 1941: 6), to judge whether a statement made by an alleged authority is acceptable (Ennis 1962: 84), to plan a search for desired information (Facione 1990a: 9), and to judge the credibility of a source (Ennis 1991: 9). Ability to judge the credibility of statements is tested by 24 items (out of 76) in the Cornell Critical Thinking Test Level X (Ennis & Millman 1971; Ennis, Millman, & Tomko 1985, 2005) and by four items (out of 52) in the Cornell Critical Thinking Test Level Z (Ennis & Millman 1971; Ennis, Millman, & Tomko 1985, 2005). The Collegiate Learning Assessment’s performance task requires evaluation of whether information in documents is credible or unreliable (Council for Aid to Education 2017).

Argument analysis abilities: The ability to identify and analyze arguments contributes to the process of surveying arguments on an issue in order to form one’s own reasoned judgment, as in Candidate. The ability to detect and analyze arguments is recognized as a critical thinking skill by Facione (1990a: 7–8), Ennis (1991: 9) and Halpern (1998). Five items (out of 34) on the California Critical Thinking Skills Test (Facione 1990b, 1992) test skill at argument analysis. The Collegiate Learning Assessment (Council for Aid to Education 2017) incorporates argument analysis in its selected-response tests of critical reading and evaluation and of critiquing an argument.

Judging skills and deciding skills: Skill at judging and deciding is skill at recognizing what judgment or decision the available evidence and argument supports, and with what degree of confidence. It is thus a component of the inferential skills already discussed.

Lists and tests of critical thinking abilities often include two more abilities: identifying assumptions and constructing and evaluating definitions.

10. Required Knowledge

In addition to dispositions and abilities, critical thinking needs knowledge: of critical thinking concepts, of critical thinking principles, and of the subject-matter of the thinking.

We can derive a short list of concepts whose understanding contributes to critical thinking from the critical thinking abilities described in the preceding section. Observational abilities require an understanding of the difference between observation and inference. Questioning abilities require an understanding of the concepts of ambiguity and vagueness. Inferential abilities require an understanding of the difference between conclusive and defeasible inference (traditionally, between deduction and induction), as well as of the difference between necessary and sufficient conditions. Experimenting abilities require an understanding of the concepts of hypothesis, null hypothesis, assumption and prediction, as well as of the concept of statistical significance and of its difference from importance. They also require an understanding of the difference between an experiment and an observational study, and in particular of the difference between a randomized controlled trial, a prospective correlational study and a retrospective (case-control) study. Argument analysis abilities require an understanding of the concepts of argument, premiss, assumption, conclusion and counter-consideration. Additional critical thinking concepts are proposed by Bailin et al. (1999b: 293), Fisher & Scriven (1997: 105–106), Black (2012), and Blair (2021).
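The distinction between necessary and sufficient conditions invoked above can be stated schematically; the following is a standard textbook formulation, added here for illustration:

```latex
% A is a sufficient condition for B: A's holding guarantees B's holding.
A \rightarrow B
% A is a necessary condition for B: B cannot hold without A.
B \rightarrow A
\quad \text{(equivalently, } \neg A \rightarrow \neg B \text{)}
% Confusing the two amounts to inferring B -> A from A -> B,
% i.e., the fallacy of affirming the consequent.
```

Grasping this schema is what underwrites the general ability, mentioned later in this entry, to recognize confusion of necessary and sufficient conditions across subject-matters.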

According to Glaser (1941: 25), ability to think critically requires knowledge of the methods of logical inquiry and reasoning. If we review the list of abilities in the preceding section, however, we can see that some of them can be acquired and exercised merely through practice, possibly guided in an educational setting, followed by feedback. Searching intelligently for a causal explanation of some phenomenon or event requires that one consider a full range of possible causal contributors, but it seems more important that one implements this principle in one’s practice than that one is able to articulate it. What is important is “operational knowledge” of the standards and principles of good thinking (Bailin et al. 1999b: 291–293). But the development of such critical thinking abilities as designing an experiment or constructing an operational definition can benefit from learning their underlying theory. Further, explicit knowledge of quirks of human thinking seems useful as a cautionary guide. Human memory is not just fallible about details, as people learn from their own experiences of misremembering, but is so malleable that a detailed, clear and vivid recollection of an event can be a total fabrication (Loftus 2017). People seek or interpret evidence in ways that are partial to their existing beliefs and expectations, often unconscious of their “confirmation bias” (Nickerson 1998). Not only are people subject to this and other cognitive biases (Kahneman 2011), of which they are typically unaware, but it may be counter-productive for one to make oneself aware of them and try consciously to counteract them or to counteract social biases such as racial or sexual stereotypes (Kenyon & Beaulac 2014). 
It is helpful to be aware of these facts and of the superior effectiveness of blocking the operation of biases—for example, by making an immediate record of one’s observations, refraining from forming a preliminary explanatory hypothesis, blind refereeing, double-blind randomized trials, and blind grading of students’ work. It is also helpful to be aware of the prevalence of “noise” (unwanted unsystematic variability of judgments), of how to detect noise (through a noise audit), and of how to reduce noise: make accuracy the goal, think statistically, break a process of arriving at a judgment into independent tasks, resist premature intuitions, in a group get independent judgments first, favour comparative judgments and scales (Kahneman, Sibony, & Sunstein 2021). It is helpful as well to be aware of the concept of “bounded rationality” in decision-making and of the related distinction between “satisficing” and optimizing (Simon 1956; Gigerenzer 2001).

Critical thinking about an issue requires substantive knowledge of the domain to which the issue belongs. Critical thinking abilities are not a magic elixir that can be applied to any issue whatever by somebody who has no knowledge of the facts relevant to exploring that issue. For example, the student in Bubbles needed to know that gases do not penetrate solid objects like a glass, that air expands when heated, that the volume of an enclosed gas varies directly with its temperature and inversely with its pressure, and that hot objects will spontaneously cool down to the ambient temperature of their surroundings unless kept hot by insulation or a source of heat. Critical thinkers thus need a rich fund of subject-matter knowledge relevant to the variety of situations they encounter. This fact is recognized in the inclusion among critical thinking dispositions of a concern to become and remain generally well informed.
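The quantitative part of the knowledge cited for Bubbles can be stated compactly. These are the standard gas-law relations that the verbal formulation above amounts to:

```latex
% For a fixed quantity of enclosed gas, volume varies directly with
% (absolute) temperature and inversely with pressure:
V \propto \frac{T}{P}
% Boyle's law, the special case at constant temperature:
P_1 V_1 = P_2 V_2
```

Together with the other facts listed above, these relations license the student’s inference that air in the tumbler expands when the hot glass heats it and contracts as it cools.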

11. Educational Methods

Experimental educational interventions, with control groups, have shown that education can improve critical thinking skills and dispositions, as measured by standardized tests. For information about these tests, see the Supplement on Assessment.

What educational methods are most effective at developing the dispositions, abilities and knowledge of a critical thinker? In a comprehensive meta-analysis of experimental and quasi-experimental studies of strategies for teaching students to think critically, Abrami et al. (2015) found that dialogue, anchored instruction, and mentoring each increased the effectiveness of the educational intervention, and that they were most effective when combined. They also found that in these studies a combination of separate instruction in critical thinking with subject-matter instruction in which students are encouraged to think critically was more effective than either by itself. However, the difference was not statistically significant; that is, it might have arisen by chance.

Most of these studies lack the longitudinal follow-up required to determine whether the observed differential improvements in critical thinking abilities or dispositions continue over time, for example until high school or college graduation. For details on studies of methods of developing critical thinking skills and dispositions, see the Supplement on Educational Methods.

12. Controversies

Scholars have denied the generalizability of critical thinking abilities across subject domains, have alleged bias in critical thinking theory and pedagogy, and have investigated the relationship of critical thinking to other kinds of thinking.

12.1 The Generalizability of Critical Thinking

McPeck (1981) attacked the thinking skills movement of the 1970s, including the critical thinking movement. He argued that there are no general thinking skills, since thinking is always thinking about some subject-matter. It is futile, he claimed, for schools and colleges to teach thinking as if it were a separate subject. Rather, teachers should lead their pupils to become autonomous thinkers by teaching school subjects in a way that brings out their cognitive structure and that encourages and rewards discussion and argument. As some of his critics (e.g., Paul 1985; Siegel 1985) pointed out, McPeck’s central argument needs elaboration, since it has obvious counter-examples in writing and speaking, for which (up to a certain level of complexity) there are teachable general abilities even though they are always about some subject-matter. To make his argument convincing, McPeck needs to explain how thinking differs from writing and speaking in a way that does not permit useful abstraction of its components from the subject-matters with which it deals. He has not done so. Nevertheless, his position that the dispositions and abilities of a critical thinker are best developed in the context of subject-matter instruction is shared by many theorists of critical thinking, including Dewey (1910, 1933), Glaser (1941), Passmore (1980), Weinstein (1990), Bailin et al. (1999b), and Willingham (2019).

McPeck’s challenge prompted reflection on the extent to which critical thinking is subject-specific. McPeck argued for a strong subject-specificity thesis, according to which it is a conceptual truth that all critical thinking abilities are specific to a subject. (He did not however extend his subject-specificity thesis to critical thinking dispositions. In particular, he took the disposition to suspend judgment in situations of cognitive dissonance to be a general disposition.) Conceptual subject-specificity is subject to obvious counter-examples, such as the general ability to recognize confusion of necessary and sufficient conditions. A more modest thesis, also endorsed by McPeck, is epistemological subject-specificity, according to which the norms of good thinking vary from one field to another. Epistemological subject-specificity clearly holds to a certain extent; for example, the principles in accordance with which one solves a differential equation are quite different from the principles in accordance with which one determines whether a painting is a genuine Picasso. But the thesis suffers, as Ennis (1989) points out, from vagueness of the concept of a field or subject and from the obvious existence of inter-field principles, however broadly the concept of a field is construed. For example, the principles of hypothetico-deductive reasoning hold for all the varied fields in which such reasoning occurs. A third kind of subject-specificity is empirical subject-specificity, according to which as a matter of empirically observable fact a person with the abilities and dispositions of a critical thinker in one area of investigation will not necessarily have them in another area of investigation.

The thesis of empirical subject-specificity raises the general problem of transfer. If critical thinking abilities and dispositions have to be developed independently in each school subject, how are they of any use in dealing with the problems of everyday life and the political and social issues of contemporary society, most of which do not fit into the framework of a traditional school subject? Proponents of empirical subject-specificity tend to argue that transfer is more likely to occur if there is critical thinking instruction in a variety of domains, with explicit attention to dispositions and abilities that cut across domains. But evidence for this claim is scanty. There is a need for well-designed empirical studies that investigate the conditions that make transfer more likely.

It is common ground in debates about the generality or subject-specificity of critical thinking dispositions and abilities that critical thinking about any topic requires background knowledge about the topic. For example, the most sophisticated understanding of the principles of hypothetico-deductive reasoning is of no help unless accompanied by some knowledge of what might be plausible explanations of some phenomenon under investigation.

Critics have objected to bias in the theory, pedagogy and practice of critical thinking. Commentators (e.g., Alston 1995; Ennis 1998) have noted that anyone who takes a position has a bias in the neutral sense of being inclined in one direction rather than others. The critics, however, are objecting to bias in the pejorative sense of an unjustified favouring of certain ways of knowing over others, frequently alleging that the unjustly favoured ways are those of a dominant sex or culture (Bailin 1995). These ways favour:

  • reinforcement of egocentric and sociocentric biases over dialectical engagement with opposing world-views (Paul 1981, 1984; Warren 1998)
  • distancing from the object of inquiry over closeness to it (Martin 1992; Thayer-Bacon 1992)
  • indifference to the situation of others over care for them (Martin 1992)
  • orientation to thought over orientation to action (Martin 1992)
  • being reasonable over caring to understand people’s ideas (Thayer-Bacon 1993)
  • being neutral and objective over being embodied and situated (Thayer-Bacon 1995a)
  • doubting over believing (Thayer-Bacon 1995b)
  • reason over emotion, imagination and intuition (Thayer-Bacon 2000)
  • solitary thinking over collaborative thinking (Thayer-Bacon 2000)
  • written and spoken assignments over other forms of expression (Alston 2001)
  • attention to written and spoken communications over attention to human problems (Alston 2001)
  • winning debates in the public sphere over making and understanding meaning (Alston 2001)

A common thread in this smorgasbord of accusations is dissatisfaction with focusing on the logical analysis and evaluation of reasoning and arguments. While these authors acknowledge that such analysis and evaluation is part of critical thinking and should be part of its conceptualization and pedagogy, they insist that it is only a part. Paul (1981), for example, bemoans the tendency of atomistic teaching of methods of analyzing and evaluating arguments to turn students into more able sophists, adept at finding fault with positions and arguments with which they disagree but even more entrenched in the egocentric and sociocentric biases with which they began. Martin (1992) and Thayer-Bacon (1992) cite with approval the self-reported intimacy with their subject-matter of leading researchers in biology and medicine, an intimacy that conflicts with the distancing allegedly recommended in standard conceptions and pedagogy of critical thinking. Thayer-Bacon (2000) contrasts the embodied and socially embedded learning of her elementary school students in a Montessori school, who used their imagination, intuition and emotions as well as their reason, with conceptions of critical thinking as

thinking that is used to critique arguments, offer justifications, and make judgments about what are the good reasons, or the right answers. (Thayer-Bacon 2000: 127–128)

Alston (2001) reports that her students in a women’s studies class were able to see the flaws in the Cinderella myth that pervades much romantic fiction but in their own romantic relationships still acted as if all failures were the woman’s fault and still accepted the notions of love at first sight and living happily ever after. Students, she writes, should

be able to connect their intellectual critique to a more affective, somatic, and ethical account of making risky choices that have sexist, racist, classist, familial, sexual, or other consequences for themselves and those both near and far… critical thinking that reads arguments, texts, or practices merely on the surface without connections to feeling/desiring/doing or action lacks an ethical depth that should infuse the difference between mere cognitive activity and something we want to call critical thinking. (Alston 2001: 34)

Some critics portray such biases as unfair to women. Thayer-Bacon (1992), for example, has charged modern critical thinking theory with being sexist, on the ground that it separates the self from the object and causes one to lose touch with one’s inner voice, and thus stigmatizes women, who (she asserts) link self to object and listen to their inner voice. Her charge does not imply that women as a group are on average less able than men to analyze and evaluate arguments. Facione (1990c) found no difference by sex in performance on his California Critical Thinking Skills Test. Kuhn (1991: 280–281) found no difference by sex in either the disposition or the competence to engage in argumentative thinking.

The critics propose a variety of remedies for the biases that they allege. In general, they do not propose to eliminate or downplay critical thinking as an educational goal. Rather, they propose to conceptualize critical thinking differently and to change its pedagogy accordingly. Their pedagogical proposals arise logically from their objections. They can be summarized as follows:

  • Focus on argument networks with dialectical exchanges reflecting contesting points of view rather than on atomic arguments, so as to develop “strong sense” critical thinking that transcends egocentric and sociocentric biases (Paul 1981, 1984).
  • Foster closeness to the subject-matter and feeling connected to others in order to inform a humane democracy (Martin 1992).
  • Develop “constructive thinking” as a social activity in a community of physically embodied and socially embedded inquirers with personal voices who value not only reason but also imagination, intuition and emotion (Thayer-Bacon 2000).
  • In developing critical thinking in school subjects, treat as important neither skills nor dispositions but opening worlds of meaning (Alston 2001).
  • Attend to the development of critical thinking dispositions as well as skills, and adopt the “critical pedagogy” practised and advocated by Freire (1968 [1970]) and hooks (1994) (Dalgleish, Girard, & Davies 2017).

A common thread in these proposals is treatment of critical thinking as a social, interactive, personally engaged activity like that of a quilting bee or a barn-raising (Thayer-Bacon 2000) rather than as an individual, solitary, distanced activity symbolized by Rodin’s The Thinker . One can get a vivid description of education with the former type of goal from the writings of bell hooks (1994, 2010). Critical thinking for her is open-minded dialectical exchange across opposing standpoints and from multiple perspectives, a conception similar to Paul’s “strong sense” critical thinking (Paul 1981). She abandons the structure of domination in the traditional classroom. In an introductory course on black women writers, for example, she assigns students to write an autobiographical paragraph about an early racial memory, then to read it aloud as the others listen, thus affirming the uniqueness and value of each voice and creating a communal awareness of the diversity of the group’s experiences (hooks 1994: 84). Her “engaged pedagogy” is thus similar to the “freedom under guidance” implemented in John Dewey’s Laboratory School of Chicago in the late 1890s and early 1900s. It incorporates the dialogue, anchored instruction, and mentoring that Abrami et al. (2015) found to be most effective in improving critical thinking skills and dispositions.

What is the relationship of critical thinking to problem solving, decision-making, higher-order thinking, creative thinking, and other recognized types of thinking? One’s answer to this question obviously depends on how one defines the terms used in the question. If critical thinking is conceived broadly to cover any careful thinking about any topic for any purpose, then problem solving and decision making will be kinds of critical thinking, if they are done carefully. Historically, ‘critical thinking’ and ‘problem solving’ were two names for the same thing. If critical thinking is conceived more narrowly as consisting solely of appraisal of intellectual products, then it will be disjoint from problem solving and decision making, which are constructive.

Bloom’s taxonomy of educational objectives used the phrase “intellectual abilities and skills” for what had been labeled “critical thinking” by some, “reflective thinking” by Dewey and others, and “problem solving” by still others (Bloom et al. 1956: 38). Thus, the so-called “higher-order thinking skills” at the taxonomy’s top levels of analysis, synthesis and evaluation are just critical thinking skills, although they do not come with general criteria for their assessment (Ennis 1981b). The revised version of Bloom’s taxonomy (Anderson et al. 2001) likewise treats critical thinking as cutting across those types of cognitive process that involve more than remembering (Anderson et al. 2001: 269–270). For details, see the Supplement on History .

As to creative thinking, it overlaps with critical thinking (Bailin 1987, 1988). Thinking about the explanation of some phenomenon or event, as in Ferryboat , requires creative imagination in constructing plausible explanatory hypotheses. Likewise, thinking about a policy question, as in Candidate , requires creativity in coming up with options. Conversely, creativity in any field needs to be balanced by critical appraisal of the draft painting or novel or mathematical theory.

  • Abrami, Philip C., Robert M. Bernard, Eugene Borokhovski, David I. Waddington, C. Anne Wade, and Tonje Persson, 2015, “Strategies for Teaching Students to Think Critically: A Meta-analysis”, Review of Educational Research , 85(2): 275–314. doi:10.3102/0034654314551063
  • Aikin, Wilford M., 1942, The Story of the Eight-year Study, with Conclusions and Recommendations , Volume I of Adventure in American Education , New York and London: Harper & Brothers. [ Aikin 1942 available online ]
  • Alston, Kal, 1995, “Begging the Question: Is Critical Thinking Biased?”, Educational Theory , 45(2): 225–233. doi:10.1111/j.1741-5446.1995.00225.x
  • –––, 2001, “Re/Thinking Critical Thinking: The Seductions of Everyday Life”, Studies in Philosophy and Education , 20(1): 27–40. doi:10.1023/A:1005247128053
  • American Educational Research Association, 2014, Standards for Educational and Psychological Testing / American Educational Research Association, American Psychological Association, National Council on Measurement in Education , Washington, DC: American Educational Research Association.
  • Anderson, Lorin W., David R. Krathwohl, Peter W. Airiasian, Kathleen A. Cruikshank, Richard E. Mayer, Paul R. Pintrich, James Raths, and Merlin C. Wittrock, 2001, A Taxonomy for Learning, Teaching and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives , New York: Longman, complete edition.
  • Bailin, Sharon, 1987, “Critical and Creative Thinking”, Informal Logic , 9(1): 23–30. [ Bailin 1987 available online ]
  • –––, 1988, Achieving Extraordinary Ends: An Essay on Creativity , Dordrecht: Kluwer. doi:10.1007/978-94-009-2780-3
  • –––, 1995, “Is Critical Thinking Biased? Clarifications and Implications”, Educational Theory , 45(2): 191–197. doi:10.1111/j.1741-5446.1995.00191.x
  • Bailin, Sharon and Mark Battersby, 2009, “Inquiry: A Dialectical Approach to Teaching Critical Thinking”, in Juho Ritola (ed.), Argument Cultures: Proceedings of OSSA 09 , CD-ROM (pp. 1–10), Windsor, ON: OSSA. [ Bailin & Battersby 2009 available online ]
  • –––, 2016a, “Fostering the Virtues of Inquiry”, Topoi , 35(2): 367–374. doi:10.1007/s11245-015-9307-6
  • –––, 2016b, Reason in the Balance: An Inquiry Approach to Critical Thinking , Indianapolis: Hackett, 2nd edition.
  • –––, 2021, “Inquiry: Teaching for Reasoned Judgment”, in Daniel Fasko, Jr. and Frank Fair (eds.), Critical Thinking and Reasoning: Theory, Development, Instruction, and Assessment , Leiden: Brill, pp. 31–46. doi: 10.1163/9789004444591_003
  • Bailin, Sharon, Roland Case, Jerrold R. Coombs, and Leroi B. Daniels, 1999a, “Common Misconceptions of Critical Thinking”, Journal of Curriculum Studies , 31(3): 269–283. doi:10.1080/002202799183124
  • –––, 1999b, “Conceptualizing Critical Thinking”, Journal of Curriculum Studies , 31(3): 285–302. doi:10.1080/002202799183133
  • Blair, J. Anthony, 2021, Studies in Critical Thinking , Windsor, ON: Windsor Studies in Argumentation, 2nd edition. [Available online at https://windsor.scholarsportal.info/omp/index.php/wsia/catalog/book/106]
  • Berman, Alan M., Seth J. Schwartz, William M. Kurtines, and Steven L. Berman, 2001, “The Process of Exploration in Identity Formation: The Role of Style and Competence”, Journal of Adolescence , 24(4): 513–528. doi:10.1006/jado.2001.0386
  • Black, Beth (ed.), 2012, An A to Z of Critical Thinking , London: Continuum International Publishing Group.
  • Bloom, Benjamin Samuel, Max D. Engelhart, Edward J. Furst, Walter H. Hill, and David R. Krathwohl, 1956, Taxonomy of Educational Objectives. Handbook I: Cognitive Domain , New York: David McKay.
  • Boardman, Frank, Nancy M. Cavender, and Howard Kahane, 2018, Logic and Contemporary Rhetoric: The Use of Reason in Everyday Life , Boston: Cengage, 13th edition.
  • Browne, M. Neil and Stuart M. Keeley, 2018, Asking the Right Questions: A Guide to Critical Thinking , Hoboken, NJ: Pearson, 12th edition.
  • Center for Assessment & Improvement of Learning, 2017, Critical Thinking Assessment Test , Cookeville, TN: Tennessee Technological University.
  • Cleghorn, Paul, 2021, “Critical Thinking in the Elementary School: Practical Guidance for Building a Culture of Thinking”, in Daniel Fasko, Jr. and Frank Fair (eds.), Critical Thinking and Reasoning: Theory, Development, Instruction, and Assessment , Leiden: Brill, pp. 150–167. doi: 10.1163/9789004444591_010
  • Cohen, Jacob, 1988, Statistical Power Analysis for the Behavioral Sciences , Hillsdale, NJ: Lawrence Erlbaum Associates, 2nd edition.
  • College Board, 1983, Academic Preparation for College. What Students Need to Know and Be Able to Do , New York: College Entrance Examination Board, ERIC document ED232517.
  • Commission on the Relation of School and College of the Progressive Education Association, 1943, Thirty Schools Tell Their Story , Volume V of Adventure in American Education , New York and London: Harper & Brothers.
  • Council for Aid to Education, 2017, CLA+ Student Guide . Available at http://cae.org/images/uploads/pdf/CLA_Student_Guide_Institution.pdf ; last accessed 2022 07 16.
  • Dalgleish, Adam, Patrick Girard, and Maree Davies, 2017, “Critical Thinking, Bias and Feminist Philosophy: Building a Better Framework through Collaboration”, Informal Logic , 37(4): 351–369. [ Dalgleish et al. available online ]
  • Dewey, John, 1910, How We Think , Boston: D.C. Heath. [ Dewey 1910 available online ]
  • –––, 1916, Democracy and Education: An Introduction to the Philosophy of Education , New York: Macmillan.
  • –––, 1933, How We Think: A Restatement of the Relation of Reflective Thinking to the Educative Process , Lexington, MA: D.C. Heath.
  • –––, 1936, “The Theory of the Chicago Experiment”, Appendix II of Mayhew & Edwards 1936: 463–477.
  • –––, 1938, Logic: The Theory of Inquiry , New York: Henry Holt and Company.
  • Dominguez, Caroline (coord.), 2018a, A European Collection of the Critical Thinking Skills and Dispositions Needed in Different Professional Fields for the 21st Century , Vila Real, Portugal: UTAD. Available at http://bit.ly/CRITHINKEDUO1 ; last accessed 2022 07 16.
  • ––– (coord.), 2018b, A European Review on Critical Thinking Educational Practices in Higher Education Institutions , Vila Real: UTAD. Available at http://bit.ly/CRITHINKEDUO2 ; last accessed 2022 07 16.
  • ––– (coord.), 2018c, The CRITHINKEDU European Course on Critical Thinking Education for University Teachers: From Conception to Delivery , Vila Real: UTAD. Available at http://bit.ly/CRITHINKEDU03 ; last accessed 2022 07 16.
  • Dominguez, Caroline and Rita Payan-Carreira (eds.), 2019, Promoting Critical Thinking in European Higher Education Institutions: Towards an Educational Protocol , Vila Real: UTAD. Available at http://bit.ly/CRITHINKEDU04 ; last accessed 2022 07 16.
  • Ennis, Robert H., 1958, “An Appraisal of the Watson-Glaser Critical Thinking Appraisal”, The Journal of Educational Research , 52(4): 155–158. doi:10.1080/00220671.1958.10882558
  • –––, 1962, “A Concept of Critical Thinking: A Proposed Basis for Research on the Teaching and Evaluation of Critical Thinking Ability”, Harvard Educational Review , 32(1): 81–111.
  • –––, 1981a, “A Conception of Deductive Logical Competence”, Teaching Philosophy , 4(3/4): 337–385. doi:10.5840/teachphil198143/429
  • –––, 1981b, “Eight Fallacies in Bloom’s Taxonomy”, in C. J. B. Macmillan (ed.), Philosophy of Education 1980: Proceedings of the Thirty-seventh Annual Meeting of the Philosophy of Education Society , Bloomington, IL: Philosophy of Education Society, pp. 269–273.
  • –––, 1984, “Problems in Testing Informal Logic, Critical Thinking, Reasoning Ability”, Informal Logic , 6(1): 3–9. [ Ennis 1984 available online ]
  • –––, 1987, “A Taxonomy of Critical Thinking Dispositions and Abilities”, in Joan Boykoff Baron and Robert J. Sternberg (eds.), Teaching Thinking Skills: Theory and Practice , New York: W. H. Freeman, pp. 9–26.
  • –––, 1989, “Critical Thinking and Subject Specificity: Clarification and Needed Research”, Educational Researcher , 18(3): 4–10. doi:10.3102/0013189X018003004
  • –––, 1991, “Critical Thinking: A Streamlined Conception”, Teaching Philosophy , 14(1): 5–24. doi:10.5840/teachphil19911412
  • –––, 1996, “Critical Thinking Dispositions: Their Nature and Assessability”, Informal Logic , 18(2–3): 165–182. [ Ennis 1996 available online ]
  • –––, 1998, “Is Critical Thinking Culturally Biased?”, Teaching Philosophy , 21(1): 15–33. doi:10.5840/teachphil19982113
  • –––, 2011, “Critical Thinking: Reflection and Perspective Part I”, Inquiry: Critical Thinking across the Disciplines , 26(1): 4–18. doi:10.5840/inquiryctnews20112613
  • –––, 2013, “Critical Thinking across the Curriculum: The Wisdom CTAC Program”, Inquiry: Critical Thinking across the Disciplines , 28(2): 25–45. doi:10.5840/inquiryct20132828
  • –––, 2016, “Definition: A Three-Dimensional Analysis with Bearing on Key Concepts”, in Patrick Bondy and Laura Benacquista (eds.), Argumentation, Objectivity, and Bias: Proceedings of the 11th International Conference of the Ontario Society for the Study of Argumentation (OSSA), 18–21 May 2016 , Windsor, ON: OSSA, pp. 1–19. Available at http://scholar.uwindsor.ca/ossaarchive/OSSA11/papersandcommentaries/105 ; last accessed 2022 07 16.
  • –––, 2018, “Critical Thinking Across the Curriculum: A Vision”, Topoi , 37(1): 165–184. doi:10.1007/s11245-016-9401-4
  • Ennis, Robert H., and Jason Millman, 1971, Manual for Cornell Critical Thinking Test, Level X, and Cornell Critical Thinking Test, Level Z , Urbana, IL: Critical Thinking Project, University of Illinois.
  • Ennis, Robert H., Jason Millman, and Thomas Norbert Tomko, 1985, Cornell Critical Thinking Tests Level X & Level Z: Manual , Pacific Grove, CA: Midwest Publication, 3rd edition.
  • –––, 2005, Cornell Critical Thinking Tests Level X & Level Z: Manual , Seaside, CA: Critical Thinking Company, 5th edition.
  • Ennis, Robert H. and Eric Weir, 1985, The Ennis-Weir Critical Thinking Essay Test: Test, Manual, Criteria, Scoring Sheet: An Instrument for Teaching and Testing , Pacific Grove, CA: Midwest Publications.
  • Facione, Peter A., 1990a, Critical Thinking: A Statement of Expert Consensus for Purposes of Educational Assessment and Instruction , Research Findings and Recommendations Prepared for the Committee on Pre-College Philosophy of the American Philosophical Association, ERIC Document ED315423.
  • –––, 1990b, California Critical Thinking Skills Test, CCTST – Form A , Millbrae, CA: The California Academic Press.
  • –––, 1990c, The California Critical Thinking Skills Test: College Level. Technical Report #3. Gender, Ethnicity, Major, CT Self-Esteem, and the CCTST , ERIC Document ED326584.
  • –––, 1992, California Critical Thinking Skills Test: CCTST – Form B, Millbrae, CA: The California Academic Press.
  • –––, 2000, “The Disposition Toward Critical Thinking: Its Character, Measurement, and Relationship to Critical Thinking Skill”, Informal Logic , 20(1): 61–84. [ Facione 2000 available online ]
  • Facione, Peter A. and Noreen C. Facione, 1992, CCTDI: A Disposition Inventory , Millbrae, CA: The California Academic Press.
  • Facione, Peter A., Noreen C. Facione, and Carol Ann F. Giancarlo, 2001, California Critical Thinking Disposition Inventory: CCTDI: Inventory Manual , Millbrae, CA: The California Academic Press.
  • Facione, Peter A., Carol A. Sánchez, and Noreen C. Facione, 1994, Are College Students Disposed to Think? , Millbrae, CA: The California Academic Press. ERIC Document ED368311.
  • Fisher, Alec, and Michael Scriven, 1997, Critical Thinking: Its Definition and Assessment , Norwich: Centre for Research in Critical Thinking, University of East Anglia.
  • Freire, Paulo, 1968 [1970], Pedagogia do Oprimido . Translated as Pedagogy of the Oppressed , Myra Bergman Ramos (trans.), New York: Continuum, 1970.
  • Gigerenzer, Gerd, 2001, “The Adaptive Toolbox”, in Gerd Gigerenzer and Reinhard Selten (eds.), Bounded Rationality: The Adaptive Toolbox , Cambridge, MA: MIT Press, pp. 37–50.
  • Glaser, Edward Maynard, 1941, An Experiment in the Development of Critical Thinking , New York: Bureau of Publications, Teachers College, Columbia University.
  • Groarke, Leo A. and Christopher W. Tindale, 2012, Good Reasoning Matters! A Constructive Approach to Critical Thinking , Don Mills, ON: Oxford University Press, 5th edition.
  • Halpern, Diane F., 1998, “Teaching Critical Thinking for Transfer Across Domains: Disposition, Skills, Structure Training, and Metacognitive Monitoring”, American Psychologist , 53(4): 449–455. doi:10.1037/0003-066X.53.4.449
  • –––, 2016, Manual: Halpern Critical Thinking Assessment , Mödling, Austria: Schuhfried. Available at https://pdfcoffee.com/hcta-test-manual-pdf-free.html; last accessed 2022 07 16.
  • Hamby, Benjamin, 2014, The Virtues of Critical Thinkers , Doctoral dissertation, Philosophy, McMaster University. [ Hamby 2014 available online ]
  • –––, 2015, “Willingness to Inquire: The Cardinal Critical Thinking Virtue”, in Martin Davies and Ronald Barnett (eds.), The Palgrave Handbook of Critical Thinking in Higher Education , New York: Palgrave Macmillan, pp. 77–87.
  • Haran, Uriel, Ilana Ritov, and Barbara A. Mellers, 2013, “The Role of Actively Open-minded Thinking in Information Acquisition, Accuracy, and Calibration”, Judgment and Decision Making , 8(3): 188–201.
  • Hatcher, Donald and Kevin Possin, 2021, “Commentary: Thinking Critically about Critical Thinking Assessment”, in Daniel Fasko, Jr. and Frank Fair (eds.), Critical Thinking and Reasoning: Theory, Development, Instruction, and Assessment , Leiden: Brill, pp. 298–322. doi: 10.1163/9789004444591_017
  • Haynes, Ada, Elizabeth Lisic, Kevin Harris, Katie Leming, Kyle Shanks, and Barry Stein, 2015, “Using the Critical Thinking Assessment Test (CAT) as a Model for Designing Within-Course Assessments: Changing How Faculty Assess Student Learning”, Inquiry: Critical Thinking Across the Disciplines , 30(3): 38–48. doi:10.5840/inquiryct201530316
  • Haynes, Ada and Barry Stein, 2021, “Observations from a Long-Term Effort to Assess and Improve Critical Thinking”, in Daniel Fasko, Jr. and Frank Fair (eds.), Critical Thinking and Reasoning: Theory, Development, Instruction, and Assessment , Leiden: Brill, pp. 231–254. doi: 10.1163/9789004444591_014
  • Hiner, Amanda L., 2021, “Equipping Students for Success in College and Beyond: Placing Critical Thinking Instruction at the Heart of a General Education Program”, in Daniel Fasko, Jr. and Frank Fair (eds.), Critical Thinking and Reasoning: Theory, Development, Instruction, and Assessment , Leiden: Brill, pp. 188–208. doi: 10.1163/9789004444591_012
  • Hitchcock, David, 2017, “Critical Thinking as an Educational Ideal”, in his On Reasoning and Argument: Essays in Informal Logic and on Critical Thinking , Dordrecht: Springer, pp. 477–497. doi:10.1007/978-3-319-53562-3_30
  • –––, 2021, “Seven Philosophical Implications of Critical Thinking: Themes, Variations, Implications”, in Daniel Fasko, Jr. and Frank Fair (eds.), Critical Thinking and Reasoning: Theory, Development, Instruction, and Assessment , Leiden: Brill, pp. 9–30. doi: 10.1163/9789004444591_002
  • hooks, bell, 1994, Teaching to Transgress: Education as the Practice of Freedom , New York and London: Routledge.
  • –––, 2010, Teaching Critical Thinking: Practical Wisdom , New York and London: Routledge.
  • Johnson, Ralph H., 1992, “The Problem of Defining Critical Thinking”, in Stephen P. Norris (ed.), The Generalizability of Critical Thinking , New York: Teachers College Press, pp. 38–53.
  • Kahane, Howard, 1971, Logic and Contemporary Rhetoric: The Use of Reason in Everyday Life , Belmont, CA: Wadsworth.
  • Kahneman, Daniel, 2011, Thinking, Fast and Slow , New York: Farrar, Straus and Giroux.
  • Kahneman, Daniel, Olivier Sibony, & Cass R. Sunstein, 2021, Noise: A Flaw in Human Judgment , New York: Little, Brown Spark.
  • Kenyon, Tim, and Guillaume Beaulac, 2014, “Critical Thinking Education and Debasing”, Informal Logic , 34(4): 341–363. [ Kenyon & Beaulac 2014 available online ]
  • Krathwohl, David R., Benjamin S. Bloom, and Bertram B. Masia, 1964, Taxonomy of Educational Objectives, Handbook II: Affective Domain , New York: David McKay.
  • Kuhn, Deanna, 1991, The Skills of Argument , New York: Cambridge University Press. doi:10.1017/CBO9780511571350
  • –––, 2019, “Critical Thinking as Discourse”, Human Development , 62(3): 146–164. doi:10.1159/000500171
  • Lipman, Matthew, 1987, “Critical Thinking–What Can It Be?”, Analytic Teaching , 8(1): 5–12. [ Lipman 1987 available online ]
  • –––, 2003, Thinking in Education , Cambridge: Cambridge University Press, 2nd edition.
  • Loftus, Elizabeth F., 2017, “Eavesdropping on Memory”, Annual Review of Psychology , 68: 1–18. doi:10.1146/annurev-psych-010416-044138
  • Makaiau, Amber Strong, 2021, “The Good Thinker’s Tool Kit: How to Engage Critical Thinking and Reasoning in Secondary Education”, in Daniel Fasko, Jr. and Frank Fair (eds.), Critical Thinking and Reasoning: Theory, Development, Instruction, and Assessment , Leiden: Brill, pp. 168–187. doi: 10.1163/9789004444591_011
  • Martin, Jane Roland, 1992, “Critical Thinking for a Humane World”, in Stephen P. Norris (ed.), The Generalizability of Critical Thinking , New York: Teachers College Press, pp. 163–180.
  • Mayhew, Katherine Camp, and Anna Camp Edwards, 1936, The Dewey School: The Laboratory School of the University of Chicago, 1896–1903 , New York: Appleton-Century. [ Mayhew & Edwards 1936 available online ]
  • McPeck, John E., 1981, Critical Thinking and Education , New York: St. Martin’s Press.
  • Moore, Brooke Noel and Richard Parker, 2020, Critical Thinking , New York: McGraw-Hill, 13th edition.
  • Nickerson, Raymond S., 1998, “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises”, Review of General Psychology , 2(2): 175–220. doi:10.1037/1089-2680.2.2.175
  • Nieto, Ana Maria, and Jorge Valenzuela, 2012, “A Study of the Internal Structure of Critical Thinking Dispositions”, Inquiry: Critical Thinking across the Disciplines , 27(1): 31–38. doi:10.5840/inquiryct20122713
  • Norris, Stephen P., 1985, “Controlling for Background Beliefs When Developing Multiple-choice Critical Thinking Tests”, Educational Measurement: Issues and Practice , 7(3): 5–11. doi:10.1111/j.1745-3992.1988.tb00437.x
  • Norris, Stephen P. and Robert H. Ennis, 1989, Evaluating Critical Thinking (The Practitioners’ Guide to Teaching Thinking Series), Pacific Grove, CA: Midwest Publications.
  • Norris, Stephen P. and Ruth Elizabeth King, 1983, Test on Appraising Observations , St. John’s, NL: Institute for Educational Research and Development, Memorial University of Newfoundland.
  • –––, 1984, The Design of a Critical Thinking Test on Appraising Observations , St. John’s, NL: Institute for Educational Research and Development, Memorial University of Newfoundland. ERIC Document ED260083.
  • –––, 1985, Test on Appraising Observations: Manual , St. John’s, NL: Institute for Educational Research and Development, Memorial University of Newfoundland.
  • –––, 1990a, Test on Appraising Observations , St. John’s, NL: Institute for Educational Research and Development, Memorial University of Newfoundland, 2nd edition.
  • –––, 1990b, Test on Appraising Observations: Manual , St. John’s, NL: Institute for Educational Research and Development, Memorial University of Newfoundland, 2nd edition.
  • OCR [Oxford, Cambridge and RSA Examinations], 2011, AS/A Level GCE: Critical Thinking – H052, H452 , Cambridge: OCR. Past papers available at https://pastpapers.co/ocr/?dir=A-Level/Critical-Thinking-H052-H452; last accessed 2022 07 16.
  • Ontario Ministry of Education, 2013, The Ontario Curriculum Grades 9 to 12: Social Sciences and Humanities . Available at http://www.edu.gov.on.ca/eng/curriculum/secondary/ssciences9to122013.pdf ; last accessed 2022 07 16.
  • Passmore, John Arthur, 1980, The Philosophy of Teaching , London: Duckworth.
  • Paul, Richard W., 1981, “Teaching Critical Thinking in the ‘Strong’ Sense: A Focus on Self-Deception, World Views, and a Dialectical Mode of Analysis”, Informal Logic , 4(2): 2–7. [ Paul 1981 available online ]
  • –––, 1984, “Critical Thinking: Fundamental to Education for a Free Society”, Educational Leadership , 42(1): 4–14.
  • –––, 1985, “McPeck’s Mistakes”, Informal Logic , 7(1): 35–43. [ Paul 1985 available online ]
  • Paul, Richard W. and Linda Elder, 2006, The Miniature Guide to Critical Thinking: Concepts and Tools , Dillon Beach, CA: Foundation for Critical Thinking, 4th edition.
  • Payette, Patricia, and Edna Ross, 2016, “Making a Campus-Wide Commitment to Critical Thinking: Insights and Promising Practices Utilizing the Paul-Elder Approach at the University of Louisville”, Inquiry: Critical Thinking Across the Disciplines , 31(1): 98–110. doi:10.5840/inquiryct20163118
  • Possin, Kevin, 2008, “A Field Guide to Critical-Thinking Assessment”, Teaching Philosophy , 31(3): 201–228. doi:10.5840/teachphil200831324
  • –––, 2013a, “Some Problems with the Halpern Critical Thinking Assessment (HCTA) Test”, Inquiry: Critical Thinking across the Disciplines , 28(3): 4–12. doi:10.5840/inquiryct201328313
  • –––, 2013b, “A Serious Flaw in the Collegiate Learning Assessment (CLA) Test”, Informal Logic , 33(3): 390–405. [ Possin 2013b available online ]
  • –––, 2013c, “A Fatal Flaw in the Collegiate Learning Assessment Test”, Assessment Update , 25 (1): 8–12.
  • –––, 2014, “Critique of the Watson-Glaser Critical Thinking Appraisal Test: The More You Know, the Lower Your Score”, Informal Logic , 34(4): 393–416. [ Possin 2014 available online ]
  • –––, 2020, “CAT Scan: A Critical Review of the Critical-Thinking Assessment Test”, Informal Logic , 40 (3): 489–508. [Available online at https://informallogic.ca/index.php/informal_logic/article/view/6243]
  • Rawls, John, 1971, A Theory of Justice , Cambridge, MA: Harvard University Press.
  • Rear, David, 2019, “One Size Fits All? The Limitations of Standardised Assessment in Critical Thinking”, Assessment & Evaluation in Higher Education , 44(5): 664–675. doi: 10.1080/02602938.2018.1526255
  • Rousseau, Jean-Jacques, 1762, Émile , Amsterdam: Jean Néaulme.
  • Scheffler, Israel, 1960, The Language of Education , Springfield, IL: Charles C. Thomas.
  • Scriven, Michael, and Richard W. Paul, 1987, Defining Critical Thinking , Draft statement written for the National Council for Excellence in Critical Thinking Instruction. Available at http://www.criticalthinking.org/pages/defining-critical-thinking/766 ; last accessed 2022 07 16.
  • Sheffield, Clarence Burton Jr., 2018, “Promoting Critical Thinking in Higher Education: My Experiences as the Inaugural Eugene H. Fram Chair in Applied Critical Thinking at Rochester Institute of Technology”, Topoi , 37(1): 155–163. doi:10.1007/s11245-016-9392-1
  • Siegel, Harvey, 1985, “McPeck, Informal Logic and the Nature of Critical Thinking”, in David Nyberg (ed.), Philosophy of Education 1985: Proceedings of the Forty-First Annual Meeting of the Philosophy of Education Society , Normal, IL: Philosophy of Education Society, pp. 61–72.
  • –––, 1988, Educating Reason: Rationality, Critical Thinking, and Education , New York: Routledge.
  • –––, 1999, “What (Good) Are Thinking Dispositions?”, Educational Theory , 49(2): 207–221. doi:10.1111/j.1741-5446.1999.00207.x
  • Simon, Herbert A., 1956, “Rational Choice and the Structure of the Environment”, Psychological Review , 63(2): 129–138. doi: 10.1037/h0042769
  • Simpson, Elizabeth, 1966–67, “The Classification of Educational Objectives: Psychomotor Domain”, Illinois Teacher of Home Economics , 10(4): 110–144, ERIC document ED0103613. [ Simpson 1966–67 available online ]
  • Skolverket, 2018, Curriculum for the Compulsory School, Preschool Class and School-age Educare , Stockholm: Skolverket, revised 2018. Available at https://www.skolverket.se/download/18.31c292d516e7445866a218f/1576654682907/pdf3984.pdf; last accessed 2022 07 15.
  • Smith, B. Othanel, 1953, “The Improvement of Critical Thinking”, Progressive Education , 30(5): 129–134.
  • Smith, Eugene Randolph, Ralph Winfred Tyler, and the Evaluation Staff, 1942, Appraising and Recording Student Progress , Volume III of Adventure in American Education , New York and London: Harper & Brothers.
  • Splitter, Laurance J., 1987, “Educational Reform through Philosophy for Children”, Thinking: The Journal of Philosophy for Children , 7(2): 32–39. doi:10.5840/thinking1987729
  • Stanovich, Keith E., and Paula J. Stanovich, 2010, “A Framework for Critical Thinking, Rational Thinking, and Intelligence”, in David D. Preiss and Robert J. Sternberg (eds), Innovations in Educational Psychology: Perspectives on Learning, Teaching and Human Development , New York: Springer Publishing, pp. 195–237.
  • Stanovich, Keith E., Richard F. West, and Maggie E. Toplak, 2011, “Intelligence and Rationality”, in Robert J. Sternberg and Scott Barry Kaufman (eds.), Cambridge Handbook of Intelligence , Cambridge: Cambridge University Press, 3rd edition, pp. 784–826. doi:10.1017/CBO9780511977244.040
  • Tankersley, Karen, 2005, Literacy Strategies for Grades 4–12: Reinforcing the Threads of Reading , Alexandria, VA: Association for Supervision and Curriculum Development.
  • Thayer-Bacon, Barbara J., 1992, “Is Modern Critical Thinking Theory Sexist?”, Inquiry: Critical Thinking Across the Disciplines , 10(1): 3–7. doi:10.5840/inquiryctnews199210123
  • –––, 1993, “Caring and Its Relationship to Critical Thinking”, Educational Theory , 43(3): 323–340. doi:10.1111/j.1741-5446.1993.00323.x
  • –––, 1995a, “Constructive Thinking: Personal Voice”, Journal of Thought , 30(1): 55–70.
  • –––, 1995b, “Doubting and Believing: Both are Important for Critical Thinking”, Inquiry: Critical Thinking across the Disciplines , 15(2): 59–66. doi:10.5840/inquiryctnews199515226
  • –––, 2000, Transforming Critical Thinking: Thinking Constructively , New York: Teachers College Press.
  • Toulmin, Stephen Edelston, 1958, The Uses of Argument , Cambridge: Cambridge University Press.
  • Turri, John, Mark Alfano, and John Greco, 2017, “Virtue Epistemology”, in Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2017 Edition). URL = < https://plato.stanford.edu/archives/win2017/entries/epistemology-virtue/ >
  • Vincent-Lancrin, Stéphan, Carlos González-Sancho, Mathias Bouckaert, Federico de Luca, Meritxell Fernández-Barrerra, Gwénaël Jacotin, Joaquin Urgel, and Quentin Vidal, 2019, Fostering Students’ Creativity and Critical Thinking: What It Means in School. Educational Research and Innovation , Paris: OECD Publishing.
  • Warren, Karen J., 1988, “Critical Thinking and Feminism”, Informal Logic , 10(1): 31–44. [ Warren 1988 available online ]
  • Watson, Goodwin, and Edward M. Glaser, 1980a, Watson-Glaser Critical Thinking Appraisal, Form A , San Antonio, TX: Psychological Corporation.
  • –––, 1980b, Watson-Glaser Critical Thinking Appraisal: Forms A and B; Manual , San Antonio, TX: Psychological Corporation.
  • –––, 1994, Watson-Glaser Critical Thinking Appraisal, Form B , San Antonio, TX: Psychological Corporation.
  • Weinstein, Mark, 1990, “Towards a Research Agenda for Informal Logic and Critical Thinking”, Informal Logic , 12(3): 121–143. [ Weinstein 1990 available online ]
  • –––, 2013, Logic, Truth and Inquiry , London: College Publications.
  • Willingham, Daniel T., 2019, “How to Teach Critical Thinking”, Education: Future Frontiers , 1: 1–17. [Available online at https://prod65.education.nsw.gov.au/content/dam/main-education/teaching-and-learning/education-for-a-changing-world/media/documents/How-to-teach-critical-thinking-Willingham.pdf.]
  • Zagzebski, Linda Trinkaus, 1996, Virtues of the Mind: An Inquiry into the Nature of Virtue and the Ethical Foundations of Knowledge , Cambridge: Cambridge University Press. doi:10.1017/CBO9781139174763
Other Internet Resources

  • Association for Informal Logic and Critical Thinking (AILACT)
  • Critical Thinking Across the European Higher Education Curricula (CRITHINKEDU)
  • Critical Thinking Definition, Instruction, and Assessment: A Rigorous Approach
  • Critical Thinking Research (RAIL)
  • Foundation for Critical Thinking
  • Insight Assessment
  • Partnership for 21st Century Learning (P21)
  • The Critical Thinking Consortium
  • The Nature of Critical Thinking: An Outline of Critical Thinking Dispositions and Abilities , by Robert H. Ennis

Related Entries

abilities | bias, implicit | children, philosophy for | civic education | decision-making capacity | Dewey, John | dispositions | education, philosophy of | epistemology: virtue | logic: informal

Copyright © 2022 by David Hitchcock < hitchckd @ mcmaster . ca >



  • Research article
  • Open access
  • Published: 12 April 2024

Feedback sources in essay writing: peer-generated or AI-generated feedback?

  • Seyyed Kazem Banihashem 1 , 2 ,
  • Nafiseh Taghizadeh Kerman 3 ,
  • Omid Noroozi 2 ,
  • Jewoong Moon 4 &
  • Hendrik Drachsler 1 , 5  

International Journal of Educational Technology in Higher Education , volume 21, Article number: 23 (2024)


Abstract

Peer feedback is an effective learning strategy, especially in large classes where teachers face high workloads. However, for complex tasks such as writing an argumentative essay, peers may not provide high-quality feedback without support, since doing so requires a high level of cognitive processing, critical thinking skills, and a deep understanding of the subject. With the promising developments in Artificial Intelligence (AI), particularly since the emergence of ChatGPT, there is global debate about whether AI tools can serve as a new source of feedback for such complex tasks. The answer is not yet clear, as studies are limited and our understanding remains constrained. In this study, we used ChatGPT as a source of feedback on students’ argumentative essay writing tasks and compared the quality of ChatGPT-generated feedback with peer feedback. The participant pool consisted of 74 graduate students from a Dutch university. The study unfolded in two phases: first, essay data were collected as students composed essays on one of the given topics; subsequently, peer feedback and ChatGPT-generated feedback data were collected by engaging peers in a feedback process and using ChatGPT as a feedback source. Two coding schemes, one for essay analysis and one for feedback analysis, were used to measure the quality of the essays and the feedback. A MANOVA was then employed to determine any distinctions between the feedback generated by peers and by ChatGPT, and Spearman’s correlation was used to explore potential links between essay quality and the feedback generated by each source. The results showed a significant difference between feedback generated by ChatGPT and by peers: while ChatGPT provided more descriptive feedback, including information about how the essay was written, peers provided feedback that identified problems in the essay. The overarching pattern of results suggests a potential complementary role for ChatGPT and students in the feedback process. Regarding the relationship between essay quality and the quality of feedback provided by ChatGPT and peers, we found no overall significant relationship, implying that essay quality affects neither ChatGPT nor peer feedback quality. These findings are valuable, shedding light on the prospective use of ChatGPT as a feedback source, particularly for complex tasks like argumentative essay writing. We discuss the findings and delve into the implications for future research and practical applications in educational contexts.

Introduction

Feedback is acknowledged as one of the most crucial tools for enhancing learning (Banihashem et al., 2022 ). A general and well-accepted definition conceptualizes feedback as information provided by an agent (e.g., teacher, peer, self, AI, technology) regarding aspects of one’s performance or understanding (e.g., Hattie & Timperley, 2007 ). Feedback heightens students’ awareness of their strengths and the areas warranting improvement by providing the actionable steps required to enhance performance (Ramson, 2003 ). The literature abounds with studies illuminating the positive impact of feedback on diverse dimensions of students’ learning, including increasing motivation (Amiryousefi & Geld, 2021 ), fostering active engagement (Zhang & Hyland, 2022 ), promoting self-regulation and metacognitive skills (Callender et al., 2016 ; Labuhn et al., 2010 ), and enriching the depth of learning outcomes (Gan et al., 2021 ).

Traditionally, teachers have assumed the role of delivering feedback, providing insights into students’ performance on specific tasks or their grasp of particular subjects (Konold et al., 2004 ). This responsibility has naturally fallen to teachers owing to their expertise in the subject matter and their competence in offering constructive input (Diezmann & Watters, 2015 ; Holt-Reynolds, 1999 ; Valero Haro et al., 2023 ). In recent years, however, teachers’ role as feedback providers has been challenged by growing class sizes, driven by rapid advances in technology and the widespread use of digital technologies that have made education more flexible and accessible (Shi et al., 2019 ). Larger classes mean an increased workload for teachers, which directly limits their capacity to provide personalized and timely feedback to each student (Er et al., 2021 ).

In response to this challenge, various solutions have emerged, among which peer feedback has arisen as a promising alternative instructional approach (Er et al., 2021 ; Gao et al., 2024 ; Noroozi et al., 2023 ; Kerman et al., 2024 ). Peer feedback entails a process wherein students assume the role of feedback providers instead of teachers (Liu & Carless, 2006 ). Involving students in feedback can add value to education in several ways. First and foremost, research indicates that students delve into deeper and more effective learning when they take on the role of assessors, critically evaluating and analyzing their peers’ assignments (Gielen & De Wever, 2015 ; Li et al., 2010 ). Moreover, involving students in the feedback process can augment their self-regulatory awareness, active engagement, and motivation for learning (e.g., Arguedas et al., 2016 ). Lastly, the incorporation of peer feedback not only holds the potential to significantly alleviate teachers’ workload by shifting their responsibilities from feedback provision to the facilitation of peer feedback processes but also nurtures a dynamic learning environment wherein students are actively immersed in the learning journey (e.g., Valero Haro et al., 2023 ).

Despite the advantages of peer feedback, furnishing high-quality feedback to peers remains a challenge. Several factors contribute to this challenge. Primarily, generating effective feedback necessitates a solid understanding of feedback principles, an element that peers often lack (Latifi et al., 2023 ; Noroozi et al., 2016 ). Moreover, offering high-quality feedback is inherently a complex task, demanding substantial cognitive processing to meticulously evaluate peers’ assignments, identify issues, and propose constructive remedies (King, 2002 ; Noroozi et al., 2022 ). Furthermore, the provision of valuable feedback calls for a significant level of domain-specific expertise, which is not consistently possessed by students (Alqassab et al., 2018 ; Kerman et al., 2022 ).

In recent times, advancements in technology, coupled with the emergence of fields like Learning Analytics (LA), have presented promising avenues to elevate feedback practices through the facilitation of scalable, timely, and personalized feedback (Banihashem et al., 2023 ; Deeva et al., 2021 ; Drachsler, 2023 ; Drachsler & Kalz, 2016 ; Pardo et al., 2019 ; Zawacki-Richter et al., 2019 ; Rüdian et al., 2020 ). Yet, a striking stride forward in the field of educational technology has been the advent of a novel Artificial Intelligence (AI) tool known as “ChatGPT,” which has sparked a global discourse on its potential to significantly impact the current education system (Ray, 2023 ). This tool’s introduction has initiated discussions on the considerable ways AI can support educational endeavors (Bond et al., 2024 ; Darvishi et al., 2024 ).

In the context of feedback, AI-powered ChatGPT introduces what is referred to as AI-generated feedback (Farrokhnia et al., 2023 ). While the literature suggests that ChatGPT has the potential to facilitate feedback practices (Dai et al., 2023 ; Katz et al., 2023 ), this literature is limited and mostly not empirical, so our comprehension of ChatGPT’s capabilities in this regard remains restricted. We therefore lack a comprehensive understanding of how effectively ChatGPT can support feedback practices and to what degree it can improve the timeliness, impact, and personalization of feedback.

More importantly, considering the challenges we raised for peer feedback, the question is whether AI-generated feedback, and more specifically feedback provided by ChatGPT, has the potential to be of high quality. There is a scarcity of knowledge regarding the extent to which AI tools, specifically ChatGPT, can enhance feedback quality compared to traditional peer feedback. Hence, our research investigates the quality of feedback generated by ChatGPT within the context of essay writing and juxtaposes it with the quality of feedback generated by students.

This study carries the potential to make a substantial contribution to the existing body of recent literature on the potential of AI and in particular ChatGPT in education. It can cast a spotlight on the quality of AI-generated feedback in contrast to peer-generated feedback, while also showcasing the viability of AI tools like ChatGPT as effective automated feedback mechanisms. Furthermore, the outcomes of this study could offer insights into mitigating the feedback-related workload experienced by teachers through the intelligent utilization of AI tools (e.g., Banihashem et al., 2022 ; Er et al., 2021 ; Pardo et al., 2019 ).

However, there might be an argument regarding the rationale for conducting this study within the specific context of essay writing. Addressing this potential query, it is crucial to highlight that essay writing stands as one of the most prevalent yet complex tasks for students (Liunokas, 2020 ). This task is not without its challenges, as evidenced by the extensive body of literature indicating that students often struggle to meet desired standards in their essay composition (e.g., Bulqiyah et al., 2021 ; Latifi et al., 2023 ; Noroozi et al., 2016 , 2022 ).

Furthermore, teachers frequently express dissatisfaction with the depth and overall quality of students’ essay writing (Latifi et al., 2023 ). Often, these teachers lament that their feedback on essays remains superficial due to the substantial time and effort required for critical assessment and individualized feedback provision (Noroozi et al., 2016 , 2022 ). Regrettably, these constraints prevent them from delving deeper into the evaluation process (Kerman et al., 2022 ).

Hence, directing attention towards the comparison of peer-generated feedback quality and AI-generated feedback quality within the realm of essay writing bestows substantial value upon both research and practical application. This study enriches the academic discourse and informs practical approaches by delivering insights into the adequacy of feedback quality offered by both peers and AI for the domain of essay writing. This investigation serves as a critical step in determining whether the feedback imparted by peers and AI holds the necessary caliber to enhance the craft of essay writing.

The ramifications of addressing this query are noteworthy. Firstly, it stands to significantly alleviate the workload carried by teachers in the process of essay evaluation. By ascertaining the viability of feedback from peers and AI, teachers can potentially reduce the time and effort expended in reviewing essays. Furthermore, this study has the potential to advance the quality of essay compositions. The collaboration between students providing feedback to peers and the integration of AI-powered feedback tools can foster an environment where essays are not only better evaluated but also refined in their content and structure. With this in mind, we aim to tackle the following key questions within the scope of this study:

RQ1. To what extent does the quality of peer-generated and ChatGPT-generated feedback differ in the context of essay writing?

RQ2. Does a relationship exist between the quality of essay writing performance and the quality of feedback generated by peers and ChatGPT?

Context and participants

This study was conducted in the 2022–2023 academic year at a Dutch university specializing in life sciences. In total, 74 graduate students from food sciences participated, of whom 77% were female ( N  = 57) and 23% were male ( N  = 17).

Study design and procedure

This empirical study is exploratory in nature and was conducted in two phases. An online module called “ Argumentative Essay Writing ” (AEW) was designed for students to follow within the Brightspace platform. The purpose of the AEW module was to improve students’ essay writing skills by engaging them in a peer learning process in which students were invited to provide feedback on each other’s essays. After the module was designed, the study was implemented over two weeks, in two phases.

In week one (phase one), students were asked to write an essay on given topics. The topics for the essay were controversial and included “ Scientists with affiliations to the food industry should abstain from participating in risk assessment processes ”, “ powdered infant formula must adhere to strict sterility standards ”, and “ safe food consumption is the responsibility of the consumer ”. The given controversial topics were directly related to the course content and students’ area of study. Students had one week to write their essays individually and submit them to the Brightspace platform.

In week two (phase two), students were randomly invited to provide two sets of written/asynchronous feedback on their peers’ submitted essays. We gave a prompt to students to be used for giving feedback ( Please provide feedback to your peer and explain the extent to which she/he has presented/elaborated/justified various elements of an argumentative essay. What are the problems and what are your suggestions to improve each element of the essay? Your feedback must be between 250 and 350 words ). To be able to engage students in the online peer feedback activity, we used the FeedbackFruits app embedded in the Brightspace platform. FeedbackFruits functions as an external educational technology tool seamlessly integrated into Brightspace, aimed at enhancing student engagement via diverse peer collaboration approaches. Among its features are peer feedback, assignment evaluation, skill assessment, automated feedback, interactive videos, dynamic documents, discussion tasks, and engaging presentations (Noroozi et al., 2022 ). In this research, our focus was on the peer feedback feature of the FeedbackFruits app, which empowers teachers to design tasks that enable students to offer feedback to their peers.

In addition, we used ChatGPT as another feedback source on peers’ essays. To be consistent with the criteria for peer feedback, we gave the same feedback prompt question with a minor modification to ChatGPT and asked it to give feedback on the peers’ essays ( Please read and provide feedback on the following essay and explain the extent to which she/he has presented/elaborated/justified various elements of an argumentative essay. What are the problems and what are your suggestions to improve each element of the essay? Your feedback must be between 250 and 350 words ).
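As an illustration, the fixed prompt above can be wrapped into a small helper that pairs it with each essay before it is handed to a feedback source. This is a hypothetical sketch, not the authors' code: the prompt wording is quoted from the paper, while the function name and layout are ours.

```python
# Hypothetical sketch (not the authors' code): the study's fixed feedback
# prompt, quoted from the paper, wrapped into a reusable helper.

FEEDBACK_PROMPT = (
    "Please read and provide feedback on the following essay and explain the "
    "extent to which she/he has presented/elaborated/justified various "
    "elements of an argumentative essay. What are the problems and what are "
    "your suggestions to improve each element of the essay? Your feedback "
    "must be between 250 and 350 words."
)

def build_feedback_request(essay_text: str) -> str:
    """Attach one student's essay to the fixed feedback prompt."""
    return f"{FEEDBACK_PROMPT}\n\n---\n{essay_text}"

# One of the course topics, used here only as stand-in essay text.
request = build_feedback_request(
    "Safe food consumption is the responsibility of the consumer."
)
```

Keeping the prompt identical for every essay, as the study did for both peers and ChatGPT, holds the feedback instructions constant so that differences in feedback quality can be attributed to the source rather than the task description.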

Following this design, we were able to collect students’ essay data, peer feedback data, and feedback data generated by ChatGPT. In the next step, we used two coding schemes to analyze the quality of the essays and feedback generated by peers and ChatGPT.

Measurements

Coding scheme to assess the quality of essay writing

In this study, a coding scheme proposed by Noroozi et al. ( 2016 ) was employed to assess students’ essay quality. This coding system was constructed based on the key components of high-quality essay composition, encompassing eight elements: introduction pertaining to the subject, taking a clear stance on the subject, presenting arguments in favor of the chosen position, providing justifications for the arguments supporting the position, counter-arguments, justifications for counter-arguments, responses to counter-arguments, and concluding with implications. Each element in the coding system is assigned a score ranging from zero (indicating the lowest quality level) to three (representing the highest quality level). The cumulative scores across all these elements were aggregated to determine the overall quality score of the student’s written essays. Two experienced coders in the field of education collaborated to assess the quality of the written essays, and their agreement level was measured at 75% (Cohen’s Kappa = 0.75 [95% confidence interval: 0.70–0.81]; z = 25.05; p  < 0.001), signifying a significant level of consensus between the coders.
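The scoring logic described above can be sketched as follows. This is an illustrative rendering of the tally, assuming a simple sum of per-element ratings; the element keys and the function name are our own labels, not the instrument's wording.

```python
# Illustrative sketch of the essay-quality tally described in the text
# (coding scheme of Noroozi et al., 2016): eight argumentation elements,
# each rated 0 (lowest) to 3 (highest), summed into an overall score.
# Element keys and the function name are our own labels.

ESSAY_ELEMENTS = [
    "introduction", "position", "arguments_for", "justifications_for",
    "counter_arguments", "justifications_counter",
    "responses_to_counter", "conclusion_implications",
]

def essay_quality_score(ratings: dict) -> int:
    """Sum the eight per-element ratings (each 0..3) into a 0..24 score."""
    for element in ESSAY_ELEMENTS:
        value = ratings[element]  # raises KeyError if an element is missing
        if not 0 <= value <= 3:
            raise ValueError(f"{element}: rating {value} outside 0..3")
    return sum(ratings[e] for e in ESSAY_ELEMENTS)
```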

Coding scheme to assess the quality of feedback generated by peers and ChatGPT

To assess the quality of feedback provided by both peers and ChatGPT, we employed a coding scheme developed by Noroozi et al. ( 2022 ). This coding framework dissects the characteristics of feedback, encompassing three key elements: the affective component, which considers the inclusion of emotional elements such as positive sentiments like praise or compliments, as well as negative emotions such as anger or disappointment; the cognitive component, which includes description (a concise summary of the essay), identification (pinpointing and specifying issues within the essay), and justification (providing explanations and justifications for the identified issues); and the constructive component, which involves offering recommendations, albeit not detailed action plans for further enhancements. Ratings within this coding framework range from zero, indicating poor quality, to two, signifying good quality. The cumulative scores were tallied to determine the overall quality of the feedback provided to the students. In this research, as each essay received feedback from both peers and ChatGPT, we calculated the average score from the two sets of feedback to establish the overall quality score for the feedback received, whether from peers or ChatGPT. The same two evaluators were involved in the assessment. The inter-rater reliability between the evaluators was determined to be 75% (Cohen’s Kappa = 0.75 [95% confidence interval: 0.66–0.84]; z = 17.52; p  < 0.001), showing a significant level of agreement between them.
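Analogously, the feedback tally can be sketched as below, including the averaging of the two feedback sets each essay received from a given source. The exact feature granularity and all names here are our assumptions, not the published instrument.

```python
# Illustrative sketch of the feedback-quality tally described in the text
# (coding scheme of Noroozi et al., 2022): each feature is rated 0 (poor)
# to 2 (good), features are summed, and the scores of the two feedback
# sets from one source are averaged. Names and granularity are ours.

FEEDBACK_FEATURES = [
    "affective",       # emotional elements, e.g. praise or disappointment
    "description",     # concise summary of the essay
    "identification",  # pinpointing issues within the essay
    "justification",   # explaining the identified issues
    "constructive",    # recommendations for improvement
]

def feedback_score(ratings: dict) -> int:
    """Sum the per-feature ratings (each 0..2) for one piece of feedback."""
    return sum(ratings[f] for f in FEEDBACK_FEATURES)

def source_quality(first: dict, second: dict) -> float:
    """Average the scores of the two feedback sets from one source."""
    return (feedback_score(first) + feedback_score(second)) / 2
```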

The logic behind choosing these coding schemes was as follows: Firstly, from a theoretical standpoint, both coding schemes were developed based on robust and well-established theories. The coding scheme for evaluating essay quality draws on Toulmin’s argumentation model ( 1958 ), a respected framework for essay writing. It encompasses all elements essential for high-quality essay composition and aligns well with the structure of essays assigned in the chosen course for this study. Similarly, the feedback coding scheme is grounded in prominent works on identifying feedback features (e.g., Nelson & Schunn, 2009 ; Patchan et al., 2016 ; Wu & Schunn, 2020 ), enabling the identification of key features of high-quality feedback (Noroozi et al., 2022 ). Secondly, from a methodological perspective, both coding schemes feature a transparent scoring method, mitigating coder bias and bolstering the tool’s credibility.

To ensure the data’s validity and reliability for statistical analysis, two tests were implemented. Initially, the Levene test assessed group homogeneity, followed by the Kolmogorov-Smirnov test to evaluate data normality. The results confirmed both group homogeneity and data normality. For the first research question, gender was considered as a control variable, and the MANCOVA test was employed to compare the variations in feedback quality between peer feedback and ChatGPT-generated feedback. Addressing the second research question involved using Spearman’s correlation to examine the relationships among original argumentative essays, peer feedback, and ChatGPT-generated feedback.
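The association test used for the second research question, Spearman's correlation, can be illustrated in pure Python: assign average ranks to tied values, then compute a Pearson correlation on the ranks. This is a minimal sketch for exposition; in practice a statistics package such as scipy.stats.spearmanr would be used.

```python
# Minimal pure-Python Spearman's rank correlation: average ranks for tied
# values, then a Pearson correlation on the ranks. For exposition only.

def _average_ranks(xs):
    """1-based ranks, with tied values sharing their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j to the end of the current group of tied values
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = _average_ranks(x), _average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```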

Results

The results showed a significant difference in feedback quality between peer feedback and ChatGPT-generated feedback. Peers provided feedback of higher quality than ChatGPT. This difference was mainly due to the description and problem-identification features of the feedback. ChatGPT tended to produce more extensive descriptive feedback, including summary statements such as a description of the essay or of the action taken, while students performed better at pinpointing and identifying issues in the feedback they provided (see Table  1 ).

A list of selected examples of feedback generated by peers and ChatGPT is presented in Fig.  1 , which also outlines how the generated feedback was coded using the feedback-quality coding scheme.

Figure 1. A comparative list of selected examples of peer-generated and ChatGPT-generated feedback

Overall, the results indicated that there was no significant relationship between the quality of essay writing and the feedback generated by peers and ChatGPT. However, a positive correlation was observed between the quality of the essay and the affective feature of feedback generated by ChatGPT, while a negative relationship was observed between the quality of the essay and the affective feature of feedback generated by peers. This finding means that as the quality of the essay improves, ChatGPT tends to provide more affective feedback, while peers tend to provide less affective feedback (see Table  2 ).

Discussion

This study was an initial effort to explore the potential of ChatGPT as a feedback source in the context of essay writing and to compare the extent to which the quality of feedback generated by ChatGPT differs from that provided by peers. Below we discuss our findings for each research question.

Discussion on the results of RQ1

For the first research question, the results revealed a disparity in feedback quality when comparing peer-generated feedback to feedback generated by ChatGPT. Peer feedback demonstrated higher quality compared to ChatGPT-generated feedback. This discrepancy is attributed primarily to variations in the descriptive and problem-identification features of the feedback.

ChatGPT tended to provide more descriptive feedback, often including elements such as a summary of the essay's content. This inclination towards descriptive feedback could be related to ChatGPT's capacity to analyze and synthesize textual information effectively. Research on ChatGPT supports this notion, demonstrating the tool's capacity to offer a comprehensive overview of provided content and thereby a holistic perspective on it (Farrokhnia et al., 2023; Ray, 2023).

ChatGPT’s proficiency in providing extensive descriptive feedback could be seen as a strength. It might be particularly valuable for summarizing complex arguments or providing comprehensive overviews, which could aid students in understanding the overall structure and coherence of their essays.

In contrast, students' feedback was of higher quality in identifying specific issues and areas for improvement. Peers' outperformance of ChatGPT in identifying problems within the essays could be related to human strengths in cognitive skills, critical thinking, and contextual understanding (e.g., Korteling et al., 2021; Lamb et al., 2019). That is, students, with their contextual knowledge and critical thinking skills, may be better equipped to identify issues within the essays that ChatGPT may overlook.

Furthermore, a detailed look at the findings of the first research question shows that ChatGPT-generated feedback encompassed all essential components of high-quality feedback, including affective, cognitive, and constructive dimensions (Kerman et al., 2022; Patchan et al., 2016). This suggests that ChatGPT-generated feedback could serve as a viable feedback source, in line with previous studies recognizing a positive role for AI-generated and automated feedback in enhancing educational outcomes (e.g., Bellhäuser et al., 2023; Gombert et al., 2024; Huang et al., 2023; Xia et al., 2022).

Finally, an overarching look at the results of the first research question suggests a potential complementary role for ChatGPT and students in the feedback process. This means that using these two feedback sources together creates a synergistic relationship that could result in better feedback outcomes.

Discussion on the results of RQ2

Results for the second research question revealed no significant correlation between the quality of the essays and the quality of the feedback generated by either peers or ChatGPT. This finding suggests that the inherent quality of the essays under scrutiny exerted negligible influence on the quality of feedback furnished by both students and ChatGPT.

In essence, these results point to a notable degree of independence between the writing prowess exhibited in the essays and the efficacy of the feedback received from either source. This disassociation implies that the ability to produce high-quality essays does not inherently translate into a corresponding ability to provide equally insightful feedback, for either peers or ChatGPT. This decoupling of essay quality from feedback quality highlights the multifaceted nature of these evaluative processes: proficiency in constructing a coherent essay does not guarantee an equally adept capacity for evaluating and articulating constructive commentary on peers' work.

These findings are intriguing in that they defy conventional expectations and deviate somewhat from the prevailing literature. The existing body of scholarly work generally posits a direct relationship between the quality of an essay and the quality of the feedback subsequently generated on it (Noroozi et al., 2016, 2022; Kerman et al., 2022; Valero Haro et al., 2023). This line of thought contends that essays of inferior quality might serve as a catalyst for more pronounced error detection among students, encompassing grammatical intricacies, depth of content, clarity and coherence, and the application of evidence and support. Conversely, when essays are skillfully crafted, pinpointing areas for enhancement becomes a more complex task, potentially necessitating a heightened level of subject comprehension and nuanced evaluation.

However, the present study’s findings challenge this conventional wisdom. The observed decoupling of essay quality from feedback quality suggests a more nuanced interplay between the two facets of assessment. Rather than adhering to the anticipated pattern, wherein weaker essays prompt clearer identification of deficiencies, and superior essays potentially render the feedback process more challenging, the study suggests that the process might be more complex than previously thought. It hints at a dynamic in which the act of evaluating essays and providing constructive feedback transcends a simple linear connection with essay quality.

These findings, while potentially unexpected, point to the complex nature of essay assignments and feedback provision. They highlight the cognitive processes underlying both tasks and suggest that the relationship between essay quality and feedback quality is not purely linear but is influenced by multiple factors, including the evaluator's cognitive framework, familiarity with the subject matter, and critical analysis skills.

Despite this general observation, a closer examination of the affective features within the feedback reveals a different pattern. The positive correlation between essay quality and the affective features present in ChatGPT-generated feedback could be related to ChatGPT’s capacity to recognize and appreciate students’ good work. As the quality of the essay increases, ChatGPT might be programmed to offer more positive and motivational feedback to acknowledge students’ progress (e.g., Farrokhnia et al., 2023 ; Ray, 2023 ). In contrast, the negative relationship between essay quality and the affective features in peer feedback may be attributed to the evolving nature of feedback from peers (e.g., Patchan et al., 2016 ). This suggests that as students witness improvements in their peers’ essay-writing skills and knowledge, their feedback priorities may naturally evolve. For instance, students may transition from emphasizing emotional and affective comments to focusing on cognitive and constructive feedback, with the goal of further enhancing the overall quality of the essays.

Limitations and implications for future research and practice

We acknowledge the limitations of this study. First, the data were drawn exclusively from a single institution and a single course, with a relatively modest participant pool. This confined scope introduces constraints that should be kept in mind when interpreting the study's outcomes and generalizing them to broader educational contexts. Under such constrained sampling, the findings may be contextually specific, limiting their applicability to other institutional settings and courses with distinct curricular foci; differences in academic environments, student demographics, and subject matter across institutions could yield divergent patterns of results. The current outcomes should therefore be interpreted and generalized with prudence. Recognizing these limitations, we recommend that future studies draw on a larger participant pool with a diverse range of variables, including individuals from various programs and demographics, to foster a more comprehensive understanding of the complex dynamics at play.

In addition, this study did not explore the degree to which students used the feedback provided by peers and ChatGPT; that is, we did not investigate the effects of such feedback on essay improvement during the revision phase. This omission introduces uncertainty and constrains a holistic understanding of the feedback loop. Because these aspects were not addressed, the study's insights remain partial with respect to how these feedback sources influence students' writing enhancement processes. An analysis of feedback uptake patterns and their subsequent effects on essay refinement would have offered insights into the practical utility and impact of peer- and ChatGPT-generated feedback.

To address this limitation, future investigations could be structured to encompass a more thorough examination of students’ feedback utilization strategies and the resulting implications for the essay revision process. By shedding light on the complex interconnection between feedback reception, its integration into the revision process, and the ultimate outcomes in terms of essay improvement, a more comprehensive understanding of the dynamics involved could be attained.

Furthermore, in this study, we employed identical question prompts for both peers and ChatGPT. However, there is evidence indicating that ChatGPT is sensitive to how prompts are presented to it (e.g., Cao et al., 2023 ; White et al., 2023 ; Zuccon & Koopman, 2023 ). This suggests that variations in the wording, structure, or context of prompts might influence the responses generated by ChatGPT, potentially impacting the comparability of its outputs with those of peers. Therefore, it is essential to carefully consider and control for prompt-related factors in future research when assessing ChatGPT’s performance and capabilities in various tasks and contexts.
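One minimal way to control for such prompt-related factors is to fix a single template so that only the essay text varies across conditions. The template and rubric features below are hypothetical illustrations, not the prompt actually used in this study.

```python
# Hypothetical sketch: one fixed prompt template shared across conditions,
# so that wording and structure are held constant and only the essay varies.
RUBRIC = ["affective", "cognitive", "constructive"]  # illustrative feature list

def build_prompt(essay_text, criteria=RUBRIC):
    """Build a feedback prompt with fixed instruction wording."""
    criteria_list = "; ".join(criteria)
    return (
        "Please give feedback on the essay below, addressing these features: "
        f"{criteria_list}.\n\nEssay:\n{essay_text}"
    )

p1 = build_prompt("First sample essay text ...")
p2 = build_prompt("Second sample essay text ...")
# The instruction prefix is identical across prompts; only the essay differs.
shared_prefix = p1[: p1.index("Essay:")]
```

Logging and comparing the instruction prefix across conditions is a simple check that prompt wording was in fact held constant.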

In addition, we acknowledge that ChatGPT can potentially generate inaccurate results. Nevertheless, in the context of this study, our examination of the results generated by ChatGPT did not reveal significant inaccuracies that would warrant inclusion in our findings.

From a methodological perspective, we reported an interrater reliability between the coders of 75%. While this level of agreement was statistically significant, signifying the reliability of our coders' analyses, it did not reach the desired level of precision. We acknowledge this as a limitation of the study and suggest enhancing interrater reliability through additional coder training.
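The two standard interrater statistics relevant here, raw percent agreement (the 75% figure above) and Cohen's kappa, which corrects agreement for chance, can be computed as follows. The labels in the example are illustrative, not the study's codes.

```python
# Sketch of percent agreement and Cohen's kappa for two coders.
# Example labels are illustrative only.
from collections import Counter

def percent_agreement(coder1, coder2):
    """Fraction of items on which both coders assigned the same label."""
    return sum(a == b for a, b in zip(coder1, coder2)) / len(coder1)

def cohens_kappa(coder1, coder2):
    """Chance-corrected agreement: (p_o - p_e) / (1 - p_e)."""
    n = len(coder1)
    po = percent_agreement(coder1, coder2)
    c1, c2 = Counter(coder1), Counter(coder2)
    # Chance agreement: sum over labels of the product of marginal proportions.
    pe = sum((c1[lab] / n) * (c2[lab] / n) for lab in set(coder1) | set(coder2))
    return (po - pe) / (1 - pe)

a = ["affective", "affective", "cognitive", "cognitive"]
b = ["affective", "affective", "cognitive", "affective"]
po = percent_agreement(a, b)   # 0.75, the same agreement level reported above
kappa = cohens_kappa(a, b)     # 0.5 after correcting for chance
```

As the example shows, a 75% raw agreement can correspond to a considerably lower chance-corrected value, which is one reason additional coder training is worthwhile.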

In addition, it is worth noting that the advancement of generative AI like ChatGPT opens new avenues in educational feedback mechanisms. Beyond generating feedback, these AI models have the potential to redefine how feedback is presented and assimilated. In the realm of research on adaptive learning systems, the findings of this study also echo the importance of adaptive learning support empowered by AI and ChatGPT (Rummel et al., 2016), which can pave the way for tailored educational experiences that respond dynamically to individual student needs. This concerns not just the feedback's content but its delivery, timing, and adaptability. Further exploratory data analyses, such as sequential analysis and data mining, may offer insights into the nuanced ways different adaptive learning supports can foster student discussions (Papamitsiou & Economides, 2014). This involves dissecting feedback dynamics, understanding how varied feedback types stimulate discourse, and identifying patterns that lead to enhanced student engagement.

Ensuring the reliability and validity of AI-empowered feedback is also crucial. The goal is to ascertain that technology-empowered learning support genuinely enhances students' learning in a consistent and unbiased manner. Given that ChatGPT generates varied responses depending on the prompt, future validation studies that enhance methodological rigor are both timely and essential. For example, in-depth prompt validation and blind feedback assessment studies could probe the consistency and quality of ChatGPT's responses, and comparative analyses with other AI models could also be useful.

From an educational standpoint, our findings advocate integrating ChatGPT as a feedback source alongside peer feedback in higher education essay-writing tasks, given the potential complementary role of peer-generated and ChatGPT-generated feedback. This approach could alleviate the workload burden on teachers, particularly in online courses with large numbers of students.

This study contributes to the young but rapidly growing literature in two distinct ways. From a research perspective, it addresses a significant void by responding to the lack of research on AI-generated feedback for complex tasks like essay writing in higher education. It bridges this gap by analyzing the effectiveness of ChatGPT-generated feedback compared to peer-generated feedback, thereby establishing a foundation for further exploration in this field. From a practical perspective, the findings offer insights into the potential integration of ChatGPT as a feedback source within higher education contexts. The finding that ChatGPT's feedback could complement peer feedback highlights its applicability for enhancing feedback practices; this holds particular promise for courses with substantial enrolments and essay-writing components, providing teachers with a feasible alternative for delivering constructive feedback to a larger number of students.

Data availability

The data are available upon reasonable request.

Alqassab, M., Strijbos, J. W., & Ufer, S. (2018). Training peer-feedback skills on geometric construction tasks: Role of domain knowledge and peer-feedback levels. European Journal of Psychology of Education , 33 (1), 11–30. https://doi.org/10.1007/s10212-017-0342-0 .


Amiryousefi, M., & Geld, R. (2021). The role of redressing teachers’ instructional feedback interventions in EFL learners’ motivation and achievement in distance education. Innovation in Language Learning and Teaching , 15 (1), 13–25. https://doi.org/10.1080/17501229.2019.1654482 .

Arguedas, M., Daradoumis, A., & Xhafa Xhafa, F. (2016). Analyzing how emotion awareness influences students’ motivation, engagement, self-regulation and learning outcome. Educational Technology and Society , 19 (2), 87–103. https://www.jstor.org/stable/jeductechsoci.19.2.87 .


Banihashem, S. K., Noroozi, O., van Ginkel, S., Macfadyen, L. P., & Biemans, H. J. (2022). A systematic review of the role of learning analytics in enhancing feedback practices in higher education. Educational Research Review , 100489. https://doi.org/10.1016/j.edurev.2022.100489 .

Banihashem, S. K., Dehghanzadeh, H., Clark, D., Noroozi, O., & Biemans, H. J. (2023). Learning analytics for online game-based learning: A systematic literature review. Behaviour & Information Technology , 1–28. https://doi.org/10.1080/0144929X.2023.2255301 .

Bellhäuser, H., Dignath, C., & Theobald, M. (2023). Daily automated feedback enhances self-regulated learning: A longitudinal randomized field experiment. Frontiers in Psychology , 14 , 1125873. https://doi.org/10.3389/fpsyg.2023.1125873 .

Bond, M., Khosravi, H., De Laat, M., Bergdahl, N., Negrea, V., Oxley, E., & Siemens, G. (2024). A meta systematic review of artificial intelligence in higher education: A call for increased ethics, collaboration, and rigour. International Journal of Educational Technology in Higher Education , 21 (4), 1–41. https://doi.org/10.1186/s41239-023-00436-z .

Bulqiyah, S., Mahbub, M., & Nugraheni, D. A. (2021). Investigating writing difficulties in Essay writing: Tertiary Students’ perspectives. English Language Teaching Educational Journal , 4 (1), 61–73. https://doi.org/10.12928/eltej.v4i1.2371 .

Callender, A. A., Franco-Watkins, A. M., & Roberts, A. S. (2016). Improving metacognition in the classroom through instruction, training, and feedback. Metacognition and Learning , 11 (2), 215–235. https://doi.org/10.1007/s11409-015-9142-6 .

Cao, J., Li, M., Wen, M., & Cheung, S. C. (2023). A study on prompt design, advantages and limitations of ChatGPT for deep learning program repair. arXiv preprint arXiv:2304.08191. https://doi.org/10.48550/arXiv.2304.08191 .

Dai, W., Lin, J., Jin, F., Li, T., Tsai, Y. S., Gasevic, D., & Chen, G. (2023). Can large language models provide feedback to students? A case study on ChatGPT. https://doi.org/10.35542/osf.io/hcgzj .

Darvishi, A., Khosravi, H., Sadiq, S., Gašević, D., & Siemens, G. (2024). Impact of AI assistance on student agency. Computers & Education , 210 , 104967. https://doi.org/10.1016/j.compedu.2023.104967 .

Deeva, G., Bogdanova, D., Serral, E., Snoeck, M., & De Weerdt, J. (2021). A review of automated feedback systems for learners: Classification framework, challenges and opportunities. Computers & Education , 162 , 104094. https://doi.org/10.1016/j.compedu.2020.104094 .

Diezmann, C. M., & Watters, J. J. (2015). The knowledge base of subject matter experts in teaching: A case study of a professional scientist as a beginning teacher. International Journal of Science and Mathematics Education , 13 , 1517–1537. https://doi.org/10.1007/s10763-014-9561-x .

Drachsler, H. (2023). Towards highly informative learning analytics . Open Universiteit. https://doi.org/10.25656/01:26787 .

Drachsler, H., & Kalz, M. (2016). The MOOC and learning analytics innovation cycle (MOLAC): A reflective summary of ongoing research and its challenges. Journal of Computer Assisted Learning , 32 (3), 281–290. https://doi.org/10.1111/jcal.12135 .

Er, E., Dimitriadis, Y., & Gašević, D. (2021). Collaborative peer feedback and learning analytics: Theory-oriented design for supporting class-wide interventions. Assessment & Evaluation in Higher Education , 46 (2), 169–190. https://doi.org/10.1080/02602938.2020.1764490 .

Farrokhnia, M., Banihashem, S. K., Noroozi, O., & Wals, A. (2023). A SWOT analysis of ChatGPT: Implications for educational practice and research. Innovations in Education and Teaching International , 1–15. https://doi.org/10.1080/14703297.2023.2195846 .

Gan, Z., An, Z., & Liu, F. (2021). Teacher feedback practices, student feedback motivation, and feedback behavior: How are they associated with learning outcomes? Frontiers in Psychology , 12 , 697045. https://doi.org/10.3389/fpsyg.2021.697045 .

Gao, X., Noroozi, O., Gulikers, J. T. M., Biemans, H. J., & Banihashem, S. K. (2024). A systematic review of the key components of online peer feedback practices in higher education. Educational Research Review , 100588. https://doi.org/10.1016/j.edurev.2023.100588 .

Gielen, M., & De Wever, B. (2015). Scripting the role of assessor and assessee in peer assessment in a wiki environment: Impact on peer feedback quality and product improvement. Computers & Education , 88 , 370–386. https://doi.org/10.1016/j.compedu.2015.07.012 .

Gombert, S., Fink, A., Giorgashvili, T., Jivet, I., Di Mitri, D., Yau, J., & Drachsler, H. (2024). From the Automated Assessment of Student Essay Content to highly informative feedback: A case study. International Journal of Artificial Intelligence in Education , 1–39. https://doi.org/10.1007/s40593-023-00387-6 .

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research , 77 (1), 81–112. https://doi.org/10.3102/003465430298487 .

Holt-Reynolds, D. (1999). Good readers, good teachers? Subject matter expertise as a challenge in learning to teach. Harvard Educational Review , 69 (1), 29–51. https://doi.org/10.17763/haer.69.1.pl5m5083286l77t2 .

Huang, A. Y., Lu, O. H., & Yang, S. J. (2023). Effects of artificial intelligence–enabled personalized recommendations on learners’ learning engagement, motivation, and outcomes in a flipped classroom. Computers & Education , 194 , 104684. https://doi.org/10.1016/j.compedu.2022.104684 .

Katz, A., Wei, S., Nanda, G., Brinton, C., & Ohland, M. (2023). Exploring the efficacy of ChatGPT in analyzing student teamwork feedback with an existing taxonomy. arXiv preprint arXiv:2305.11882. https://doi.org/10.48550/arXiv.2305.11882 .

Kerman, N. T., Noroozi, O., Banihashem, S. K., Karami, M., & Biemans, H. J. (2022). Online peer feedback patterns of success and failure in argumentative essay writing. Interactive Learning Environments , 1–13. https://doi.org/10.1080/10494820.2022.2093914 .

Kerman, N. T., Banihashem, S. K., Karami, M., Er, E., Van Ginkel, S., & Noroozi, O. (2024). Online peer feedback in higher education: A synthesis of the literature. Education and Information Technologies , 29 (1), 763–813. https://doi.org/10.1007/s10639-023-12273-8 .

King, A. (2002). Structuring peer interaction to promote high-level cognitive processing. Theory into Practice , 41 (1), 33–39. https://doi.org/10.1207/s15430421tip4101_6 .

Konold, K. E., Miller, S. P., & Konold, K. B. (2004). Using teacher feedback to enhance student learning. Teaching Exceptional Children , 36 (6), 64–69. https://doi.org/10.1177/004005990403600608 .

Korteling, J. H., van de Boer-Visschedijk, G. C., Blankendaal, R. A., Boonekamp, R. C., & Eikelboom, A. R. (2021). Human-versus artificial intelligence. Frontiers in Artificial Intelligence , 4 , 622364. https://doi.org/10.3389/frai.2021.622364 .

Labuhn, A. S., Zimmerman, B. J., & Hasselhorn, M. (2010). Enhancing students’ self-regulation and mathematics performance: The influence of feedback and self-evaluative standards. Metacognition and Learning , 5 , 173–194. https://doi.org/10.1007/s11409-010-9056-2 .

Lamb, R., Firestone, J., Schmitter-Edgecombe, M., & Hand, B. (2019). A computational model of student cognitive processes while solving a critical thinking problem in science. The Journal of Educational Research , 112 (2), 243–254. https://doi.org/10.1080/00220671.2018.1514357 .

Latifi, S., Noroozi, O., & Talaee, E. (2023). Worked example or scripting? Fostering students’ online argumentative peer feedback, essay writing and learning. Interactive Learning Environments , 31 (2), 655–669. https://doi.org/10.1080/10494820.2020.1799032 .

Li, L., Liu, X., & Steckelberg, A. L. (2010). Assessor or assessee: How student learning improves by giving and receiving peer feedback. British Journal of Educational Technology , 41 (3), 525–536. https://doi.org/10.1111/j.1467-8535.2009.00968.x .

Liu, N. F., & Carless, D. (2006). Peer feedback: The learning element of peer assessment. Teaching in Higher Education , 11 (3), 279–290. https://doi.org/10.1080/13562510600680582 .

Liunokas, Y. (2020). Assessing students’ ability in writing argumentative essay at an Indonesian senior high school. IDEAS: Journal on English language teaching and learning. Linguistics and Literature , 8 (1), 184–196. https://doi.org/10.24256/ideas.v8i1.1344 .

Nelson, M. M., & Schunn, C. D. (2009). The nature of feedback: How different types of peer feedback affect writing performance. Instructional Science , 37 , 375–401. https://doi.org/10.1007/s11251-008-9053-x .

Noroozi, O., Banihashem, S. K., Taghizadeh Kerman, N., Parvaneh Akhteh Khaneh, M., Babayi, M., Ashrafi, H., & Biemans, H. J. (2022). Gender differences in students’ argumentative essay writing, peer review performance and uptake in online learning environments. Interactive Learning Environments , 1–15. https://doi.org/10.1080/10494820.2022.2034887 .

Noroozi, O., Biemans, H., & Mulder, M. (2016). Relations between scripted online peer feedback processes and quality of written argumentative essay. The Internet and Higher Education , 31, 20-31. https://doi.org/10.1016/j.iheduc.2016.05.002

Noroozi, O., Banihashem, S. K., Biemans, H. J., Smits, M., Vervoort, M. T., & Verbaan, C. L. (2023). Design, implementation, and evaluation of an online supported peer feedback module to enhance students’ argumentative essay quality. Education and Information Technologies , 1–28. https://doi.org/10.1007/s10639-023-11683-y .

Papamitsiou, Z., & Economides, A. A. (2014). Learning analytics and educational data mining in practice: A systematic literature review of empirical evidence. Journal of Educational Technology & Society , 17 (4), 49–64. https://www.jstor.org/stable/jeductechsoci.17.4.49 .

Pardo, A., Jovanovic, J., Dawson, S., Gašević, D., & Mirriahi, N. (2019). Using learning analytics to scale the provision of personalised feedback. British Journal of Educational Technology , 50 (1), 128–138. https://doi.org/10.1111/bjet.12592 .

Patchan, M. M., Schunn, C. D., & Correnti, R. J. (2016). The nature of feedback: How peer feedback features affect students’ implementation rate and quality of revisions. Journal of Educational Psychology , 108 (8), 1098. https://doi.org/10.1037/edu0000103 .

Ramsden, P. (2003). Learning to teach in higher education . Routledge.

Ray, P. P. (2023). ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet of Things and Cyber-Physical Systems , 3 , 121–154. https://doi.org/10.1016/j.iotcps.2023.04.003 .

Rüdian, S., Heuts, A., & Pinkwart, N. (2020). Educational Text Summarizer: Which sentences are worth asking for? In DELFI 2020 - The 18th Conference on Educational Technologies of the German Informatics Society (pp. 277–288). Bonn, Germany.

Rummel, N., Walker, E., & Aleven, V. (2016). Different futures of adaptive collaborative learning support. International Journal of Artificial Intelligence in Education , 26 , 784–795. https://doi.org/10.1007/s40593-016-0102-3 .

Shi, M. (2019). The effects of class size and instructional technology on student learning performance. The International Journal of Management Education , 17 (1), 130–138. https://doi.org/10.1016/j.ijme.2019.01.004 .


Toulmin, S. (1958). The uses of argument . Cambridge University Press.

Valero Haro, A., Noroozi, O., Biemans, H. J., Mulder, M., & Banihashem, S. K. (2023). How does the type of online peer feedback influence feedback quality, argumentative essay writing quality, and domain-specific learning? Interactive Learning Environments , 1–20. https://doi.org/10.1080/10494820.2023.2215822 .

White, J., Fu, Q., Hays, S., Sandborn, M., Olea, C., Gilbert, H., & Schmidt, D. C. (2023). A prompt pattern catalog to enhance prompt engineering with chatgpt. arXiv preprint arXiv:2302.11382 . https://doi.org/10.48550/arXiv.2302.11382 .

Wu, Y., & Schunn, C. D. (2020). From feedback to revisions: Effects of feedback features and perceptions. Contemporary Educational Psychology , 60 , 101826. https://doi.org/10.1016/j.cedpsych.2019.101826 .

Xia, Q., Chiu, T. K., Zhou, X., Chai, C. S., & Cheng, M. (2022). Systematic literature review on opportunities, challenges, and future research recommendations of artificial intelligence in education. Computers and Education: Artificial Intelligence , 100118. https://doi.org/10.1016/j.caeai.2022.100118 .

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education–where are the educators? International Journal of Educational Technology in Higher Education , 16 (1), 1–27. https://doi.org/10.1186/s41239-019-0171-0 .

Zhang, Z. V., & Hyland, K. (2022). Fostering student engagement with feedback: An integrated approach. Assessing Writing , 51 , 100586. https://doi.org/10.1016/j.asw.2021.100586 .

Zuccon, G., & Koopman, B. (2023). Dr ChatGPT, tell me what I want to hear: How prompt knowledge impacts health answer correctness. arXiv preprint arXiv:2302.13793. https://doi.org/10.48550/arXiv.2302.13793 .


No funding has been received for this research.

Author information

Authors and Affiliations

Open Universiteit, Heerlen, The Netherlands

Seyyed Kazem Banihashem & Hendrik Drachsler

Wageningen University and Research, Wageningen, The Netherlands

Seyyed Kazem Banihashem & Omid Noroozi

Ferdowsi University of Mashhad, Mashhad, Iran

Nafiseh Taghizadeh Kerman

The University of Alabama, Tuscaloosa, USA

Jewoong Moon

DIPE Leibniz Institute, Goethe University, Frankfurt, Germany

Hendrik Drachsler


Contributions

S. K. Banihashem led this research experiment. N. T. Kerman contributed to the data analysis and writing. O. Noroozi contributed to designing, writing, and reviewing the manuscript. J. Moon contributed to writing and revising the manuscript. H. Drachsler contributed to writing and revising the manuscript.

Corresponding author

Correspondence to Seyyed Kazem Banihashem .

Ethics declarations

Declaration of AI-assisted technologies in the writing process

The authors used generative AI for language editing and took full responsibility.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article.

Banihashem, S.K., Kerman, N.T., Noroozi, O. et al. Feedback sources in essay writing: peer-generated or AI-generated feedback?. Int J Educ Technol High Educ 21 , 23 (2024). https://doi.org/10.1186/s41239-024-00455-4

Download citation

Received : 20 November 2023

Accepted : 18 March 2024

Published : 12 April 2024

DOI : https://doi.org/10.1186/s41239-024-00455-4


  • AI-generated feedback
  • Essay writing
  • Feedback sources
  • Higher education
  • Peer feedback

week 4 journal critical thinking

To read this content please select one of the options below:

Please note you do not have access to teaching notes, using definitions to provoke deep explorations into the nature of leadership.

Journal of Leadership Education

ISSN : 1552-9045

Article publication date: 15 October 2018

Issue publication date: 15 October 2018

Leadership is filled with concepts that often do not have an agreed upon definition. The purpose of this paper is to share a learning activity that provokes students’ thinking about the nature of leadership using six leadership definitions. This activity is a dynamic starting place to explore what leadership is and is not, how it differs from management, a historical perspective of leadership, and students’ diverse perspectives about leadership. This activity is a straightforward, critical thinking exercise that offers a conduit to a deeper understanding that how we define leadership says something about what we value in a leader. We suggest modifications to this definitional exercise and discuss how to use it in different teaching environments.

Raffo, D.M. and Clark, L.A. (2018), "Using Definitions to Provoke Deep Explorations into the Nature of Leadership", Journal of Leadership Education , Vol. 17 No. 4, pp. 208-218. https://doi.org/10.12806/V17/I4/C1

Emerald Publishing Limited

Copyright © 2018, The Journal of Leadership Education

Exploring Factors That Support Pre-service Teachers’ Engagement in Learning Artificial Intelligence

  • Research Article
  • Open access
  • Published: 12 April 2024

  • Musa Adekunle Ayanwale (ORCID: orcid.org/0000-0001-7640-9898)
  • Emmanuel Kwabena Frimpong
  • Oluwaseyi Aina Gbolade Opesemowo (ORCID: orcid.org/0000-0003-0242-7027)
  • Ismaila Temitayo Sanusi (ORCID: orcid.org/0000-0002-5705-6684)


Artificial intelligence (AI) is becoming increasingly relevant, and students need to understand the concept. To design an effective AI program for schools, we need to find ways to expose students to AI knowledge, provide AI learning opportunities, and create engaging AI experiences. However, there is a lack of trained teachers who can facilitate students’ AI learning, so we need to focus on developing the capacity of pre-service teachers to teach AI. Since engagement is known to enhance learning, it is necessary to explore how pre-service teachers engage in learning AI. This study aimed to investigate pre-service teachers’ engagement with learning AI after a 4-week AI program at a university. Thirty-five participants took part in the study and reported their perception of engagement with learning AI on a 7-factor scale. The factors assessed in the survey included engagement (cognitive—critical thinking and creativity, behavioral, and social), attitude towards AI, anxiety towards AI, AI readiness, self-transcendent goals, and confidence in learning AI. We used a structural equation modeling approach to test the relationships in our hypothesized model using SmartPLS 4.0. The results of our study supported all our hypotheses, with attitude, anxiety, readiness, self-transcendent goals, and confidence being found to influence engagement. We discuss our findings and consider their implications for practice and policy.

Introduction

Artificial intelligence (AI) is becoming increasingly relevant globally, integrated into various aspects of human life and sectors, including education (Long & Magerko, 2020). The growing importance of AI has led to a demand for its incorporation into school systems. While researchers, practitioners, and education policymakers have recognized the significance of teaching AI in K-12 systems (Ma et al., 2023; Touretzky et al., 2019), limited initiatives have been taken in the context of teacher education (Sanusi et al., 2022). Nevertheless, education stakeholders agree about the importance of AI education, as evidenced by the development of tools, curricular activities, and frameworks for the effective implementation of AI as a subject or integrated throughout the curriculum (Casal-Otero et al., 2023; Mahipal et al., 2023; Sanusi, 2023). While these initiatives are crucial for promoting AI education in schools, focusing on teacher education is essential (Sanusi et al., 2023). Existing literature highlights a need for further work on teacher education programs for AI. Although there are a few initiatives for teacher education on AI, they are primarily conducted as professional development programs. However, to ensure the integration of AI within the K-12 system, future teachers must be prepared to facilitate AI, as it is now considered an essential skill for the future (Frimpong, 2022; Park et al., 2023).

As a new subject in schools and teacher education programs, learning AI requires new approaches to engage students with learning materials and activities. Engagement is crucial because studies have found a correlation between engagement and learning (Carroll et al., 2021; Fredricks et al., 2004; Poondej & Lerdpornkulrat, 2016). These studies suggest that more engaged students tend to have better learning outcomes. Bryson and Hand (2007) stated that engagement is key to student autonomy and improved learning overall. Given the importance of engagement, research has been conducted to understand how to increase students’ engagement in learning. For example, Kim et al. (2015) explored the use of robotics to promote STEM engagement in pre-service teachers, while Volet et al. (2019) examined engagement in collaborative science learning among pre-service teacher students. Although research on pre-service teachers and engagement in STEM learning continues to grow, there is currently limited research on student engagement in AI education. Xia et al. (2022) discussed student engagement from the perspective of self-determination theory, but no study has investigated the factors that influence pre-service teachers’ engagement with AI. Therefore, this research aims to examine the factors that support students’ engagement with AI in the context of teacher education. The framework used in this research combines the theory of planned behavior (Ajzen, 2020) with other constructs, including engagement. By exploring the factors that support pre-service teachers’ engagement in learning AI, this study contributes to the limited literature on developing AI literacy within teacher education programs. The findings of this research will advance our knowledge of how to effectively engage students in learning AI.

To better understand the factors that impact student engagement with learning AI, we conducted an AI intervention for 35 pre-service teachers. We then collected their perspectives using a 7-factor scale, considering engagement (cognitive—critical thinking and creativity, behavioral, and social), intrinsic motivation, attitude towards AI, anxiety towards AI, AI readiness, self-transcendent goals, and confidence in learning AI. To analyze the participants’ data, we utilized SmartPLS 4.0 to perform variance-based structural equation modeling and evaluate our proposed model. This study is organized as follows: first, we outline the aim of the study; then, we review related research, discuss the theoretical framework, and develop hypotheses in the “Review of Related Work” section. The “Methodology” section provides a detailed explanation of the data collection method, participants, and analytical approaches. In the “Results” section, we present the findings of the data analysis, followed by a discussion of the implications of the study in the “Discussion” section. Finally, we conclude with a discussion of the study’s limitations and suggestions for future research.

Review of Related Work

In this section, we review the related work and develop the study hypotheses. We specifically discuss research that has explored pre-service teachers’ engagement within the STEM (science, technology, engineering, and mathematics) education context. We further explain the theoretical framework that inspired our research, highlight why exploring engagement in learning AI is essential, and propose a set of hypotheses based on Fig. 1.

Figure 1: Research conceptual framework. Note: AT = attitude towards AI; AN = anxiety towards AI; AR = AI readiness; SG = self-transcendent goals; CL = confidence in learning AI; ENG = student engagement in the AI program; CECT = cognitive engagement (critical thinking); CEC = cognitive engagement (creativity); BESL = behavioral engagement (self-directed learning); SOE = social engagement.

Engagement in the STEM Teacher Education Program

Engagement in STEM teacher education programs is crucial for improving student outcomes in STEM subjects. Research has shown that active engagement in STEM education leads to higher-order thinking skills, increased motivation, and improved achievement in learning activities (Kamarrudin et al., 2023). Engagement in STEM learning has been recognized as beneficial for preparing students to address real-world problems (Dong et al., 2019; Kim et al., 2015). Challenges in implementing integrated STEM curricula in schools due to the lack of teachers’ experience have also been reported in the literature (e.g., Hamad et al., 2022). Several studies (Aydeniz & Bilican, 2018; Dong et al., 2019) have investigated the relationship among different variables and engagement. However, little empirical evidence exists on what predicts pre-service teachers’ engagement in STEM education programs.

Furthermore, there is a paucity of research that focuses on the factors that influence pre-service teachers’ engagement in learning AI. As AI is considered a STEM-related concept, this study aims to fill this gap by investigating the factors influencing pre-service teachers’ engagement in learning AI. Understanding these factors will provide valuable insights into how to effectively prepare pre-service teachers to integrate AI into their teaching practices. This study will also contribute to developing strategies and interventions to enhance pre-service teachers’ learning experiences in AI. In addition, findings from this study will have practical implications for teacher education programs and curriculum development. By identifying the specific factors such as attitude towards AI, anxiety towards AI, AI readiness, self-transcendent goals, and confidence in learning AI that influence pre-service teachers’ engagement in learning AI, pre-service teachers can tailor their pedagogical approach to meet their students’ needs and interests better. Ultimately, the goal is to equip pre-service teachers with the necessary knowledge and skills to integrate AI effectively into their classrooms, ensuring they are prepared adequately for the ever-evolving technological landscape of education.

Extensive research indicates the importance of engagement in learning (Fredricks et al., 2004; Tarantino et al., 2013). Student engagement has also been referred to as a crucial means of fostering and enhancing student learning (Renninger & Bachrach, 2015; Sanusi et al., 2023). Engagement is characterized by the behavioral intensity and emotional quality of a person’s active involvement in a task (Sun et al., 2019). Without engagement, meaningful learning remains elusive (Kim et al., 2015), and one cannot accurately determine the extent to which a person has grasped a concept. Within the context of teacher education, particularly in STEM-related programs, we have identified literature that emphasizes the significance of engagement in promoting increased learning (Lange et al., 2022; Ryu et al., 2019). Previous research (e.g., Grimble, 2019; McClure et al., 2017) suggests that pre-service teachers’ engagement with learning materials fosters a deep mastery of the subject matter and effective pedagogical practices that can stimulate their students’ interest in STEM.

Teacher education programs can equip pre-service teachers with the skills and knowledge necessary to cultivate STEM literacy in the next generation by immersing them in hands-on experiences, encouraging them to explore real-world applications, and supporting collaborative learning (Suryadi et al., 2023 ). In this way, pre-service teachers become more than just conveyors of information; they also foster curiosity, problem-solving, and innovation in their classrooms, cultivating a lifelong interest in STEM disciplines. Moreover, integrating STEM courses into teacher education programs helps pre-service teachers develop a growth mindset and adaptability (Griful-Freixenet et al., 2021 ; Jones et al., 2017 ; Rowston et al., 2020 ), both of which are necessary for navigating the ever-changing landscape of science and technology. Pre-service teachers’ engagement and learning of STEM subjects in teacher education programs are vital for their future performance in the classroom (Berisha & Vula, 2021; Bosica et al., 2021 ). These programs should emphasize acquiring topic knowledge and developing teaching practices that encourage students’ active participation. By implementing active participation in STEM education programs, pre-service teachers better understand STEM concepts and learn how to create dynamic and interactive learning environments (Billington, 2023 ; Yllana-Prieto et al., 2023 ). Exposure to various teaching methods (Bin Abdulrahman et al., 2021 ) and the integration of technology provide students with the necessary capabilities to meet the evolving needs of STEM education. Similarly, encouraging pre-service teachers’ engagement in STEM courses goes beyond the transfer of knowledge (Huang et al., 2022 ; Manasia et al., 2020 ). It instills a passion for these disciplines, inspires them to develop a growth mindset, and cultivates lifelong learners.

Berisha and Vula (2021) stated that the engagement and learning of pre-service teachers in STEM subjects are crucial for their professional development and the success of their future students. Teacher education programs strive to equip pre-service teachers with the knowledge and skills to effectively teach STEM subjects to their future students (Yang & Ball, 2022). It is important to foster their curiosity and enhance their problem-solving abilities. By actively engaging in STEM learning during these programs, educators become well-prepared to inspire the future of innovation and scientific discovery, ensuring a brighter future for STEM education. Encouraging pre-service teachers’ interest in and learning of STEM courses helps build their confidence and competence in making these subjects accessible and enjoyable for their future students. As pre-service teachers become more adept in using STEM teaching methods, they are better equipped to address the challenges and misconceptions that often discourage students from pursuing STEM careers (Akaygun & Aslan-Tutak, 2020; Çinar et al., 2016; Delello, 2014). Ultimately, the success of STEM teacher education programs hinges on their ability to instill a genuine passion for these subjects in pre-service teachers, while providing them with the knowledge and skills to inspire the next generation of problem-solvers, critical thinkers, and innovators.

Theoretical Background

This study is based on the theory of planned behavior (Ajzen, 2020 ) and incorporates other relevant constructs. In the field of AI education, this theory has primarily been used to examine the intentions of various stakeholders in terms of learning (Chai et al., 2020a , 2020b ; Sing et al., 2022 ) or teaching AI (Ayanwale & Sanusi, 2023; Ayanwale et al., 2022 ). These constructs have previously been used as predictors of behavioral intention. However, we have not found any studies that specifically utilize these constructs as predictors of engagement in the context of AI education, particularly in teacher education programs. Nonetheless, we briefly mention some instances where the variables examined in this study are related to engagement in similar fields.

Attitude Towards AI

In STEM programs, attitudes towards AI education play a crucial role in determining pre-service teachers’ readiness for the evolving educational landscape. A positive attitude towards AI encourages acceptance of its value as a tool to enhance STEM instruction (Papadakis et al., 2021 ), while negative attitudes can lead to resistance and limited adoption (Balakrishnan et al., 2021 ). Pre-service teachers must develop an open-minded attitude towards AI, enabling them to leverage its potential for personalized learning and innovative teaching. This will also ensure that AI becomes a valuable tool in their future STEM classrooms. The engagement of pre-service teachers in AI education is grounded in educational theories and pedagogical principles (Celik, 2023 ). Constructivist theories emphasize the significance of active engagement, collaboration, and hands-on experiences in learning (Kaufman, 1996 ). AI education for pre-service teachers aligns with these theories, advocating for immersive and experiential learning opportunities. Furthermore, the literature (Celik, 2023 ; Shelman, 1987; Yau et al., 2023 ) draws upon the principles of technological pedagogical content knowledge (TPACK), suggesting that effective AI education involves the integration of technological knowledge, pedagogical skills, and subject matter expertise. Theoretical perspectives often emphasize the importance of pre-service teachers developing a positive attitude (Opesemowo et al., 2022 ) and a deep understanding of AI concepts and their applications in educational settings. However, studies (Al Darayseh, 2023 ; Kelly et al., 2023 ; Zhang et al., 2023 ) have demonstrated that attitude is a critical factor that influences teachers’ acceptance or rejection of the use of AI. Some individuals hold a positive attitude towards AI technologies and recognize their potential, even if they do not fully comprehend the essence of these technologies (Yadrovskaia et al., 2023 ). Kaya et al. 
(2024) observed that personality traits, AI anxiety, and demographics significantly shape attitudes towards AI. The use of AI in the STEM context is an ongoing topic of public discourse, and there is a need for reliable measures to assess pre-service teachers’ attitudes towards AI in STEM programs.

Anxiety Towards AI

Anxiety towards AI is akin to technophobia, a term used to describe fear or aversion towards technology in general (Li & Huang, 2020; Wang & Wang, 2022). Various perspectives on anxiety towards AI and pre-service teachers’ education in STEM programs have been proposed. Some argue that anxiety towards AI stems from a lack of understanding and fear of the unknown (Hopcan et al., 2023; Zhan et al., 2023). They suggest that pre-service teachers can better understand and overcome their anxiety by receiving comprehensive education in AI technologies. Others believe that anxiety towards AI among pre-service teachers is justified because they feel threatened by the potential job market implications of AI advancements. Anxiety towards AI education in STEM programs can hinder pre-service teachers’ acceptance of technology-driven teaching techniques. This apprehension may stem from concerns about their technological skills or anxieties that AI may replace traditional instructional responsibilities. Pre-service instructors can build confidence in AI tools by addressing these concerns through training and assistance (Jones et al., 2017). It is crucial to foster an environment that encourages experimentation while highlighting the complementary role of AI in improving STEM education, reducing anxiety, and promoting its beneficial integration. Kaya et al. (2024) noted that anxiety about learning AI significantly predicted positive and negative attitudes towards AI. According to Terzi (2020) and Wang and Wang (2022), anxiety about learning AI is the fear of being unable to acquire specific knowledge and skills about AI. Several studies have been conducted on anxiety towards AI, but few, if any, have explored the engagement of pre-service teachers, as examined in this study. The relationship between anxiety towards AI and pre-service teachers’ engagement with AI in STEM education is a crucial aspect that requires exploration.
Pre-service teachers who experience anxiety towards AI may be less likely to embrace AI tools in their teaching practices (Chocarro et al., 2023 ; Wang et al., 2021). Therefore, we propose that anxiety towards AI can inversely affect student engagement in the AI program.

AI Readiness

AI readiness refers to the preparedness of pre-service teachers, individuals, organizations, and countries to adopt and utilize AI technologies effectively. It can be seen as the eagerness to use AI technological innovations (Garg & Kumar, 2017). The AI readiness of pre-service teachers in STEM programs demonstrates their willingness to use AI as an instructional resource. AI readiness entails technical proficiency and a proactive attitude towards incorporating AI technologies into instruction. It necessitates knowledge of AI-driven systems and a dedication to remaining current on AI breakthroughs. Educators who are well-prepared for the AI-infused future can exploit AI’s potential (Hsu et al., 2019) to improve STEM instruction, adapt to changing educational demands, and give students creative and individualized learning experiences. Several studies have explored AI readiness in different contexts. Xuan et al. (2023) conducted a survey to evaluate medical AI readiness among undergraduate medical students and found that most participants had moderate readiness. Palade and Carutasu (2021) emphasized the need for organizations to adopt AI technologies to keep up with innovation. They suggested that AI readiness adoption can be normalized under an existing model for digitization. Baguma et al. (2023) proposed an AI readiness index specifically tailored to the needs of African countries, highlighting dimensions such as vision, governance and ethics, digital capacity, and research and development. Taskiran (2023) reported that an AI course in the nursing curriculum positively affected students’ readiness for medical AI. These studies highlight the importance of assessing and enhancing AI readiness in various domains and contexts. Still, few studies have focused on the AI readiness of pre-service teachers to engage with STEM programs.

Self-transcendent Goals

Self-transcendent goals involve looking beyond oneself and adopting a larger perspective, including concern for others (Ge & Yang, 2023 ). Self-transcendence is a multifaceted psychological phenomenon that includes acts of kindness, philanthropy, and community service as individuals strive to go beyond their individual needs and desires to make a positive impact on the lives of others. It has been shown that self-transcendence is linked to mental health and nursing (Haugan et al., 2013 ; Nygren et al., 2005 ), spirituality (Bovero et al., 2023 ; Suliman et al., 2022 ), and performance in learning and motivation (Reeves et al., 2021 ; Yeager et al., 2014 ), social activism (Barton & Hart, 2023 ) among other fields. The self-transcendent aspirations of pre-service teachers in STEM programs encompass their desire to go beyond personal accomplishments (Naftzger, 2018 ) and contribute more significantly to the welfare of society through STEM education. These objectives frequently include instilling a love of STEM in their pupils, promoting diversity and inclusivity, and addressing real-world issues through STEM education (Okundaye et al., 2022 ). Embracing self-transcendent aspirations inspires pre-service teachers to consistently enhance their STEM topic knowledge, pedagogical abilities, and empathy, driving them to become inspirational educators who inspire future generations to engage profoundly with STEM and promote positive social change. With self-transcendence, pre-service teachers are motivated to continuously adapt and evolve their teaching practices, seeking innovative ways to integrate AI tools and resources into their lessons. By embracing the new trend of teaching and learning AI, pre-service teachers are preparing their students for the future and actively shaping the future of education. 
To the best of our knowledge, few studies (Ge & Yang, 2023 ; Sanusi et al., 2024a , 2024b ; Yeager et al., 2014 ) have been conducted to examine whether pre-service teachers with a self-transcendent goal for engaging AI are more motivated to learn AI.

Confidence in Learning AI

Pre-service teachers’ confidence in learning AI is a significant component of their readiness to integrate AI into STEM education (Roy et al., 2022 ). Confidence here refers to their belief in their ability to effectively learn AI-related knowledge and skills (Lin et al., 2023 ). When pre-service teachers feel confident in their ability to master AI, they are more likely to participate in AI-related professional development, investigate AI applications in their teaching practices, and adapt to the changing educational landscape. Building this confidence through professional development training is critical for equipping pre-service teachers to use AI as a beneficial resource for improving STEM instruction and preparing students for an AI-driven future. This study attempts to validate existing research (Sanusi et al., 2024a , 2024b ) by investigating whether confidence in learning AI influences student engagement in an AI program.

Engagement in AI Learning

Engagement sparks curiosity and motivates individuals to actively participate in and absorb new information. When learners are engaged, they are more likely to ask questions, seek additional resources, and apply the material to their own experiences. According to Martin ( 2012 ), motivation is the basis of engagement, so AI can be used as a tool to engage pre-service teachers in integrated STEM learning and teaching (Kim et al., 2015 ). Exploring engagement in AI learning is essential, as it establishes a relationship between engagement and learning. Since there are indications that students engaged in learning activities benefit from increased learning, it is imperative to explore this relationship. This investigation is crucial because AI learning is a new initiative, and strategies must be examined to effectively communicate the concepts to students and teachers. Based on the description by Fredricks et al. ( 2004 ), engagement is a multidimensional construct that encompasses behavior, emotion, and cognition. We will briefly describe each engagement type (in relation to AI learning) highlighted below.

Cognitive Engagement—Critical Thinking: Cognitive (Looking at the Focused Effort Students Give to What Is Being Taught)

Learning and mastering artificial intelligence (AI) require critical thinking (Benvenuti et al., 2023), particularly in cognitive engagement. Cognitive engagement (CE) describes how students process information (Schnitzler et al., 2021). AI requires deep cognitive engagement from learners because of its complex algorithms (Jaiswal & Arun, 2021), diverse applications, and ethical implications. Critical thinking in this context involves analyzing data sources for potential biases, evaluating the ethical implications of AI decisions, and challenging the assumptions that underpin AI decisions. Additionally, it requires learners to explore and evaluate different approaches and methods to solve real-world problems using AI techniques. Developing critical thinking skills with cognitive engagement helps individuals understand AI concepts and provides them with the tools to innovate effectively and navigate the rapidly changing AI landscape. In addition, cognitive engagement through critical thinking catalyzes innovation in the fast-expanding field of AI. Cognitive engagement and critical thinking are important aspects of pre-service teachers’ engagement in STEM education. Research has shown that active engagement in STEM education leads to higher-order thinking skills, increased motivation, and improved learning outcomes (Kamarrudin et al., 2023). In STEM education, pre-service teachers employ cognitive engagement via critical thinking skills to successfully teach STEM and achieve meaningful learning experiences for their students (Hacioğlu, 2021). Recently, Yıldız-Feyzioğlu and Kıran (2022) showed that collaborative group investigation (CGI) learning and self-efficacy positively impact the critical thinking skills of pre-service science teachers.
Therefore, cognitive engagement and critical thinking play a crucial role in pre-service teachers’ engagement in STEM education, leading to improved learning outcomes and the development of effective instructional strategies.

Cognitive Engagement—Creativity

Cognitive engagement via creativity is a dynamic and necessary part of learning AI. While AI is founded on mathematical and computational concepts, encouraging creativity in AI education is crucial for several reasons (Lin et al., 2023). Creativity enables students to conceive unique AI applications, leading to novel healthcare, economics, and entertainment solutions. Cognitive engagement for pre-service teachers in STEM education involves their continuous intellectual involvement, the design of stimulating instructional strategies, effective use of technology, and the promotion of a growth mindset (Kim et al., 2015). These cognitive aspects contribute to a dynamic and enriching STEM learning experience, preparing students to think critically, adapt to new challenges, and thrive in a knowledge-based society. Patar (2023) reveals that active engagement activities, such as exploration, sharing knowledge, and assessment, can enhance pre-service teachers’ cognitive engagement. Pre-service teachers should champion the integration of digital tools and resources to enhance the learning experience, providing students with opportunities to explore, experiment, and apply their cognitive skills in a technology-driven world. This integration also supports the development of digital literacy skills, which is essential for success in STEM disciplines. Whether cognitive engagement through creative thinking will significantly affect pre-service teachers in STEM education remains to be investigated.

Behavioral Engagement—Self-directed Learning: Behavioral (Measuring Attendance and Participation)

Behavioral engagement refers to measuring academic performance and participation in educational activities (Bowden et al., 2021). It is critical to understanding the discipline of AI, particularly in the context of self-directed learning (Nazari et al., 2021). Pre-service teachers must consider behavioral engagement an important aspect of STEM education. When pre-service teachers actively engage students in hands-on activities, discussions, and problem-solving tasks, students are more likely to understand STEM concepts better. Taking the initiative indicates a high level of behavioral engagement (Kim et al., 2015). STEM education differs from conventional teaching, which treats students as passive listeners. To implement STEM innovations in the classroom, teachers must design inquiry activities and learning contexts that engage students in authentic problem-solving (Dong et al., 2019). Kim et al. (2015) found that using technology (robotics) significantly impacted students’ behavioral engagement. Thus, this study supports behavioral engagement in STEM education for pre-service teachers.

Social Engagement

Social engagement can be understood as social interaction, an essential component of learning (Okita, 2012). It entails working with peers, experts, and AI communities to exchange ideas, share knowledge, and gain diverse viewpoints, ultimately improving the learning experience and driving creativity. Social engagement for pre-service teachers in STEM education involves building positive relationships within the school community, integrating collaborative learning experiences, actively participating in professional networks, and instilling a sense of social responsibility in students. These social aspects contribute to a holistic STEM education experience, fostering a collaborative and purpose-driven approach that prepares students for success in both academic and real-world STEM contexts. Ishmuradova et al. (2023) reported that pre-service science teachers show high awareness of social responsibility in human welfare, safety, and a sustainable environment; however, their awareness related to practice and participation is relatively low. To our knowledge, there is no study on social engagement among pre-service teachers in STEM education.

Research Hypotheses

H1: Attitude towards AI will significantly positively influence student engagement in the AI program.

H2: Anxiety towards AI will significantly negatively influence student engagement in the AI program.

H3: AI readiness will significantly positively influence student engagement in the AI program.

H4: Self-transcendent goals will significantly positively influence student engagement in the AI program.

H5: Confidence in learning AI will significantly positively influence student engagement in the AI program.

Methodology

Research Context and Participants

This study was conducted at a public university of education in Ghana, focusing on students enrolled in the Information and Communication Technology (ICT) Education program. Notably, the student teachers had not completed any prior courses in AI. As shown in Table 1, 35 pre-service teachers participated in our research, the majority of them male and aged between 19 and 25 years. Most participants (57.1%) were in their second year of the teacher training program. We used a simple random sampling approach: we extended an invitation to all students in the ICT department, and their involvement was based on informed consent. Participants were assured of their anonymity and of their ability to withdraw from the project at any time.

Data Collection Procedure

The data for this study were gathered through an online survey shortly after a 4-week AI short course organized between September and October 2022. The course was designed to expose pre-service teachers to AI knowledge and its ethical implications. It was delivered as a weekly intervention of 2 h 30 min, including assignments, and comprised four learning sessions covering the following topics: Introduction to AI and Ethical Dilemmas, Image Recognition, Algorithms and Bias, Convolutional Neural Networks, k-Nearest Neighbor, and Decision Trees. We used plugged and unplugged activities to demystify these topics for the study participants (Ma et al., 2023): AI tools such as Google Teachable Machine for the plugged activities, and a series of paper-based activities (unplugged) that support collaborative learning (Frimpong, Sanusi, Ayanwale, et al., n.d.). After the sessions, the pre-service teachers completed a survey to capture their perspectives on their learning.

Instrumentation

Our research instrument was adapted from several sources in the research literature (see "Appendix"), with some terms modified slightly to fit our research context. The engagement items were adapted from Bowden et al. (2021), Reeve and Tseng (2011), and Sun et al. (2019). The confidence in learning AI scale was adapted from Xia et al. (2022). Finally, the scales for attitude towards AI, anxiety towards AI, AI readiness, and self-transcendent goals were derived from Sanusi et al. (2023). Responses to all items were collected on a 6-point Likert scale ranging from "strongly disagree" to "strongly agree"; we chose a 6-point scale because it offers respondents more choice and may measure their evaluations more accurately (Taherdoost, 2019).

Analytical Approach

In this study, we employed a variance-based structural equation modeling (VB-SEM) approach to assess our proposed model, estimating the measurement and structural models simultaneously. We chose VB-SEM over covariance-based structural equation modeling (CB-SEM) because it suits our study's characteristics: a small sample size, no strict distributional requirements on the data, a focus on explaining variance, and a complex hierarchical component model (Benitez et al., 2020; Hair et al., 2014). This complexity is evident in our focal construct, student engagement in the AI program. We conducted the data analysis in SmartPLS software version 4.0.9.6 (Ringle et al., 2022). When estimating the model in partial least squares (PLS), we used the path weighting scheme as the estimation method, raw data as the data metric, and the default settings of the initial-weight PLS-SEM algorithm (Hair et al., 2017). To validate the model, we employed the two-stage disjoint approach for higher-order constructs (Sarstedt et al., 2019), since engagement is a higher-order construct consisting of four lower-order constructs: cognitive engagement—critical thinking (CECT), cognitive engagement—creativity (CEC), behavioral engagement—self-directed learning (BESL), and social engagement (SOE).

In addition, we assessed the goodness of model fit for the measurement model (based on the saturated model) and the structural model (based on the estimated model) using the standardized root mean square residual (SRMR) and other fit indices, including the normed fit index (NFI), the unweighted least squares distance (d_ULS), and the geodesic distance (d_G) (Benitez et al., 2020; Hair et al., 2017). In evaluating the measurement model, both first- and second-order constructs were examined for reliability and validity, considering item factor loadings (FL ≥ 0.60), construct reliability (Cronbach's alpha and composite reliability, CA ≥ 0.70 and CR ≥ 0.70), convergent validity (average variance extracted, AVE ≥ 0.50), and discriminant validity (heterotrait-monotrait ratio of correlations, HTMT < 0.85 or HTMT < 0.90) (Ayanwale & Ndlovu, 2024; Hair et al., 2017, 2019, 2022; Henseler et al., 2015; Ringle et al., 2023; Sarstedt et al., 2019). Items with factor loadings below 0.60 and constructs with AVE below 0.50 were removed, and the models were subsequently refined. To test our hypotheses, we analyzed the relationships between constructs in the structural model using bootstrapping with 10,000 subsamples in PLS, assessing the magnitude and statistical significance of direct effects to understand the relative importance of constructs in explaining others (Amusa & Ayanwale, 2021; Hair et al., 2018; Hock et al., 2010; Ringle & Sarstedt, 2016).
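The reliability and validity criteria above can be checked directly from a construct's response matrix and its standardized loadings. The following is a minimal illustration of the standard formulas, not the SmartPLS implementation; the function names are ours:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def composite_reliability(loadings):
    """Composite reliability (rho_c) from standardized factor loadings."""
    s = loadings.sum()
    error = (1 - loadings ** 2).sum()  # indicator error variances
    return float(s ** 2 / (s ** 2 + error))

def ave(loadings):
    """Average variance extracted: mean squared standardized loading."""
    return float(np.mean(loadings ** 2))
```

For example, loadings of (0.7, 0.8, 0.9) give AVE ≈ 0.65 and CR ≈ 0.84, both above the 0.50 and 0.70 cut-offs used in this study.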
We also estimated in-sample predictive power using the coefficient of determination (R²), which should exceed 0.1, and out-of-sample predictive power using PLSpredict (Q²predict), obtained by comparing the RMSE (root mean square error) or MAE (mean absolute error) values of all indicators in the PLS-SEM analysis to those of a linear model (LM) benchmark. When most indicators yield lower RMSE or MAE values than the LM benchmark, the model has a moderate level of predictive power; when only a minority of indicators exhibit lower prediction errors than the LM benchmark, predictive power is low; and when no indicator shows lower prediction errors than the LM benchmark, the model lacks predictive power (Sanusi et al., 2023; Shmueli & Koppius, 2011; Shmueli et al., 2019).
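This decision rule can be sketched as a small helper. The function is our own illustration of the rubric; the "high" tier for the case where every indicator beats the benchmark follows the interpretation used later in the results:

```python
def classify_predictive_power(pls_errors, lm_errors):
    """Apply the PLSpredict rubric: count indicators whose PLS-SEM
    prediction error (RMSE or MAE) is lower than the LM benchmark's."""
    wins = sum(p < l for p, l in zip(pls_errors, lm_errors))
    n = len(pls_errors)
    if wins == n:
        return "high"      # all indicators beat the LM benchmark
    if wins > n / 2:
        return "moderate"  # most indicators beat the benchmark
    if wins > 0:
        return "low"       # only a minority do
    return "none"          # no indicator beats the benchmark
```

For instance, with three indicators whose PLS-SEM RMSEs are all below the LM benchmark values, the rule returns "high".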

Results

This section presents the results of the analysis. Table 2 reports the overall model fit for the measurement and structural models. The SRMR value falls below the recommended threshold (SRMR < 0.08), and the SRMR, NFI, d_ULS, and d_G values are all below the 95% quantile (HI95) of their reference distributions. These findings collectively suggest that the measurement model demonstrates an acceptable fit and provide empirical support for the validity of the estimated model (Molefi & Ayanwale, 2023; Quintana & Maxwell, 1999).

In the measurement model, we evaluated reliability and validity for both the lower-order constructs (LOC) and the higher-order construct (HOC). As shown in Table 3, factor loadings for the LOC range from 0.648 to 0.975, composite reliability (CR) values from 0.826 to 0.980, Cronbach's alpha (α) values from 0.783 to 0.962, and average variance extracted (AVE) values from 0.541 to 0.923. For the HOC, factor loadings range from 0.784 to 0.846, with CR = 0.888, Cronbach's α = 0.834, and AVE = 0.664.

All these values surpass the recommended thresholds, indicating that the lower- and higher-order constructs exhibit strong validity, reliability, and internal consistency. We also confirmed discriminant validity, as shown in Table 4: each reflective construct associates more strongly with its own indicators than with any other construct in the PLS path model. In other words, the constructs are distinguishable from one another, with correlation values well below the suggested threshold, underscoring the measurement model's good discriminant validity (Ayanwale & Oladele, 2021; Hair et al., 2022).

The findings from the structural model are illustrated in Table 4 and Fig. 2. Attitude towards AI has a significant positive effect on student engagement in the AI program (β = 0.262, t = 3.814, p < 0.05), supporting H1. Anxiety towards AI exerts a negative influence on student engagement in the AI program (β = −0.257, t = −3.438, p < 0.05), validating H2. AI readiness positively influences student engagement in the AI program (β = 0.265, t = 4.420, p < 0.05), supporting H3. Self-transcendent goals positively impact student engagement in the AI program (β = 0.232, t = 4.171, p < 0.05), supporting H4. Finally, confidence in learning AI is positively associated with student engagement in the AI program (β = 0.386, t = 6.037, p < 0.05), supporting H5. Together, attitude towards AI, anxiety towards AI, AI readiness, self-transcendent goals, and confidence in learning AI explain 63.1% of the variance in student engagement in the AI program. The model's ability to explain variance within the sample is therefore adequate, as the coefficient of determination (R² = 0.631) surpasses the 0.10 threshold (Ayanwale & Molefi, 2024; Falk & Miller, 1992; Molefi & Ayanwale, 2023).
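The t-statistics above come from the bootstrapping procedure described in the analytical approach. As a sketch only, the following illustrates percentile bootstrapping of a single standardized path in plain Python/NumPy, using a simple-regression stand-in for the full PLS path estimate that SmartPLS computes internally; the function names and the resampling count are ours:

```python
import numpy as np

def standardized_beta(x, y):
    """Standardized simple-regression slope (equals the Pearson correlation)."""
    xs = (x - x.mean()) / x.std()
    ys = (y - y.mean()) / y.std()
    return float(np.mean(xs * ys))

def bootstrap_t(x, y, n_boot=10_000, seed=42):
    """Bootstrap standard error and t-statistic for the path x -> y:
    resample respondents with replacement and re-estimate beta each time."""
    rng = np.random.default_rng(seed)
    n = len(x)
    est = standardized_beta(x, y)
    boots = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)  # resample rows with replacement
        boots[b] = standardized_beta(x[idx], y[idx])
    se = boots.std(ddof=1)
    return est, est / se             # point estimate and bootstrap t
```

A path whose estimate exceeds roughly twice its bootstrap standard error (|t| > 1.96) is significant at p < 0.05, the criterion applied to H1–H5.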

Fig. 2 Structural model result

In addition, effect sizes (f²) were calculated to assess how much removing each exogenous variable influences the model's ability to explain variance. The f² values were interpreted according to Cohen's (1988) guidelines, which classify effect sizes as small (f² ≥ 0.02), medium (f² ≥ 0.15), or large (f² ≥ 0.35). As shown in Table 5, AT had the largest effect size (f² = 0.292, medium by Cohen's guidelines): removing AT would substantially reduce the model's explained variance, so its inclusion is essential for an accurate model. CL (f² = 0.214) also had a medium effect size, indicating a substantial contribution to the model's explanatory power, as did AR (f² = 0.179), whose removal would moderately decrease explained variance. AN (f² = 0.042) and SG (f² = 0.031) had small effect sizes: while these variables contribute to explained variance, their removal would have only a minor impact on overall performance. Retaining AT and CL is therefore crucial to the model's accuracy and explanatory power, and AR, though less influential, should also be retained.
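The f² computation and Cohen's cut-offs can be written compactly. This is a minimal sketch with our own helper names; f² compares the model's R² with and without the focal predictor:

```python
def f_squared(r2_included, r2_excluded):
    """Cohen's f²: change in R² when a predictor is dropped, scaled by 1 - R²."""
    return (r2_included - r2_excluded) / (1 - r2_included)

def cohen_f2_label(f2):
    """Classify f² per Cohen (1988): 0.02 small, 0.15 medium, 0.35 large."""
    if f2 >= 0.35:
        return "large"
    if f2 >= 0.15:
        return "medium"
    if f2 >= 0.02:
        return "small"
    return "negligible"
```

By these cut-offs, the reported values of 0.292 and 0.042 fall in the medium and small bands, respectively.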

Furthermore, examining the Q²predict results (see Table 6), all metrics associated with the endogenous construct (student engagement in the AI program) exhibited lower RMSE (root mean square error) and MAE (mean absolute error) values than the linear model benchmark based on the indicator means of the training sample, and all Q²predict values exceeded 0. The indicators in our PLS-SEM analysis thus produced smaller prediction errors than the LM benchmark, indicating strong predictive capability for our model (Sanusi et al., 2023; Shmueli et al., 2019).

Discussion

While previous research has explored constructs such as AT, CL, AR, AN, and SG and their links to behavioral intention in the context of AI and education (Ayanwale et al., 2022; Chai et al., 2021, 2020a, 2020b), this study contributes to the existing literature by investigating how these constructs affect pre-service teachers' engagement with AI, addressing a gap in the literature. The paper adopts a holistic approach to measuring pre-service teacher engagement in AI programs across four dimensions: cognitive engagement (critical thinking), cognitive engagement (creativity), behavioral engagement (self-directed learning), and social engagement. Additionally, composite-based structural equation modeling is employed to unravel the interrelationships among student engagement with AI learning, attitude towards AI, anxiety towards AI, self-transcendent goals, AI readiness, and confidence in learning AI.

The findings affirm all proposed hypotheses (H1–H5) as antecedents of pre-service teachers' engagement with AI content; collectively, these constructs account for 63.1% of the observed variance in engagement. Among the predictors, confidence in learning AI emerges as the most influential, followed by AI readiness, attitude towards AI, and self-transcendent goals. These findings resonate with previous research (e.g., Ayanwale, 2023; Lin et al., 2023; Papadakis et al., 2021; Roy et al., 2022). Confidence in one's ability to learn AI and use technology is a recurring theme in the technology adoption literature: Bandura's (1977) theory underscores the significance of self-efficacy in adopting and effectively using new technologies, so confidence in learning AI plays a pivotal role in driving engagement with AI activities. This aligns with Chen et al. (2018), who found that undergraduate students' confidence in their ability to grasp AI significantly predicted their intention to learn AI. Consistent with our findings, Sun et al. (2019) asserted that confidence, as a component of intrinsic motivation, significantly predicts students' engagement in MOOC courses: when students perceive learning in MOOCs as enjoyable and are confident in their abilities, they are more motivated and engaged in their studies. It is therefore imperative to prioritize building pre-service teachers' confidence in their capacity to learn AI, and to create supportive learning environments and practical training that enhance their engagement in AI programs.

As the second most influential variable, AI readiness has been identified as critical in enhancing student engagement in learning AI (Tang & Chen, 2018). While existing studies have primarily explored the relationship between AI readiness and intention (Ayanwale et al., 2022; Chai et al., 2020a, 2020b), this study delves into how individuals' preparedness and willingness to engage with and adapt to AI influence engagement with AI learning materials, examining whether their comfort level with AI technology contributes to their active involvement in AI-related educational programs, including attendance, coursework engagement, and participation in AI-related projects (Dai et al., 2020; Hsu et al., 2019; Sun et al., 2019). The positive coefficient in our findings indicates that higher AI readiness correlates with increased engagement in learning AI: pre-service teachers are more likely to engage in AI-related activities when they feel prepared and willing to embrace AI. This underscores the importance of adequately preparing pre-service teachers to work with AI; AI readiness is critical in teacher training for enhancing engagement and effectiveness in AI education.

In addition, previous research (Ayanwale et al., 2022; Kumar & Mantri, 2021; Weng et al., 2018) has consistently highlighted the significance of attitude in predicting the intention to learn AI. Our study likewise observes a substantial positive relationship between a positive attitude towards AI and pre-service teacher engagement with AI. This finding aligns with Papadakis et al. (2021), who emphasize that a positive attitude towards AI promotes its acceptance as a valuable tool for enhancing STEM instruction and increasing engagement. It further corroborates Kim and Park (2019), who reported that individuals with more positive attitudes towards AI were more likely to plan to use AI-based technologies. Ayanwale (2023) and Ng and Chu (2021) also underscore the importance of a positive attitude, as students holding one were more inclined to learn AI. Our results indicate that pre-service teachers are more likely to participate actively in AI-related educational activities when they view AI favorably, underscoring the critical role of instilling positive attitudes and perceptions about AI in teacher training programs; educators and institutions should prioritize this aspect to enhance engagement with AI-related content.

We also examine the impact of self-transcendent goals, which encompass objectives beyond personal well-being. Our results reveal a significant positive coefficient: holding self-transcendent goals positively correlates with pre-service teacher engagement in learning AI. This aligns with Naftzger (2018) and Okundaye et al. (2022), who found that pre-service teachers in STEM programs often harbor aspirations to make a broader societal impact transcending personal accomplishment. In practical terms, engagement increases when teachers are motivated by goals that benefit their students and society at large. Emphasizing self-transcendent goals in pre-service teachers may therefore enhance their commitment to AI-related education and its potential impact on students.

Beyond previous studies exploring the relationship between anxiety and intention (Ayanwale et al., 2022; Chai et al., 2020a, b), our study examines how self-perceived fear and discomfort concerning AI tools affect engagement in AI programs. The results support our hypothesis: the negative coefficient indicates that anxiety towards AI is negatively associated with pre-service teacher engagement in learning AI. This finding resonates with Katsarou (2021) and Kin (2020), who also found a significant negative relationship between anxiety and intention regarding AI. Jones et al. (2017) note that such apprehension might arise from concerns about technological skills or fears that AI might replace traditional instructional roles. To address this anxiety, pre-service teachers can build confidence in AI tools through training and support; creating an environment that encourages experimentation and emphasizes AI's complementary role in improving STEM education is crucial for reducing anxiety and encouraging engagement. While some scholars find anxiety less predictive of behavioral intention, our study suggests that anxiety towards AI significantly affects pre-service teachers' engagement with learning AI, underscoring the need to recognize and address AI-related anxiety and to pursue strategies that reduce anxiety and enhance comfort with AI in AI education programs. Notably, while our study specifically targets pre-service teachers, we recognize the importance of exploring how these findings might replicate across academic disciplines; discussing the relevance of our results to broader educational contexts offers insight into potential variations in different settings and supports a more comprehensive understanding of the generalizability and applicability of our findings.

Implication for Practice and Policy

Understanding the factors influencing pre-service teachers’ engagement with AI has significant implications for both educational practices and policy development. Based on this study’s findings, we recommend that educational institutions and policymakers prioritize integrating AI-related content within pre-service teacher education programs. This integration will facilitate the development of essential AI literacy and skills, equipping teachers to incorporate AI technologies into their teaching methods effectively. To ensure a well-rounded and practical approach, schools should offer opportunities for teachers to engage in ongoing professional development focused on AI. Additionally, we emphasize the importance of exposing pre-service teachers to various AI-powered teaching tools and methodologies. This exposure will empower them to create more engaging and personalized learning experiences for their students. Consequently, policies should encourage the adoption of AI tools that can cater to the unique needs of each student, fostering more inclusive and accommodating learning environments.

Furthermore, pre-service teachers must comprehend the ethical implications associated with AI technologies. They should be well-prepared to guide their students in the responsible utilization of AI. Policymakers can contribute by allocating school resources to acquire AI technologies and providing teachers with the necessary tools and training. This includes investments in AI software, hardware, and technical support to ensure teachers can effectively integrate AI into their classrooms. Robust policies should be established to safeguard student data when employing AI tools. Pre-service teachers should be well-versed in data privacy and security measures and adhere to regulations when incorporating AI technologies into their teaching practices.

Promoting cross-disciplinary learning that incorporates AI concepts is also crucial: pre-service teachers should be primed to teach AI not only as a standalone subject but also as a complementary tool in various disciplines. Policies can foster collaboration among pre-service teachers, experienced educators, and AI experts; such interactions can yield valuable insights and drive innovation in AI education. Encouraging pre-service teachers to engage in action research assessing AI's impact on student learning and on their teaching practices can also be pivotal, informing best practices and contributing to the growing body of knowledge on AI in education.

On the policy front, policymakers and educators should strive to ensure that AI resources and training are accessible to all, regardless of a student's socioeconomic background or geographical location. This may entail initiatives aimed at bridging the digital divide and promoting equitable access to AI education. The policy framework should also provide ongoing support and professional development as AI technologies evolve, so that teachers possess the skills to adapt to changes and stay current with developments in AI in education.

Finally, our study offers practical recommendations for practitioners. By emphasizing the critical role of building confidence in pre-service teachers, enhancing AI readiness in teacher training, fostering positive attitudes towards AI, and incorporating self-transcendent goals, we provide actionable steps for educators and institutions: a roadmap for creating supportive learning environments and practical training that enhance pre-service teacher engagement in AI programs.

Limitation and Future Work

Despite the valuable results this study generates, some limitations should be noted. First, the study participants were drawn solely from the ICT education department of one university in Ghana; future work should consider subjects across different disciplines within teacher education programs, as well as other regions, to understand students' engagement from a broader perspective. Second, our sample size may limit the generalizability of the results, so future research should use larger samples across different contexts. Third, the purely quantitative approach limits the insight we can gain from students' explanations during the learning process; a qualitative or mixed-methods approach should be considered for triangulation. Lastly, the AI program in this study spanned only a few weeks; future research should investigate student engagement across a full academic session and through longitudinal study.

References

Ajzen, I. (2020). The theory of planned behavior: Frequently asked questions. Human Behavior and Emerging Technologies, 2(4), 314–324. https://doi.org/10.1002/hbe2.195

Al Darayseh, A. (2023). Acceptance of artificial intelligence in teaching science: Science teachers’ perspective. Computers and Education: Artificial Intelligence, 4 , 100132. https://doi.org/10.1016/j.caeai.2023.100132

Amusa, J. O., & Ayanwale, M. A. (2021). Partial least square modeling of personality traits and academic achievement in physics. Asian Journal of Assessment in Teaching and Learning, 11 (2), 77–92. https://doi.org/10.37134/ajatel.vol11.2.8.2021

Ayanwale, M. A. (2023). Evidence from Lesotho secondary schools on students’ intention to engage in artificial intelligence learning. In 2023 IEEE AFRICON , Nairobi, Kenya, 199–204. https://doi.org/10.1109/AFRICON55910.2023.10293644

Ayanwale, M. A., & Molefi, R. R. (2024). Exploring intention of undergraduate students to embrace chatbots: From the vantage point of Lesotho. International Journal of Education Technology in Higher Education, 21 , 20. https://doi.org/10.1186/s41239-024-00451-8

Ayanwale, M. A., Molefi, R. R., & Matsie, N. (2023). Modeling secondary school students’ attitudes toward TVET subjects using social cognitive and planned behavior theories. Social Sciences & Humanities Open, 8 (1), 100478.

Ayanwale, M. A., & Ndlovu, M. (2024). Investigating factors of students’ behavioral intentions to adopt chatbot technologies in higher education: Perspective from expanded diffusion theory of innovation. Computers in Human Behavior Report, 14 , 100396. https://doi.org/10.1016/j.chbr.2024.100396

Ayanwale, M. A., & Oladele, J. I. (2021). Path modeling of online learning indicators and students’ satisfaction during Covid-19 pandemic. International Journal of Innovation, Creativity and Change , 15 (10), 521–541. https://www.ijicc.net/images/Vol_15/Iss_10/151038_Ayanwale_2021_E1_R.pdf . Accessed 19 Oct 2023.

Ayanwale, M. A., & Sanusi, I. T. (2023). Perceptions of STEM vs. Non-STEM teachers toward teaching artificial intelligence. Proceedings of the Institute of Electrical and Electronics Engineers Africa Conference , Kenya, 16 , 933–937.  https://doi.org/10.1109/AFRICON55910.2023.10293455

Ayanwale, M. A., Sanusi, I. T., Adelana, O. P., Aruleba, K., & Oyelere, S. S. (2022). Teachers’ readiness and intention to teach artificial intelligence in schools. Computers and Education: Artificial Intelligence, 3 , 1–11. https://doi.org/10.1016/j.caeai.2022.100099

Aydeniz, M., & Bilican, K. (2018). The impact of engagement in STEM activities on primary pre-service teachers’ conceptualization of STEM and knowledge of STEM pedagogy. Journal of Research in STEM Education, 4 (2), 213–234. https://doi.org/10.51355/jstem.2018.46

Baguma, R., Mkoba, E., Nahabwe, M., Mubangizi, M. G., Amutorine, M., & Wanyama, D. (2023). Towards an artificial intelligence readiness index for Africa. In P. Ndayizigamiye, H. Twinomurinzi, B. Kalema, K. Bwalya, & M. Bembe (Eds.), Digital-for-development: Enabling transformation, inclusion and sustainability through ICTs. Cham, 23(4), 234–258. https://doi.org/10.1007/978-3-031-28472-4_18

Balakrishnan, J., Dwivedi, Y. K., Hughes, L., & Boy, F. (2021). Enablers and inhibitors of AI-powered voice assistants: A dual-factor approach by integrating the status quo bias and technology acceptance model. Information Systems Frontiers . https://doi.org/10.1007/s10796-021-10203-y

Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84 (2), 191. https://doi.org/10.1037/0033-295X.84.2.191

Barton, C., & Hart, R. (2023). The experience of self-transcendence in social activists. Behavioral Sciences, 13 (1), 66. https://www.mdpi.com/2076-328X/13/1/66 . Accessed 13 Sept 2023.

Benitez, J., Henseler, J., Castillo, A., & Schuberth, F. (2020). How to perform and report an impactful analysis using partial least squares: Guidelines for confirmatory and explanatory IS research. Information & Management, 57 (2), 103168. https://doi.org/10.1016/j.im.2019.05.003

Benvenuti, M., Cangelosi, A., Weinberger, A., Mazzoni, E., Benassi, M., Barbaresi, M., & Orsoni, M. (2023). Artificial intelligence and human behavioral development: A perspective on new skills and competencies acquisition for the educational context. Computers in Human Behavior, 148 , 107903. https://doi.org/10.1016/j.chb.2023.107903

Billington, B. (2023). A case study: Exploring pre-service teachers’ readiness for teaching in K-12 online learning environments while enrolled in a university-based teacher preparation program [Ed.D. dissertation, Drexel University]. ProQuest Dissertations & Theses Global. https://www.proquest.com/dissertations-theses/case-study-exploring-pre-service-teachers/docview/2854683389/se-2?accountid=13425

Bin Abdulrahman, K. A., Jumaa, M. I., Hanafy, S. M., Elkordy, E. A., Arafa, M. A., Ahmad, T., & Rasheed, S. (2021). Students’ perceptions and attitudes after exposure to three different instructional strategies in applied anatomy. Advances in Medical Education and Practice, 12 , 607–612. https://doi.org/10.2147/AMEP.S310147

Bosica, J., Pyper, J. S., & MacGregor, S. (2021). Incorporating problem-based learning in a secondary school mathematics pre-service teacher education course. Teaching and Teacher Education, 102 , 103335. https://doi.org/10.1016/j.tate.2021.103335

Bovero, A., Pesce, S., Botto, R., Tesio, V., & Ghiggia, A. (2023). Self-transcendence: Association with spirituality in an Italian sample of terminal cancer patients. Behavioral Sciences, 13 (7), 559. https://www.mdpi.com/2076-328X/13/7/559 . Accessed 26 Sept 2023.

Bowden, J. L. H., Tickle, L., & Naumann, K. (2021). The four pillars of tertiary student engagement and success: A holistic measurement approach. Studies in Higher Education, 46 (6), 1207–1224. https://doi.org/10.1080/03075079.2019.1672647

Bryson, C., & Hand, L. (2007). The role of engagement in inspiring teaching and learning. Innovations in Education and Teaching International, 44 (4), 349–362. https://doi.org/10.1080/14703290701602748

Carroll, M., Lindsey, S., Chaparro, M., & Winslow, B. (2021). An applied model of learner engagement and strategies for increasing learner engagement in the modern educational environment. Interactive Learning Environments, 29 (5), 757–771. https://doi.org/10.1080/10494820.2019.1636083

Casal-Otero, L., Catala, A., Fernández-Morante, C., Taboada, M., Cebreiro, B., & Barro, S. (2023). AI literacy in K-12: A systematic literature review. International Journal of STEM Education, 10 (1), 29. https://doi.org/10.1186/s40594-023-00418-7

Celik, I. (2023). Towards Intelligent-TPACK: An empirical study on teachers’ professional knowledge to ethically integrate artificial intelligence (AI)-based tools into education. Computers in Human Behavior, 138 , 107468. https://doi.org/10.1016/j.chb.2022.107468

Chai, C. S., Lin, P. Y., Jong, M. S. Y., Dai, Y., Chiu, T. K., & Qin, J. (2021). Perceptions of and behavioral intentions towards learning artificial intelligence in primary school students. Educational Technology & Society, 24 (3), 89–101. https://www.jstor.org/stable/27032858 . Accessed 14 Oct 2023.

Chai, C. S., Lin, P., & Jong, M. S. (2020). Factors influencing students’ behavioral intention to continue artificial intelligence learning. Conference proceedings of International Symposium on Educational Technology, Thailand, 8 , 147–150. https://doi.org/10.1109/ISET49818.2020.00040

Chai, C. S., Wang, X., & Xu, C. (2020b). An extended theory of planned behavior for modeling Chinese secondary school students’ intention to learn artificial intelligence. Mathematics, 8 (11), 1–18. https://doi.org/10.3390/math8112089

Chen, Y., Wang, Y., & Zou, W. (2018). The impact of attitudes, subjective norms, and perceived behavioral control on high school students’ intentions to study computer science. Education Sciences, 8 (2), 65.

Google Scholar  

Chocarro, R., Cortiñas, M., & Marcos-Matás, G. (2023). Teachers’ attitudes towards chatbots in education: A technology acceptance model approach considering the effect of social language, bot proactiveness, and users’ characteristics. Educational Studies, 49 (2), 295–313. https://doi.org/10.1080/03055698.2020.1850426

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Routledge.

Dai, Y., Chai, C. S., Lin, P. Y., Jong, M. S. Y., Guo, Y., & Qin, J. (2020). Promoting students’ well-being by developing their readiness for the artificial intelligence age. Sustainability, 12 (16), 1–15. https://doi.org/10.3390/su12166597

Dong, Y., Xu, C., Song, X., Fu, Q., Chai, C. S., & Huang, Y. (2019). Exploring the effects of contextual factors on in-service teachers’ engagement in STEM teaching. The Asia-Pacific Education Researcher, 28 (1), 25–34. https://doi.org/10.1007/s40299-018-0407-0

Falk, R. F., & Miller, N. B. (1992). A primer for soft modeling . University of Akron Press.

Fredricks, J. A., Blumenfeld, P. C., & Paris, A. H. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research, 74 (1), 59–109. https://doi.org/10.3102/00346543074001059

Frimpong, E. K (2022).  Developing pre-service teachers’ artificial intelligence literacy  (Master's thesis, Itä-Suomen yliopisto).

Frimpong, E. K., Sanusi, I. T., Ayanwale, M. A., & Oyelere, S. S. (n.d) Assessing pre-service teachers’ needs for implementing artificial intelligence in teacher education. Computers in Human Behavior Reports.

Garg, A., & Kumar, J. (2017). Exploring customer satisfaction with university cafeteria food services. An empirical study of Temptation Restaurant at Taylor’s University, Malaysia. European Journal of Tourism, Hospitality and Recreation, 8 (2), 96–106. https://doi.org/10.1515/ejthr-2017-0009

Ge, B. H., & Yang, F. (2023). Transcending the self to transcend suffering. Frontiers in Psychology, 14 , 1113965. https://doi.org/10.3389/fpsyg.2023.1113965

Griful-Freixenet, J., Struyven, K., & Vantieghem, W. (2021). Exploring pre-service teachers’ beliefs and practices about two inclusive frameworks: Universal Design for Learning and differentiated instruction. Teaching and Teacher Education, 107 , 103503. https://doi.org/10.1016/j.tate.2021.103503

Grimble, T. (2019). Teacher professional development challenges for science, technology, engineering, and mathematics education: A case study . University of Phoenix.

HacioĞLu, Y. (2021). The effect of STEM education on 21th century skills: Preservice science teachers’ evaluations. Journal of STEAM Education, 4 (2), 140–167.

Hair, J. F., Hult, G. T. M., Ringle, C. M., & Sarstedt, M. (2022). A primer on partial least squares structural equation modeling (PLS-SEM) (3rd ed.). Sage.

Hair, J. F., Hult, G. T., Ringle, C. M., & Sarstedt, M. (2017). A primer on partial least squares structural equation modeling (PLS-SEM) (2a ed.). SAGE Publications.

Hair, J. F., Risher, J. J., Sarstedt, M., & Ringle, C. M. (2019). When to use and how to report the results of PLS-SEM. European Business Review, 31 (1), 2–24. https://doi.org/10.1108/ebr-11-2018-0203

Hair, J. F., Sarstedt, M., Ringle, C. M., & Gudergan, S. P. (2018). Advanced issues in partial least squares structural equation modeling (PLS-SEM) . Sage.

Hair, J. F., Jr., Sarstedt, M., Hopkins, L., & Kuppelwieser, G. V. (2014). Partial least squares structural equation modeling (PLS-SEM). European Business Review, 26 (2), 106–121. https://doi.org/10.1108/EBR-10-2013-0128

Hamad, S., Tairab, H., Wardat, Y., Rabbani, L., AlArabi, K., Yousif, M., . . . Stoica, G. (2022). Understanding science teachers’ implementations of integrated STEM: teacher perceptions and practice. Sustainability, 14 (6), 3594. https://www.mdpi.com/2071-1050/14/6/3594 . Accessed 23 Sept 2023.

Haugan, G., Hanssen, B., & Moksnes, U. K. (2013). Self-transcendence, nurse–patient interaction and the outcome of multidimensional well-being in cognitively intact nursing home patients. Scandinavian Journal of Caring Sciences, 27 (4), 882–893. https://doi.org/10.1111/scs.12000

Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43 (1), 115–135.

Hock, C., Ringle, C. M., & Sarstedt, M. (2010). Management of multi-purpose stadiums: Importance and performance measurement of service interfaces. International Journal of Services Technology and Management, 14 (2/3), 188–207.

Hopcan, S., Türkmen, G., & Polat, E. (2023). Exploring the artificial intelligence anxiety and machine learning attitudes of teacher candidates. Education and Information Technologies . https://doi.org/10.1007/s10639-023-12086-9

Hsu, H.-C.K., Wang, C. V., & Levesque-Bristol, C. (2019). Reexamining the impact of self-determination theory on learning outcomes in the online learning environment. Education and Information Technologies, 24 (3), 2159–2174. https://doi.org/10.1007/s10639-019-09863-w

Huang, B., Siu-Yung Jong, M., Tu, Y.-F., Hwang, G.-J., Chai, C. S., & Yi-Chao Jiang, M. (2022). Trends and exemplary practices of STEM teacher professional development programs in K-12 contexts: A systematic review of empirical studies. Computers & Education, 189 , 104577. https://doi.org/10.1016/j.compedu.2022.104577

Ishmuradova, I. I., Sazonova, T. V., Panova, S. A., Andryushchenko, I. S., Mashkin, N. A., & Zakharova, V. L. (2023). Examining pre-service science teachers’ perspectives on the social responsibility of scientists and engineers. Eurasia Journal of Mathematics, Science and Technology Education, 19 (8), em315. https://doi.org/10.29333/ejmste/13457

Jaiswal, A., & Arun, C. J. (2021). Potential of artificial intelligence for transformation of the education system in India. International Journal of Education and Development Using Information and Communication Technology, 17 (1), 142–158.

Jones, W. M., Smith, S., & Cohen, J. (2017). Pre-service teachers’ beliefs about using maker activities in formal K-12 educational settings: A multi-institutional study. Journal of Research on Technology in Education, 49 (3–4), 134–148. https://doi.org/10.1080/15391523.2017.1318097

Kamarrudin, H., Talib, O., Kamarudin, N., & Ismail, N. (2023). Igniting active engagement in pre-service teachers in STEM education: A comprehensive systematic literature review. Malaysian Journal of Social Sciences and Humanities (MJSSH), 8 (6), 1–26. https://doi.org/10.47405/mjssh.v8i6.2342

Kaufman, D. (1996). Constructivist-based experiential learning in teacher education. Action in Teacher Education, 18 (2), 40–50. https://doi.org/10.1080/01626620.1996.10462832

Kaya, F., Aydin, F., Schepman, A., Rodway, P., Yetişensoy, O., & Demir Kaya, M. (2024). The roles of personality traits, AI anxiety, and demographic factors in attitudes toward artificial intelligence. International Journal of Human-Computer Interaction, 40 (2), 497–514. https://doi.org/10.1080/10447318.2022.2151730

Kelly, S., Kaye, S.-A., & Oviedo-Trespalacios, O. (2023). What factors contribute to the acceptance of artificial intelligence? A Systematic Review. Telematics and Informatics, 77 , 101925. https://doi.org/10.1016/j.tele.2022.101925

Kim, M., & Park, Y. (2019). The relationship between attitudes toward artificial intelligence and students’ intention to use it in education. International Journal of Human-Computer Interaction, 35 (13), 1223–1233.

Kim, C., Kim, D., Yuan, J., Hill, R. B., Doshi, P., & Thai, C. N. (2015). Robotics to promote elementary education pre-service teachers’ STEM engagement, learning, and teaching. Computers & Education, 91 , 14–31. https://doi.org/10.1016/j.compedu.2015.08.005

Kumar, A., & Mantri, A. (2021). Evaluating the attitude towards the intention to use the ARITE system for improving laboratory skills by engineering educators. Education and Information Technologies , 27 , 671–700. https://doi.org/10.1007/s10639-020-10420-z

Lange, A. A., Robertson, L., Tian, Q., Nivens, R., & Price, J. (2022). The effects of an early childhood-elementary teacher preparation program in STEM on pre-service teachers. Eurasia Journal of Mathematics, Science and Technology Education, 18 (12), em2197. https://doi.org/10.29333/ejmste/12698

Li, J., & Huang, J. S. (2020). Dimensions of artificial intelligence anxiety based on the integrated fear acquisition theory. Technology in Society, 63 , 101410. https://doi.org/10.1016/j.techsoc.2020.101410

Lin, X.-F., Zhou, Y., Shen, W., Luo, G., Xian, X., & Pang, B. (2023). Modeling the structural relationships among Chinese secondary school students’ computational thinking efficacy in learning AI, AI literacy, and approaches to learning AI. Education and Information Technologies . https://doi.org/10.1007/s10639-023-12029-4

Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. In Proceedings of the 2020 CHI conference on human factors in computing systems (pp. 1–16).  https://doi.org/10.1145/3313831.3376727

Ma, R., Sanusi, I. T, Mahipal, V., Gonzales, J., & Martin, F. (2023). Developing machine learning algorithm literacy with novel plugged and unplugged approaches.  Proceedings of the 54th ACM Technical Symposium on Computer Science Education , 298–304.  https://doi.org/10.1145/3545945.3569772

Mahipal, V., Ghosh, S. Sanusi, I. T., Ma, R., Gonzales, J. E., & Martin, F.G. (2023). DoodleIt: A novel tool and approach for teaching how CNNs perform image recognition. Australasian Computing Education Conference (ACE ’23 ), January 30-February 3, 2023, Melbourne, VIC, Australia. ACM, New York, NY, USA, 8. https://doi.org/10.1145/3576123.3576127

Manasia, L., Ianos, M. G., & Chicioreanu, T. D. (2020). Pre-service teacher preparedness for fostering education for sustainable development: An empirical analysis of central dimensions of teaching readiness. Sustainability, 12 (1), 166. https://doi.org/10.3390/su12010166

Martin, A. J. (2012). Part II commentary: Motivation and engagement: Conceptual, operational, and empirical clarity. In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds .), Handbook of Research on Student Engagemen t, 5 , 303–311 Springer US. https://doi.org/10.1007/978-1-4614-2018-7_14

McClure, E. R., Guernsey, L., Clements, D. H., Bales, S. N., Nichols, J., Kendall-Taylor, N., & Levine, M. H. (2017). STEM starts early: Grounding science, technology, engineering, and math education in early childhood. Joan Ganz Cooney center at sesame workshop. https://eric.ed.gov/?id=ED574402 . Accessed 12 Sept 2023.

Molefi, R. R., & Ayanwale, M. A. (2023). Using composite structural equation modeling to examine high school teachers’ acceptance of e-learning after Covid-19. New Trends and Issues Proceedings on Humanities and Social Sciences., 10 (1), 01–11. https://doi.org/10.18844/prosoc.v10i1.8837

Naftzger, N. J. (2018).  Exploring the role purpose-related experiences can play in supporting interest development in STEM [Ph.D., Northern Illinois University]. ProQuest Dissertations & Theses Global. United States– Illinois, 1–351. https://www.proquest.com/dissertations-theses/exploring-role-purpose-related-experiencescan/docview/2183332528/se-2?accountid=13425

Nazari, N., Shabbir, M. S., & Setiawan, R. (2021). Application of artificial intelligence powered digital writing assistant in higher education: Randomized controlled trial. Heliyon, 7 (5), e07014. https://doi.org/10.1016/j.heliyon.2021.e07014

Ng, T. K., & Chu, K. W. (2021). Motivating students to learn AI through social networking sites: A case study in Hong Kong. Online Learning, 25 (1), 195–208. https://doi.org/10.24059/olj.v25i1.2454

Nygren, B., Aléx, L., Jonsén, E., Gustafson, Y., Norberg, A., & Lundman, B. (2005). Resilience, sense of coherence, purpose in life and self-transcendence in relation to perceived physical and mental health among the oldest old. Aging & Mental Health, 9 (4), 354–362. https://doi.org/10.1080/1360500114415

Okita, S. Y. (2012). Social interactions and learning. In Seel N. M. (Ed.), Encyclopedia of the sciences of learning , 23, 182–211. Springer. https://doi.org/10.1007/978-1-4419-1428-6_1770

Okundaye, O., Natarajarathinam, M., Qiu, S., Kuttolamadom, M. A., Chu, S., & Quek, F. (2022). Making STEM real: The design of a making-production model for hands-on STEM learning. European Journal of Engineering Education, 47 (6), 1122–1143. https://doi.org/10.1080/03043797.2022.2121685

Opesemowo, O., Obanisola, A., & Oluwatimilehin, T. (2022). From brick-and-mortar to online teaching during the COVID-19 pandemic lockdown in Osun state, Nigeria. Journal of Education in Black Sea Region, 8 (1), 134–142. https://doi.org/10.31578/jebs.v8i1.286

Palade, M., & Carutasu, G. (2021). Organizational readiness for artificial intelligence adoption. Transaction on Engineering and Management, 7 (1), 30–35. https://doi.org/10.1016/j.ijinfomgt.2022.102497

Papadakis, S., Vaiopoulou, J., Sifaki, E., Stamovlasis, D., & Kalogiannakis, M. (2021). Attitudes towards the use of educational robotics: Exploring pre-service and in-service early childhood teacher profiles. Education Sciences , 11 (5), 204. https://www.mdpi.com/2227-7102/11/5/204 . Accessed 7 Oct 2023.

Park, J., Teo, T. W., Teo, A., Chang, J., Huang, J. S., & Koo, S. (2023). Integrating artificial intelligence into science lessons: Teachers’ experiences and views. International Journal of STEM Education, 10 (1), 61. https://doi.org/10.1186/s40594-023-00454-3

Patar, K. (2023). Pre-service mathematics teachers’ engagement in Geogebra Applet-based task design in online learning. AIP Conference Proceedings, 2540 (1), 1–15. https://doi.org/10.1063/5.0106241

Poondej, C., & Lerdpornkulrat, T. (2016). The development of gamified learning activities to increase student engagement in learning. Australian Educational Computing , 31 (2). https://eric.ed.gov/?id=EJ1123845

Quintana, S. M., & Maxwell, S. E. (1999). Implications of recent developments in structural equation modeling for counseling psychology. The Counseling Psychologist, 27 (4), 485–527.

Reeve, J., & Tseng, C. M. (2011). Agency as a fourth aspect of students’ engagement during learning activities. Contemporary Educational Psychology, 36 (4), 257–267. https://doi.org/10.1016/j.cedpsych.2011.05.002

Reeves, S. L., Henderson, M. D., Cohen, G. L., Steingut, R. R., Hirschi, Q., & Yeager, D. S. (2021). Psychological affordances help explain where a self-transcendent purpose intervention improves performance. Journal of Personality and Social Psychology, 120 (1), 1–15. https://doi.org/10.1037/pspa0000246

Renninger, K. A., & Bachrach, J. E. (2015). Studying triggers for interest and engagement using observational methods. Educational Psychologist, 50 (1), 58–69. https://doi.org/10.1080/00461520.2014.999920

Ringle, C. M., & Sarstedt, M. (2016). Gain more insight from your PLS-SEM results. Industrial Management & Data Systems, 116 (9), 1865–1886. https://doi.org/10.1108/IMDS-10-2015-0449

Ringle, C. M., Wende, S., and Becker, J.-M. (2022). “SmartPLS 4.” Oststeinbek: SmartPLS GmbH, http://www.smartpls.com . Accessed 15 Oct 2023.

Ringle, C. M., Sarstedt, M., Sinkovics, N., & Sinkovics, R. R. (2023). A perspective on using partial least squares structural equation modelling in data articles. Data in Brief, 48 , 109074. https://doi.org/10.1016/j.dib.2023.109074

Rowston, K., Bower, M., & Woodcock, S. (2020). The lived experiences of career-change pre-service teachers and the promise of meaningful technology pedagogy beliefs and practice. Education and Information Technologies, 25 (2), 681–705. https://doi.org/10.1007/s10639-019-10064-8

Roy, R., Babakerkhell, M. D., Mukherjee, S., Pal, D., & Funilkul, S. (2022). Evaluating the intention for the adoption of artificial intelligence-based robots in the university to educate the students. IEEE Access, 10 , 125666–125678. https://doi.org/10.1109/ACCESS.2022.3225555

Ryu, M., Mentzer, N., & Knobloch, N. (2019). Pre-service teachers’ experiences of STEM integration: Challenges and implications for integrated STEM teacher preparation. International Journal of Technology and Design Education, 29 (3), 493–512. https://doi.org/10.1007/s10798-018-9440-9

Sanusi, I. T. (2023). Machine Learning Education in the K–12 Context (Doctoral dissertation, Itä-Suomen yliopisto). https://www.uef.fi/en/article/doctoral-defence-of-ismaila-temitayo-sanusi-med-19122023-machine-learning-education-in-the-k-12 . Accessed 27 Oct 2023.

Sanusi, I. T., Ayanwale, M. A., & Chiu, T. K. F. (2024a). Investigating the moderating effects of social good and confidence on teachers’ intention to prepare school students for artificial intelligence education. Education and Information Technologies, 29 (1), 273–295. https://doi.org/10.1007/s10639-023-12250-1

Sanusi, I. T., Ayanwale, M. A., & Tolorunleke, A. E. (2024b). Investigating pre-service teachers’ artificial intelligence perception from the perspective of planned behavior theory. Computers and Education: Artificial Intelligence, 6 , 100202. https://doi.org/10.1016/j.caeai.2024.100202

Sanusi, I. T., Oyelere, S. S., & Omidiora, J. O. (2022). Exploring teachers’ preconceptions of teaching machine learning in high school: A preliminary insight from Africa. Computers and Education Open, 3 , 100072. https://doi.org/10.1016/j.caeo.2021.100072

Sanusi, I. T., Oyelere, S. S., Vartiainen, H., Suhonen, J., & Tukiainen, M. (2023). Developing middle school students’ understanding of machine learning in an African school. Computers and Education: Artificial Intelligence, 5 , 100155. https://doi.org/10.1016/j.caeai.2023.100155

Sarstedt, M., Hair, J. F., Jr-Cheah, J.-H., Becker, J.-M., & Ringle, C. M. (2019). How to specify, estimate, and validate higher-order constructs in PLS-SEM. Australasian Marketing Journal (AMJ), 27 (3), 197–211. https://doi.org/10.1016/j.ausmj.2019.05.003

Schnitzler, K., Holzberger, D., & Seidel, T. (2021). All better than being disengaged: Student engagement patterns and their relations to academic self-concept and achievement. European Journal of Psychology of Education, 36 , 627–652.

Shmueli, G., & Koppius, O. R. (2011). Predictive analytics in information systems research. MIS Quarterly, 35 (3), 553–572.

Shmueli, G., Sarstedt, M., Hair, J. F., Cheah, J.-H., Ting, H., Vaithilingam, S. & Ringle, C. M. (2019). Predictive model assessment in PLS-SEM: Guidelines for using PLSpredict. European Journal of Marketing , 12 (4), 18–35. https://doi.org/10.1108/EJM-02-2019-0189

Shulman, L. (1987). Knowledge and teaching: Foundations of the new reform. Harvard Educational Review, 57 (1), 1–23. https://doi.org/10.17763/haer.57.1.j463w79r56455411

Sing, C. C., Teo, T., Huang, F., Chiu, T. K. F., & Xing, W. (2022). Secondary school students’ intentions to learn AI: Testing moderation effects of readiness, social good and optimism. Educational Technology Research and Development, 70 (3), 765–782. https://doi.org/10.1007/s11423-022-10111-1

Suliman, M., Ghani, N., Sohail, M., & Reed, P. G. (2022). Self-transcendence and spiritual well-being among stroke patients. Journal of Saidu Medical College, Swat, 12 (1), 31–36. https://doi.org/10.52206/jsmc.2022.12.1.628

Sun, Y., Ni, L., Zhao, Y., Shen, X. L., & Wang, N. (2019). Understanding students’ engagement in MOOCs: An integration of self-determination theory and theory of relationship quality. British Journal of Educational Technology, 50 (6), 3156–3174.

Suryadi, A., Purwaningsih, E., Yuliati, L., & Koes-Handayanto, S. (2023). STEM teacher professional development in pre-service teacher education: A literature review. Waikato Journal of Education ,  28 (1), 23–45. https://doi.org/10.15663/wje.v28i1.1063

Taherdoost, H. (2019). What is the best response scale for survey and questionnaire design; review of different lengths of rating scale/attitude scale/Likert scale. International Journal of Academic Research in Management (IJARM), 8 (1). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3588604 . Accessed 22 Oct 2023.

Tang, X., & Chen, Y. (2018). Fundamentals of artificial intelligence (p. 9787567575615). East China Normal University.

Tarantino, K., McDonough, J., & Hua, M. (2013). Effects of student engagement with social media on student learning: A review of literature. The Journal of Technology in Student Affairs, 1 (8), 1–8.

Taskiran, N. (2023). Effect of artificial intelligence course in nursing on students’ medical artificial intelligence readiness: A comparative quasi-experimental study. Nurse Educator, 48 (5), E147–E152. https://doi.org/10.1097/nne.0000000000001446

Terzi, R. (2020). An adaptation of artificial intelligence anxiety scale into Turkish: Reliability and validity study. International Online Journal of Education and Teaching, 7 (4), 1501–1515.

Touretzky, D., Gardner-McCune, C., Breazeal, C., Martin, F., & Seehorn, D. (2019). A Year in K–12 AI Education. AI Magazine, 40 (4), 88–90. https://doi.org/10.1609/aimag.v40i4.5289

Volet, S., Jones, C., & Vauras, M. (2019). Attitude-, group-and activity-related differences in the quality of pre-service teacher students’ engagement in collaborative science learning. Learning and Individual Differences, 73 , 79–91. https://doi.org/10.1016/j.lindif.2019.05.002

Wang, Y.-Y., & Wang, Y.-S. (2022). Development and validation of an artificial intelligence anxiety scale: An initial application in predicting motivated learning behavior. Interactive Learning Environments, 30 (4), 619–634. https://doi.org/10.1080/10494820.2019.1674887

Weng, F., Yang, R. J., Ho, H. J., & Su, H. M. (2018). A TAM-based study of the attitude towards use intention of multimedia among School Teachers. Applied System Innovations, 1 (3), 36. https://doi.org/10.3390/asi1030036

Xia, Q., Chiu, T. K., Lee, M., Sanusi, I. T., Dai, Y., & Chai, C. S. (2022). A self-determination theory (SDT) design approach for inclusive and diverse artificial intelligence (AI) education. Computers & Education, 189 , 104582. https://doi.org/10.1016/j.compedu.2022.104582

Xuan, P. Y., Fahumida, M. I. F., Hussain, M. I. A. N., Jayathilake, N. T., Khobragade, S., Soe, H. H. K., . . . Htay, M. N. N. (2023). Readiness towards artificial intelligence among undergraduate medical students in Malaysia. Education in Medicine Journal, 15 (2), 49–60. https://doi.org/10.21315/eimj2023.15.2.4

Yadrovskaia, M., Porksheyan, M., Petrova, A., Dudukalova, D., & Bulygin, Y. (2023). About the attitude towards artificial intelligence technologies. Proceedings of E3S Web of Conferences , 376 , 05025. https://doi.org/10.1051/e3sconf/202337605025

Yau, K. W., Chai, C. S., Chiu, T. K. F., Meng, H., King, I., & Yam, Y. (2023). A phenomenographic approach on teacher conceptions of teaching artificial intelligence (AI) in K-12 schools. Education and Information Technologies, 28 (1), 1041–1064. https://doi.org/10.1007/s10639-022-11161-x

Yeager, D. S., Henderson, M. D., Paunesku, D., Walton, G. M., D’Mello, S., Spitzer, B. J., & Duckworth, A. L. (2014). Boring but important: A self-transcendent purpose for learning fosters academic self-regulation. Journal of Personality and Social Psychology, 107 (4), 559–580. https://doi.org/10.1037/a0037637

Yıldız-Feyzioğlu, E., & Kıran, R. (2022). Investigating the relationships between self-efficacy for argumentation and critical thinking skills. Journal of Science Teacher Education, 33 (5), 555–577. https://doi.org/10.1080/1046560X.2021.1967568

Yllana-Prieto, F., González-Gómez, D., & Jeong, J. S. (2023). The escape room and breakout as an aid to learning STEM contents in primary schools: An examination of the development of pre-service teachers in Spain. Education, 3–13 , 1–17. https://doi.org/10.1080/03004279.2022.2163183

Zhan, E. S., Molina, M. D., Rheu, M., & Peng, W. (2023). What is there to fear? Understanding Multi-dimensional fear of AI from a technological affordance perspective. International Journal of Human–Computer Interaction, 10 , 1–18. https://doi.org/10.1080/10447318.2023.2261731

Zhang, C., Schießl, J., Plößl, L., Hofmann, F., & Gläser-Zikuda, M. (2023). Acceptance of artificial intelligence among pre-service teachers: A multigroup analysis. International Journal of Educational Technology in Higher Education, 20 (1), 49. https://doi.org/10.1186/s41239-023-00420-7

Download references

Open access funding provided by University of Johannesburg.

Author information

Authors and Affiliations

Department of Science and Technology Education, University of Johannesburg, Auckland Park, 2006, South Africa

Musa Adekunle Ayanwale & Oluwaseyi Aina Gbolade Opesemowo

School of Computing, University of Eastern Finland, P.O. Box 111, 80101, Joensuu, Finland

Emmanuel Kwabena Frimpong & Ismaila Temitayo Sanusi


Corresponding author

Correspondence to Musa Adekunle Ayanwale.

Ethics declarations

Conflict of interest

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Applications and services that use the latest AI technologies are much more convenient to use.

I prefer to use the most advanced AI technologies.

I am confident that AI technologies will follow my instructions.

I can use different software to support AI learning.

I can use appropriate hardware to support AI learning.

I have access to relevant content on AI.

I am confident that I can succeed if I work hard enough in learning AI.

I am certain that I can learn the basic concepts of AI.

I am certain that I can understand the most difficult AI resources.

I am certain that I can design AI applications.

Learning to understand all of the special functions associated with an AI technique/product makes me anxious.

Learning to use an AI technique/product makes me anxious.

Learning how an AI technique/product works makes me anxious.

Learning to interact with an AI technique/product makes me anxious.

I look forward to using AI in my daily life.

I would like to use AI in my learning.

It is important that my future students learn AI.

It is important that my future students acquire the necessary abilities to take advantage of AI.

Self-Transcendent Goals

I wish to use my AI knowledge to serve others.

I wish to use AI to help people with physical and mental difficulties.

I wish I could design AI applications that can benefit people.

I am ready to learn design thinking to enhance my ability to use AI for helping others.

I want to learn AI knowledge to help me to have a positive impact on the world.

I want to master AI technologies to become a citizen who contributes to society.

Cognitive Engagement—Critical Thinking

In this AI course, I use different possible ways to complete the task.

In this AI course, I think about the good and bad of different methods.

In this AI course, I provide different reasons and evidence for my opinions.

In this AI course, I consider different opinions to see which one makes more sense.

In this AI course, I generate many new ideas.

In this AI course, I create different solutions for a problem.

In this AI course, I suggest new ways of doing things.

In this AI course, I produce ideas that are likely to be useful.

Behavioral Engagement—Self-Directed Learning

In this AI course, I explore the online resources on my own.

In this AI course, I set goals to complete this AI class.

In this AI course, I think about different ways or methods I can use to improve my learning.

In this AI course, I adjust my learning method based on my learning progression.

In this AI course, my colleagues and I actively work together to learn new things.

In this AI course, my colleagues and I actively discuss different views we have about things we are learning.

In this AI course, my colleagues and I actively work together to complete tasks.

In this AI course, my colleagues and I actively share and explain our understanding.

In this AI course, my colleagues and I develop complex ideas.
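Scale items like those above are typically administered with Likert-type response options and averaged into per-construct scores, after checking each construct's internal consistency. As a minimal illustration only (the function, item grouping, and response data below are hypothetical, not the authors' actual analysis, which used PLS-SEM measurement criteria), Cronbach's alpha for one construct could be computed as:

```python
# Minimal sketch: internal consistency (Cronbach's alpha) for one
# multi-item Likert construct. All response data here are hypothetical.

def cronbach_alpha(items):
    """items: list of per-item response lists, one list per scale item,
    each of equal length (one entry per respondent)."""
    k = len(items)               # number of items in the construct
    n = len(items[0])            # number of respondents

    def var(xs):                 # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(var(it) for it in items)
    totals = [sum(it[i] for it in items) for i in range(n)]
    return k / (k - 1) * (1 - item_vars / var(totals))

# Hypothetical 5-point responses from four respondents to three items
# of one construct (e.g. the self-transcendent-goals scale above).
item_responses = [
    [4, 5, 3, 4],
    [4, 4, 3, 5],
    [5, 4, 2, 4],
]
print(round(cronbach_alpha(item_responses), 2))  # prints 0.8
```

Values of alpha at or above roughly 0.7 are conventionally read as acceptable internal consistency for a scale of this kind.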

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Ayanwale, M.A., Frimpong, E.K., Opesemowo, O.A.G. et al. Exploring Factors That Support Pre-service Teachers’ Engagement in Learning Artificial Intelligence. Journal for STEM Educ Res (2024). https://doi.org/10.1007/s41979-024-00121-4


Accepted : 25 March 2024

Published : 12 April 2024

DOI : https://doi.org/10.1007/s41979-024-00121-4


Keywords

  • Artificial intelligence
  • Pre-service teachers
  • Student engagement
  • Self-transcendent goals
  • School education
  • Find a journal
  • Publish with us
  • Track your research

search

Printable Warm-Up Writing Journal for 2nd 3rd 4th 5th Grade CYCLE 3

Product image #0

Creative Writing, Grammar, Writing

Grade 2, 3, 4, 5

About This Product

Ready to get your learners working in a more gentle on-ramp to your class time maybe your homeschool already includes instruction, but a gentle warm-up to your learners' writing instruction is a great way to keep them motivated to learn more this is cycle 3 of the warm-up writing journal.

With 45 MORE engaging prompts in varied formats, this easy-to-print journal makes adding writing into class or your homeschool day easy. Simply print out the numbered pages, and have students decorate the cover to their liking. Then, bind the pages with a stapler or a hole punch and yarn. Learners are able to write directly on the page, illustrate when asked to, and check their work for capitalization, spelling, and punctuation errors.

Objective: Provide done-for-you writing prompts for 2nd-5th graders and encourage self-editing with included editing checklists.

Formats: The printable PDF was designed with easy, ink-saving printing in mind. Print 2 pages per sheet, and choose 2-sided printing to save both ink and paper!

Grades: 2nd, 3rd, 4th, 5th

Variations: This is the third cycle of the Warm-Up Writing Journal. This resource will soon be available in multiple variations (referred to as "cycles") for a year-long writing resource!

What you'll get: This resource contains a 49-page PDF that includes:

Journal cover which students can personalize

Instructions

Bonus STEAM extension ideas

Editing checklists on each page

Variety of formats to motivate learners

Printable certificate of completion

What people are saying: "Fun writing prompts. They really encouraged my child to keep writing beyond the first simple sentence! Great resource. Thank you!" -Jessica P.

Looking for more English resources? Be sure to check out my debate activities, critical thinking games, and vocabulary lessons and games on Teach Simple!

Topics include:

-Narrative Writing Matrix Activities

-"Bigger & Better" debate

-"I Have, Who Has" card game and vocabulary lesson on: food & drink, weather & landforms, numbers, colors, and patterns, Christmas, calendar, and school & classroom vocabulary.

Got a request? Feel free to get in touch on socials @melissaisteaching.

Happy teaching!

