
To Improve the Academy


10 Defining Critical Thinking in Higher Education

Permissions: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Please contact [email protected] to use this work in a way not covered by the license.

For more information, read Michigan Publishing's access and usage policy.

Critical thinking is an important learning outcome for higher education, yet the definitions used on campuses and national assessment instruments vary. This article describes a mapping technique that faculty and administrators can use to evaluate the similarities and differences across these definitions. Results demonstrate that the definitions reflected by standardized tests are more narrowly construed than those of the campus and leave dimensions of critical thinking unassessed. This mapping process not only helps campuses make better-informed decisions regarding their responses to accountability pressures; it also provides a stimulus for rich, evidence-based discussions about teaching and learning priorities related to critical thinking.

Critical thinking has emerged as an essential higher education learning outcome, both for external audiences focused on accountability and for colleges and universities themselves. One of the most recent national efforts to respond to accountability pressures, the Voluntary System of Accountability (VSA), requires campuses to use one of three standardized tests to measure and report student learning gains on critical thinking and written communication (Voluntary System, 2010, para. 17). In its survey of employers, the Association of American Colleges and Universities (AAC&U, 2008) found that 73 percent of employers wanted colleges to "place more emphasis on critical thinking and analytic reasoning" (p. 16). In a recent survey of AAC&U member colleges and universities, 74 percent of respondents indicated that critical thinking was a core learning objective for the campus’s general education program (AAC&U, 2009, p. 4). While there is general agreement that critical thinking is important, there is less consensus, and often a lack of clarity, about what exactly constitutes critical thinking. For example, in a California study, only 19 percent of faculty could give a clear explanation of critical thinking even though the vast majority (89 percent) indicated that they emphasize it (Paul, Elder, & Bartell, 1997). In their interviews with faculty at a private liberal arts college, Halx and Reybold (2005) explored instructors’ perspectives on undergraduate thinking. While participants were "eager to promote critical thinking" (p. 300), the authors note that none had been specifically trained to do so. As a result, these instructors each developed their own distinct definition of critical thinking.

Perhaps this variability in critical thinking definitions is to be expected given the range of definitions available in the literature. Critical thinking can include the thinker’s dispositions and orientations; a range of specific analytical, evaluative, and problem-solving skills; contextual influences; use of multiple perspectives; awareness of one’s own assumptions; capacities for metacognition; or a specific set of thinking processes or tasks (Bean, 1996; Beyer, Gillmore, & Fisher, 2007; Brookfield, 1987; Donald, 2002; Facione, 1990; Foundation for Critical Thinking, 2009; Halx & Reybold, 2005; Kurfiss, 1988; Paul, Binker, Jensen, & Kreklau, 1990). Academic discipline can also shape critical thinking definitions, playing an important role in both the forms of critical thinking that faculty emphasize and the preferred teaching strategies used to support students’ development of critical thinking capacities (Beyer et al., 2007; Huber & Morreale, 2002; Lattuca & Stark, 2009; Pace & Middendorf, 2004).

The Dilemma for Student Learning Outcomes Assessment

External accountability pressures increasingly focus on using standardized measures of student learning outcomes as comparable indicators of institutional effectiveness, and students’ critical thinking performance is among the outcomes most often mentioned (see VSA, 2010, as an example). The range of critical thinking dimensions and the lack of one agreed-on definition pose a challenge for campuses working to align their course, program, and institution-wide priorities for critical thinking with appropriate national or standardized assessment methods. Among the questions facing these institutions are these three: (1) What dimensions of critical thinking do national and standardized methods emphasize? (2) To what extent do these dimensions reflect campus-based critical thinking instructional and curricular priorities? (3) What gaps in understanding students’ critical thinking performance will we encounter when we use national or standardized tools?

Answers to these questions are important to any campus that wants to develop assessment strategies that accurately reflect teaching and learning priorities and practices on campus. A focus on the alignment of assessment tools with campus priorities is also essential for engaging faculty in the assessment decision-making process. It is unlikely that faculty will use evidence to inform changes in instructional practices and curricular design unless they have been involved in the assessment design and believe the tools and results accurately represent instructional priorities and practices.

To determine the alignment of current assessment tools with institutional instructional priorities, we conducted a qualitative content analysis of five representations of the critical thinking construct and identified the common and distinct dimensions across the five sources. The five sources used for this study represent two different contexts for defining critical thinking: an internal definition developed by a group of general education instructors on our campus and a number of external sources representing the primary tools currently under discussion for national assessments of critical thinking in higher education.

Internal Source

To represent our campus’s operational definition of critical thinking, we use a definition developed by a group of general education instructors and administrators at a large public research university. The definition was developed as a part of a campuswide workshop on teaching critical thinking in general education and was generated by collecting the responses of groups of participants to the following question and prompt: "What learning behaviors (skills, values, attitudes) do students exhibit that reflect critical thinking? Students demonstrate critical thinking when they … " Participant responses were then clustered by researchers in the campus’s Office of Academic Planning and Assessment into twelve dimensions of critical thinking, listed in Table 10.1 in the "Results" section. These dimensions were confirmed post hoc by comparing the categories to the definitions of critical thinking present in the literature (see Office of Academic Planning and Assessment, 2007, for the full set of responses and the links to the literature).

External Context

We drew critical thinking definitions from four external sources, including the three national standardized tests of critical thinking currently used as part of the VSA.

STANDARDIZED TESTS

ACT’s Collegiate Assessment of Academic Proficiency (CAAP) comprises six independent test modules, of which the writing essays and critical thinking are relevant to this study. The critical thinking assessment is a forty-minute, thirty-two-item, multiple-choice test that, according to ACT, measures "students’ skills in clarifying, analyzing, evaluating, and extending arguments" (ACT, 2011). The writing essays consist of two twenty-minute writing tasks, which include a short prompt that provides the test taker with a hypothetical situation and an audience.

The Collegiate Learning Assessment (CLA) is the Council for Aid to Education’s (CAE) testing instrument. Varying in length between seventy-five minutes (for the make-an-argument and critique-an-argument tasks, taken together) and ninety minutes (for the performance task), these written tests require students to work with realistic problems and analyze diverse written materials. The CLA measures students’ critical thinking skills with respect to analytic reasoning, problem solving, and effectiveness in writing. The CLA is unique among the three standardized tests in its view of writing as integral to critical thinking.

Educational Testing Service (ETS) offers the Proficiency Profile (PP), a test of four skills, including reading and critical thinking. The PP is available in a standard form (two hours, 108 questions) and an abbreviated form accepted by VSA (forty minutes, 36 questions). Reading and critical thinking are measured together on a single proficiency scale.

NATIONAL ASSESSMENT TOOL

The fourth external source, the Valid Assessment of Learning in Undergraduate Education (VALUE) rubrics, is a set of scoring rubrics faculty or other reviewers can use to assess student work. The rubrics provide specific criteria for each of fifteen learning outcomes, two of which are relevant to this study: critical thinking, and inquiry and analysis (AAC&U, 2010a).

Three-Phase Content Analysis

Using these five sources, our research team conducted a three-phase content analysis.

PHASE ONE: IDENTIFYING THE DEFINITIONS

In order to compare our internal definitions with those of the external sources, we had to identify what aspects of critical thinking serve as the focus of each external assessment tool. We used a number of approaches to gather this information for the three standardized tests. To ascertain each testing agency’s working definition of critical thinking, we used the most detailed descriptions available, drawing from promotional materials, information on their websites, and communication with company representatives.

ACT (2010) describes the skills tested within each of three content categories: analysis of elements of an argument (seventeen to twenty-one questions, 53 to 66 percent of the test), evaluation of an argument (five to nine questions, 16 to 28 percent of the test), and extension of an argument (six questions, 9 percent of the test). Because this document is not accessible through ACT’s website, we obtained it from an ACT representative.

For ETS’s PP, we selected passages from the User’s Guide (Educational Testing Service, 2010): an introductory section that describes the abilities that the critical thinking questions measure and a more detailed description of the skills measured in the area of reading and critical thinking at the intermediate and high proficiency levels.

For the CLA, we began with the skills contained in the CLA Common Scoring Rubric (Council for Aid to Education, 2008). This rubric is divided into two categories: (1) critical thinking, analytic reasoning, and problem solving and (2) written communication. In spring 2010, we learned that CAE was in the process of implementing new critical thinking rubrics: analytic reasoning and evaluation, problem solving, and writing effectiveness. We analyzed these new descriptions (CLA II) alongside the older rubric (CLA I). In fall 2010, after the research described here was completed, CAE published a more detailed version of the critical thinking scoring rubric that is now available on its website. While differently formatted, the descriptors we used for this analysis are similar to the categories in this new rubric.

We were able to use the actual measures in the VALUE rubrics because they are the components of the rubric used to review and assess students’ work (AAC&U, 2010b). We incorporated both the critical thinking and the inquiry and analysis rubrics in our analysis. We chose to include the inquiry and analysis rubric because it seemed particularly relevant to the conceptualization of critical thinking emerging from our campus discussions.

PHASE TWO: CODING FOR COMMONALITIES WITH CAMPUS CRITICAL THINKING DEFINITION

To understand the commonalities between the four external sources and our campus’s own critical thinking definition, we used our internal definition as the anchor definition and coded the external sources in relation to the categories present in that internal definition. The research team reviewed each descriptor of the four external source definitions and coded each for its alignment with one or more of the twelve dimensions of our internal definition. For example, the CLA listed "constructing cogent arguments rooted in data/information rather than speculation/opinion" (Council for Aid to Education, 2008) as one descriptor of its critical thinking/writing effectiveness definition. In our analysis, we coded this descriptor as falling into the judgment/argument dimension of the campus-based definition. In conducting this coding, we used two approaches. First, to develop common understandings of the process, we worked as a team (three coders) to code two of the external sources (CAAP and PP). We then individually coded the CLA and VALUE sources and met to confirm our coding. In both approaches, we identified areas of disagreement and worked together for clarity in our standards, coming to mutually agreed-on final codes.

Once the coding was completed, we sorted the individual descriptors by dimension and reviewed them again for consistency. For example, we checked to see if the items we had coded as evidence-based thinking all reflected our deepening understanding of the construct. This stage helped us further clarify distinctions among the dimensions.
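To make this coding-and-review step concrete, the short Python sketch below shows one way coded descriptors could be organized so that everything assigned to a given dimension can be re-read together. The descriptor texts are quoted from the sources discussed in this article, but the data structure, the function name descriptors_by_dimension, and the specific dimension assignments are illustrative assumptions, not the instrument we actually used.

```python
# Illustrative sketch only: a simple way to group coded descriptors by campus
# dimension for a consistency review. Descriptor texts are quoted from the
# sources discussed above; the dimension assignments here are hypothetical.
from collections import defaultdict

# (source, descriptor, campus dimensions assigned by the coders)
coded_descriptors = [
    ("CLA I", "constructing cogent arguments rooted in data/information "
              "rather than speculation/opinion", ["judgment/argument"]),
    ("CAAP", "distinguish between rhetoric and argumentation", ["judgment/argument"]),
    ("PP", "identify accurate summaries of a passage", ["drawing inferences"]),
]

def descriptors_by_dimension(coded):
    """Group descriptors under each campus dimension so coders can re-read
    them together and confirm that a code was applied consistently."""
    grouped = defaultdict(list)
    for source, descriptor, dimensions in coded:
        for dim in dimensions:
            grouped[dim].append((source, descriptor))
    return grouped

for dim, items in descriptors_by_dimension(coded_descriptors).items():
    print(dim)
    for source, descriptor in items:
        print(f"  [{source}] {descriptor}")
```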

PHASE THREE: ANALYSIS OF PATTERNS

Once the coding and checking were complete, we arrayed the results in a table to facilitate a comparative analysis. We calculated how many of each tool’s descriptors referenced each of the twelve dimensions in our campus definition and, to get a sense of the relative emphasis each tool gave to each of the twelve dimensions, we calculated the proportion of all descriptors listed that reflect each dimension. In this way, we denote what proportion of each tool’s definition reflects each of the twelve campus-based critical thinking dimensions.

Table 10.1 summarizes the commonalities and gaps among the various definitions. This table indicates how many of the descriptors listed in each of the external assessment tools reflect each of the twelve campus critical thinking dimensions. To provide a very rough estimate of the relative emphasis or importance of these dimensions in our campus definition, we counted how many descriptors emerged in the workshop for each dimension and calculated the proportion of all descriptors that this dimension represents (under the assumption that the number of descriptors of a dimension generated by a group of faculty reflects greater centrality or emphasis for this dimension). Note that looking at the campus definition this way highlights the relative emphasis (10 percent or more of the descriptors) placed on five dimensions of critical thinking: judgment/argument, synthesis, perspective taking, application (representing the most emphasis with 19 percent of the descriptors reflecting this particular dimension), and metacognition. We followed the same method to determine the relative emphasis of each dimension in the external assessment tools. Looking at the CLA I column as an example, we found that twenty-four of the thirty descriptors listed in the CLA I definition of critical thinking reflect our campus’s construct of judgment/argument. These twenty-four occurrences represent 80 percent of the descriptors in the CLA I list.
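The arithmetic behind these proportions is straightforward, and the sketch below reproduces it for the CLA I example. Only the twenty-four-of-thirty judgment/argument figure comes from our analysis; the remaining category counts are invented simply to make the example total thirty descriptors, so the sketch should be read as an illustration of the calculation rather than a reproduction of Table 10.1.

```python
# Illustrative sketch of the Phase Three tally: count how many of a tool's
# descriptors were coded to each campus dimension, then express each count as
# a proportion of all of that tool's descriptors.
from collections import Counter

def dimension_proportions(dimension_codes):
    """dimension_codes: one campus-dimension label per coded descriptor."""
    counts = Counter(dimension_codes)
    total = len(dimension_codes)
    return {dim: (n, n / total) for dim, n in counts.items()}

# CLA I example: 24 of 30 descriptors coded as judgment/argument (80 percent).
# The other counts below are placeholders chosen only to sum to 30.
cla_i_codes = (["judgment/argument"] * 24 + ["evidence-based thinking"] * 3
               + ["drawing inferences"] * 2 + ["synthesis"])
for dim, (n, share) in dimension_proportions(cla_i_codes).items():
    print(f"{dim}: {n} of {len(cla_i_codes)} descriptors ({share:.0%})")
```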

As the results in Table 10.1 illustrate, judgment/argument is the predominant component of critical thinking reflected in all of the external assessment options (accounting for between one-half and more than three-quarters of all the descriptors associated with critical thinking). For the three standardized tests and VALUE, there is also a substantial emphasis on drawing inferences. Evidence-based thinking is emphasized in all three standardized tests. To varying degrees, synthesizing, problem solving, and perspective taking also receive some attention from the external sources.

In our analysis, a number of the campus dimensions receive no attention from any of the standardized tests: application, suspending judgment, metacognition, and questioning/skepticism. Of those that are missing from the standardized tests, the VALUE rubrics do reflect metacognition and questioning/skepticism.

The results suggest differences among the four external sources. The CAAP appears the most focused or limited in scope, with primary emphasis on judgment/argument, use of evidence, and drawing inferences. The VALUE rubrics are the most expansive, with references to nine of the twelve dimensions from the campus-based definition. Two of the three dimensions that are not included, problem solving and application, are addressed by separate VALUE rubrics (problem solving, and integrative and applied learning; AAC&U, 2010b), so their absence from the rubrics used in this analysis is not surprising.

In addition to providing us with one perspective on the relationship between the four external assessment tools and our campus’s critical thinking definition, this analysis also provided us with the opportunity to revisit the campus definition. Our analysis helped us clarify a number of our dimensions in relationship to the four external sources. For example, our category of multiple perspectives/perspective taking emerged as the dimension where we coded all external source descriptions that referenced "dealing with complexity" in addition to items that indicated "addressing various perspectives." As we coded, we also noted that our campus descriptions of perspective taking tended toward the positive dimension of multiple perspectives (that is, taking into account these perspectives) but did not include more critical aspects of this dimension (that is, critiquing or refuting a perspective that is weak or uninformed, "considering and possibly refuting [italics added] alternative viewpoints" [CLA II]). We also became aware of dimensions of critical thinking present in the external sources that are not present in the campus definition.

Limitations

It is important to acknowledge that this analysis is not a study of test item validity. Instead, it focuses on how the basic construct of critical thinking is defined, and the dimensions emphasized, within both contexts and across the five sources. Obviously these definitions and emphases drive item development and have important implications for the appropriateness of each assessment tool as an indicator of institutional effectiveness as measured by students’ critical thinking performance. However, the technique we used to determine definitional emphases is limited.

The limitations fall into two categories: those having to do with the campus definition and those having to do with the sources used for the external definitions. First, to represent the campus definition, we used the results of a collaborative brainstorming session conducted as part of a campuswide workshop on critical thinking in general education. The definition that emerged is multidimensional, and the elements correspond to common elements of critical thinking as defined in various sources in the literature. However, the campus definition has not been systematically vetted or tested against the responses of other groups of faculty, so it is still very much an emerging document on our campus. Still, many of the dimensions of this definition are identified in other faculty-generated statements of general education learning objectives, including the learning objectives for the campus’s junior-year writing requirement (an upper-division writing requirement that addresses the writing conventions of the student’s major) and the results of a survey in which instructors report emphasizing these objectives in their general education courses (Office of Academic Planning and Assessment, 2008). The definition also does not definitively reflect the faculty’s beliefs about the relative importance of each of these constructs. We used the number of references as a rough indicator of importance, but this is certainly not a systematically tested assumption.

With respect to the external sources, the characteristics we used for the three standardized tools (CLA I and II, PP, and CAAP) come from each test company’s description of the critical thinking components covered in their test. We took these descriptors and coded each against our campus-based definition. Because we do not know the relative emphasis on each component in the test itself (that is, the number of test items, or scoring weights, for each item), we considered each descriptor of equal importance and looked at the number of them that reflect each of our campus categories. Once they were coded, we then looked to see what proportion of the items reflect each campus construct. While we believe this was the most appropriate step to take given the available information, it may misrepresent the actual emphasis of the test. Conducting a more finely tuned analysis would require us to look at the actual tests and, for those with open-response items, the evaluative rubrics and weights used. This, of course, is an analysis of even greater complexity, requiring us to address proprietary constraints with the testing companies. The public relations representation of the test substance is the information most academics would use to make such determinations, so we felt it was a relevant source to use and dissect.

We set out to understand the relationship between our campus’s emerging definition of critical thinking and the definitions used by four external tools for assessing students’ critical thinking. This exploratory analysis was intended to help us understand the relevance (or fit) of each of these tools to our faculty’s priorities for students’ critical thinking development. The analysis process also ended up challenging us to clarify our own expectations for student performance and assessment. Finally, this research offers an analytical and evidence-based process for engaging faculty in reviewing teaching and learning priorities within the context of responding to external accountability demands.

Focusing first on the issue of fit between the four external sources and our campus definition, the results suggest that all three standardized tests address a narrow set of constructs present in the campus definition, with the primary focus on judgment/argument, evidence-based thinking, and drawing inferences. The VALUE rubrics provide more comprehensive coverage of the campus definitions, touching on nine of the twelve dimensions. Two that are not included (application and problem solving) are referenced in separate VALUE rubrics, which could be used to address the fuller range of campus dimensions.

These results help inform the campus discussion of which assessment options would be most appropriate. For example, if the faculty on our campus determine that judgment/argument is appropriate as the focus of our externally driven assessment, then any of the standardized tests might be acceptable. But if we decide we want our assessment strategy to reflect more of the dimensions of critical thinking present in the campus definition, the VALUE rubrics might be a better choice but would not necessarily reflect the same relative importance of these constructs as emerged from our faculty workshop results. This discrepancy could be remedied in part by adding the integrative and applied learning VALUE rubric to the assessment, since it would address application, the dimension that received the most attention from faculty.

It should be noted, however, that selecting the VALUE rubric tool would not be sufficient for fulfilling the current VSA requirements for a standardized assessment method. VALUE rubrics also require more faculty time and expertise than standardized tests since rubrics require raters to be trained and then to assess samples of student work. The standardized tests carry costs of their own (testing fees, incentives for the students, and staff effort in recruiting respondents), resources that, if redirected to a VALUE analysis, could defray the costs described above. Clearly, associated costs also need to be a part of the campus’s decision-making process.

Our analysis has raised another essential question that the faculty need to address: What sort of evidence of students’ critical thinking is appropriate? The various descriptors of critical thinking used in these five sources (both the internal and the external sources) suggest the different kinds of performance tasks being used. The PP and CAAP rely on multiple-choice tasks, and their descriptors reflect identifying and recognizing aspects of an argument: for example, "identify accurate summaries of a passage" (Educational Testing Service, 2010) and "distinguish between rhetoric and argumentation" (ACT, 2010). The CLA, on the other hand, requires students to craft an argument. The CLA definition uses descriptors that reference creating an argument: for example, "constructing organized and logically cohesive arguments" and "considering the implications of decisions and suggesting additional research when appropriate" (Council for Aid to Education, 2010). In this test, however, the parameters of student-generated responses are limited in scope. Students write answers to a set of narrowly focused prompts that address specific elements of the task and evidence presented. The VALUE rubrics were designed specifically to assess portfolios of students’ work from their courses, tasks that would be varied in focus, content, and types of writing contexts. The items in these rubrics reflect the comprehensiveness of these types of student work, referencing contextual analyses, identifying and describing a problem, and articulating the limits of one’s position. Students’ responses in this case would be unconstrained, reflecting the variety of ways one demonstrates a range of critical thinking dimensions across an array of courses and assignments.

Finally, our campus definition came from the discussions of a diverse group of instructors who responded to the prompt they were given by thinking, quite naturally, about the evidence of critical thinking they see in the assignments and tasks they ask of their students. Therefore, their responses focus to a larger degree on the doing: the creation of arguments, the application of theory to new settings, and the identification of evidence to support those arguments or assertions. The focus of these faculty-derived definitions, based as they are on what students are actually asked to do in the classroom, seems particularly distant from the tasks associated with the standardized multiple-choice tests, which emphasize identifying and selecting rather than creating and constructing.

Another complexity emerges that is particularly relevant to assessment methods that use open-ended or constructed responses that are scored by sources outside the control of the faculty or the campus (like the CLA tasks and the CAAP and CLA essays). In these cases, it is important to make a distinction between what the assessment task is and what actually gets scored for performance assessment purposes. For example, the CLA task certainly seems to qualify as representing critical thinking application since it asks students to apply their analysis of various sources of information to a real-world question. It is therefore interesting that in our analysis, we did not find evidence of application in the CLA critical thinking definition, that is, the elements of critical thinking CAE says its test addresses. Instead, the CLA critical thinking descriptors focus primarily on judgment/argument, evidence-based thinking, synthesizing, and drawing inferences (CLA II).

Without more specific information about how the constructed responses are actually scored (that is, what elements of performance actually count), it is unclear whether application, for example, is actually a performance factor that is assessed or only the frame through which the performance of interest is stimulated. For example, is the student’s capacity to judge the relevance of evidence to a particular context scored, or is the focus on being able to make the distinction between correlation and causation? Both would be a reflection of evidence-based thinking. However, the first would also be a more complex or advanced form of critical thinking that reflects application. The second reflects a somewhat more basic but still important component of evidence-based thinking but would not reflect application as we have conceived it in our campus definition. This is an important point in reminding ourselves that the assessment task itself is only one component of the consideration of fit. When student performance is scored by parties removed from the campus context, it is also particularly important to be clear about what elements of student performance are included in the final score.

The importance of taking account of the types of tasks and the scoring criteria is illustrated in a recent study conducted by the University of Cincinnati and highlighted in an AAC&U publication (AAC&U, 2010a). Researchers compared first-year students’ performance on the CLA with those students’ performance on an e-portfolio assignment, assessed by faculty at the university using a slightly modified version of the VALUE rubrics. Researchers found no significant correlation between the two sets of assessment results, suggesting that the two assessment tools capture very different elements of students’ critical thinking performance. These results raise an important question for campuses to consider: Does our assessment strategy capture the kind of student learning and performance we emphasize and value? Tools that do not effectively measure what matters to faculty are not appropriate sources of evidence for promoting change or for accurately reflecting instructional and curricular effectiveness.
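To make the finding of "no significant correlation" concrete, the sketch below shows how such a comparison could be computed from paired scores for the same students. The data are random placeholders, not the Cincinnati study’s results, and the use of SciPy’s Pearson correlation is simply one conventional choice for this kind of check.

```python
# Illustrative only: computing the correlation between two sets of assessment
# scores for the same students (e.g., a standardized test and rubric-scored
# portfolio work). The data are random placeholders, not actual study results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
test_scores = rng.normal(50, 10, size=100)    # hypothetical standardized-test scores
rubric_scores = rng.integers(1, 5, size=100)  # hypothetical rubric ratings on a 1-4 scale

r, p = stats.pearsonr(test_scores, rubric_scores)
print(f"r = {r:.2f}, p = {p:.3f}")
# A small r with p > .05 would be read as no significant correlation between
# the two assessments' results.
```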

Connecting Research and Practice: A Note to Faculty Developers

Finally, and perhaps most important, this method of inquiry leads to productive and engaging faculty discussions of critical thinking teaching, learning, and assessment. This project illustrates a way to address external accountability pressures while also generating joint faculty and administration discussions and insights into campus-based teaching and learning priorities. The first example of this productive inquiry was the workshop activity that produced the cross-disciplinary definition of critical thinking for our campus. Having this definition in place made it possible to pursue the line of inquiry described here, which served as an essential starting point for our campus’s consideration of how to assess critical thinking in ways that are internally valid and externally legitimate.

The exercise of mapping our critical thinking dimensions against the definitions of the four assessment tools sparked a rich discussion among the coders. It was as we tried to code the external definitions using our internal critical thinking categories that we began to clarify the meaning of our own definition and see both the gaps and strengths of that definition. During this process, we also discovered the essential links between our definition and our faculty’s pedagogical values in facilitating students’ critical thinking. We believe that workshops that provide groups of faculty and administrators the opportunity to conduct this kind of analysis together can generate an important evidence-based dialogue about expectations for student learning, the assessment tools that most appropriately reflect those expectations, and the trade-offs inherent in making those kinds of decisions. The coding process opens up a conversation about what we mean when we use the term critical thinking, a process of clarification that informs one’s own teaching as well as the larger campus conversation about critical thinking assessment.

  • ACT. (2010). CAAP critical thinking content categories and subskills. Iowa City, IA: Author.
  • ACT. (2011). Critical thinking test. Retrieved from www.act.org/caap/test_thinking.html
  • Association of American Colleges and Universities. (2008). Our students’ best work: A framework for accountability worthy of our mission (2nd ed.). Retrieved from www.aacu.org/publications/pdfs/studentsbestreport.pdf
  • Association of American Colleges and Universities. (2009). Learning and assessment: Trends in undergraduate education. Retrieved from www.aacu.org/membership/documents/2009MemberSurvey_Panl.pdf
  • Association of American Colleges and Universities. (2010a). Assessing learning outcomes at the University of Cincinnati: Comparing rubric assessments to standardized tests. AAC&U News. Retrieved from www.aacu.org/aacu_news/AACUNewsl0/Aprill0/
  • Association of American Colleges and Universities. (2010b). VALUE: Valid assessment of learning in undergraduate education. Retrieved from www.aacu.org/value/rubrics/index.cfm
  • Bean, J. C. (1996). Engaging ideas: The professor’s guide to integrating writing, critical thinking, and active learning in the classroom. San Francisco, CA: Jossey-Bass.
  • Beyer, C. H., Gillmore, G. M., & Fisher, A. T. (2007). Inside the undergraduate experience: The University of Washington’s study of undergraduate education. San Francisco, CA: Jossey-Bass.
  • Brookfield, S. D. (1987). Developing critical thinkers: Challenging adults to explore alternative ways of thinking and acting. San Francisco, CA: Jossey-Bass.
  • Council for Aid to Education. (2008). Common scoring rubric. Retrieved from www.cae.org/content/pdf/CLA_Scoring_Criteria_%28Jan%202008%29.pdf
  • Council for Aid to Education. (2010). CLA scoring criteria. Retrieved from www.collegiatelearningassessment.org/files/CLAScoringCriteria.pdf
  • Donald, J. G. (2002). Learning to think: Disciplinary perspectives. San Francisco, CA: Jossey-Bass.
  • Educational Testing Service. (2010). ETS Proficiency Profile user’s guide. Retrieved from www.ets.org/s/proficiencyprofile/pdf/Users_Guide.pdf
  • Facione, P. (1990). Critical thinking: A statement of expert consensus for purposes of educational assessment and instruction [Executive summary]. Retrieved from ERIC database. (ED315423)
  • Foundation for Critical Thinking. (2009). Our concept of critical thinking. Retrieved from www.criticalthinking.org/aboutCT/ourconceptCT.cfm
  • Halx, M. D., & Reybold, L. E. (2005). A pedagogy of force: Faculty perspectives of critical thinking capacity in undergraduate students. JGE: The Journal of General Education, 54(4), 293-315. doi:10.1353/jge.2006.0009
  • Huber, M. T., & Morreale, S. P. (Eds.). (2002). Disciplinary styles in the scholarship of teaching and learning: Exploring common ground. Sterling, VA: Stylus.
  • Kurfiss, J. G. (1988). Critical thinking: Theory, research, practice and possibilities (ASHE-ERIC Higher Education Report No. 2). Retrieved from ERIC database. (ED304041)
  • Lattuca, L. R., & Stark, J. S. (2009). Shaping the college curriculum: Academic plans in context (2nd ed.). San Francisco, CA: Jossey-Bass.
  • Office of Academic Planning and Assessment, University of Massachusetts Amherst. (2007). Defining critical thinking: Participant responses. Retrieved from www.umass.edu/oapa/oapa/publications/gen_ed/critical_thinking_definitions.pdf
  • Office of Academic Planning and Assessment, University of Massachusetts Amherst. (2008). Gen ed curriculum mapping: Learning objectives by gen ed course designations. Retrieved from www.umass.edu/oapa/oapa/publications/gen_ed/instructor_survey_results.pdf
  • Pace, D., & Middendorf, J. (2004). Decoding the disciplines: A model for helping students learn disciplinary ways of thinking. In D. Pace & J. Middendorf (Eds.), New directions for teaching and learning: No. 98. Decoding the disciplines: Helping students learn disciplinary ways of thinking (pp. 1-12). San Francisco, CA: Jossey-Bass.
  • Paul, R., Binker, A., Jensen, K., & Kreklau, H. (1990). Strategy list: 35 dimensions of critical thought. Retrieved from www.criticalthinking.org/page.cfm?PageID=466&CategoryID=63
  • Paul, R., Elder, L., & Bartell, T. (1997). Study of 38 public universities and 28 private universities to determine faculty emphasis on critical thinking in instruction. Retrieved from www.criticalthinking.org/research/Abstract-RPAUL-38public.cfm
  • Voluntary System of Accountability Program. (2010). Participation agreement. Retrieved from www.voluntarysystem.org/docs/SignUp/VSAParticipationAgreement.pdf


Racial Inequality in Critical Thinking Skills: The Role of Academic and Diversity Experiences

  • Published: 01 July 2016
  • Volume 58 , pages 119–140, ( 2017 )

Cite this article

  • Josipa Roksa 1 ,
  • Teniell L. Trolian 2 ,
  • Ernest T. Pascarella 3 ,
  • Cindy A. Kilgo 4 ,
  • Charles Blaich 5 &
  • Kathleen S. Wise 5  

3705 Accesses

14 Citations

2 Altmetric

Explore all metrics

While racial inequalities in college entry and completion are well documented, much less is known about racial disparities in the development of general collegiate skills, such as critical thinking. Using data from the Wabash National Study of Liberal Arts Education, we find substantial inequality in the development of critical thinking skills over four years of college between African American and White students. The results indicate that these inequities are not related to students’ academic experiences in college but are substantially related to their experiences with diversity. These findings have important implications for understanding racial inequality in higher education and considering strategies for addressing observed disparities.

This is a preview of subscription content, log in via an institution to check access.

Access this article

Price includes VAT (Russian Federation)

Instant access to the full article PDF.

Rent this article via DeepDyve

Institutional subscriptions

Similar content being viewed by others

caap critical thinking test

Centering Race, Racism, and Black Learners in Mathematics Education: A Critical Race Theory Perspective

caap critical thinking test

Critical Examination of the Role of STEM in Propagating and Maintaining Race and Gender Disparities

caap critical thinking test

Beyond White: The Emotional Complexion of Critical Research on Race

Critical thinking is a term that is often used but rarely clearly defined. In essence, it aims to reflect one’s ability to analyze, synthesize, and/or evaluate information. To date, two standardized assessments of critical thinking most commonly used in published research are the Critical Thinking Test from the Collegiate Assessment of Academic Proficiency (CAAP) developed by the ACT and the collegiate learning assessment (CLA) developed by the Council for Aid to Education. CAAP includes three components: analyzing, evaluating, and extending an argument. CLA similarly includes an analysis and critique of an argument as well as critical reading and evaluation. Although CAAP and CLA are designed very differently, they produce similar results in terms of the overall gains in student performance (Pascarella et al.  2011a ). Moreover, a recent validity study of three different tests—CAAP, CLA and MAPP (Measure of Academic Proficiency and Progress, developed by the ETS)—supported the measures’ construct validity (Klein et al. 2009 ).

Three institutions participated in multiple waves of the study. A dummy variable for those institutions is included in analysis.

Although the number of students in different racial categories is low, it is important to note that the sample examined in this study includes students who entered higher education through four-year institutions and persisted through four years of college. Authors’ calculations indicate that the proportion of African American students in the WNS sample is similar to a comparable sample in the Education Longitudinal Study (ELS), which is a nationally representative sample. The proportion of Asian students is lower in the WNS sample and the comparison cannot be made for Hispanic students because ELS uses different racial/ethnic categories.

For a list of studies using the Wabash National Study data, see: http://www.liberalarts.wabash.edu/research-and-publications/ .

We include 0.10 statistical significance level in the table given the small number of students in different racial groups.

Some studies of cognitive development estimate conditional effects by race. However, that is rarely the focus of their inquiry. The focus is typically on understanding how specific experiences facilitate student development, and conditional effects are reported as a complement to the overall analysis.

American College Testing (ACT) Program. (1991). CAAP Technical Handbook, Iowa City, IA, Author.

antonio, A. L. (2001). The role of interracial interaction in the development of leadership skills and cultural knowledge and understanding. Research in Higher Education, 42 , 593–617.

Article   Google Scholar  

Arum, R., & Roksa, J. (2011). Academically adrift: Limited learning on college campuses . Chicago: University of Chicago Press.

Google Scholar  

Arum, R., & Roksa, J. (2014). Aspiring adults adrift: Tentative transitions to adulthood . Chicago: University of Chicago Press.

Book   Google Scholar  

Association of American Colleges and Universities (AAC&U). (2008). How should colleges assess and improve student learning? Employers’ views on the accountability challenge . Washington, DC: AAC&U.

Astin, A. (1993). What matters in college . San Francisco: Jossey-Bass.

Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51 (6), 1173–1182.

Bok, D. (2006). Our underachieving colleges: A candid look at how much students learn and why they should be learning more . Princeton, NJ: Princeton University Press.

Bowman, N. A. (2010). College diversity experiences and cognitive development: A meta-analysis. Review of Educational Research, 80 (1), 4–33.

Bowman, N. A. (2012). Promoting sustained engagement with diversity: The reciprocal relationships between informal and formal college diversity experiences. Review of Higher Education, 36 (1), 1–24.

Bowman, N. A. (2013). How much diversity is enough? The curvilinear relationship between college diversity interactions and first-year student outcomes. Research in Higher Education, 54 (8), 874–894.

Bowman, N. A., Brandenberger, J. W., Hill, P. L., & Lapsley, D. K. (2011). The long-term effects of college diversity experiences: Well-being and social concerns 13 years after graduation. Journal of College Student Development, 52 (6), 729–739.

Bowman, N. A., & Brandenberger, J. W. (2012). Toward a model of college diversity experiences and attitude change. The Review of Higher Education, 35 (2), 179–205.

Bray, G., Pascarella, E. T., & Pierson, C. (2004). Postsecondary education and some dimensions of literacy development: An exploration of longitudinal evidence. Reading Research Quarterly, 39 , 306–330.

Blaich C., & Wise, K. (2014). Clear and organized teaching . Center for Inquiry, Wabash College. Accessed http://www.liberalarts.wabash.edu/practitioners-corner/2014/4/15/clear-and-organized-teaching.html .

Cabrera, A. F., & Nora, A. (1994). College students’ perceptions of prejudice and discrimination and their feelings of alienation: A construct validation approach. Review of Education Pedagogy and Cultural Studies, 16 (3–4), 387–409.

Cabrera, A. F., Nora, A., Terenzini, P. T., Pascarella, E. T., & Hagedorn, L. S. (1999). Campus racial climate and the adjustment of students to college: A comparison between White students and African-American students. The Journal of Higher Education, 70 (2), 134–160.

Cacioppo, J., Petty, R., & Kao, C. F. (1984). The efficient assessment of need for cognition. Journal of Personality Assessment, 48 (3), 306–307.

Carini, R. M., Kuh, G. D., & Klein, S. P. (2006). Student engagement and student learning: Testing the linkages. Research in Higher Education, 47 (1), 1–32.

Chang, M. J., Astin, A. W., & Kim, D. (2004). Cross-racial interaction among undergraduates: Some consequences, causes, and patterns. Research in Higher Education, 45 (5), 529–553.

Chickering, A., & Gamson, Z. (1987). Seven principles for good practice in undergraduate education. AAHE Bulletin, 39 , 3–7.

Chickering, A., & Gamson, Z. (1991). Applying the seven principles for good practice in undergraduate education . San Francisco: Jossey-Bass.

Cruce, T. M., Wolniak, G. C., Seifert, T. A., & Pascarella, E. T. (2006). Impacts of good practices on cognitive development, learning orientations, and graduate degree plans during the first year of college. Journal of College Student Development, 47 , 365–383.

Dee, J. R., & Daly, C. J. (2012). Engaging faculty in the process of cultural change in support of diverse student populations. In S. D. Museus & U. M. Jayakumar (Eds.), Creating campus cultures: Fostering success among racially diverse student populations (pp. 168–188). New York, NY: Routledge.

Deil-Amen, R., & Lopez-Turley, R. (2007). A review of the transition to college literature in sociology. Teachers College Record, 109 (10), 2324–2366.

Dorans, N. J. (2004). Freedle’s Table 2: Fact or fiction? Harvard Educational Review, 74 , 62–72.

Fischer, M. J. (2007). Settling into campus life: Differences by race/ethnicity in college involvement and outcomes. The Journal of Higher Education, 78 (2), 125–156. doi: 10.1353/jhe.2007.0009 .

Fischer, C. S., Hout, M., Jankowski, M. S., Luca, S. R., Swidler, A., & Voss, K. (1996). Inequality by Design . Princeton, NJ: Princeton University Press.

Flowers, L. A., & Pascarella, E. T. (2003). Cognitive effects of college: Differences between African American and Caucasian students. Research in Higher Education, 44 (1), 21–49.

Freedle, R. O. (2003). Correcting the SAT’s ethnic and social class bias: a method for reestimating SAT scores. Harvard Educational Review, 73 , 1–43.

Goldrick-Rab, S., & Roksa, J. (2008). A federal agenda for promoting student success and degree completion . Washington, DC: Center for American Progress.

Greene, T. G., Marti, C. N., & McClenney, K. (2008). The effort-outcome gap: Differences for African American and Hispanic community college students in student engagement and academic achievement. The Journal of Higher Education, 79 (5), 513–539. doi: 10.1353/jhe.0.0018 .

Grodsky, E., & Felts, E. (2009). Social stratification in higher education. Teachers College Record, 111 (10), 2347–2384.

Grodsky, E., Warrant, J. R., & Felts, E. (2008). Testing and social stratification in American education. Annual Review of Sociology, 34 , 385–404.

Gurin, P., Dey, E., Hurtado, S., & Gurin, G. (2002). Diversity and higher education: Theory and impact on educational outcomes. Harvard Educational Review, 72 (3), 330–366.

Harper, S. R. (2012). Foreword. In S.D. Museus & U.M. Jayakumar (Eds.), Creating campus cultures: Fostering success among racially diverse student populations (pp. ix-xi). New York, NY: Routledge.

Harper, S. R., & Hurtado, S. (2007). Nine themes in campus racial climates and implications for institutional transformation. New Directions for Student Services, 120 , 7–24.

Heck, R. H., & Thomas, S. L. (2008). An introduction to multilevel modeling techniques (2nd ed.). New York, NY: Routledge.

Hout, M. (1996). The politics of mobility. In A. C. Kerckhoff (Ed.), Generating social stratification: Toward a new research agenda (pp. 293–316). Boulder, CO: Westview Press.

Hu, S., & Kuh, G. D. (2003). Diversity experiences and college student learning and personal development. Journal of College Student Development, 44 (3), 320–334.

Hurtado, S. (2001). Linking diversity and educational purpose: How diversity affects the classroom environment and student development. In G. Orfield (Ed.), Diversity challenged: Evidence on the impact of affirmative action (pp. 187–203). Cambridge, MA: Harvard Educational Publishing Group.

Hurtado, S. (2005a). The next generation of diversity and intergroup relations research. Journal of Social Issues, 61 , 595–610.

Hurtado, S. (2005b). Diversity and learning for a pluralistic democracy. In W. R. Allen, M. Bonous-Hammarth, R. T. Teranishi, & O. C. Dano (Eds.), Higher education in a global society: Achieving diversity, equity and excellence (pp. 249–267). Bingley, UK: Emerald Group Publishing Limited.

Hurtado, S., & Carter, D. (1997). Effects of college transition and perceptions of the campus racial climate on Latina/o college students’ sense of belonging. Sociology of Education, 70 , 324–345.

Hurtado, S., Milem, J., Clayton-Pedersen, A., & Allen, W. (1998). Enhancing campus climates for racial/ethnic diversity: Educational policy and practice. The Review of Higher Education, 21 , 279–302.

Kim, M. M. (2002). Cultivating intellectual development: Comparing women-only colleges and coeducational colleges for educational effectiveness. Research in Higher Education, 43 (4), 447–481.

Klein, S., Liu, O. L, & Sconing, J. (2009). Test validity study report. Washington DC: Fund for the Improvement of postsecondary education. Accessed http://www.voluntarysystem.org/docs/reports/TVSReport_Final.pdf .

Kugelmass, H., & Ready, D. D. (2011). Racial disparities in collegiate cognitive gains: A multilevel analysis of institutional influences on learning and its equitable distribution. Research in Higher Education, 52 (4), 323–348.

Kuh, G. D., Cruce, T. M., Shoup, R., Kinzie, J., & Gonyea, R. M. (2008). Unmasking the effects of student engagement on first-year college grades and persistence. The Journal of Higher Education, 79 (5), 540–563. doi: 10.1353/jhe.0.0019 .

Kuh, G. D., & Pascarella, E. T. (2004). What does institutional selectivity tell us about educational quality? Change: The Magazine of Higher Learning, 36 (5), 52–59.

Lahmers, A. G., & Zulauf, C. R. (2000). Factors associated with academic time use and academic performance of college students: A recursive approach. Journal of College Student Development, 41 (5), 544–556.

Loes, C., Pascarella, E. T., & Umbach, P. (2012). Effects of diversity experiences on critical thinking skills: Who benefits? The Journal of Higher Education, 83 (1), 1–25.

Lundberg, C. A. (2012). Predictors of learning for students from five different racial groups. Journal of College Student Development, 53 (5), 636–655. doi: 10.1353/csd.2012.0071 .

Museus, S. D. (2014). The Culturally Engaging Campus Environments (CECE) Model: A new theory of college success among racially diverse student populations. In M. B. Paulsen (Ed.), Higher Education: Handbook of Theory and Research (pp. 189–227). New York: Springer.

Chapter   Google Scholar  

Museus, S. D., & Jayakumar, U. M. (Eds.). (2012). Creating campus cultures: Fostering success among racially diverse student populations. New York, NY: Routledge.

National Research Council (NRC). (2012). Education for life and work: Developing transferable knowledge and skills in the 21st century. Washington, DC: The National Academies Press.

Nelson Laird, T. F. (2005). College students’ experiences with diversity and their effects on academic self-confidence. Research in Higher Education, 46(4), 365–387.

Nora, A., & Cabrera, A. F. (1996). The role of perceptions of prejudice and discrimination on the adjustment of minority students to college. The Journal of Higher Education, 67(2), 119–148.

Pascarella, E. T. (1985). College environmental influences on learning and cognitive development: A critical review and synthesis. In J. Smart (Ed.), Higher education: Handbook of theory and research (Vol. 1). New York: Agathon.

Pascarella, E. T., Blaich, C., Martin, G. L., & Hanson, J. M. (2011a). How robust are the findings of Academically Adrift? Change: The Magazine of Higher Learning, 43(3), 20–24.

Pascarella, E., Bohr, L., Nora, A., & Terenzini, P. (1995). Cognitive effects of 2-year and 4-year colleges: New evidence. Educational Evaluation and Policy Analysis, 17(1), 83–96.

Pascarella, E. T., Cruce, T., Umbach, P. D., Wolniak, G. C., Kuh, G. D., Carini, R. M., et al. (2006). Institutional selectivity and good practices in undergraduate education: How strong is the link? The Journal of Higher Education, 77(2), 251–285.

Pascarella, E. T., Edison, M., Nora, A., Hagedorn, L., & Braxton, J. (1996). Effects of teacher organization/preparation and teacher skill/clarity on general cognitive skills in college. Journal of College Student Development, 37, 7–19.

Pascarella, E. T., Martin, G. L., Hanson, J. M., Trolian, T. L., Gillig, B., & Blaich, C. (2014). Effects of diversity experiences on critical thinking skills over four years of college. Journal of College Student Development, 55(1), 86–92. doi: 10.1353/csd.2014.0009.

Pascarella, E. T., Palmer, B., Moye, M., & Pierson, C. T. (2001). Do diversity experiences influence the development of critical thinking? Journal of College Student Development, 42(3), 257–271.

Pascarella, E., Salisbury, M., & Blaich, C. (2011b). Exposure to effective instruction and college student persistence: A multi-institutional replication and extension. Journal of College Student Development, 52, 4–19.

Pascarella, E. T., & Terenzini, P. (1991). How college affects students. San Francisco: Jossey-Bass.

Pascarella, E. T., & Terenzini, P. (2005). How college affects students: A third decade of research (Vol. 2). San Francisco: Jossey-Bass.

Pascarella, E. T., Wang, J., Trolian, T. L., & Blaich, C. (2013). How the instructional and learning environments of liberal arts colleges enhance cognitive development. Higher Education, 66(5), 569–583.

Pascarella, E. T., Wolniak, G. C., Pierson, C. T., & Flowers, L. A. (2004). The role of race in the development of plans for a graduate degree. The Review of Higher Education, 27(3), 299–320.

Plant, E. A., Ericsson, K. A., Hill, L., & Asberg, K. (2005). Why study time does not predict grade point average across college students: Implications of deliberate practice for academic performance. Contemporary Educational Psychology, 30(1), 96–116.

Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods. Thousand Oaks, CA: Sage Publications.

Roksa, J., & Arum, R. (2015). Inequality in skill development on college campuses. Research in Social Stratification and Mobility, 39, 18–31.

Roksa, J., Kilgo, C. A., Trolian, T. L., Pascarella, E. T., Blaich, C., & Wise, K. S. (in press). Engaging with diversity: How positive and negative diversity interactions shape students’ cognitive outcomes. Journal of Higher Education.

Saenz, V. B., Ngai, H. N., & Hurtado, S. (2007). Factors influencing positive interactions across race for African-American, Asian-American, Latino, and White college students. Research in Higher Education, 48(1), 1–38.

Strayhorn, T. L. (2010). When race and gender collide: Social and cultural capital’s influence on the academic achievement of African-American and Latino males. The Review of Higher Education, 33(3), 307–332.

Terenzini, P. T., Springer, L., Pascarella, E. T., & Nora, A. (1995). Influences affecting the development of students’ critical thinking skills. Research in Higher Education, 36(1), 23–39.

Tierney, W. G. (1992). An anthropological analysis of student participation in college. Journal of Higher Education, 63(6), 603–618.

Tinto, V. (1993). Leaving college: Rethinking the causes and cures of student attrition (2nd ed.). Chicago, IL: The University of Chicago Press.

Trolian, T. L., Kilgo, C. A., Pascarella, E. T., Roksa, J., Blaich, C., & Wise, K. (2014, November). Race and exposure to good teaching during college. Paper presented at the meeting of the Association for the Study of Higher Education, Washington, DC.

Umbach, P. D., & Kuh, G. D. (2006). Student experiences with diversity at liberal arts colleges: Another claim for distinctiveness. Journal of Higher Education, 77(1), 169–192.

United States Department of Education. (2006). A test of leadership: Charting the future of U.S. higher education. Washington, DC: U.S. Department of Education.

Acknowledgments

Research on this project is supported by a grant from the Spencer Foundation. Data collection and preparation are also supported by a grant from the Center of Inquiry in the Liberal Arts at Wabash College to the Center for Research on Undergraduate Education at The University of Iowa.

Author information

Authors and affiliations

Department of Sociology, University of Virginia, P.O. Box 400766, Charlottesville, VA, 22904, USA

Josipa Roksa

Department of Educational Administration and Policy Studies, University at Albany, State University of New York, 1400 Washington Avenue, Albany, NY, 12222, USA

Teniell L. Trolian

Educational Policy and Leadership Studies, University of Iowa, N491 Lindquist Center, Iowa City, IA, 52242, USA

Ernest T. Pascarella

Department of Educational Leadership, Policy, and Technology Studies, University of Alabama, 302 Graves Hall, Tuscaloosa, AL, 35487, USA

Cindy A. Kilgo

Center of Inquiry, Wabash College, Crawfordsville, IN, 47933, USA

Charles Blaich & Kathleen S. Wise

Corresponding author

Correspondence to Josipa Roksa.

About this article

Roksa, J., Trolian, T. L., Pascarella, E. T., et al. Racial Inequality in Critical Thinking Skills: The Role of Academic and Diversity Experiences. Res High Educ 58, 119–140 (2017). https://doi.org/10.1007/s11162-016-9423-1

Received: 20 June 2015

Published: 01 July 2016

Issue Date: March 2017

DOI: https://doi.org/10.1007/s11162-016-9423-1

Keywords

  • Critical thinking
  • Academic experiences
  • College student development