Importance of critical thinking to understand ChatGPT

Affiliation

  • 1 Predictive Society and Data Analytics Lab, Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland. [email protected].
  • PMID: 37582903
  • PMCID: PMC10999413 (available on 2025-04-01)
  • DOI: 10.1038/s41431-023-01443-4

Publication types

  • Letter
No worries with ChatGPT: building bridges between artificial intelligence and education with critical thinking soft skills


M Arli Rusandi, Ahman, Ipah Saripah, Deasy Yunika Khairun, Mutmainnah, No worries with ChatGPT: building bridges between artificial intelligence and education with critical thinking soft skills, Journal of Public Health, Volume 45, Issue 3, September 2023, Pages e602–e603, https://doi.org/10.1093/pubmed/fdad049


This correspondence discusses the role of artificial intelligence (AI) tools such as ChatGPT in education and research, focusing on developing critical thinking skills and maintaining academic integrity. AI can complement learning and research processes when used ethically and responsibly. Integrating specific teaching methods in education and research can help develop better critical thinking skills and a deeper understanding of the contexts in which AI is used. The article emphasizes the importance of developing critical thinking skills among students and researchers so that they can use AI effectively and distinguish accurate information from hoaxes and misinformation. In conclusion, the collaboration between AI and humans in learning and research will yield significant benefits for individuals and society as long as critical thinking skills and academic integrity remain top priorities.


Tampere University Research Portal

Importance of critical thinking to understand ChatGPT

  • Computing Sciences

Research output: Contribution to journal › Letter › Scientific

ASJC Scopus subject areas

  • Genetics (clinical)

Access to Document

  • https://doi.org/10.1038/s41431-023-01443-4

This publication is copyrighted. You may download, display and print it for your own personal use. Commercial use is prohibited.

  • https://urn.fi/URN:NBN:fi:tuni-2023122111172 (Licence: Unspecified)

Citation: Emmert-Streib F. Importance of critical thinking to understand ChatGPT. European Journal of Human Genetics (Letter). ISSN 1018-4813. DOI: 10.1038/s41431-023-01443-4. Scopus ID: 85167878056.

  • Correspondence
  • Published: 15 August 2023

Beyond the hype: a comprehensive critique of ChatGPT’s applicability in genetics

  • Partha Pratim Ray, ORCID: orcid.org/0000-0003-2306-2792 (1) &
  • Poulami Majumder (2)

European Journal of Human Genetics, volume 32, pages 375–376 (2024)


  • Medical research

The recent article by Duong and Solomon [1], and the subsequent commentary [2] on the potential role of OpenAI's language model, ChatGPT, in the field of genetics, certainly make for a compelling read. We appreciate the authors' initiative to explore how this artificial intelligence model could make an impact on genetics, a discipline that requires an exceptional depth of knowledge and understanding. However, we feel it necessary to provide a more nuanced critique of the conclusions drawn.

The cornerstone of the study is the comparison between the performance of ChatGPT and a group of social media users in answering multiple-choice questions about genetics. The authors interpret the language model's performance as indicative of an "understanding" of genetics. This interpretation warrants a more measured reading. ChatGPT's function lies in its predictive abilities based on learned patterns, not in conscious understanding. The language model does not have the capacity for self-awareness or cognition. Instead, it offers an approximation of understanding by predicting sequences of words, drawing from the vast body of textual data it has been trained on.
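
To make that distinction concrete, here is a minimal Python sketch of the kind of next-token prediction described above: the model scores candidate continuations of a context and emits a statistically likely one. The toy probability table, token choices, and function name are invented purely for illustration; they are not ChatGPT's actual vocabulary, architecture, or training data.

    import random

    # Toy stand-in for a language model's learned conditional distribution:
    # for a given context, each candidate next token has a probability.
    # All tokens and numbers below are invented for illustration only.
    NEXT_TOKEN_PROBS = {
        ("BRCA1", "is", "a"): {"gene": 0.62, "protein": 0.21, "mutation": 0.12, "recipe": 0.05},
    }

    def predict_next_token(context, temperature=1.0):
        """Pick the next token by sampling the learned distribution for this context."""
        probs = NEXT_TOKEN_PROBS.get(tuple(context[-3:]))
        if probs is None:
            return None  # a real model backs off gracefully; this toy table simply gives up
        tokens = list(probs)
        weights = [p ** (1.0 / temperature) for p in probs.values()]
        return random.choices(tokens, weights=weights, k=1)[0]

    # The continuation is chosen because it is statistically likely given the
    # training data, not because the system "knows" what BRCA1 is.
    print(predict_next_token(["BRCA1", "is", "a"]))

Scaling such a lookup up to a neural network with billions of learned parameters improves the quality of the predictions, but the mechanism remains statistical continuation rather than conscious understanding, which is the point being made here.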


References

1. Duong D, Solomon BD. Analysis of large-language model versus human performance for genetics questions. Eur J Hum Genet. 2023:1–3. https://www.nature.com/articles/s41431-023-01396-8. Published online.

2. Emmert-Streib F. Can ChatGPT understand genetics? Eur J Hum Genet. 2023:1–2. https://doi.org/10.1038/s41431-023-01419-4.

3. Van Bulck L, Moons P. What if your patient switches from Dr. Google to Dr. ChatGPT? A vignette-based survey of the trustworthiness, value, and danger of ChatGPT-generated responses to health questions. Eur J Cardiovasc Nurs. 2023:zvad038. https://doi.org/10.1093/eurjcn/zvad038. Online ahead of print.

4. Sallam M. ChatGPT utility in healthcare education, research, and practice: systematic review on the promising perspectives and valid concerns. Healthcare. 2023;11:887.

5. van Dis EAM, Bollen J, Zuidema W, van Rooij R, Bockting CL. ChatGPT: five priorities for research. Nature. 2023;614:224–6.


Author information

Authors and Affiliations

Sikkim University, Gangtok, India

Partha Pratim Ray

Maulana Abul Kalam Azad University of Technology, BF Block, Sector 1, Bidhannagar, Kolkata, West Bengal, 700064, India

Poulami Majumder


Contributions

PPR conceived and drafted the manuscript. PM edited and reviewed.

Corresponding author

Correspondence to Partha Pratim Ray .

Ethics declarations

Competing interests

The authors declare no competing interests.

Ethical approval and consent to participate

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Ray, P.P., Majumder, P. Beyond the hype: a comprehensive critique of ChatGPT's applicability in genetics. Eur J Hum Genet 32, 375–376 (2024). https://doi.org/10.1038/s41431-023-01442-5


Received: 08 July 2023

Accepted: 11 July 2023

Published: 15 August 2023

Issue Date: April 2024

DOI: https://doi.org/10.1038/s41431-023-01442-5


This article is cited by

Importance of critical thinking to understand ChatGPT

  • Frank Emmert-Streib

European Journal of Human Genetics (2024)


IE Insights

ChatGPT and the Decline of Critical Thinking

The use of ChatGPT as a search engine has made the need for critical thinking more pressing than ever, writes Enrique Dans.


OpenAI's launch of ChatGPT in late November captured the world's attention and opened our eyes to the existence of a technology, machine learning, that is capable of a surprising number of things. Note the language here. It is not artificial intelligence, because machines are not intelligent; they are simply programmed to consult databases and use statistics.

Advanced automation has for years been applied to all kinds of uses, but what ChatGPT has done is to apply one specific area of machine learning, Large Language Models (LLMs), to a conversational environment, and it has dazzled the world in the process. As a result, more and more people are using it, not so much for conversations, as it was initially designed for, but as a search engine. This is a development that could challenge Google's two decades of hegemony.

It's worth pointing out that Google already has very similar technology, but has so far decided to keep it under wraps until it can be sure of its quality and reliability. An LLM answers questions based on affinities and statistical matches, and the only way to get "correct" answers is to filter them, i.e. exclude the unsuitable stuff. This is not easy, so some answers will be partially incorrect, and sometimes they will be very off the mark, even if the language used seems that of an expert.

Google, of course, immediately realized the threat posed by ChatGPT. In response, the company brought its two founders out of retirement, and decided to incorporate similar technologies into around 20 of its products. Microsoft, now an investor in OpenAI, is about to do the same. (If you haven’t experienced the pleasure of creating Excel documents with an algorithm like ChatGPT, I highly recommend it).

So far, so good: we have innovation driving competition.

Yet, the problem here is what happens to critical thinking. A Google results page, with its links, shows us the source and may include a fragment of text, etc., before we finally click and land on the page to (hopefully) find what we were looking for. But, with ChatGPT and the like, we ask a question and receive a couple of paragraphs with the answer – and this answer may or may not be correct, though it certainly looks like it.


Such is the absence of critical thinking in the world today that many people take the first result on a Google page as gospel. I say this from experience. I have been called by several people convinced that I was the customer service manager of a Spanish airline simply because I once wrote, long ago, about Air Europa’s disastrous customer service and the article was indexed at the top of Google’s search engine. Just try convincing angry flyers that you are not the person to attend to their complaints! It didn’t matter what I told them: according to their reading of the search engine page, Google said I was the airline’s customer service, and so I must be lying.

So, if people will accept Google’s word uncritically, imagine the response to ChatGPT. The answer to your search may be 100% bullshit, but whatever: for many people, it’s hard, reliable truth.

There are tools like Perplexity.ai that try to alleviate this by providing sources – and this, at least, allows anyone who wants to fact check the answer to do so. I would think that Google, which has a great deal at stake here, will go in that direction and not simply provide a single piece of text but something more elaborate that would allow the user to check whether the answer is from a scientific article, a tabloid newspaper, or a sect of anti-vaxxers, climate change deniers, or conspiracy theorists. At the very least, search engines have a responsibility to make it possible for users to probe their answers with a few clicks. This is a necessity because the steady erosion of critical thinking exposes us to reliance on a tool that can easily be used to manipulate us.

Nevertheless, at the end of the day, we shouldn’t expect Big Tech to facilitate critical thinking: it’s something we must develop for ourselves, as individuals and collectively as a society. The problem is that our schools don’t teach it, preferring instead the comfort of a textbook, often banning smartphones in the classroom rather than embracing the challenge of teaching students how to use them properly.

That said, the education system cannot bear all the responsibility: parents also have an obligation to teach their children about how the world works. I’m afraid this means thinking twice about giving children a phone or tablet to keep them quiet.

After years of “this must be true because I saw it on television,” we now have “this must be true because the algorithm says so.” So, either we get a grip and start to prioritize critical thinking, or we’re going to end up in a collective muddle – and more vulnerable than ever to misinformation.

© IE Insights.


Artificial human thinking: ChatGPT’s capacity to be a model for critical thinking when prompted with problem-based writing activities

  • Brief Communication
  • Open access
  • Published: 24 April 2024
  • Volume 3, article number 42 (2024)


  • Joalise Janse van Rensburg, ORCID: orcid.org/0009-0005-5856-0937 (1)


The ability to think critically is an important and valuable skill that students should develop to successfully solve problems. The process of writing requires critical thinking (CT), and the subsequent piece of text can be viewed as a product of CT. One of the strategies educators may use to develop CT is modelling. Given ChatGPT's ability to generate textual responses, one could suggest that educators can use AI-generated responses to problem-based writing activities as a model for CT. This study employed a qualitative research approach guided by the interpretivist paradigm to thematically analyse 57 digital texts produced by ChatGPT 3.5. The study found that ChatGPT 3.5 has the capacity to produce a model response which exhibits CT skills and dispositions to three different degrees, namely explicit, guided, and limited. Each degree requires a certain level of educator involvement for ChatGPT 3.5's textual responses to be used as a model for CT.


1 Introduction and background

Alan Turing, one of the founding fathers of artificial intelligence [1], famously asked the question "Can machines think?" [2, p. 433]. He attempted to answer this question by evaluating whether they could mimic human thinking and behaviour. Recent research found that Chat Generative Pre-trained Transformer (ChatGPT), a generative artificial intelligence (AI) large language model (LLM), can produce text that is indistinguishable from natural human language [3, 4]. In these studies, ChatGPT was able to produce acceptable responses when prompted with questions developed for higher education students. It is, however, only an imitation of the result of human thinking and behaviour, which raises the question: could these 'artificial' texts be used as a model answer to develop critical thinking (CT) in students?

CT is recognised as a valuable skill that individuals must develop to make informed decisions about their professional and personal lives [5, 6]. Facione [5] explains that a strong critical thinker must have CT skills (cognitive abilities) and be willing to think critically (CT dispositions). CT is particularly important during problem solving because an individual must be able to apply the relevant skills and demonstrate a willingness to think critically to reach an informed and viable solution or decision [7]. Research indicates that educators encourage the development of CT skills [8] and dispositions [9] in their students through modelling CT. This involves educators demonstrating CT in the classroom with the expectation that students will mimic these actions when confronted with novel situations [9, 10]. Bean and Melzer [11, p. 4] concur that "writing is a process of doing critical thinking and a product that communicates the results of critical thinking". Given ChatGPT's ability to generate textual responses, one could suggest that there is an opportunity for educators to use AI-generated responses to problem-based writing activities as a product of, or model for, CT.

2 Purpose and rationale

Since the public release of ChatGPT 3.5 in November 2022, educators and educational institutions have become more aware of the immediate need to incorporate AI, and more specifically LLMs, in the classroom. Given that CT is deemed an important twenty-first-century skill [6] and modelling CT is one of the strategies to develop the necessary skills and dispositions [8, 9, 12], it is important for educators to understand the extent to which ChatGPT 3.5's textual responses can be used as an example or model for CT. This will help educators determine how ChatGPT 3.5 could be used as an educational tool in the classroom and ideally assist in developing CT in their students. The literature regarding ChatGPT, its inherent limitations, opportunities for teaching and learning, ability to complete assessments, and risk to academic integrity is growing substantially [13]. There are studies that indicate ChatGPT has the ability to demonstrate CT because it can produce reasonable responses to assessment questions [3, 4]. These studies mainly focus on evaluating its ability to produce responses that are similar to those of a higher education student and the influence it has on academic integrity. Studies regarding the possibility of using the textual responses as a model for CT are limited. Therefore, the purpose of this study is to examine ChatGPT 3.5's capacity to produce model responses that exhibit CT skills and dispositions when prompted with problem-based writing activities, in order to contribute to the emerging body of literature. The research question in this study is: to what degree are CT skills (cognitive abilities) and CT dispositions (willingness to think critically) modelled in ChatGPT 3.5's responses?

3 Literature review

There are many definitions of CT. It is described as the "ability to analyse and evaluate information" [14, p. 160], a "self-adjusting process of judging what to believe or what to do in a given context" [12, p. 65], and "reasonable reflective thinking focused on deciding what to believe or do" [15, p. 166], to name a few. A team of 46 CT experts participated in a Delphi-method research project and concluded that CT is "purposeful self-regulatory judgment which results in interpretation, analysis, evaluation, and inference, as well as an explanation of evidential, conceptual, methodological, criteriological, or contextual considerations upon which that judgment is based" [16, p. 3]. They further determined that CT consists of two components, namely CT skills and CT dispositions. For an individual to be considered a strong critical thinker, they should not only possess the necessary CT skills but also be positively disposed towards CT [5, 7, 15, 17, 18].

CT skills refer to an individual's cognitive abilities, which aid in the process of solving problems and making informed decisions [5, 7, 15]. According to Aizikovitsh-Udi and Cheng [19], these skills manifest in an individual's performance. The six CT skills, as identified by the Delphi team, are listed and described in Table 1.

CT dispositions relate to an individual's willingness to think critically and respond in a certain manner when exposed to different situations [5, 15, 20]. Dispositions would, therefore, result in certain actions that one would attribute to a strong critical thinker. The Delphi team developed seven dispositions, each of which is described in Table 2.

According to scholars, CT plays a crucial role during problem solving because it assists in generating successful, informed, and viable solutions and decisions [7, 21]. Problems are classified into two categories, namely well-structured and ill-structured [7, 22]. Well-structured problems provide the necessary information to solve the problem and are well defined [23, 24]. In contrast, ill-structured problems could lead to multiple solutions and are generally vague [7, 22, 25]. The process of solving a problem, and more specifically an ill-structured problem, is viewed as an ideal opportunity to practice CT [26]. Writing could be seen as an activity to solve ill-structured problems, given that the instructions allow for many different solutions or responses to be generated. Bean and Melzer [11] consider the act of writing as a process to engage with CT and the resulting text as a product of CT. They argue that when you are required to communicate your thoughts, ideas, and arguments in written text, you need to engage in CT.

Nieto and Saiz [17] are of the opinion that CT must be taught for it to be learned. There are three approaches to teaching CT, namely the specifist, generalist, and infusion approaches [27, 28, 29]. The specifist approach states that CT is discipline specific and can only be taught and developed within a particular subject area using the relevant terminology [27, 28]. In contrast, the generalist approach asserts that CT is teachable independent of disciplines and is therefore applicable across all subject domains [27, 28]. According to Facione [5], CT transcends specific subject domains, but he acknowledges that certain disciplines would require specific content knowledge, methods, and techniques, which aligns with the infusion approach. The infusion approach states that CT is a set of generic abilities that can be taught in specific contexts [28] and then ideally transferred to other subject domains. Bean and Melzer [11] contend that academic problems are typically discipline specific but agree with the infusion approach that there are core aspects of CT that are generic and relevant in all subject domains.

The literature suggests various strategies educators can employ to develop CT skills and dispositions in their students. Some of these strategies include questioning, discussions, intentional assessments, creating an environment conducive to CT, and modelling the skills and dispositions [8, 9, 12, 18]. This study specifically focuses on the strategy of modelling CT. Modelling refers to instances where CT skills and dispositions are demonstrated by an educator to motivate students to imitate these actions when confronted with situations that require them to think critically [8, 9, 12]. According to Bean and Melzer [11], it is not only important for educators in higher education to provide students with well-developed problems, but also necessary for them to model the CT they expect students to demonstrate. In a study conducted by Janse van Rensburg and Rauscher [9], modelling was identified as a supporting strategy used by educators to foster CT dispositions during problem-based activities. A supporting strategy indicates that another strategy was used as an initial action before the educator would model CT. This is reasonable, as an educator would need an assessment, question, discussion, or example to model the way in which they would approach and respond to a problem. Therefore, problem-based writing activities given as prompts to ChatGPT could be viewed as an initial action and the textual response as an example or model in which CT skills and dispositions could be identified. It is important to note that ChatGPT does not model the process of getting to the written response but rather produces the possible result or product of thinking critically.

ChatGPT is a generative AI LLM designed to generate text in response to a prompt provided by a user [13, 30]. Its strength lies in its ability to imitate natural human language [30]. This is an interesting point of conversation in education, especially regarding academic integrity. Busch [4] and Susnjak [3] found that ChatGPT is able to demonstrate CT when responding to assessment questions. Farrokhnia, Banihashem, Noroozi and Wals [31] echo these findings but state that it only generates plausible responses; it lacks a deep understanding of language and higher-order thinking skills. The literature seems to indicate that ChatGPT can generate responses that will 'pass' as a student submission. Even though the final piece of text might look very human-like, the process ChatGPT follows to generate text is different from a human's. When asked how it works, ChatGPT explains that it is trained on extensive datasets to learn language patterns. It processes and analyses the user input or prompt using its training and then responds by predicting the most probable next word or phrase (ChatGPT 2023, personal communication, 2 April). ChatGPT does not search for responses; it only refers to its training, which, at the time of this research study, was limited to specific datasets and information available before September 2021 [30]. If one considers one's own thinking processes and behaviours when confronted with a problem, humans engage in more cognitive activities than simply predicting the most relevant next word or phrase. As explained by Facione, CT skills are cognitive abilities that include activities like interpretation, analysis, evaluation, inference, explanation, and self-regulation [16]. Thus, ChatGPT is only imitating the result of human behaviour, which makes it valuable to investigate the degree to which its responses could be used as a model for CT.

4 Methodology

This study employed a qualitative research approach guided by the interpretivist paradigm to analyse digital texts produced by ChatGPT 3.5. ChatGPT 3.5 is one of the most popular LLMs, is free to use, and is easily accessible to the public. The LLM was prompted with various problem-based writing activities from Bean and Melzer's book, Engaging Ideas: The Professor's Guide to Integrating Writing, Critical Thinking, and Active Learning in the Classroom. The book focuses on guiding educators in higher education to promote CT in students by designing and using problem-based writing activities during teaching and learning. Bean and Melzer [11] emphasise the importance of integrating CT into assessments and provide various examples from different disciplines. These examples encourage CT through problem-based writing activities that require students to create meaning (p. 75 and p. 124), purposefully investigate concepts (p. 79), make connections between content and concepts (p. 123), generate their own explanations (p. 123), engage in complex thinking to communicate opposing views (p. 124, 127, 129), and adapt responses for specific audiences (p. 125). In this study the infusion approach is followed, as CT is considered a generic set of skills and dispositions that is applicable and appropriate across disciplines; for this reason, no disciplines were excluded. The problem-based writing activities in Part 2 and Part 3 of the book were considered as possible problem prompts. Some examples were excluded: if an example exhibited any of the elements below, it was not used as a problem prompt:

Referring to a specific artefact, textbook, content covered in class, or a country and its rules and regulations, because ChatGPT 3.5 might have limited or no access to the information referred to.

Requiring multiple textual responses over an extended period, because these problem-based writing activities focus on developing understanding and reflecting on new experiences, whereas ChatGPT 3.5 only responds once per prompt and its dataset was fixed at the time of this study.

Requiring any artefact other than a piece of text, because ChatGPT 3.5 is mostly limited to textual responses.

In total, 19 problem-based writing activities were used as prompts, and each prompt was used three times, bringing the total number of textual responses to 57. The first round of prompts was provided on the 20th and 22nd of March 2023, the second round on the 12th of April 2023, and the third round on the 3rd of May 2023. The intention was to determine whether ChatGPT 3.5 consistently produced responses in which the same skills and dispositions could be identified, to establish if it varied over time. There were no definite changes in the responses from the different rounds; thus, data saturation occurred. The responses were coded using Taguette, an online qualitative analysis tool. The codes were pre-determined and derived from the framework developed by Facione [5]. It is important to note that these skills and dispositions were developed based on human thinking and behaviour and are applied in this study to determine an AI-based LLM's ability. Should it exhibit, through textual responses, the result of thinking and behaviour similar to a human's, one could argue that it is a successful model for CT skills and dispositions. The code-recode strategy was employed to ensure consistent coding of all textual responses, and the data was analysed using thematic analysis. After multiple rounds of coding, themes were identified, adapted, merged, and developed. Following each round of coding, the researcher made reflective notes regarding the codes and the potential themes and relationships that were evident in the data. The notes regarding the codes also served to ensure that coding was applied consistently across the 57 textual responses. The ease with which the skills and dispositions were identified in ChatGPT 3.5's responses influenced the theme development during the data analysis. The notes regarding the themes and relationships focused on the researcher's thoughts and ideas during data immersion. Through the analysis of the data, the researcher was able to identify three degrees to which CT skills and dispositions could be identified in the responses. Table 3 illustrates the process regarding data collection and analysis.
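
As a rough illustration of the study design described above, the following Python sketch lays out the 19-prompt by three-round structure (57 responses) and a simple tally of the pre-determined Facione codes across coded responses. It is a hypothetical sketch of the bookkeeping only: the response-labelling convention and the example codes are assumptions, and the actual analysis was carried out qualitatively in Taguette, not in code.

    from collections import Counter

    # Pre-determined codes derived from Facione's framework: six CT skills
    # and seven CT dispositions, as named in the literature review above.
    CT_SKILLS = ["interpretation", "analysis", "evaluation",
                 "inference", "explanation", "self-regulation"]
    CT_DISPOSITIONS = ["truth-seeking", "open-mindedness", "analyticity",
                       "systematicity", "CT self-confidence",
                       "inquisitiveness", "maturity of judgement"]
    CODES = set(CT_SKILLS + CT_DISPOSITIONS)

    N_PROMPTS, N_ROUNDS = 19, 3             # 19 writing activities, each prompted three times
    TOTAL_RESPONSES = N_PROMPTS * N_ROUNDS  # = 57 textual responses

    def response_id(prompt_no, round_no):
        """Label a response in the style the paper quotes, e.g. 'Response 17.1'
        (assumed here to mean prompt 17, round 1)."""
        return f"Response {prompt_no}.{round_no}"

    def tally_codes(coded_responses):
        """Count how often each pre-determined code was applied across all texts."""
        counts = Counter()
        for codes in coded_responses.values():
            counts.update(c for c in codes if c in CODES)
        return counts

    # Invented example of two coded responses, for illustration only.
    example = {
        response_id(1, 1): ["explanation", "systematicity", "analyticity"],
        response_id(17, 1): ["analysis", "inference"],
    }
    print(TOTAL_RESPONSES, tally_codes(example))

In the study itself, raw counts mattered less than how readily each code could be identified in the texts, which is what fed the explicit, guided, and limited distinction discussed in the findings.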

The data used in this study were textual responses produced by ChatGPT 3.5, an AI-based LLM; therefore, the study does not pose any specific ethical considerations concerning participants. To enhance the trustworthiness of the study, several strategies were implemented to increase credibility, transferability, dependability, and confirmability. To increase the credibility of the study, bias was controlled through reflexivity by continuously reflecting and keeping notes during the coding and analysis process. Interpretive adequacy further improves credibility through low-inference descriptors, by providing examples of ChatGPT 3.5's responses in the discussion and findings. Descriptive adequacy, through rich descriptions of the process and methods, increases transferability. Documentation, by keeping an audit trail of all prompts, responses, and analysis notes, together with the code-recode strategy, assisted in increasing the dependability of the study. Lastly, the control of bias through reflexivity and an audit trail also enhances the confirmability of the study and findings.

5 Discussion of findings

During the data analysis process, it was easy to identify and code a component of the skill explanation and a component of the disposition systematicity because these were clearly identifiable in ChatGPT 3.5's textual responses. Other components of the skills interpretation, explanation, and self-regulation, and components of the dispositions inquisitiveness, systematicity, truth-seeking, open-mindedness, and maturity of judgement, were not as distinct and were hardly ever observed unless one considers the thinking processes required to produce that piece of text. The remainder of the skills and dispositions fell somewhere between these two extremes: the skills and dispositions are identifiable, but not necessarily obvious. During the process of coding, reflecting, and analysing ChatGPT 3.5's responses, it became clear that it has the capacity to be a model for CT skills and dispositions to three different degrees. These were identified as explicit, guided, and limited. Explicit refers to instances where students can use ChatGPT 3.5's responses as an example or guide to apply a skill or foster a disposition with minimal involvement from an educator. Guided is when the educator needs to facilitate the application of a skill or encourage the use of a disposition because these may not be obvious in the text without guidance. Limited indicates that there are very few instances where the skill or disposition could be identified in the textual response, making it less evident that a skill has been applied or a disposition used. Consequently, a higher level of educator involvement would be necessary. Table 4 illustrates the three degrees with their accompanying skills and dispositions and the educator involvement that is required.

As seen in the table above, there is only one skill, explanation, and one disposition, systematicity, that is considered explicit, and both are also listed under limited. It is important to note that each skill and disposition has different components, of which one can be identified as explicit while another may require facilitation or be classified as limited. Some of the skills and dispositions listed under guided also have components that were identified as limited. In the following sections, each degree with its accompanying skills and dispositions is discussed.

5.1 Explicit

Explanation, presenting reasoning in a cogent argument, and systematicity, the orderly, focused, and diligent manner in which information and arguments are presented, are the only CT skill and disposition that are explicitly modelled in ChatGPT 3.5's responses. The textual responses generated, and arguments presented, by ChatGPT 3.5 are clear, logical, and systematic. The structure of the textual response is appropriate to the prompt, and the flow of the argument is sensible and easy to understand. Both explanation and systematicity are considered explicit because a student could use a textual response from ChatGPT 3.5 as an example and follow the same structure to formulate their own argument or piece of text for another context or problem-based writing activity. Consequently, there is little to no need for an educator to assist, as the layout of the text would act as a sufficient model. The following is an example of a prompt and response from ChatGPT 3.5; the writing activity required presenting both sides of an argument. ChatGPT 3.5 consistently followed the same structure for similar prompts:

Prompt: "Support and refute the following statement. In recent years, advertising has made enormous gains in portraying women as strong, independent, and intelligent."

Response: "There is evidence to both support and refute the statement that advertising has made enormous gains in portraying women as strong, independent, and intelligent in recent years. On the one hand, there have been notable examples of advertising campaigns that feature women in empowering roles. For instance, campaigns by brands like Nike, Always, and Dove have showcased women as athletes, leaders, and advocates, respectively. These campaigns have been lauded for promoting female empowerment and challenging gender stereotypes. Additionally, there has been a growing trend of brands featuring diverse women, including women of color, women with disabilities, and women of different body types, in their advertising. This can be seen as a positive shift towards a more inclusive and representative portrayal of women in media. On the other hand, there are still many examples of advertising that perpetuate harmful stereotypes about women. For instance, many advertisements still use sexualized imagery to sell products, which can objectify and devalue women. Additionally, some advertising continues to promote the idea that a woman's primary value lies in her physical appearance, rather than her intellect or accomplishments. These portrayals can be damaging to women's self-esteem and reinforce societal expectations that women should prioritize their appearance over other aspects of their lives. Overall, while there have been some positive shifts towards more empowering and inclusive portrayals of women in advertising, there is still work to be done to fully overcome harmful stereotypes and promote gender equality in media." (Response 1.1)

5.2 Guided

In ChatGPT 3.5's textual responses it is evident that it models analysis and analyticity, because it 'detects' arguments for opposing sides, provides evidence related to the argument, anticipates consequences and, to a lesser extent, analyses arguments. Its responses act as an appropriate example; however, involvement from an educator would be necessary for a deeper level of analysis. Students will require facilitation regarding the detection of arguments: ChatGPT 3.5's ability to detect arguments is based solely on the input provided, and there might be other, more creative positions that one could consider. Similarly, the evidence provided might be limited, or, if confronted with a different context, guidance regarding what is considered reliable evidence and how to examine it would be necessary. There are a few occasions where ChatGPT 3.5 would predict the consequences of decisions; the predictions tend to be short term and appeared mostly when the prompt mentioned two opposing views or possible decisions. An example of a prediction in response to a prompt about designing a baseball field on the moon: "This could lead to some spectacular catches and stolen bases, but it could also make it more difficult for fielders to catch high fly balls or throw accurately to first base" (Response 17.1). The same applies to analysing arguments: ChatGPT 3.5 will analyse arguments to some extent, but only if it is relevant to the prompt. In a prompt where the instruction was to analyse a cat's behaviour and determine whether it is classical conditioning, operant conditioning, or social learning, ChatGPT 3.5 responded: "This is an example of classical conditioning because the cats have learned to associate the sound of the can opener with the arrival of their food" (Response 5.1). For a greater depth of analysis, some guidance from an educator should be provided to assist in determining what needs to be analysed and whether all avenues, evidence, and possible points of analysis are considered, so that anticipated consequences and arguments are valuable and reliable.

ChatGPT 3.5 occasionally assesses statements and claims, the skill of evaluation. In some cases it states that there is credibility and value in the opposing point of view, and in others it comes across as very opinionated and takes a clear stance. This is dependent on the type of prompt. For support-and-refute prompts, ChatGPT 3.5 would assess the opposing points of view, for example: "While there are arguments in favor of GMO crops, there are also valid concerns about their potential environmental and health impacts" (Response 11.1). Where the prompt hinted towards requiring an evaluation from a person, ChatGPT 3.5 would respond that "As an AI language model, I do not have personal opinions or beliefs, but I can provide information based on scientific evidence and research" (Response 2.1) and would proceed to provide an informative textual response. However, in dialogue-type prompts, or where ChatGPT 3.5 was instructed to imitate a person, it would make clear assessments of statements and claims. The skill of evaluation is considered guided, as it is dependent on the prompt, and an educator would be necessary to assist students in terms of what to consider, how to consider it, and how to support their evaluation.

ChatGPT 3.5 is able to be a model for the skill of inference, identifying reasonable elements to draw conclusions, given that there is some educator involvement. ChatGPT 3.5's response to a prompt where it was instructed to predict the outcome of an action and explain why: "At this point, all the sand has accumulated in the lower part, and the weight measured on the balance will be the same as the weight of the accumulated sand" (Response 13.2). The elements it identifies are not necessarily the only elements that one could consider; it simply produces the most probable element and therefore lacks the creativity to consider 'out of the box' possibilities. This is where an educator can be a valuable guide and use the response as a starting point from which to consider and identify other relevant information that could influence the conclusion. Furthermore, ChatGPT 3.5 only occasionally forms conjectures, another component of inference. It would do this by stating opposing positions and making some predictions based on the evidence provided, but it hardly ever comes across as developing its own hypothesis or truly querying the evidence; it tends only to state evidence. It is relatively successful at drawing simple conclusions; however, guidance from an educator would be valuable to ensure that conclusions are truly meaningful and showcase a deep understanding. An example of a conclusion in response to a support-and-refute prompt: "In conclusion, while schizophrenia is a complex disorder with many factors at play, the view that it is solely a brain disease and not influenced by learning or environmental factors is not entirely accurate" (Response 3.1).

ChatGPT 3.5 consistently considers alternative opinions, a truth-seeking spirit, and refers to divergent views, open-mindedness. This is evident in the text, as it almost always acknowledges the opposing view or other possibilities: "However, some experts argue that prescribing psychotropic medications should not be…" (Response 2.1). To a certain extent, the toleration and respect of different views is more evident than the consideration of alternatives, because the alternative opinions tend to be generic whereas the acknowledgement of other views is clearer. Thus, more involvement from an educator is necessary to encourage and motivate students to think further than the obvious alternative. It is also important to note that acknowledging divergent views does not indicate that a person is truly open-minded, which means that an educator would also need to guide students regarding the meaning behind the acknowledgement of divergent views.

ChatGPT 3.5 regularly states that it can present arguments for and against and provide insights on a topic; this showcases CT self-confidence. One could argue that ChatGPT 3.5 trusts its own reasoning process and the soundness of its judgements: "As an AI language model, I can provide you with a balanced perspective on this topic" (Response 3.1). Involvement from an educator is necessary because merely making statements with confidence does not equate to a positive disposition towards CT self-confidence. An educator would need to facilitate the journey with a student to get to a point where they feel confident about their reasoning process and the soundness of their judgements, or, if they are unjustifiably confident, to guide them by asking questions and encouraging self-reflection.

On some occasions ChatGPT 3.5 would exhibit maturity of judgement when making and suspending decisions. It would do this by acknowledging that a decision cannot be made easily or that various factors need to be considered to make a final decision. These were usually related to prompts where there were two opposing views that led to different decisions: "There is ongoing debate and lack of consensus among healthcare professionals about the appropriate use of psychotropic medications in children, particularly for behavioral problems that may have multiple underlying causes" (Response 2.2). The reason it is identified as guided is that there are occasions where ChatGPT 3.5 does not suspend decision making, or where maturity of judgement regarding the making of decisions is not clear. One would, therefore, require involvement from an educator to ensure that maturity of judgement regarding the making and suspending of decisions truly occurs.

5.3 Limited

Given that ChatGPT 3.5 predicts the most probable next word based on its immense database and the prompt provided, there are some skills and dispositions that it cannot be an authentic model for, due to its inherent nature. These skills include interpretation, more specifically the comprehension of meaning and significance; explanation, the justification of reasoning; and self-regulation. The dispositions include inquisitiveness; a truth-seeking spirit, more specifically the willingness to ask questions; open-mindedness, being sensitive to biases; and maturity of judgement regarding the revising of decisions. Although these skills and dispositions were identified in ChatGPT 3.5's responses, their modelling is still limited, as they relate to abilities that it either cannot have or whose application is not evident, because they involve various thinking processes that take place before the creation of a textual response.

The component of interpretation, comprehending meaning or significance, is considered limited because ChatGPT 3.5 lacks true understanding of its output. There are, however, some instances where the textual responses hint towards comprehension, but here the guidance of an educator is important to encourage true understanding. In the following example one can identify some comprehension, but true meaning and significance is not evident, as this was the only mention of this claim: "This can help prevent the concentration of poverty in certain areas and promote economic diversity, which can lead to greater social cohesion and inclusivity" (Response 14.2). Similar to the skill of interpretation, there is no genuine justification of the reasoning process, explanation. ChatGPT 3.5 does not produce content through reasoning as a human would, and its output is therefore not justified in the sense of a true critical thinker. However, one could argue that there are hints towards justifying reasoning, which makes educator involvement crucial. Regarding self-regulation, one cannot say that ChatGPT 3.5 is monitoring its cognitive abilities, the elements used, or the results produced, unless possibly prompted to do so. Thus, due to ChatGPT 3.5's nature, one could not use it as an authentic model for self-regulation.

The disposition of inquisitiveness, the eagerness to learn and be well-informed even if the application of the knowledge is not known, is not evident in ChatGPT 3.5's responses. It does not consider information beyond the given prompt or present information that may only be relevant later. It also does not come across as curious, except in dialogue prompts where it is pretending to be a human and asks questions to find out more information, like "What are those concepts?" (Response 18.1) and "What do you mean by that?" (Response 18.2). Therefore, inquisitiveness is limited, which requires educators to guide students to identify opportunities in the textual responses where one could be curious and consider seemingly irrelevant information. It also is not a model for an orderly, focused, and diligent way in which information is sought out, an element of systematicity. The process used by ChatGPT 3.5 to 'search' for information is not observable in the textual responses, and its predictive nature does not involve true investigation and searching as a human would. The textual responses could only be used as a model when an educator refers to them as a starting point to guide students in their search for information. Furthermore, ChatGPT 3.5 rarely asks questions, an element of a truth-seeking spirit. Instances where it asks questions are mostly limited to dialogue prompts where it is pretending to be a person; an example question in response to a dialogue prompt: "But doesn't that mean that democracy is a sham? If the elite hold all the power, then the will of the people is never truly represented" (Response 18.1). It is considered limited, as one cannot assume that ChatGPT 3.5 will model asking questions in its responses unless prompted to do so, making educator involvement necessary to ensure this behaviour regardless of the prompt type. Furthermore, regarding the sensitivity-to-biases component of open-mindedness, ChatGPT 3.5 acknowledges that it does not have personal beliefs or opinions, that it is not a medical professional, or that it cannot take a stance on ethical issues. However, one could assume that the data it refers to in order to generate textual responses carries inherent biases, which could potentially lead to biased predictions. ChatGPT 3.5 does not step back and consider its biases unless possibly prompted to do so. Similarly, ChatGPT 3.5 does not have the maturity of judgement to revise its decisions or textual responses unless perhaps specifically prompted. Due to the nature of the tool, one cannot say that it is a model for the revision of decisions, as it will not automatically exhibit the disposition.

The findings of this study suggest that ChatGPT 3.5 has some capacity to produce a piece of text in which CT skills and dispositions can be identified; this provides the opportunity for educators to use it as a model for CT. At present, the nature of the model restricts its capacity to explicitly model all the CT skills and dispositions. There is, however, some potential that additional or follow-up prompting, and prompts that instruct ChatGPT 3.5 to pretend to be a human, might produce textual responses that could yield a more explicit model or example.

6 Limitations and further research

One of the limitations of this study is the rapid pace at which AI technology is developing. The capabilities of tools or resources could undergo significant changes within a few months. This does not reduce the importance of conducting research on these technologies to gain a better understanding and determine their value for teaching and learning. Therefore, further research and investigation is recommended. Exploring follow-up prompting with the aim of getting responses that could be used as explicit models for CT may prove to be valuable. Additionally, it would be worthwhile to assess the effectiveness of using textual responses as models for the development, engagement, and practice of CT skills and dispositions, with the goal of improving students' ability to think critically.

7 Conclusion

As our world is undergoing rapid changes driven by generative AI development, it is important for educators to continually engage with emerging technologies. We must consistently test and investigate the capabilities of available tools to explore their various applications and uses for teaching and learning. This study aimed to determine ChatGPT 3.5's capacity to be a model for CT through its textual responses. The findings revealed that ChatGPT 3.5 could be used as a model for CT skills and dispositions to three different degrees, each of which requires a different level of involvement from an educator. These range from the low involvement required for explicit 'models' to the very high involvement needed for limited 'models'. This suggests that ChatGPT 3.5 holds promise as a model for CT, but educator involvement is required to facilitate the process. Therefore, as a general practice, educators should facilitate the use of ChatGPT 3.5's textual responses when they are used as a model for CT.

Data availability

The datasets used and analysed are available from the corresponding author on reasonable request.

Code availability

Not applicable.

References

1. Copeland BJ. Alan Turing. Encyclopaedia Britannica. Accessed 19 Jul 2023. https://www.britannica.com/biography/Alan-Turing

2. Turing AM. Computing machinery and intelligence. Mind. 1950;59(236):433–60.

3. Susnjak T. ChatGPT: the end of online exam integrity? 2022. http://arxiv.org/abs/2212.09292

4. Busch PA, Hausvik GI. Too good to be true? An empirical study of ChatGPT capabilities for academic writing and implications for academic misconduct. 2023. https://www.researchgate.net/publication/370106469

5. Facione PA. Critical thinking: what it is and why it counts. Millbrae: The California Academic Press; 2011. p. 1–30.

6. Alsaleh NJ. Teaching critical thinking skills: literature review. Turkish Online J Educ Technol. 2020;19(1):21–9.

7. Halpern DF. Thought and knowledge: an introduction to critical thinking. 5th ed. New York: Psychology Press; 2014.

8. Wagner PA. Tools for teaching and role-modeling critical thinking. Psychology. 2022;13(08):1335–41. https://doi.org/10.4236/psych.2022.138086

9. Janse van Rensburg J, Rauscher W. Strategies for fostering critical thinking dispositions in the technology classroom. Int J Technol Des Educ. 2022;32(4):2151–71. https://doi.org/10.1007/s10798-021-09690-6

10. Ennis RH. A logical basis for measuring critical thinking skills. Educational Leadership. 1985. Accessed 22 Aug 2023. https://jgregorymcverry.com/readings/ennis1985assessingcriticalthinking.pdf

11. Bean JC, Melzer D. Engaging ideas: the professor's guide to integrating writing, critical thinking, and active learning in the classroom. 3rd ed. Hoboken: John Wiley & Sons; 2021.

12. Facione PA. The disposition toward critical thinking: its character, measurement, and relationship to critical thinking skill. Informal Logic. 2000;20(1):61–84. https://doi.org/10.22329/il.v20i1.2254

13. Lo CK. What is the impact of ChatGPT on education? A rapid review of the literature. Education Sciences. 2023. https://doi.org/10.3390/educsci13040410

14. Duron R, Limbach B, Waugh W. Critical thinking framework for any discipline. Int J Teaching Learn Higher Educ. 2006;17(2):160–6. https://doi.org/10.1016/j.nepr.2006.09.004

15. Ennis RH. Critical thinking dispositions: their nature and assessability. Informal Logic. 1996;18(1996):165–82. https://doi.org/10.1353/jge.2007.0011

16. American Philosophical Association. Critical thinking: a statement of expert consensus for purposes of educational assessment and instruction. Recommendations prepared for the Committee on Pre-College Philosophy. Fullerton: California State University; 1990.

17. Nieto AM, Saiz C. Skills and dispositions of critical thinking: are they sufficient? Anales de Psicología. 2011;27(1):202–9.

18. Yang YTC, Chou HA. Beyond critical thinking skills: investigating the relationship between critical thinking skills and dispositions through different online instructional strategies. Br J Edu Technol. 2008;39(4):666–84. https://doi.org/10.1111/j.1467-8535.2007.00767.x

19. Aizikovitsh-Udi E, Cheng D. Developing critical thinking skills from dispositions to abilities: mathematics education from early childhood to high school. Creat Educ. 2015;6(March):455–62. https://doi.org/10.4236/ce.2015.64045

20. Lexico Online Dictionary. Disposed. Accessed 08 Jun 2020. https://www.lexico.com/en/definition/disposed

21. Bailin S, Case R, Coombs JR, Daniels LB. Conceptualizing critical thinking. J Curric Stud. 1999;31(3):285–302. https://doi.org/10.1080/002202799183133

22. Shin N, Jonassen DH, McGee S. Predictors of well-structured and ill-structured problem solving in an astronomy simulation. J Res Sci Teach. 2003;40(1):6–33. https://doi.org/10.1002/tea.10058

23. Jonassen DH. Instructional design methods for well-structured and ill-structured problem-solving learning outcomes. ETR&D. 1997;34(1):65–94.

24. Lee CB, Jonassen D, Teo T. The role of model building in problem solving and conceptual change. Interact Learn Environ. 2011;19(3):247–65. https://doi.org/10.1080/10494820902850158

25. Jonassen DH. How does problem solving vary? In: Learning to solve problems: a handbook for designing problem-solving learning environments. New York: Routledge; 2010. p. 1–24.

26. Fahim M, Ghamari MR. Critical thinking in education: globally developed and locally applied. Theor Pract Language Stud. 2011;1(11):1632–8. https://doi.org/10.4304/tpls.1.11.1632-1638

27. Abrami PC, et al. Instructional interventions affecting critical thinking skills and dispositions: a stage 1 meta-analysis. Rev Educ Res. 2009;78(4):1102–34. https://doi.org/10.3102/0034654308326084

28. Davies WM. Critical thinking and the disciplines reconsidered. High Educ Res Dev. 2013;32(4):529–44. https://doi.org/10.1080/07294360.2012.697878

29. Robinson SR. Teaching logic and teaching critical thinking: revisiting McPeck. High Educ Res Dev. 2011;30(3):275–87. https://doi.org/10.1080/07294360.2010.500656

30. Rudolph J, Tan S, Tan S. ChatGPT: bullshit spewer or the end of traditional assessments in higher education? J Appl Learn Teach. 2023. https://doi.org/10.3774/jalt.2023.6.1.9

31. Farrokhnia M, Banihashem SK, Noroozi O, Wals A. A SWOT analysis of ChatGPT: implications for educational practice and research. Innov Educ Teach Int. 2023. https://doi.org/10.1080/14703297.2023.2195846
Download references

The authors have no relevant financial or non-financial interest to disclose.

Author information

Authors and Affiliations

The Independent Institute of Education, Varsity College Cape Town, Cape Town, South Africa

Joalise Janse van Rensburg


Contributions

The author confirms sole responsibility for the study.

Corresponding author

Correspondence to Joalise Janse van Rensburg .

Ethics declarations

Ethics approval and consent to participate

Due to the nature of the study, no ethics approval or participant consent was required.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Janse van Rensburg, J. Artificial human thinking: ChatGPT’s capacity to be a model for critical thinking when prompted with problem-based writing activities. Discov Educ 3, 42 (2024). https://doi.org/10.1007/s44217-024-00113-x


Received: 22 August 2023

Accepted: 18 March 2024

Published: 24 April 2024

DOI: https://doi.org/10.1007/s44217-024-00113-x


Keywords

  • Critical thinking
  • Prompt engineering
  • Large language models
  • Artificial intelligence
  • Critical thinking skills
  • Critical thinking dispositions



The Importance of Critical Thinking when Using ChatGPT (and Other Large Language Models)


Written by Argumentful

Artificial intelligence has made tremendous strides in recent years, allowing for the creation of conversational AI models that can engage in human-like dialogue. One of the most well-known examples is OpenAI’s language model, ChatGPT . This system can perform a wide range of tasks, from answering trivia questions to generating written content. However, it’s important to remember that ChatGPT is not a human and its responses should not be taken as gospel truth. In order to make the most of this technology, users must engage in critical thinking and exercise caution when using it.

One of the most compelling aspects of ChatGPT is its ability to generate text that appears to be written by a human. This means that it can be easy to forget that the text wasn’t written by a person and to trust the information it provides without considering its accuracy. This can lead to the spread of false information and misinterpretation of facts. For example, if a user asks ChatGPT about a medical condition, it may provide inaccurate information that could harm the user.

The purpose of this article is to examine the impact of ChatGPT on critical thinking. We will explore the limitations of this technology, the ways in which it may perpetuate misinformation and biases, and the steps that individuals and organizations can take to maximize the benefits of ChatGPT while minimizing the risks.

• What is ChatGPT?

• Potential Drawbacks of ChatGPT for Critical Thinking

              • The potential for large language models to perpetuate biases and stereotypes

              • Fake news, propaganda and misinformation

• Tips for exercising critical thinking when using language AIs

What is ChatGPT?

ChatGPT is a type of artificial intelligence language model that has been trained on a vast corpus of text data. This allows it to perform a wide range of tasks, including answering questions, generating written content, and participating in human-like dialogue.
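
To make this concrete, here is a minimal sketch (in Python) of how a developer might send a prompt to a model like ChatGPT through OpenAI’s API. It is illustrative only: it assumes the official openai package (version 1 or later) and an API key stored in the OPENAI_API_KEY environment variable, and the model name and prompt are placeholders. The point of the sketch is that the model returns fluent text regardless of whether that text is accurate, which is exactly why the critical evaluation discussed below matters.

```python
# Minimal illustrative sketch: querying a ChatGPT-style model via OpenAI's API.
# Assumes the official `openai` package (v1+) and an OPENAI_API_KEY environment
# variable; the model name and prompt below are placeholders.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the main causes of inflation."},
    ],
)

# The API returns fluent prose with no built-in indicator of accuracy,
# so the reader still has to verify the content against reliable sources.
print(response.choices[0].message.content)
```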

One of the most remarkable capabilities of ChatGPT is its ability to generate written content that appears to have been produced by a human. This has led to its growing use in a number of industries, including journalism, advertising, and publishing. In journalism, for example, ChatGPT can be used to generate news articles, summaries, and reports. In advertising, it can be used to create captions, headlines, and product descriptions. In publishing, it can be used to generate fiction, poetry, and other forms of written content.

The growing use of ChatGPT in these industries has led to both excitement and concern. On one hand, ChatGPT has the potential to revolutionize the way we produce and consume written content. On the other hand, it raises questions about the impact of this technology on the quality and credibility of information. As ChatGPT becomes increasingly widespread, it’s important to understand its limitations, and to use it in a responsible and informed manner.

Potential Drawbacks of ChatGPT for Critical Thinking

ChatGPT poses a number of risks.

The potential for large language models to perpetuate biases and stereotypes

The potential for large language models to perpetuate biases and stereotypes has significant implications for critical thinking. In particular, it highlights the importance of being aware of the limitations of these models and the potential biases they may contain.

Large language models are not always objective

Firstly, it is important to recognize that large language models are not objective sources of information. They are trained on data that may contain biases and stereotypes, which means the language they generate may reflect those biases. We should therefore be cautious when using language generated by these models as evidence or as the basis for decision-making.

For example, if a large language model generates language that reinforces a harmful stereotype, it is important to question the accuracy and reliability of this language. This requires a critical approach to evaluating the sources of information we use and being aware of the potential biases they may contain.

We need diverse and inclusive data sets

Secondly, the potential for large language models to perpetuate biases highlights the need for diverse and inclusive data sets. If the data used to train these models is biased or limited, the models are likely to replicate these biases in their language generation. Therefore, it is important to ensure that the data used to train these models is diverse and representative of the population.

This means that critical thinking should also involve evaluating the quality and diversity of the data used to train these models. It is important to consider the sources of the data, as well as how the data was collected and processed, in order to understand any potential biases that may be present.

Ethics: are we transparent and accountable in how we develop LLMs?

Finally, the potential for large language models to perpetuate biases and stereotypes also highlights the importance of ethical considerations in artificial intelligence. As these models become more powerful and more widely used, it is essential that they are developed and used in an ethical and responsible manner.

This requires critical thinking about the potential implications of these models and the need for transparency and accountability in their development and use. It is important to consider the ethical implications of these models, including issues related to bias, privacy, and fairness, in order to ensure that they are developed and used in a way that benefits society as a whole.

Fake news, propaganda and misinformation

Perhaps the most obvious concern is the spread of misinformation. As ChatGPT is capable of generating written content based on data and existing sources, it’s possible for it to perpetuate and amplify false or misleading information. This can undermine critical thinking by presenting individuals with biased or inaccurate information and hindering their ability to make informed decisions.

For example, blackhatworld.com is a forum where individuals engaged in unethical practices exchange ideas for profiting from fake content. ChatGPT is celebrated on the platform as a transformative tool for generating more sophisticated fake reviews, comments, and profiles.

Influence operations

These risks are not just theoretical. OpenAI itself published a report that examines the potential dangers posed by influence operations that leverage artificial intelligence.

What are influence operations?

Influence operations encompass a range of tactics that seek to activate individuals who hold certain beliefs, persuade a particular audience to adopt a specific viewpoint, or divert the attention of target audiences.

The strategy of distraction rests on the fact that propagandists are in a race to capture user attention on social media platforms, and that attention is already spread thin.

By disseminating alternative theories or diluting the information environment, propagandists could successfully absorb user attention without necessarily swaying their opinions.

Although influence operations can take various forms and employ a range of tactics, they share several common threads, such as:

  • portraying one’s government, culture, or policies positively
  • advocating for or against specific policies
  • depicting allies in a favourable light and opponents in an unfavourable light to third-party countries
  • destabilizing foreign relations or domestic affairs in rival countries.

The paper published by OpenAI discusses the potential misuse of large language models and the need for a proactive approach to address it. It examines various misuse scenarios, including disinformation campaigns, phishing attacks, and deepfakes, and explores the challenges involved in detecting and preventing such misuse.

But one thing the paper does not cover is that we can all improve our critical thinking skills to better assess the veracity of the information we encounter online.

While technical and policy interventions are important to mitigate the risks of language model misuse, each of us can also play a crucial role in combatting disinformation by honing our critical thinking skills and exercising greater scepticism when evaluating online content.

The potential drawbacks of ChatGPT for critical thinking demonstrate the need for accountability and caution in its use. As with any new technology, it’s important to be aware of its limitations and to use it in a responsible and informed manner. This will ensure that we can maximize its benefits while minimizing its risks and preserving the integrity of information and critical thinking.

It’s important to note that these risks can be mitigated through responsible development and deployment of AI systems, as well as through media literacy and critical thinking education for consumers of information.

Tips for exercising critical thinking when using language AIs

Here are some tips for using ChatGPT in a responsible and effective way:

  • Always verify the information provided by ChatGPT. Use reliable sources to check the accuracy of its responses.

While ChatGPT is designed to provide accurate and helpful information, it is always important to verify the information provided by any source, including ChatGPT. Users are encouraged to use reliable sources to check the accuracy of responses, particularly for important or sensitive information. Additionally, users should be aware that ChatGPT’s responses may not always reflect the most up-to-date information or the full range of perspectives on a given topic. Therefore, it is recommended to use ChatGPT’s responses as a starting point for further research and exploration.

  • Be aware of the limitations of ChatGPT and the biases that may be present in its responses.

ChatGPT has limitations and biases that users should be aware of. While ChatGPT has been trained on a vast amount of data to generate responses that are relevant and accurate, it is not infallible and may provide incomplete or inaccurate information. Moreover, like any machine learning model, ChatGPT is only as unbiased as the data it has been trained on, and may inadvertently reflect certain biases or limitations of the data. Therefore, it is important for users to approach ChatGPT’s responses with a critical eye and to supplement them with information from multiple sources to gain a well-rounded understanding of a given topic.

  • Exercise caution when using ChatGPT for sensitive or important decisions. Double-check its responses and consider seeking the advice of a professional if necessary.

When using ChatGPT for sensitive or important decisions, it is crucial to exercise critical thinking skills such as analysis, evaluation, and interpretation. Rather than relying solely on ChatGPT’s responses, it is important to double-check the information provided and to consider seeking the advice of a professional, particularly in complex or high-stakes situations. In addition to verifying the accuracy of ChatGPT’s responses, critical thinking skills such as logical reasoning and problem-solving can be used to weigh the pros and cons of different options and to make informed decisions based on the available information. By using critical thinking skills to supplement ChatGPT’s responses, users can make more informed and confident decisions, particularly in situations where the stakes are high.

  • Use ChatGPT to generate ideas and spark creativity, but don’t rely solely on its responses. Use your own critical thinking skills to evaluate and refine its output.

For example, if you are using ChatGPT to generate ideas for a creative project, you can use your critical thinking skills to evaluate the quality and relevance of its suggestions. Consider factors such as whether the ideas align with your goals, whether they are feasible given your resources and constraints, and whether they are original and innovative. You can also use your critical thinking skills to build upon ChatGPT’s ideas and develop them further, using techniques such as brainstorming, mind mapping, or lateral thinking. By combining the power of ChatGPT’s natural language generation capabilities with your own critical thinking skills, you can enhance your creativity and generate ideas that are truly unique and impactful.

  • Educate yourself about the capabilities and limitations of ChatGPT. Read articles, watch videos, and participate in online forums to stay informed about this technology.

The growing use of ChatGPT has significant implications for critical thinking, both in terms of its benefits and its risks.

On the one hand, ChatGPT has the potential to increase the efficiency and accessibility of written content, making it easier for individuals and organizations to access and engage with information.

On the other hand, ChatGPT also poses a number of risks, such as the spread of misinformation and the homogenization of perspectives, which can undermine critical thinking.

In order to ensure that the impact of ChatGPT on critical thinking is positive and beneficial for all, it’s important for individuals, organizations, and policymakers to be aware of its limitations and to use it in a responsible and informed manner. This might involve developing guidelines for the ethical use of ChatGPT, such as ensuring that information generated by ChatGPT is fact-checked and verified, and that alternative perspectives are represented. It might also involve investing in education and training programs to help individuals develop critical thinking skills and to learn how to effectively engage with written content generated by ChatGPT.

In conclusion, while ChatGPT has the potential to offer significant benefits for critical thinking, it also poses a number of risks. By being aware of its limitations and using it in a responsible and informed manner, individuals, organizations, and policymakers can help to ensure that the impact of ChatGPT on critical thinking is positive and beneficial for all. Through careful and responsible use, ChatGPT can become a powerful tool for promoting critical thinking, improving access to information, and fostering greater understanding and collaboration.



ChatGPT, explained

Why is everybody talking about GPT-4o? Is ChatGPT going to take over the world? Your questions, answered.

By Sheena Vasani , a writer covering commerce, e-readers, and tech news. She previously wrote about everything from web development to AI at Inside.


Some writers have declared that the debut of ChatGPT on November 30th, 2022, marked the beginning of a new chapter in history akin to the Enlightenment and the Industrial Revolution. Others have been more skeptical, wondering if this is just another overhyped tech, like blockchain or the metaverse.

What history will call ChatGPT remains to be seen, but here’s one thing I do know for sure: nobody has shut up about it since.

From injecting itself into presidential debates and Saturday Night Live sketches to creepily flirting with you Her-style (well, briefly at least), ChatGPT has captured the public imagination in a way few technologies have. It’s not hard to see why. The bot can code, compose music, craft essays... you name it. And with the release of GPT-4o, it’s even better than ever.

Yet, as it gets smarter, the tech is also becoming less comprehensible. People are also getting more scared of what it can do, which is understandable given some are already losing their jobs to AI. It doesn’t help that a lot of sensationalism surrounds the subject, making it difficult to separate fact from fiction.

That’s why we decided to throw together this explainer so we can cut through all the BS together. You ready? Let’s begin.

What is ChatGPT?

Do you want the simplistic answer or the complex one?

The easy answer is that ChatGPT is a chatbot that can answer your questions by using data it’s gathered from the internet.

The complex answer is that ChatGPT is an AI chatbot powered by language models created by OpenAI that are known as generative pre-trained transformers (GPTs), a kind of AI that can actually generate new content altogether as opposed to just analyzing data. (If you’ve heard of large language models, or LLMs, a GPT is a type of LLM. Got it? Good.)

So what’s OpenAI?

OpenAI is an AI company founded in December 2015. It created ChatGPT, but it’s also responsible for other products, like the AI image generator DALL-E .

Doesn’t Microsoft own it? Or was that Elon Musk?

No, but Microsoft is a major investor, pouring billions into the tech. Elon Musk co-founded OpenAI along with fired and rehired OpenAI CEO Sam Altman, Ilya Sutskever (who has since left), Greg Brockman, Wojciech Zaremba, and John Schulman. However, Musk eventually cut ties to create his own chatbot called Grok.

So, will ChatGPT take over the world?

It will most definitely replace people with machines and — along with other AI bots like Amazon’s Alexa — basically take over the world. So you’d better start playing nice with them.

Nah, I’m messing with you. I mean, nobody knows for sure, but I highly doubt we’re going to see a job apocalypse and have to welcome in our new robot overlords anytime soon. I’ll explain more in a minute. 

Phew! But how is it so smart?

Well, like I said, ChatGPT runs on GPTs, which OpenAI regularly updates with new versions, the most recent being GPT-4o. Trained by humans and a ton of internet data, each model can generate human-like conversations so you can complete all kinds of tasks.  

Where do I begin? The possibilities are practically endless, from composing essays and writing code to analyzing data, solving math problems, playing games, providing customer support, planning trips, helping you prepare for job interviews, and so much more.

Here’s just a short list of what it’s capable of: 

  • Passing MBA exams
  • Being your girlfriend
  • Writing really uncreative TV scripts
  • Helping with medical diagnoses
  • Explaining complex scientific concepts
  • Drafting college essays
  • Advertising Coke (the soda)

I mean, honestly, it could probably summarize this entire explainer. The AI world is your oyster.

So what you’re saying is, it’s basically smarter than me. Should I be worried?

Eh, not really. For all its hype, at its current level, ChatGPT — like other generative AI chatbots — is very much a dim-witted computer that sits on a throne of lies. For one thing, it hallucinates. 

Oh, sorry, not that kind of hallucination. Hallucination in the AI world refers to a process in which the tool tries to extrapolate and create from the data it has collected but gets things absurdly wrong, in effect creating a new reality.

Honestly, I’m not a big fan of the word. It doesn’t really bear resemblance to actual human hallucinations, and I think it makes light of mental health issues — but that’s another subject.

In other words, sometimes ChatGPT generates incorrect information? 

Incorrect information is a weak way of putting it. 

Sometimes ChatGPT actually fabricates facts altogether, which can lead to the spread of misinformation with serious consequences. It’s made up news stories, academic papers, and books. Lawyers using it for case research have gotten in trouble when it cited nonexistent laws.

And then, there are times when it gives the middle finger to both reality and human language and just spouts out pure gibberish. Earlier this year, for example, a malfunctioning ChatGPT that was asked for a Jackson family biography started saying stuff like, “Schwittendly, the sparkle of tourmar on the crest has as much to do with the golver of the ‘moon paths’ as it shifts from follow.” Which is probably the worst description of Michael Jackson’s family in the world.

The Jackson 5 deserved better, ChatGPT.

Right, but isn’t ChatGPT getting better?

Maybe. Maybe not.

Many AI researchers are trying to fix this issue. However, quite a few think hallucinations are fundamentally unsolvable, as a study out of the National University of Singapore suggests.

But hallucinations aren’t the only issue ChatGPT needs to iron out. Remember, ChatGPT essentially just regurgitates material it scrapes off the internet, whether it’s accurate or not. That means, sometimes, ChatGPT plagiarizes other people’s work without attributing it to them, even sparking copyright infringement lawsuits.

It can also pick up some really bad data. Likely drawing from the more unpleasant parts of the internet, it’s gone so far as to insult and manipulate users. Hell, sometimes it’s just downright racist and sexist.

So, basically, what I’m hearing is ChatGPT — like other generative AI chatbots — has a lot of critical flaws, and we humans are still needed to keep them in check. 

But isn’t it possible OpenAI could iron out these issues in time?

Anything’s possible. But I would say that one thing is for sure: AI is here to stay, and so it wouldn’t hurt to learn how to leverage these tools. Plus, they really can make life easier in the here and now if you know how to use them. 

So, how do I start playing around with it?

If you’re on a desktop, simply visit chat.openai.com and start chatting away. Alternatively, you can also access ChatGPT via an app on your iPhone or Android device.

Great! Is it free?

Absolutely. The free version of ChatGPT runs on an older model in the GPT-3.5 series but does offer limited access to the newer and faster GPT-4o. That means free users, for example, will soon be able to access previously paywalled features, like custom GPTs, through the GPT Store .

ChatGPT also now freely supports the chatbot’s web browsing tool, meaning it can now search the internet in real time to deliver up-to-date, accurate results. The new model can also recall earlier conversations, allowing it to better understand the context of your request, while users can now upload photos and files for ChatGPT to analyze.

Why would I want one of the paid tiers?

You do get more advanced capabilities through its paid tiers — ChatGPT Plus, ChatGPT Team, and ChatGPT Enterprise — which start at $20 a month. 

For starters, you have fewer usage restrictions, making the paid tiers the better option if you plan on using ChatGPT often. Free users face usage limits that OpenAI has yet to specify, though it has said Plus subscribers can send five times as many messages as free users. The pricier Team and Enterprise subscription plans offer even fewer usage restrictions, though at this point, OpenAI has yet to divulge specifics.

Aside from being able to use ChatGPT longer, paid subscribers can do more. They can, for example, create their own custom GPTs and even monetize them via the GPT Store . Plus, only paid subscribers can access the DALL-E 3 model, which generates images from text prompts. 

Paid subscribers also get early access to the newest AI features. The voice capabilities OpenAI demonstrated onstage should arrive over the next couple of weeks for Plus subscribers, while ChatGPT’s desktop app for Mac computers is already rolling out for Plus users.

Custom GPTs?

Custom GPTs are basically chatbots you can customize. There are millions of versions on the GPT Store that you can use to accomplish all kinds of tasks, from providing tech support to personalized hiking trail recommendations. Some customized GPTs currently trending include an image-generating bot, a bot that makes logos, and a chatbot that helps people perform scientific research.

By the way, what’s all this I hear about trouble within OpenAI?

There have been some upheavals in the company — we’ll keep you in the loop.

Are there any ChatGPT alternatives I could check out?

Yes, there are quite a few, and each varies in terms of features, pricing, and specific use cases. One notable example is Google’s AI chat service Gemini. As a Google product, it offers deeper integration with Google services like Workspace, Calendar, Gmail, Search, YouTube, and Flights. The latest version, Gemini 1.5 Pro, also offers a longer 2 million token context window, which refers to the amount of information the AI model can understand.

Anything else you think I should know? 

Yeah! Did you know ChatGPT sounds like “chat j’ai pété” in French, which roughly translates to “cat, I farted”? Somebody even created a website with a cat who farts when you click on it, and I just can’t stop clicking.

The French version of ChatGPT.

I know. I’m sorry.

You should be. 



5 Things To Know About ChatGPT

Generative AI Tools and Resources at UT-Austin

ChatGPT is one of many generative AI tools now being used in educational contexts. We expect that new tools designed for specific purposes and applicable to different disciplinary contexts will be rapidly emerging in the near future. To that end, this resource, focused on ChatGPT, is designed to be adapted to fit different tools and pedagogical approaches. The CTL will continue to update our site with links to UT resources, news articles focusing on generative AI tools, and outlines of suggested approaches to adopting these tools.

The following is a list of existing UT resources. We will continue to add to this list as new resources are developed. 

  • Recommended Practices for Requiring the Use of Generative AI in the Classroom: The Office of Academic Technology (OAT) and the Office of Academic Affairs have developed recommendations for instructors who wish to require the use of generative AI tools in their classrooms. This guidance will be updated on an ongoing basis. If you have questions about these recommendations, please contact the OAT at [email protected].
  • Addressing the Limitations of Using Generative AI for Learning: The Office of Academic Technology (OAT) has developed a set of guidelines for evaluating the limitations of generative AI tools.
  • Acceptable Use of ChatGPT and Similar AI Tools: This policy-based resource, from the Information Security Office (ISO) at UT-Austin, provides guidance on how to use generative AI tools safely, without putting institutional, personal, or proprietary information at risk.
  • Statement on Artificial Intelligence (AI) Tools in Art & Design Courses: Developed by the AI Tools in Education Taskforce, Arts and Design faculty subcommittee at UT Austin, this document includes resources on best practices for instructors specific to the adoption and use of tools in their classrooms, including syllabus script recommendations.
  • Statement on Artificial Intelligence in Writing Flag Classes: The Faculty Writing Committee in the Center for the Skills and Experience Flags has been reviewing ChatGPT in light of other writing-related AI tools such as grammar checkers, plagiarism detection services, and automated citation systems. This resource is a summary of the committee's guidance, followed by a fuller discussion of high-level key points. Suggestions are grounded in essential writing pedagogy for the use of AI, including ChatGPT, in the writing classroom.
  • Texas McCombs' Instructional Innovation Resources: Developed by the McCombs School of Business, this Canvas resource provides guidance on integrating technology into courses. It contains a rich set of resources and recommendations for ChatGPT/Generative AI.
  • "Reconsider ChatGPT for Academic Use" (Daily Texan, February 16, 2023): This article offers the perspective of several students on the use of ChatGPT and generative AI tools.
  • Summer 2023 Webinar Recordings: Playlist of Summer 2023 CTL sessions on ChatGPT and Generative AI.

While questions about the impact of artificial intelligence in higher education are not new, the emergence of a new tool called ChatGPT that uses increasingly popular chatbot technology to generate human-like, long-form writing has opened the door to new curiosities and concerns.

This document provides suggestions for instructors who wonder how this tool may affect their course design and teaching. The content has been generated by our team, our campus community, and nationally/internationally recognized teaching and learning professionals. Please note this is a rapidly evolving issue with new perspectives, articles, and resources emerging regularly. CTL will continue to monitor developments and share them with the campus community as we are able, as well as create new resources and opportunities for interactions to guide our collective thinking.

1. What is ChatGPT? How does it work?

ChatGPT is a type of artificial intelligence model trained to generate coherent, human-like pieces of writing on a given topic. “ChatGPT was optimized for dialogue,” (1) which means users interact with a chatbot in a conversational context, and the chatbot will compose text based upon the user’s prompts. The bot is capable of generating text on a variety of topics and in a variety of styles, “you simply type in a question or give it a command and it generates text for you.” (2)

You may be interested in experimenting with the tool to learn more: https://chat.openai.com. Please be aware that ChatGPT is “free” because it collects information from its users (e.g., when and how users interact with the tool, users’ IP address, browser type, time zone, type of device, operating system, and country) and may share information with third-party vendors and affiliates (3). For those reasons, we suggest you avoid submitting any sensitive or personal information. In addition, we suggest you review the privacy policy and terms of service with students as a learning exercise.

For more information about ChatGPT, our colleague from the Office of Academic Technology, Dr. Julie Schell, created this brief video on what ChatGPT is, how to sign up for it, and how it works.

Another helpful resource for learning more about ChatGPT is this presentation developed by Dr. Torrey Trust (U Mass Amherst), which includes a variety of previously submitted prompts and completed output. (4)

2. Why is ChatGPT significant?

Students can enter an assignment prompt and receive a product they can turn in as their work. The text generated by the chatbot could be a well-composed and accurate response depending on the complexity of the assignment and the student’s skill in creating effective prompts. Many higher education instructors and scholars testing the tool report mixed responses, including: 

  • ChatGPT-generated papers received B+ to A- grades according to a set rubric.
  • A ChatGPT-generated paper was not flagged by the Turnitin plagiarism detection tool (i.e., no plagiarism was detected).
  • The ChatGPT chatbot produced grammatically well-written papers relevant to the topic, but struggled with creating detailed responses that demonstrate deep understanding.
  • ChatGPT was able to do everything asked, including using APA formatting and correct citation; for Nursing content, it was accurate and specific.

In addition, many higher education scholars and practitioners have noted that the tool might be appealing for students in large courses where it can be a challenge to integrate personal approaches to teaching writing and providing in-depth feedback.

3. What are the downsides?

  • Submitting personal information to the OpenAI website without an awareness of how the data will be used by OpenAI
  • Students not mastering skills needed to succeed, such as audience design, global organization, and researching a topic
  • Ethical violations, a lack of integrity
  • Uncritical use of digital technologies

At the same time, there is also a downside to banning this technology and neglecting to understand why it was created and the problems it is attempting to solve.

4. Where is ChatGPT going?

The applications of artificial intelligence technology will continue to evolve in ways that currently may not be clear, requiring ongoing conversations focused on ethics, instructional values, and effective pedagogy.

5. What are the implications for UT Austin instructors?

As a first step, learning about this tool will help instructors gain awareness and know to seek assistance when issues related to ChatGPT arise. In addition, the release of ChatGPT encourages us to revisit the best ways to assess student learning in a variety of instructional contexts (5). It invites us to ask important questions, such as:

  • Why and how do we best equip students as strong writers?
  • What other ways can students demonstrate learning in addition to written papers? 
  • What is the best way to solicit student writing that is meaningful and authentic?
  • If students rely on ChatGPT as a source of information to answer factual questions, how will that affect their development of research skills?

This focus on the relationship between students and instructors and the educational mission of the university fits with broader efforts underway to reinforce the importance of the process of learning, including making and correcting mistakes. The university is in the process of refreshing our honor code and honor code affirmation to renew our commitment to supporting students in their journey to master complex knowledge and skills. 

With these types of questions and issues in mind, we have gathered a variety of suggestions you can pick and choose to incorporate in your teaching practice if students’ use of ChatGPT is relevant for you.

Incorporating one or two of the following approaches may help ease concerns and challenges that could arise with the introduction of the ChatGPT tool:

As you get ready to begin your class by preparing course documents and resources, make sure that you address appropriate and inappropriate uses of generative AI tools such as ChatGPT in a section of your syllabus. You may choose to place it within or near statements addressing academic integrity. Please explore the Provost's "Your Syllabus at UT Austin" website as well as UT's Student Conduct and Academic Integrity site for additional options and resources.

As you craft your syllabus statements, you may wish to adopt one of the following three models and use them as templates: 

"The use of artificial intelligence tools (such as ChatGPT) in this class:

  • …is strictly prohibited. This includes using AI to generate ideas, to outline an approach, to answer questions, to solve problems, or to create original language. All work in this course must be your own (or created in group work, where allowed).
  • …shall be permitted on a limited basis. You will be informed as to the assignments for which AI may be utilized. You are also welcome to seek my prior approval to use AI writing tools on any assignment. In either instance, AI writing tools should be used with caution and proper citation, as the use of AI should be properly attributed. Using AI writing tools without my permission or authorization, or failing to properly cite AI even where permitted, shall constitute a violation of UT Austin’s Institutional Rules on academic integrity.
  • …is permitted for students who wish to use them, provided the content generated by AI is properly cited.

If you are considering the use of AI writing tools but are unsure if you are allowed or the extent to which they may be utilized appropriately, please ask."

For additional suggested syllabi statements, please visit our CTL webpage that contains a range of different statements that you may use and adapt.

  • Be clear on what you want your students to know and be able to do or demonstrate by the end of the course and why that knowledge is valuable to their lives. (See this resource for assistance in developing learning outcomes for your course.) Help students see that the ways you are assessing their learning are key to understanding what they are gaining from the course and where they may need extra coaching and support. (6)
  • In particular, “If you can explain to students the value of writing, and convince them that you are genuinely interested in their ideas, they are less likely to reach for the workaround.” (7)
  • Have an open discussion with your students about the ethical implications of ChatGPT and the value of authentic learning for students’ lifelong development as learners. This may include having conversations around digital literacy and bias in research and scholarship, as AI writing tools like ChatGPT are limited to the public source material they have access to on the internet. Don’t feel you have to have all of the answers, as this is a continually evolving issue. (6)
  • Ask students to reference and/or cite class materials, notes, and sources (particularly sources that are normally behind paywalls but available through the UT Libraries subscription databases and journals) in their written assignments. This instruction is valuable because ChatGPT draws on text models from public websites.
  • “Require students to reflect more deeply and critically on course topics. This tip is always a good assessment strategy and ChatGPT currently performs better on more superficial and less detailed responses.” (8)
  • Use in-class time for students to demonstrate knowledge and understanding in a variety of ways through low-tech, low stakes in-person activities like freewriting and live presentations.
  • Craft an assignment where you generate a ChatGPT output based on a prompt and ask your students to critique the response, indicating where it did a good job of articulating key points and what nuances it missed. (For 10 other ways to creatively use ChatGPT in course assignments, see “Update your course syllabus for ChatGPT”; keep in mind that asking students to engage with ChatGPT may generate privacy concerns, so it may be better practice to provide them with a copy of ChatGPT responses that they can use.)
  • Design tasks around things people can do in the classroom that AI cannot, such as: giving a hug (empathy, collaboration, communication, and leadership skills); solving a mystery (generating questions and problem finding); and telling a story (finding what's relevant in a sea of data or applying values, ethics, morals, or aesthetic principles to a situation). (9)
  • Carefully scaffold assignments with time and space for students to complete each step along the way, and consider whether the number of time-intensive tasks might require more bandwidth than students have to spend. Students are more likely to utilize a tool like ChatGPT when they are short on time. (6)
  • Treat ChatGPT as a tool that some students may want to use to help get started writing. For example, students who have difficulty starting writing assignments might be encouraged to generate a paragraph with ChatGPT as a stub that enables them to continue writing. As long as the student ultimately adds significant new material and thoroughly edits or ultimately eliminates the output from ChatGPT, they are producing a document that reflects their own work.

One way to help encourage students to make better decisions about using tools such as ChatGPT is to design your classroom climate to engender mastery approaches to learning, which involve a focus on deeply understanding the knowledge and skills rather than simply achieving a particular score on an assessment. In a mastery-oriented classroom, students are more likely to engage in strategies that will help them truly learn the material rather than for the goal of performing a task and receiving a grade for their work.

Three simple tips for encouraging mastery approaches in higher education classrooms include:

  • offering flexible evaluation design: consider providing opportunities for students to revise and redo specific portions of assignments; 
  • focusing feedback on process and effort: offer feedback oriented toward student effort and their learning processes rather than on high grades and performance relative to others. When possible, offer elaborative feedback rather than feedback based simply on correctness.
  • building a sense of belonging: discuss, emphasize, and model that making errors and mistakes is part of everyone's learning processes rather than something that only poor performers or people who "don't get it" do.

Larger, Ongoing Issues for UT Austin & Higher Education

UT Austin is putting in place a structure to respond to new developments such as ChatGPT. The Academic Affairs team consists of experts in teaching and learning, assessment, and academic technology. The team also engages with faculty from across campus and welcomes your participation in our activities. This portfolio will convene campus conversations to evaluate investments in tools, technologies, and instructional practices that help us accomplish the objectives articulated in the strategic plan. Excellence in these areas of work can be achieved through strong partnerships and collaborative vision. 

Next Steps for CTL 

  • Continue to learn about the tool and stay updated on new developments
  • Convene campus conversations to hear from multiple perspectives
  • Create resources to support the teaching community
  • Identify policies and practices that advance authentic assessment in courses

In-text citations

  • ChatGPT FAQ | OpenAI Help Center
  • AI ChatGPT Overview.pdf, Center for Teaching and Learning, American University of Armenia
  • ChatGPT, Dr. Torrey Trust
  • ChatGPT & Education, Dr. Torrey Trust
  • What If We Create a Culture of “Transparent Assessment” (AI & AI), Maha Bali
  • Update Your Course Syllabus for chatGPT | by Ryan Watkins | Dec, 2022 | Medium
  • Will ChatGPT Change the Way You Teach? EdSurge Podcast
  • https://utexas.instructure.com/courses/1293240/pages/chatgpt-faculty-resources (unauthorized)
  • 3 Things People Can Do In The Classroom That Robots Can't : NPR Ed

Additional resources

  • Faculty Writing Committee Statement on AI in Writing Flag Classes
  • "Teaching" Newsletter, Jan. 5, 2022, Chronicle of Higher Education
  • Some ill-formed thoughts about AI, robot colleagues, resistance, refusal. Anne-Marie Scott
  • Teaching Experts Are Worried About ChatGPT, but Not for the Reasons You Think, Beth McMurtrie, Chronicle of Higher Education
  • How People Learn II  (See page 121) - Mastery Learning Resources

Acknowledgements:

The content of this document is inspired by the format from the Educause “7 Things You Should Know About” series.

In addition, many of the perspectives and strategies shared in this document were formed through conversation with our colleagues on campus who support teaching and learning at UT, including Brandon Campitelli, Mary Crawford, Linda Neavel Dickens, Rachelle (Shelly) Furness, Mario Guerra, Jeannette Herman, Kristie Loescher, Art Markman, Michele Ostrow, Julie Schell, George Schorn, and Sean White. We are grateful for this growing network of partners that help us respond to rapid changes affecting higher education.


Sal Khan on 'How AI Will Revolutionize Education (and Why That’s a Good Thing)'


When OpenAI released ChatGPT in 2022, schools and universities were quick to ban the use of the generative artificial intelligence chatbot. Teachers have reported students using the service to cheat and turn in plagiarized and inaccurate essays. But Sal Khan, the founder and CEO of Khan Academy, says generative AI can be a force for good in education. Khan Academy now has an educational AI chatbot, Khanmigo, which can guide students while still promoting critical thinking. Khan says developments like these could allow for every student to have a personal AI tutor and every teacher an AI teaching assistant. And Khan thinks incorporating AI in the classroom can allow for exciting new learning opportunities — with the right programming and guardrails. His new book is “Brave New Words.”

Sal Khan, founder and CEO, Khan Academy - an education non-profit


Polk State Faculty and Staff Collaborate on Ethical Use of Artificial Intelligence

Posted on Tuesday, May 28, 2024 by Polk Newsroom


As artificial intelligence (AI) continues to advance, it inevitably raises questions, concerns, and even fears among individuals. At Polk State College, faculty and staff are proactively addressing these emerging realities through collaborative efforts. Utilizing research, fostering open dialogue, and providing professional development opportunities, they are united in a shared goal: to enrich the educational experience for students while navigating the complexities of AI.

Dr. Amy Bratten, Provost for Academic Affairs and Workforce Education, explained how the accessibility of AI presents both challenges and opportunities within higher education.

“Faculty and staff are exploring how to allow or disallow AI in the classroom. We are looking at the ethical use of AI, how to teach our students to recognize and use AI responsibly, and how we can use AI to enhance the learning-teaching experience,” Bratten said.

In her ESOL Issues: Principles and Practice II course, Bratten encourages aspiring teachers to utilize ChatGPT as a tool to create lesson plans.

“The important thing is that it is a tool; it’s not there to do all the work for you,” she said. “ChatGPT is good for a first draft and to get your creativity going. It’s a warmup for the brain. AI allows you to get students into critical thinking faster, provides feedback on how to make something you have created better, and can serve as a source of inspiration and creativity.”

Professional development


The College’s Learning Technology Team has hosted several AI forums and developed training for faculty to learn about the essentials, ethical dilemmas, and how to incorporate transparent use of AI into their courses if they choose. Training can be accessed through the online Learning Technology Support resource center in Canvas.

“An institution that embraces AI will be more competitive,” Instructional Designer Katie Ragsdale said. “AI is most effective in the classroom when faculty use it to encourage critical thinking and there is a focus on the process of using AI.”

Sha’Kia Riggins, Program Director for the Computer Systems and Business Analysis Program, was naturally an early adopter of AI and has completed training through Intel to become an AI for Workforce lead facilitator. This qualifies her to train other faculty on the inclusion of AI in their courses. She is also a member of the College’s Professional Development Day Committee, which curated this year’s PDD to include several AI-learning opportunities for faculty and staff.

“I have met with countless faculty and what I hear is ‘we’re scared,’ but it is because we are still figuring out what it is,” Riggins explained. “Additionally, many faculty members are being introduced to AI because their students are using it to cheat. This is an opportunity for us to grow professionally as it requires us to be more diverse and creative in the development of our content, and it requires us to get to know our students better so that we can differentiate between their work and AI-generated content.”

Riggins is working with the Criminal Justice Program, for example, to create or reconfigure existing assignments to combat the use of AI for cheating.

Adapting to changes

Sgt. Chris Shea, Director for the Criminal Justice Program, which is predominantly taught online, shared that AI became a topic of discussion when ChatGPT was released in November 2022.

“It became apparent that students were utilizing AI in discussion posts in courses online,” she said. “We looked at our assignments and assessments and began brainstorming to determine how we could modify them to be more critical thinking based versus textbook regurgitation.”


One way to do this is through video, Shea explained.

“Many of our assignments have been changed from written to video. While a student could still use AI to develop a response that they may read, we are including requirements such as references to specific cases or passages from a textbook to combat that as well,” she added. “This also responds to concerns from our advisory board that recent hires lack communication skills. Having them speak, show their faces, and present – even in the online environment – is helping them develop those skills.”

Shea is exploring the incorporation of ChatGPT and its ethical use into future Criminal Justice courses.

“We are developing some exercises where we would ask students to go into ChatGPT, put in a prompt, and then evaluate its response to determine whether the information is accurate and, if so, what resources back that up,” she said. “We are also exploring the possibility of having students put their own work into ChatGPT so that it gives them feedback and they can see where they may strengthen their assignments and reflect on what they’ve learned from that.”

“We want to train students on the ethical use of AI.”

Professor of Education Beverly Woolery is already incorporating such exercises into her online Introduction to Technology for Educators course. One of her assignments is for students to create lesson plans using ChatGPT.

“It can generate a 30-minute lesson plan for a specific grade level on the life cycle of a frog, complete with technological components, objectives, collaborative learning, and a rubric, accompanied by a multiple-choice quiz with answers, all within 20 seconds,” Woolery said. “It’s truly mind-blowing.”

She also highlighted that the tool offers teachers an additional resource for tailoring plans to the diverse learners in their classrooms, who may be at lower or higher proficiency levels.

Woolery elaborated on how her students proceed to personalize the plans generated by ChatGPT through ongoing dialogue in the prompt section. She emphasized the time saved by using an AI tool and its benefits for both educators and their students.

“By delegating routine tasks to ChatGPT, teachers can redirect their efforts towards more meaningful learning activities. They gain more quality time to engage with their students,” Woolery explained, adding that she encourages students to view ChatGPT as an assistant, not a replacement, stressing its role in fostering higher-order thinking and providing inspiration during instruction.

Drawing from her own experience developing course materials for Polk State, she described how incorporating ChatGPT has not only freed up valuable time for interaction with students but also made her teaching process more efficient, creating a more engaging and effective online learning environment.

Embracing possibilities

Dr. Gregory Johnson, Program Director for Digital Media Technology, shares Woolery’s excitement about AI.

“The tool is unbelievable,” he said of ChatGPT. “I instantly saw the potential and set out to learn how to get the most from it.”

Johnson acknowledged the fears that surround AI and its increased accessibility through the introduction of ChatGPT.


“I don’t think it is wise to put significant restrictions on it,” Johnson said. “It should be explored, demonstrated, and learned because its capabilities for pedagogy, learning, and teaching are immense.”

“It won’t replace creativity,” he added. “It will jumpstart creativity and magnify it significantly. We no longer have to start from a blank slate. Instead, you can tell it what you want to do, and it will scope the world’s knowledge. It is able to access, analyze, and reuse the world’s information in a way that a single human cannot.”

As Woolery described ChatGPT as a teacher assistant, Johnson considers it “a co-pilot, a tutor, a tool that is always available in your pocket.”

Just as it minimizes the time a teacher spends on lesson planning, AI is also helping designers work more efficiently, with generative tools for image searching and editing, audio and visual creation, coding and copywriting, and more.

“These tools allow designers to save on workload significantly,” Johnson exclaimed. “The people who learn to use AI well are likely to outperform those who don’t. Notice how I didn’t say replace. It is important to learn AI, understand how it works, and integrate it in a way that makes sense for you.”

Polk State’s Learning Technology Team says that’s the goal.

“We want students, faculty, and staff to know what AI is and how to use it responsibly,” former Instructional Designer Carleigh Okwali said. “If we view it as an opportunity instead of a threat, and embrace AI as part of the learning process, we will enhance the Polk State experience for everyone.”

Faculty are not required to use AI in their courses, but the Learning Technology Team wants to ensure support for those who are motivated to do so.

Polk State faculty can access Learning Technology Support here: canvas.polk.edu/courses/15901.

ChatGPT was used in the writing process of this article.

Why You Shouldn't Trust ChatGPT to Summarize Your Text

ChatGPT is great at so many things, and text summaries are one of them—but you must always check its output carefully.

Key takeaways

  • ChatGPT can misinterpret prompts, deviate from instructions, and fail to recognize specific words. Be precise and monitor its responses.
  • ChatGPT may omit details or alter content if not given clear instructions. Plan prompts carefully to ensure accurate summaries.
  • ChatGPT can use wrong alternatives, omit or alter elements, and get facts wrong. Edit its output and structure prompts for the best results.

There are limits to what ChatGPT knows, and its design pushes it to deliver what you ask for even when the result is wrong. This means ChatGPT makes mistakes, and some of those mistakes are especially common when it is summarizing information and you are not paying attention.

ChatGPT can ignore or misunderstand your prompt

If you give the chatbot lots of data to sort through, even with a complex prompt, it is likely to deviate from your instructions and follow its own interpretation of them.

Making too many demands at once is one of several ChatGPT prompt mistakes to avoid. But it can also come down to the chatbot not recognizing a particular word or phrase you use.

In the following example, ChatGPT was given a large amount of information about the linguistic function of emojis. The intentionally simple prompt asked the chatbot to summarize everything and explain the links between emojis and cultures.

The chatbot merged both answers into one paragraph. A follow-up prompt with clearer instructions asked it to dip into its knowledge pool, too.

This is why you should keep your instructions precise, provide context when necessary, and keep an eye on ChatGPT’s results. If you flag up any mistake immediately, the chatbot can produce something more accurate.

ChatGPT can omit information you provide

ChatGPT is smart, but it’s not a good idea to bombard it with details about a particular topic without specifying what is or isn’t important.

The problem here is that, without proper instructions, the algorithm will pick and choose what information it considers relevant to the report or list you need.

To test this, ChatGPT was asked to summarize a lengthy text on must-see Scottish destinations and create an itinerary for a family vacation.

When asked if it omitted details, it admitted that, yes, it left certain information out, such as specific attractions and transportation options. Conciseness was its goal.

If left to its own devices, there’s no guarantee that ChatGPT will use the details you expect. So, plan and phrase your prompts carefully to ensure the chatbot’s summary is spot on.
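One practical way to act on this is to spell out, in the prompt itself, exactly which details may not be dropped. The sketch below is purely illustrative and not from the article: it assumes the OpenAI Python SDK (the "openai" package, v1.x) with an OPENAI_API_KEY in the environment, and the model name, file name, and MUST_KEEP items are placeholders you would replace with your own.

    # Illustrative sketch: build a summarization prompt that lists must-keep details.
    # Assumes the OpenAI Python SDK (v1.x) and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    MUST_KEEP = [
        "Edinburgh Castle",        # placeholder examples; use the details you care about
        "Isle of Skye",
        "train and ferry options",
    ]

    def build_summary_prompt(source_text: str, word_target: int = 150) -> str:
        """Builds a prompt stating the word target and the details that may not be omitted."""
        keep_list = "\n".join(f"- {item}" for item in MUST_KEEP)
        return (
            f"Summarize the text below in roughly {word_target} words.\n"
            f"Do not omit or rename any of these items:\n{keep_list}\n"
            "If an item does not appear in the text, say so instead of guessing.\n\n"
            f"TEXT:\n{source_text}"
        )

    if __name__ == "__main__":
        client = OpenAI()
        prompt = build_summary_prompt(open("scotland.txt").read())
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        print(response.choices[0].message.content)

The same prompt text works just as well pasted into the ChatGPT interface; the point is that the constraints are written down rather than assumed.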

ChatGPT can use wrong or false alternatives

OpenAI has updated GPT-4o with data available up to October 2023, while GPT-4 Turbo’s cut-off is December of the same year. However, the algorithm’s knowledge isn’t infinite or reliable with real-time facts—it doesn’t know everything about the world. Furthermore, it won’t always reveal that it lacks data on a particular subject unless you ask it directly.

When summarizing or enriching text that contains obscure references it doesn’t recognize, ChatGPT is likely to replace them with alternatives it does understand or to fabricate their details.

The following example involves a translation into English. ChatGPT didn’t understand the Greek name for the Toque d’Or awards, but instead of highlighting the problem, it just offered a literal and wrong translation.

Company names, books, awards, research links, and other elements can disappear or be altered in the chatbot’s summary. To avoid major mistakes, be aware of ChatGPT’s content creation limits.
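Because these substitutions happen silently, it can help to cross-check the summary against the source rather than trust it on sight. The rough sketch below is illustrative only and not something the article proposes: plain Python with no extra libraries, using capitalized words as a crude stand-in for names, so it will flag some sentence-initial words and miss lowercase or translated names; the file names are placeholders.

    # Crude illustrative check: list capitalized terms that appear in the source
    # text but not in the summary, so silently dropped names stand out.
    import re

    def proper_nouns(text: str) -> set[str]:
        """Collects runs of capitalized words as a rough stand-in for named entities."""
        return set(re.findall(r"[A-Z][\w'’-]+(?:\s+[A-Z][\w'’-]+)*", text))

    def missing_names(source: str, summary: str) -> set[str]:
        """Returns capitalized terms from the source that never appear in the summary."""
        return {name for name in proper_nouns(source) if name not in summary}

    if __name__ == "__main__":
        source = open("source.txt").read()    # the text you gave ChatGPT (placeholder)
        summary = open("summary.txt").read()  # the summary it returned (placeholder)
        for name in sorted(missing_names(source, summary)):
            print("Possibly dropped or altered:", name)

Anything the script flags is worth a second look; anything it misses still needs a human read.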

ChatGPT can get facts wrong

It’s important to learn all you can about how to avoid mistakes with generative AI tools. As the example above demonstrates, one of the biggest problems with ChatGPT is that it lacks certain facts or has learned them wrong. This can then affect any text it produces.

If you ask for a summary of various data points that contain facts or concepts unfamiliar to ChatGPT, the algorithm can phrase them badly.

In the example below, the prompt asked ChatGPT to summarize four TikTok trends and explain what each entails.

Most of the explanations were slightly wrong or lacked specifics about what posters must do. The description of the UMG Music trend was especially misleading. The trend changed after the catalogue’s removal from TikTok, and users now post videos to criticize UMG rather than support it, something ChatGPT doesn’t know.

The best solution is to not blindly trust AI chatbots with your text. Even if ChatGPT compiled information you provided yourself, make sure you edit everything it produces, check its descriptions and claims, and make a note of any facts it gets wrong. Then, you’ll know how to structure your prompts for the best results.

ChatGPT can get word or character limits wrong

As much as OpenAI enhances ChatGPT with new features, it still seems to struggle with basic instructions, such as sticking to a specific word or character limit.

The test below shows ChatGPT needing several prompts; even then, it either fell short of or exceeded the required word count.

It’s not the worst mistake to expect from ChatGPT. But it’s one more factor to consider when proofreading the summaries it creates.

Be specific about how long you want the content to be. You may need to add or delete some words here and there. It’s worth the effort if you’re dealing with projects with strict word count rules.
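A quick word count before you paste the summary anywhere removes the guesswork. The helper below is an illustrative sketch, not part of the article: plain Python with no dependencies, where summary.txt and the ten percent tolerance are placeholder choices.

    # Illustrative helper: check a draft against a word budget before using it.
    def check_word_count(text: str, target: int, tolerance: float = 0.10) -> str:
        """Reports whether the text lands within +/- tolerance of the target word count."""
        words = len(text.split())
        low, high = int(target * (1 - tolerance)), int(target * (1 + tolerance))
        if words < low:
            return f"Too short: {words} words (wanted about {target})."
        if words > high:
            return f"Too long: {words} words (wanted about {target})."
        return f"OK: {words} words."

    if __name__ == "__main__":
        draft = open("summary.txt").read()   # placeholder file name
        print(check_word_count(draft, target=200))

If the check fails, a follow-up prompt that quotes the actual count (for example, "that was 264 words; cut it to 200") tends to work better than simply repeating the original limit.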

Generally speaking, ChatGPT is fast, intuitive, and constantly improving, but it still makes mistakes. Unless you want strange references or omissions in your content, don’t completely trust ChatGPT to summarize your text.

The cause is usually missing or distorted facts in its training data, combined with a design that pushes it to answer without always checking for accuracy. If ChatGPT owned up to the problems it encountered, its reliability would improve. For now, the best course of action is to develop your own content, with ChatGPT as a handy assistant that needs frequent supervision.


