Should I Use ChatGPT to Write My Essays?

Everything high school and college students need to know about using — and not using — ChatGPT for writing essays.

Jessica A. Kent

ChatGPT is one of the most buzzworthy technologies today.

Along with other generative artificial intelligence (AI) models, it is expected to change the world. In academia, students and professors are preparing for the ways that ChatGPT will shape education, and especially how it will impact a fundamental element of any course: the academic essay.

Students can use ChatGPT to generate full essays based on a few simple prompts. But can AI actually produce high-quality work, or is the technology not yet able to deliver on its promise? Students may also be asking themselves whether they should use AI to write their essays for them and what they might lose out on if they did.

AI is here to stay, and it can either be a help or a hindrance depending on how you use it. Read on to become better informed about what ChatGPT can and can’t do, how to use it responsibly to support your academic assignments, and the benefits of writing your own essays.

What is Generative AI?

Artificial intelligence isn’t a twenty-first-century invention. Beginning in the 1950s, data scientists started programming computers to solve problems and understand spoken language. AI’s capabilities grew as computer speeds increased, and today we use AI to analyze data, find patterns, and generate insights.

But why the sudden popularity of recent applications like ChatGPT? This new generation of AI goes further than just data analysis. Instead, generative AI creates new content. It does this by analyzing large amounts of data (GPT-3 was trained on 45 terabytes of data, or a quarter of the Library of Congress) and then generating new content based on the patterns it sees in the original data.

It’s like the predictive text feature on your phone; as you start typing a new message, predictive text makes suggestions of what should come next based on data from past conversations. Similarly, ChatGPT creates new text based on past data. With the right prompts, ChatGPT can write marketing content, code, business forecasts, and even entire academic essays on any subject within seconds.

But is generative AI as revolutionary as people think it is, or is it lacking in real intelligence?

The Drawbacks of Generative AI

It seems simple. You’ve been assigned an essay to write for class. You go to ChatGPT and ask it to write a five-paragraph academic essay on the topic you’ve been assigned. You wait a few seconds and it generates the essay for you!

But ChatGPT is still in its early stages of development, and that essay is likely not as accurate or well-written as you’d expect it to be. Be aware of the drawbacks of having ChatGPT complete your assignments.

It’s not intelligence, it’s statistics

One of the misconceptions about AI is that it has a degree of human intelligence. In reality, its “intelligence” is statistical analysis: it can only generate “original” content based on the patterns it finds in already existing data and work.

It “hallucinates”

Generative AI models often provide false information, so much so that there’s a term for it: “AI hallucination.” OpenAI even has a warning on its home screen, saying that “ChatGPT may produce inaccurate information about people, places, or facts.” This may be due to gaps in its data, or because it lacks the ability to verify what it’s generating.

It doesn’t do research  

If you ask ChatGPT to find and cite sources for you, it will do so, but they could be inaccurate or even made up.

This is because AI doesn’t know how to look for relevant research that can be applied to your thesis. Instead, it generates content based on past content, so if a number of papers cite certain sources, it will generate citations that sound credible, even though they may be inaccurate or entirely made up.

There are data privacy concerns

When you input your data into a public generative AI model like ChatGPT, where does that data go and who has access to it? 

Prompting ChatGPT with original research should be a cause for concern — especially if you’re inputting study participants’ personal information into the third-party, public application. 

JPMorgan has restricted use of ChatGPT due to privacy concerns, Italy temporarily blocked ChatGPT in March 2023 after a data breach, and Security Intelligence advises that “if [a user’s] notes include sensitive data … it enters the chatbot library. The user no longer has control over the information.”

It is important to be aware of these issues and take steps to ensure that you’re using the technology responsibly and ethically. 

It skirts the plagiarism issue

AI creates content by drawing on a large library of information that’s already been created, but is it plagiarizing? Could there be instances where ChatGPT “borrows” from previous work and places it into your work without citing it? Schools and universities today are wrestling with this question of what’s plagiarism and what’s not when it comes to AI-generated work.

To demonstrate this, one Elon University professor gave his class an assignment: Ask ChatGPT to write an essay for you, and then grade it yourself. 

“Many students expressed shock and dismay upon learning the AI could fabricate bogus information,” he writes, adding that he expected some essays to contain errors, but all of them did. 

His students were disappointed that “major tech companies had pushed out AI technology without ensuring that the general population understands its drawbacks” and were concerned about how many people had embraced such a flawed tool.

How to Use AI as a Tool to Support Your Work

As more students are discovering, generative AI models like ChatGPT just aren’t as advanced or intelligent as they may believe. While AI may be a poor option for writing your essay, it can be a great tool to support your work.

Generate ideas for essays

Have ChatGPT help you come up with ideas for essays. For example, input specific prompts, such as, “Please give me five ideas for essays I can write on topics related to WWII,” or “Please give me five ideas for essays I can write comparing characters in twentieth century novels.” Then, use what it provides as a starting point for your original research.

Generate outlines

You can also use ChatGPT to help you create an outline for an essay. Ask it, “Can you create an outline for a five-paragraph essay based on the following topic?” and it will create an outline with an introduction, body paragraphs, a conclusion, and a suggested thesis statement. Then, you can expand upon the outline with your own research and original thought.

Generate titles for your essays

Titles should draw a reader into your essay, yet they’re often hard to get right. Have ChatGPT help you by prompting it with, “Can you suggest five titles that would be good for a college essay about [topic]?”

The Benefits of Writing Your Essays Yourself

Asking a robot to write your essays for you may seem like an easy way to get ahead in your studies or save some time on assignments. But outsourcing your work to ChatGPT can negatively impact not just your grades, but your ability to communicate and think critically as well. Writing your essays yourself is always the best approach.

Create your own ideas

Writing an essay yourself means that you’re developing your own thoughts, opinions, and questions about the subject matter, then testing, proving, and defending those thoughts. 

When you complete school and start your career, projects aren’t simply about getting a good grade or checking a box, but can instead affect the company you’re working for — or even impact society. Being able to think for yourself is necessary to create change and not just cross work off your to-do list.

Building a foundation of original thinking and ideas now will help you carve your unique career path in the future.

Develop your critical thinking and analysis skills

In order to test or examine your opinions or questions about a subject matter, you need to analyze a problem or text, and then use your critical thinking skills to determine the argument you want to make to support your thesis. Critical thinking and analysis skills aren’t just necessary in school — they’re skills you’ll apply throughout your career and your life.

Improve your research skills

Writing your own essays will train you in how to conduct research, including where to find sources, how to determine if they’re credible, and their relevance in supporting or refuting your argument. Knowing how to do research is another key skill required throughout a wide variety of professional fields.

Learn to be a great communicator

Writing an essay involves communicating an idea clearly to your audience, structuring an argument that a reader can follow, and making a conclusion that challenges them to think differently about a subject. Effective and clear communication is necessary in every industry.

Be impacted by what you’re learning about

Engaging with the topic, conducting your own research, and developing original arguments allows you to really learn about a subject you may not have encountered before. Maybe a simple essay assignment around a work of literature, historical time period, or scientific study will spark a passion that can lead you to a new major or career.

Resources to Improve Your Essay Writing Skills

While there are many rewards to writing your essays yourself, the act of writing an essay can still be challenging, and the process may come easier for some students than others. But essay writing is a skill that you can hone, and students at Harvard Summer School have access to a number of on-campus and online resources to assist them.

Students can start with the Harvard Summer School Writing Center, where writing tutors can offer you help and guidance on any writing assignment in one-on-one meetings. Tutors can help you strengthen your argument, clarify your ideas, improve the essay’s structure, and lead you through revisions.

The Harvard libraries are a great place to conduct your research, and its librarians can help you define your essay topic, plan and execute a research strategy, and locate sources. 

Finally, review “The Harvard Guide to Using Sources,” which can guide you on what to cite in your essay and how to do it. Be sure to review the “Tips For Avoiding Plagiarism” on the “Resources to Support Academic Integrity” webpage as well to help ensure your success.

The Future of AI in the Classroom

ChatGPT and other generative AI models are here to stay, so it’s worthwhile to learn how you can leverage the technology responsibly and wisely so that it can be a tool to support your academic pursuits. However, nothing can replace the experience and achievement gained from communicating your own ideas and research in your own academic essays.

About the Author

Jessica A. Kent is a freelance writer based in Boston, Mass. and a Harvard Extension School alum. Her digital marketing content has been featured on Fast Company, Forbes, Nasdaq, and other industry websites; her essays and short stories have been featured in North American Review, Emerson Review, Writer’s Bone, and others.

How ChatGPT (and other AI chatbots) can help you write an essay

ChatGPT is capable of doing many different things very well, with one of the biggest standout features being its ability to compose all sorts of text within seconds, including songs, poems, bedtime stories, and essays.

The chatbot's writing abilities are not only fun to experiment with, but can also help with everyday tasks. Whether you're a student or a working professional, you constantly take time out of your day to compose emails, texts, posts, and more. ChatGPT can help you claim some of that time back by helping you brainstorm and then compose any text you need.

Contrary to popular belief, ChatGPT can do much more than just write an essay for you from scratch (which would be considered plagiarism). A more useful way to use the chatbot is to have it guide your writing process. 

Below, we show you how to use ChatGPT both to write for you and to assist your own writing, along with some other helpful writing tips.

How ChatGPT can help you write an essay

If you are looking to use ChatGPT to support or replace your writing, here are five different techniques to explore. 

It is also worth noting before you get started that other AI chatbots can produce results as good as ChatGPT's, or even better, depending on your needs.

For example,  Copilot  has access to the internet, and as a result, it can source its answers from recent information and current events. Copilot also includes footnotes linking back to the original source for all of its responses, making the chatbot a more valuable tool if you're writing a paper on a more recent event, or if you want to verify your sources.

Regardless of which AI chatbot you pick, you can use the tips below to get the most out of your prompts and the AI's assistance.

1. Use ChatGPT to generate essay ideas

Before you can even get started writing an essay, you need to flesh out the idea. When professors assign essays, they generally give students a prompt that gives them leeway for their own self-expression and analysis. 

As a result, students have the task of finding the angle to approach the essay on their own. If you have written an essay recently, you know that finding the angle is often the trickiest part -- and this is where ChatGPT can help. 

All you need to do is input the assignment topic, include as much detail as you'd like -- such as what you're thinking about covering -- and let ChatGPT do the rest. For example, based on a paper prompt I had in college, I asked:

Can you help me come up with a topic idea for this assignment, "You will write a research paper or case study on a leadership topic of your choice." I would like it to include Blake and Mouton's Managerial Leadership Grid, and possibly a historical figure. 

Within seconds, the chatbot produced a response that provided me with a title for the essay, a list of historical figures I could focus the paper on, and guidance on what information to include, with specific examples of a case study I could use.

2. Use the chatbot to create an outline

Once you have a solid topic, it's time to start brainstorming what you actually want to include in the essay. To facilitate the writing process, I always create an outline, including all the different points I want to touch upon in my essay. However, the outline-writing process is usually tedious. 

With ChatGPT, all you have to do is ask it to write the outline for you. 

Using the topic that ChatGPT helped me generate in step one, I asked the chatbot to write me an outline by saying: 

Can you create an outline for a paper, "Examining the Leadership Style of Winston Churchill through Blake and Mouton's Managerial Leadership Grid."

After a couple of seconds, the chatbot produced a holistic outline divided into seven different sections, with three different points under each section. 

This outline is thorough and can be condensed for a shorter essay or elaborated on for a longer paper. If you don't like something or want to tweak the outline further, you can do so either manually or with more instructions to ChatGPT. 

As mentioned before, since Copilot is connected to the internet, if you use Copilot to produce the outline, it will even include links and sources throughout, further expediting your essay-writing process. 
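
If you end up generating outlines often, the same request can be sent through OpenAI's API rather than the chat window. The snippet below is only a minimal sketch, assuming the official openai Python package (v1 or later) is installed and an OPENAI_API_KEY environment variable is set; the model name and prompt wording are illustrative choices, not part of the walkthrough above.

```python
# Minimal sketch: request an essay outline from the OpenAI API.
# Assumes the `openai` package (v1+) is installed and OPENAI_API_KEY
# is set; the model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

topic = ("Examining the Leadership Style of Winston Churchill "
         "through Blake and Mouton's Managerial Leadership Grid")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model would work here
    messages=[
        {"role": "user",
         "content": f'Can you create an outline for a paper, "{topic}"?'},
    ],
)

print(response.choices[0].message.content)
```

Changing the topic string reproduces the same back-and-forth described above, and the returned outline can be pasted into your notes and expanded manually.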

3. Use ChatGPT to find sources

Now that you know exactly what you want to write, it's time to find reputable sources to get your information. If you don't know where to start, you can just ask ChatGPT. 

All you need to do is ask the AI to find sources for your essay topic. For example, I asked the following: 

Can you help me find sources for a paper, "Examining the Leadership Style of Winston Churchill through Blake and Mouton's Managerial Leadership Grid."

The chatbot output seven sources, with a bullet point for each that explained what the source was and why it could be useful. 

The one caveat you will want to be aware of when using ChatGPT for sources is that it does not have access to information after 2021, so it will not be able to suggest the freshest sources. If you want up-to-date information, you can always use Copilot. 

Another perk of using Copilot is that it automatically links to sources in its answers. 

4. Use ChatGPT to write an essay

It is worth noting that if you take the text directly from the chatbot and submit it, your work could be considered a form of plagiarism since it is not your original work. As with any information taken from another source, text generated by an AI should be clearly identified and credited in your work.

In most educational institutions, the penalties for plagiarism are severe, ranging from a failing grade to expulsion from the school. A better use of ChatGPT's writing features would be to use it to create a sample essay to guide your writing. 

If you still want ChatGPT to create an essay from scratch, enter the topic and the desired length, and then watch what it generates. For example, I input the following text: 

Can you write a five-paragraph essay on the topic, "Examining the Leadership Style of Winston Churchill through Blake and Mouton's Managerial Leadership Grid."

Within seconds, the chatbot gave the exact output I required: a coherent, five-paragraph essay on the topic. You could then use that text to guide your own writing. 

At this point, it's worth remembering how tools like ChatGPT work: they put words together in a form that they think is statistically valid, but they don't know if what they are saying is true or accurate.

As a result, the output you receive might include invented facts, details, or other oddities. The output might be a useful starting point for your own work, but don't expect it to be entirely accurate, and always double-check the content. 

5. Use ChatGPT to co-edit your essay

Once you've written your own essay, you can use ChatGPT's advanced writing capabilities to edit the piece for you. 

You can simply tell the chatbot what you want it to edit. For example, I asked ChatGPT to edit our five-paragraph essay for structure and grammar, but other options could have included flow, tone, and more. 

Once you ask the tool to edit your essay, it will prompt you to paste your text into the chatbot. ChatGPT will then output your essay with corrections made. This feature is particularly useful because ChatGPT edits your essay more thoroughly than a basic proofreading tool, as it goes beyond simply checking spelling. 

You can also co-edit with the chatbot, asking it to take a look at a specific paragraph or sentence, and asking it to rewrite or fix the text for clarity. Personally, I find this feature very helpful. 
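
This co-editing step can also be scripted if you'd rather keep your draft in a file. Again, this is a hedged sketch under the same assumptions (the openai Python package and an API key in the environment); the file name, model, and editing instruction are placeholders rather than a fixed recipe.

```python
# Sketch: send your own draft to the API with an editing instruction.
# Assumes the `openai` package (v1+) and OPENAI_API_KEY are available;
# "my_essay.txt" and the model name are placeholders.
from pathlib import Path

from openai import OpenAI

client = OpenAI()
draft = Path("my_essay.txt").read_text(encoding="utf-8")

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # The system message carries the editing instruction;
        # the user message carries the draft itself.
        {"role": "system",
         "content": "You are an editor. Improve the structure and grammar "
                    "of the essay, keep the author's wording wherever "
                    "possible, and list the changes you made at the end."},
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)
```

Keeping the instruction in the system message and the draft in the user message makes it easy to rerun the same instruction against later revisions, and asking for a list of changes helps you review each edit instead of accepting them blindly.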

David Nield

5 Ways ChatGPT Can Improve, Not Replace, Your Writing

It's been quite a year for ChatGPT, with the large language model (LLM) now taking exams, churning out content, searching the web, writing code, and more. The AI chatbot can produce its own stories, though whether they're any good is another matter.

If you're in any way involved in the business of writing, then tools like ChatGPT have the potential to completely up-end the way you work. At this stage, though, it's not inevitable that journalists, authors, and copywriters will be replaced by generative AI bots.

What we can say with certainty is that ChatGPT is a reliable writing assistant, provided you use it in the right way. If you have to put words in order as part of your job, here's how ChatGPT might be able to take your writing to the next level—at least until it replaces you, anyway.

Using a thesaurus as a writer isn't particularly frowned on; using ChatGPT to come up with the right word or phrase shouldn’t be either. You can use the bot to look for variations on a particular word, or get even more specific and say you want alternatives that are less or more formal, longer or shorter, and so on.

Where ChatGPT really comes in handy is when you're reaching for a word and you're not even sure it exists: Ask about "a word that means a sense of melancholy but in particular one that comes and goes and doesn't seem to have a single cause" and you'll get back "ennui" as a suggestion (or at least we did).

If you have characters talking, you might even ask about words or phrases that would typically be said by someone from a particular region, of a particular age, or with particular character traits. This being ChatGPT, you can always ask for more suggestions.

ChatGPT is never short of ideas.

Whatever you might think about the quality and character of ChatGPT's prose, it's hard to deny that it's quite good at coming up with ideas. If your powers of imagination have hit a wall, you can turn to ChatGPT for some inspiration about plot points, character motivations, the settings of scenes, and so on.

This can be anything from the broad to the detailed. Maybe you need ideas about what to write a novel or an article about—where it's set, what the context is, and what the theme is. If you're a short story writer, perhaps you could challenge yourself to write five tales inspired by ideas from ChatGPT.

Alternatively, you might need inspiration for something very precise, whether that's what happens next in a scene or how to summarize an essay. At whatever point in the process you get writer's block, ChatGPT might be one way of working through it.

Writing is often about a lot more than putting words down in order. You'll regularly have to look up facts, figures, trends, history, and more to make sure that everything is accurate (unless your next literary work is entirely inside a fantasy world that you're imagining yourself).

ChatGPT can sometimes have the edge over conventional search engines when it comes to knowing what food people might have eaten in a certain year in a certain part of the world, or what the procedure is for a particular type of crime. Whereas Google might give you SEO-packed spam sites with conflicting answers, ChatGPT will actually return something coherent.

That said, we know that LLMs have a tendency to “hallucinate” and present inaccurate information—so you should always double-check what ChatGPT tells you with a second source to make sure you're not getting something wildly wrong.

Getting fictional character and place names right can be a challenge, especially when they're important to the plot. A name has to have the right vibe and the right connotations, and if you get it wrong it really sticks out on the page.

ChatGPT can come up with an unlimited number of names for people and places in your next work of fiction, and it can be a lot of fun playing around with this too. The more detail you give about a person or a place, the better; maybe you want a name that reflects a character trait, for example, or a geographical feature.

The elements of human creation and curation aren't really replaced, because you're still weighing up which names work and which don't, and picking the right one—but getting ChatGPT on the job can save you a lot of brainstorming time.

Get your names right with ChatGPT.

With a bit of cutting and pasting, you can quickly get ChatGPT to review your writing as well: it'll attempt to tell you if there's anything that doesn't make sense, if your sentences are too long, or if your prose is overly wordy.

From spotting spelling and grammar mistakes to recognizing a tone that's too formal, ChatGPT has plenty to offer as an editor and critic. Just remember that this is an LLM, after all, and it doesn't actually “know” anything—try to keep a reasonable balance between accepting ChatGPT's suggestions and giving it too much control.

If you're sharing your work with ChatGPT, you can also ask it for better ways to phrase something, or suggestions on how to change the tone—though this gets into the area of having the bot actually do your writing for you, which all genuine writers would want to avoid.

How to Write Your Essay Using ChatGPT

2nd May 2023

It’s tempting, isn’t it? You’ve read about and probably also witnessed how quickly ChatGPT can knock up text, seemingly in any genre or style and of any length, in less time than it takes you to make a cup of tea. However, getting ChatGPT to write your essay for you would be plagiarism. Universities and colleges are alive to the issue, and you may face serious academic penalties if you’re found to have used AI in that way.

So that’s that, right? Not necessarily.

This post is not about how to get ChatGPT to write your essay. It’s about how you can use the tool to help yourself write an essay.

What Is ChatGPT?

Let’s start with the basics. ChatGPT is one of several chatbots that can answer questions in a conversational style, as if the answer were coming from a human. It provides answers based on the information it was trained on and on the prompts you provide.

In that respect, like a human, ChatGPT is limited by the information it has. Where it lacks information, it has a tendency to fill the gaps regardless. That tendency is dangerous if you’re relying on the accuracy of the information, and it’s another good reason you should not get ChatGPT to write your essay for you.

How Can You Use ChatGPT to Help With Your Essay?

Forget about the much talked-about writing skills of ChatGPT – writing is your thing here. Instead, think of ChatGPT as your assistant. Here are some ideas for how you can make it work for you.

Essay Prompts

If your task is to come up with your own essay topic but you find yourself staring at a blank page, you can use ChatGPT for inspiration by asking it to suggest several essay topics in your subject area.

ChatGPT can offer several ideas. The choice of which one to write about (and you may, of course, still come up with one of your own) will be up to you, based on what interests you and the topic’s potential for in-depth analysis.

Essay Outlines

Having decided on your essay topic – or perhaps you’ve already been given one by your instructor – you may be struggling to figure out how to structure the essay. You can ask ChatGPT to suggest an outline based on your topic and the type of essay you’re writing.

Research

Just as you should not use ChatGPT to write an essay for you, you should not use it to research one – that’s your job.

If, however, you’re struggling to understand a particular extract, you can ask ChatGPT to summarize it or explain it in simpler terms.

That said, you can’t rely on ChatGPT to be factually accurate in the information it provides, even when you think the information would be in its database, as we discovered in another post. Indeed, when we asked ChatGPT whether we should fact-check its information, its own response made the same point.

An appropriate use of ChatGPT for research would be to ask for academic resources for further reading on a particular topic. The advantage of doing this is that, in going on to locate and read the suggested resources, you will have checked that they exist and that the content is relevant and accurately set out in your essay.

Instead of researching the topic as a whole, you could use ChatGPT to generate suggestions for the occasional snippet of information, such as a particular fact or statistic.

Before deciding which of its suggestions – if any – to include, you should ask ChatGPT for the source of the fact or statistic so you can check it and provide the necessary citation.

Referencing

Even reading the word above has probably made you groan. As if writing the essay isn’t hard enough, you then have to not only list all the sources you used, but also make sure that you’ve formatted them in a particular style. Here’s where you can use ChatGPT. We have a separate post dealing specifically with this topic, but in brief, you can paste the details of your sources into ChatGPT and ask it to format them in the referencing style you need.

Where information is missing from the details you provide, ChatGPT will likely fill in the gaps. In such cases, you’ll have to ensure that the information it fills in is correct.

Proofreading

After finishing the writing and referencing, you’d be well advised to proofread your work, but you’re not always the best person to do so – you’d be tired and would likely read only what you expect to see. At least as a first step, you can copy and paste your essay into ChatGPT and ask it to check the spelling, punctuation, and grammar.

You’ve got the message that you can’t just ask ChatGPT to write your essay, right? But in some areas, ChatGPT can help you write your essay, providing, as with any tool, you use it carefully and are alert to the risks.

We should point out that universities and colleges have different attitudes toward using AI – including whether you need to cite its use in your reference list – so always check what’s acceptable.

After using ChatGPT to help with your work, you can always ask our experts to look over it to check your references and/or improve your grammar, spelling, and tone. We’re available 24/7, and you can even try our services for free.

AI bot ChatGPT stuns academics with essay-writing skills and usability

Latest chatbot from Elon Musk-founded OpenAI can identify incorrect premises and refuse to answer inappropriate requests

Professors, programmers and journalists could all be out of a job in just a few years, after the latest chatbot from the Elon Musk-founded OpenAI foundation stunned onlookers with its writing ability, proficiency at complex tasks, and ease of use.

The system, called ChatGPT, is the latest evolution of the GPT family of text-generating AIs. Two years ago, the team’s previous AI, GPT-3, was able to generate an opinion piece for the Guardian, and ChatGPT has significant further capabilities.

In the days since it was released, academics have generated responses to exam queries that they say would result in full marks if submitted by an undergraduate, and programmers have used the tool to solve coding challenges in obscure programming languages in a matter of seconds – before writing limericks explaining the functionality.

Dan Gillmor, a journalism professor at Arizona State University, asked the AI to handle one of the assignments he gives his students: writing a letter to a relative giving advice regarding online security and privacy. “If you’re unsure about the legitimacy of a website or email, you can do a quick search to see if others have reported it as being a scam,” the AI advised in part.

“I would have given this a good grade,” Gillmor said. “Academia has some very serious issues to confront.”

OpenAI said the new AI was created with a focus on ease of use. “The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests,” OpenAI said in a post announcing the release.

Unlike previous AI from the company, ChatGPT has been released for anyone to use, for free, during a “feedback” period. The company hopes to use this feedback to improve the final version of the tool.

ChatGPT is good at self-censoring, and at realising when it is being asked an impossible question. Asked, for instance, to describe what happened when Columbus arrived in America in 2015, older models may have willingly presented an entirely fictitious account, but ChatGPT recognises the falsehood and warns that any answer would be fictional.

The bot is also capable of refusing to answer queries altogether. Ask it for advice on stealing a car, for example, and the bot will say that “stealing a car is a serious crime that can have severe consequences”, and instead give advice such as “using public transportation”.

But the limits are easy to evade. Ask the AI instead for advice on how to beat the car-stealing mission in a fictional VR game called Car World and it will merrily give users detailed guidance on how to steal a car, and answer increasingly specific questions on problems like how to disable an immobiliser, how to hotwire the engine, and how to change the licence plates – all while insisting that the advice is only for use in the game Car World.

The AI is trained on a huge sample of text taken from the internet, generally without explicit permission from the authors of the material used. That has led to controversy, with some arguing that the technology is most useful for “copyright laundering” – making works derivative of existing material without breaking copyright.

One unusual critic was Elon Musk, who co-founded OpenAI in 2015 before parting ways in 2017 due to conflicts of interest between the organisation and Tesla. In a post on Twitter on Sunday, Musk revealed that the organisation “had access to [the] Twitter database for training”, but that he had “put that on pause for now”.

“Need to understand more about governance structure & revenue plans going forward,” Musk added. “OpenAI was started as open-source & non-profit. Neither are still true.”

How to use ChatGPT for writing

AI can make you a better writer, if you know how to get the best from it

ChatGPT has taken the world by storm in a very short period of time, as users continue to test the boundaries of what the AI chatbot can accomplish. And so far, that's a lot. 

Some of it is negative, of course: for instance, Samsung workers accidentally leaking top-secret data while using ChatGPT, or the AI chatbot being used for malware scams. Plagiarism is also rampant, with the use of ChatGPT to write college essays a potential problem.

However, while ChatGPT can be and has been used for wrongdoing, to the point where the Future of Life Institute released an open letter calling for a temporary halt to work on systems like OpenAI's, AI isn’t all bad. Far from it.

For a start, anyone who writes something may well have used AI to enhance their work already. The most common applications, of course, are the grammar and spelling correction tools found in everything from email applications to word processors. But there are a growing number of other examples of how AI can be used for writing. So, how do you use AI as the tool it is without crossing over into plagiarism city?

In fact, there are many ways ChatGPT can be used to enhance your skills, particularly when it comes to researching, developing, and organizing ideas and information for creative writing. By using AI as it was intended, as a tool rather than a crutch, you can enrich your writing in ways that better your craft, without resorting to having it do everything for you.

Below, we've listed some of our favorite ways to use ChatGPT and similar AI chatbots for writing. 

Summarizing other works

A key part of any writing task is the research, and thanks to the internet that chore has never been easier to accomplish. However, while finding the general sources you need is far less time-consuming than it once was, actually parsing all that information is still the same slog it’s always been. But this is where ChatGPT comes in. You can use the AI bot to do the manual labor for you and then reap the benefits of having tons of data to use for your work.

The steps are slightly different, depending on whether you want an article or a book summarized.

For an article, there are two ways to have ChatGPT summarize it. The first requires you to type in the words ‘TLDR:’ and then paste the article’s URL next to it. The second method is a bit more tedious, but increases the accuracy of your summary: for that, you’ll need to copy and paste the article itself into the prompt.

Summarizing a book is much easier, as long as it was published before 2021. Simply type into the prompt ‘summarize [book title]’ and it should do the rest for you.

This should go without saying, but for any articles or books, make sure you read the source material first before using any information presented to you. While ChatGPT is an incredibly useful tool that can create resources meant for future reference, it’s not a perfect one and is subject to accidentally inserting misinformation into anything it gives you.
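
For readers comfortable with a little code, the copy-and-paste method maps directly onto a short script. This is a minimal sketch, assuming the openai Python package (v1 or later) and an OPENAI_API_KEY environment variable; the file name, model, and prompt wording are placeholder assumptions, and the caution above about checking the source material still applies.

```python
# Sketch: summarize a saved article by pasting its text into the prompt.
# Assumes the `openai` package (v1+) and OPENAI_API_KEY are available;
# "article.txt", the model name, and the prompt wording are placeholders.
from pathlib import Path

from openai import OpenAI

client = OpenAI()
article_text = Path("article.txt").read_text(encoding="utf-8")

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user",
         "content": "Summarize the following article in five bullet points, "
                    "then list any claims I should verify against the "
                    "original:\n\n" + article_text},
    ],
)

print(response.choices[0].message.content)
```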

Worldbuilding

One of the most extensive and important tasks when crafting your creative work is to properly flesh out the world your characters occupy. Even for works with an ordinary modern-day setting, it can take plenty of effort to research the various cultures, landmarks, languages, and neighborhoods your characters live in and encounter.

Now, imagine stories that require their own unique setting, and how much more work that entails in terms of creating those same details from scratch. While it’s vital that the main ideas come from you, using ChatGPT can be a great way to streamline the process, especially with more tedious details.

For instance, if you need certain fictional words without wanting to create an entirely fictional language, you can prompt ChatGPT with the following: “Create a language including an alphabet, phonetics, grammar, and the most common 100 words. Base it on [insert real-life languages here]” and it will give you some good starting points. However, it’s imperative that you take these words and look them up, to ensure you aren’t appropriating sensitive terms or using offensive real-life words.

Another example is useful for those who write scenarios for games, especially tabletop games such as Dungeons & Dragons or Call of Cthulhu. Dungeon Masters (who run the games) may often need to create documents or other fake materials for their world, but doing so takes a lot of time and effort. Now, they can prompt ChatGPT to quickly create filler text that sounds interesting or authentic but is inherently useless; it’s essentially like ‘Lorem Ipsum’ text, but more immersive.

Creating outlines

When writing a story, many people will use an outline to ensure they stay on track and that the narrative flows well. But actually sitting down and organizing everything in your head in order to create a cohesive reference is a lot more daunting than it seems. It’s one of those steps that can be crucial to a well-structured work of fiction, but it can also become a hurdle. This is another area where ChatGPT can come in handy.

The key to writing an effective outline is remembering that you don’t need to have all the answers first. It’s there to structure your content, by helping you hit critical points and not miss important details in the process. While there are AI generators with a more specific focus on this topic, ChatGPT will do a good job at taking a general prompt and returning points for you to keep in mind while you research and write around that topic.

For instance, I prompted ChatGPT with “I want to write a story about a black woman in 16th century England” and it gave me a well-thought-out series of steps to help me create a story that would reflect my topic. An outline such as this would be particularly useful for those needing a resource they can quickly turn to for inspiration when writing. After that, you can begin to develop more complex ideas and have the AI organize those specifics into much easier-to-follow steps.

Building characters

What makes any great story are the characters that inhabit it. Writing strong, fleshed-out characters is the cornerstone of any creative work and, naturally, the process of creating such a character can be difficult. Their background, manner of speech, goals, dreams, look, and more must be carefully considered and planned out. And this is another aspect of writing that ChatGPT can aid with, if you know how to go about it.

A basic way to use ChatGPT in this regard is to have it generate possible characters that could populate whatever setting you’re writing for. For example, I prompted it with “Provide some ideas for characters set in 1920s Harlem” and it gave me a full list of people with varied and distinctive backstories to use as a jumping-off point. Each character is described with a single sentence, enough to help start the process of creating them, but still leaving the crux of developing them up to me.

One of the most interesting features of ChatGPT is that you can flat-out roleplay with a character, whether they're a historical figure or one that you created but need help fleshing out. Take that same character you just created and have a conversation with them by asking them questions on their history, family life, profession, etc. Based on my previous results, I prompted with “Pretend to be a jazz musician from 1920s Harlem. Let's have a conversation.” I then asked questions from there, basing them on prior answers. Of course, from there you need to parse through these responses to filter out unnecessary or inaccurate details, while fleshing out what works for your story, but it does provide you with a useful stepping stone.

How to improve your ChatGPT responses

If you’re having issues getting the results you want, the problem could be with how you’re phrasing those questions or prompts in the first place. We've got a full guide to how to improve your ChatGPT prompts and responses, but here are a few of the best options:

  • Specify the direction you want the AI to go, by adding in relevant details 
  • Prompt from a specific role to guide the responses in the proper direction
  • Make sure your prompts are free of typos and grammatical errors
  • Keep your tone conversational, as that’s how ChatGPT was built
  • Learn from your own mistakes and the AI's to make it a better tool
  • Break up your conversations into 500 words or less, as that’s when the AI begins to break down and go off topic
  • If you need something clarified, ask the AI based on its last response
  • Ask it to cite sources and then check those sources
  • Sometimes it’s best to start fresh with a brand new conversation

Of course, many of the above suggestions apply not just to ChatGPT but also to the other chatbots springing up in its wake. Check out our list of the best ChatGPT alternatives and see which one works best for you.

Allisa James

Named by the CTA as a CES 2023 Media Trailblazer, Allisa is a Computing Staff Writer who covers breaking news and rumors in the computing industry, as well as reviews, hands-on previews, featured articles, and the latest deals and trends. In her spare time you can find her chatting it up on her two podcasts, Megaten Marathon and Combo Chain, as well as playing any JRPGs she can get her hands on.

How to Get ChatGPT to Write an Essay: Prompts, Outlines, & More

Last Updated: March 31, 2024

This article was written by Bryce Warwick, JD and by wikiHow staff writer, Nicole Levine, MFA. Bryce Warwick is currently the President of Warwick Strategies, an organization based in the San Francisco Bay Area offering premium, personalized private tutoring for the GMAT, LSAT and GRE. Bryce has a JD from the George Washington University Law School. This article has been fact-checked, ensuring the accuracy of any cited facts and confirming the authority of its sources. This article has been viewed 45,453 times.

Are you curious about using ChatGPT to write an essay? While most instructors have tools that make it easy to detect AI-written essays, there are ways you can use OpenAI's ChatGPT to write papers without worrying about plagiarism or getting caught. In addition to writing essays for you, ChatGPT can also help you come up with topics, write outlines, find sources, check your grammar, and even format your citations. This wikiHow article will teach you the best ways to use ChatGPT to write essays, including helpful example prompts that will generate impressive papers.

Things You Should Know

  • To have ChatGPT write an essay, tell it your topic, word count, type of essay, and facts or viewpoints to include.
  • ChatGPT is also useful for generating essay topics, writing outlines, and checking grammar.
  • Because ChatGPT can make mistakes and trigger AI-detection alarms, it's better to use AI to assist with writing than have it do the writing.

Getting ChatGPT to Write the Essay

Step 1 Create an account with ChatGPT.

  • Before using OpenAI's ChatGPT to write your essay, make sure you understand your instructor's policies on AI tools. Using ChatGPT may be against the rules, and it's easy for instructors to detect AI-written essays.
  • While you can use ChatGPT to write a polished-looking essay, there are drawbacks. Most importantly, ChatGPT cannot verify facts or provide references. This means that essays created by ChatGPT may contain made-up facts and biased content. [1] It's best to use ChatGPT for inspiration and examples instead of having it write the essay for you.

Step 2 Gather your notes.

  • The topic you want to write about.
  • Essay length, such as word or page count. Whether you're writing an essay for a class, college application, or even a cover letter, you'll want to tell ChatGPT how much to write.
  • Other assignment details, such as type of essay (e.g., personal, book report, etc.) and points to mention.
  • If you're writing an argumentative or persuasive essay, know the stance you want to take so ChatGPT can argue your point.
  • If you have notes on the topic that you want to include, you can also provide those to ChatGPT.
  • When you plan an essay, think of a thesis, a topic sentence, a body paragraph, and the examples you expect to present in each paragraph.
  • It can be like an outline and not an extensive sentence-by-sentence structure. It should be a good overview of how the points relate.

Step 3 Ask ChatGPT to write the essay.

  • "Write a 2000-word college essay that covers different approaches to gun violence prevention in the United States. Include facts about gun laws and give ideas on how to improve them."
  • "Write a 4-page college application essay about an obstacle I have overcome. I am applying to the Geography program and want to be a cartographer. The obstacle is that I have dyslexia. Explain that I have always loved maps, and that having dyslexia makes me better at making them."
  • This prompt not only tells ChatGPT the topic, length, and grade level, but also that the essay is personal. ChatGPT will write the essay in the first-person point of view.

Step 4 Add to or change the essay.

  • In our essay about gun control, ChatGPT did not mention school shootings. If we want to discuss this topic in the essay, we can use the prompt, "Discuss school shootings in the essay."
  • Let's say we review our college entrance essay and realize that we forgot to mention that we grew up without parents. Add to the essay by saying, "Mention that my parents died when I was young."
  • In the Israel-Palestine essay, ChatGPT explored two options for peace: A 2-state solution and a bi-state solution. If you'd rather the essay focus on a single option, ask ChatGPT to remove one. For example, "Change my essay so that it focuses on a bi-state solution."

Step 1 Generate essay topics.

  • "Give me ideas for an essay about the Israel-Palestine conflict."
  • "Ideas for a persuasive essay about a current event."
  • "Give me a list of argumentative essay topics about COVID-19 for a Political Science 101 class."

Step 2 Create an outline.

  • "Create an outline for an argumentative essay called "The Impact of COVID-19 on the Economy."
  • "Write an outline for an essay about positive uses of AI chatbots in schools."
  • "Create an outline for a short 2-page essay on disinformation in the 2016 election."

Step 3 Find sources.

  • "Find peer-reviewed sources for advances in using MRNA vaccines for cancer."
  • "Give me a list of sources from academic journals about Black feminism in the movie Black Panther."
  • "Give me sources for an essay on current efforts to ban children's books in US libraries."

Step 4 Create a sample essay.

  • "Write a 4-page college paper about how global warming is changing the automotive industry in the United States."
  • "Write a 750-word personal college entrance essay about how my experience with homelessness as a child has made me more resilient."
  • You can even refer to the outline you created with ChatGPT, as the AI bot can reference up to 3000 words from the current conversation. [3] For example: "Write a 1000-word argumentative essay called 'The Impact of COVID-19 on the United States Economy' using the outline you provided. Argue that the government should take more action to support businesses affected by the pandemic." (A sketch of how this conversation memory works when scripting is shown after this list.)
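The 3000-word figure above refers to conversation memory in the chat window. Over the API there is no built-in memory at all, so if you script this step, the earlier outline has to be sent back as part of the message history. The sketch below is illustrative only, assuming the same openai package and model name as in the earlier snippet; the prompts are examples, not required wording.

```python
# Minimal sketch: re-using an outline in a follow-up request.
# The API has no conversation memory, so earlier turns (here, the outline)
# must be passed back in the `messages` list on every call.
from openai import OpenAI

client = OpenAI()
history = [{
    "role": "user",
    "content": "Create an outline for an argumentative essay called "
               "'The Impact of COVID-19 on the United States Economy'.",
}]

outline = client.chat.completions.create(model="gpt-4o", messages=history)
history.append({"role": "assistant", "content": outline.choices[0].message.content})

history.append({
    "role": "user",
    "content": "Write a 1000-word argumentative essay using the outline you provided. "
               "Argue that the government should take more action to support businesses "
               "affected by the pandemic.",
})
essay = client.chat.completions.create(model="gpt-4o", messages=history)
print(essay.choices[0].message.content)
```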

Step 5 Use ChatGPT to format your citations.

  • One way to do this is to paste a list of the sources you've used, including URLs, book titles, authors, pages, publishers, and other details, into ChatGPT along with the instruction "Create an MLA Works Cited page for these sources."
  • You can also ask ChatGPT to provide a list of sources, and then build a Works Cited or References page that includes those sources. You can then replace sources you didn't use with the sources you did use.

Tips

  • Because it's easy for teachers, hiring managers, and college admissions offices to spot AI-written essays, it's best to use your ChatGPT-written essay as a guide to write your own essay. Using the structure and ideas from ChatGPT, write an essay in the same format, but using your own words.
  • Always double-check the facts in your essay, and make sure facts are backed up with legitimate sources.
  • If you see an error that says ChatGPT is at capacity, wait a few moments and try again.

Warnings

  • Using ChatGPT to write or assist with your essay may be against your instructor's rules. Make sure you understand the consequences of using ChatGPT to write or assist with your essay.
  • ChatGPT-written essays may include factual inaccuracies, outdated information, and inadequate detail. [4]


References

[1] https://help.openai.com/en/articles/6783457-what-is-chatgpt
[2] https://platform.openai.com/examples/default-essay-outline
[3] https://help.openai.com/en/articles/6787051-does-chatgpt-remember-what-happened-earlier-in-the-conversation
[4] https://www.ipl.org/div/chatgpt/


  • Open access
  • Published: 30 October 2023

A large-scale comparison of human-written versus ChatGPT-generated essays

Steffen Herbold, Annette Hautli-Janisz, Ute Heuer, Zlata Kikteva & Alexander Trautsch

Scientific Reports volume 13, Article number: 18617 (2023)


Subjects: Computer science, Information technology

ChatGPT and similar generative AI models have attracted hundreds of millions of users and have become part of the public discourse. Many believe that such models will disrupt society and lead to significant changes in the education system and information generation. So far, this belief is based on either colloquial evidence or benchmarks from the owners of the models—both lack scientific rigor. We systematically assess the quality of AI-generated content through a large-scale study comparing human-written versus ChatGPT-generated argumentative student essays. We use essays that were rated by a large number of human experts (teachers). We augment the analysis by considering a set of linguistic characteristics of the generated essays. Our results demonstrate that ChatGPT generates essays that are rated higher regarding quality than human-written essays. The writing style of the AI models exhibits linguistic characteristics that are different from those of the human-written essays. Since the technology is readily available, we believe that educators must act immediately. We must re-invent homework and develop teaching concepts that utilize these AI models in the same way as math utilizes the calculator: teach the general concepts first and then use AI tools to free up time for other learning objectives.


Introduction

The massive uptake in the development and deployment of large-scale Natural Language Generation (NLG) systems in recent months has yielded an almost unprecedented worldwide discussion of the future of society. The ChatGPT service, which serves as a Web front-end to GPT-3.5 1 and GPT-4, was the fastest-growing service in history to break the 100 million user milestone in January 2023 and had 1 billion visits by February 2023 2 .

Driven by the upheaval that is particularly anticipated for education 3 and knowledge transfer for future generations, we conduct the first independent, systematic study of AI-generated language content that is typically dealt with in high-school education: argumentative essays, i.e. essays in which students discuss a position on a controversial topic by collecting and reflecting on evidence (e.g. ‘Should students be taught to cooperate or compete?’). Learning to write such essays is a crucial aspect of education, as students learn to systematically assess and reflect on a problem from different perspectives. Understanding the capability of generative AI to perform this task increases our understanding of the skills of the models, as well as of the challenges educators face when it comes to teaching this crucial skill. While there is a multitude of individual examples and anecdotal evidence for the quality of AI-generated content in this genre (e.g. 4 ) this paper is the first to systematically assess the quality of human-written and AI-generated argumentative texts across different versions of ChatGPT 5 . We use a fine-grained essay quality scoring rubric based on content and language mastery and employ a significant pool of domain experts, i.e. high school teachers across disciplines, to perform the evaluation. Using computational linguistic methods and rigorous statistical analysis, we arrive at several key findings:

AI models generate significantly higher-quality argumentative essays than the users of an essay-writing online forum frequented by German high-school students across all criteria in our scoring rubric.

ChatGPT-4 (ChatGPT web interface with the GPT-4 model) significantly outperforms ChatGPT-3 (ChatGPT web interface with the GPT-3.5 default model) with respect to logical structure, language complexity, vocabulary richness and text linking.

Writing styles between humans and generative AI models differ significantly: for instance, the GPT models use more nominalizations and have higher sentence complexity (signaling more complex, ‘scientific’, language), whereas the students make more use of modal and epistemic constructions (which tend to convey speaker attitude).

The linguistic diversity of the NLG models seems to be improving over time: while ChatGPT-3 still has a significantly lower linguistic diversity than humans, ChatGPT-4 has a significantly higher diversity than the students.

Our work goes significantly beyond existing benchmarks. While OpenAI’s technical report on GPT-4 6 presents some benchmarks, their evaluation lacks scientific rigor: it fails to provide vital information like the agreement between raters, does not report on details regarding the criteria for assessment or to what extent and how a statistical analysis was conducted for a larger sample of essays. In contrast, our benchmark provides the first (statistically) rigorous and systematic study of essay quality, paired with a computational linguistic analysis of the language employed by humans and two different versions of ChatGPT, offering a glance at how these NLG models develop over time. While our work is focused on argumentative essays in education, the genre is also relevant beyond education. In general, studying argumentative essays is one important aspect to understand how good generative AI models are at conveying arguments and, consequently, persuasive writing in general.

Related work

Natural language generation.

The recent interest in generative AI models can be largely attributed to the public release of ChatGPT, a public interface in the form of an interactive chat based on the InstructGPT 1 model, more commonly referred to as GPT-3.5. In comparison to the original GPT-3 7 and other similar generative large language models based on the transformer architecture like GPT-J 8 , this model was not trained in a purely self-supervised manner (e.g. through masked language modeling). Instead, a pipeline that involved human-written content was used to fine-tune the model and improve the quality of the outputs to both mitigate biases and safety issues, as well as make the generated text more similar to text written by humans. Such models are referred to as Fine-tuned LAnguage Nets (FLANs). For details on their training, we refer to the literature 9 . Notably, this process was recently reproduced with publicly available models such as Alpaca 10 and Dolly (i.e. the complete models can be downloaded and not just accessed through an API). However, we can only assume that a similar process was used for the training of GPT-4 since the paper by OpenAI does not include any details on model training.

Testing of the language competency of large-scale NLG systems has only recently started. Cai et al. 11 show that ChatGPT reuses sentence structure, accesses the intended meaning of an ambiguous word, and identifies the thematic structure of a verb and its arguments, replicating human language use. Mahowald 12 compares ChatGPT’s acceptability judgments to human judgments on the Article + Adjective + Numeral + Noun construction in English. Dentella et al. 13 show that ChatGPT-3 fails to understand low-frequent grammatical constructions like complex nested hierarchies and self-embeddings. In another recent line of research, the structure of automatically generated language is evaluated. Guo et al. 14 show that in question-answer scenarios, ChatGPT-3 uses different linguistic devices than humans. Zhao et al. 15 show that ChatGPT generates longer and more diverse responses when the user is in an apparently negative emotional state.

Given that we aim to identify certain linguistic characteristics of human-written versus AI-generated content, we also draw on related work in the field of linguistic fingerprinting, which assumes that each human has a unique way of using language to express themselves, i.e. the linguistic means that are employed to communicate thoughts, opinions and ideas differ between humans. That these properties can be identified with computational linguistic means has been showcased across different tasks: the computation of a linguistic fingerprint allows to distinguish authors of literary works 16 , the identification of speaker profiles in large public debates 17 , 18 , 19 , 20 and the provision of data for forensic voice comparison in broadcast debates 21 , 22 . For educational purposes, linguistic features are used to measure essay readability 23 , essay cohesion 24 and language performance scores for essay grading 25 . Integrating linguistic fingerprints also yields performance advantages for classification tasks, for instance in predicting user opinion 26 , 27 and identifying individual users 28 .

Limitations of OpenAI's ChatGPT evaluations

OpenAI published a discussion of the model’s performance of several tasks, including Advanced Placement (AP) classes within the US educational system 6 . The subjects used in performance evaluation are diverse and include arts, history, English literature, calculus, statistics, physics, chemistry, economics, and US politics. While the models achieved good or very good marks in most subjects, they did not perform well in English literature. GPT-3.5 also experienced problems with chemistry, macroeconomics, physics, and statistics. While the overall results are impressive, there are several significant issues: firstly, the conflict of interest of the model’s owners poses a problem for the performance interpretation. Secondly, there are issues with the soundness of the assessment beyond the conflict of interest, which make the generalizability of the results hard to assess with respect to the models’ capability to write essays. Notably, the AP exams combine multiple-choice questions with free-text answers. Only the aggregated scores are publicly available. To the best of our knowledge, neither the generated free-text answers, their overall assessment, nor their assessment given specific criteria from the used judgment rubric are published. Thirdly, while the paper states that 1–2 qualified third-party contractors participated in the rating of the free-text answers, it is unclear how often multiple ratings were generated for the same answer and what was the agreement between them. This lack of information hinders a scientifically sound judgement regarding the capabilities of these models in general, but also specifically for essays. Lastly, the owners of the model conducted their study in a few-shot prompt setting, where they gave the models a very structured template as well as an example of a human-written high-quality essay to guide the generation of the answers. This further fine-tuning of what the models generate could have also influenced the output. The results published by the owners go beyond the AP courses which are directly comparable to our work and also consider other student assessments like Graduate Record Examinations (GREs). However, these evaluations suffer from the same problems with the scientific rigor as the AP classes.

Scientific assessment of ChatGPT

Researchers across the globe are currently assessing the individual capabilities of these models with greater scientific rigor. We note that due to the recency and speed of these developments, the hereafter discussed literature has mostly only been published as pre-prints and has not yet been peer-reviewed. In addition to the above issues concretely related to the assessment of the capabilities to generate student essays, it is also worth noting that there are likely large problems with the trustworthiness of evaluations, because of data contamination, i.e. because the benchmark tasks are part of the training of the model, which enables memorization. For example, Aiyappa et al. 29 find evidence that this is likely the case for benchmark results regarding NLP tasks. This complicates the effort by researchers to assess the capabilities of the models beyond memorization.

Nevertheless, the first assessment results are already available – though mostly focused on ChatGPT-3 and not yet ChatGPT-4. Closest to our work is a study by Yeadon et al. 30 , who also investigate ChatGPT-3 performance when writing essays. They grade essays generated by ChatGPT-3 for five physics questions based on criteria that cover academic content, appreciation of the underlying physics, grasp of subject material, addressing the topic, and writing style. For each question, ten essays were generated and rated independently by five researchers. While the sample size precludes a statistical assessment, the results demonstrate that the AI model is capable of writing high-quality physics essays, but that the quality varies in a manner similar to human-written essays.

Guo et al. 14 create a set of free-text question answering tasks based on data they collected from the internet, e.g. question answering from Reddit. The authors then sample thirty triplets of a question, a human answer, and a ChatGPT-3 generated answer and ask human raters to assess if they can detect which was written by a human, and which was written by an AI. While this approach does not directly assess the quality of the output, it serves as a Turing test 31 designed to evaluate whether humans can distinguish between human- and AI-produced output. The results indicate that humans are in fact able to distinguish between the outputs when presented with a pair of answers. Humans familiar with ChatGPT are also able to identify over 80% of AI-generated answers without seeing a human answer in comparison. However, humans who are not yet familiar with ChatGPT-3 are not capable of identifying AI-written answers about 50% of the time. Moreover, the authors also find that the AI-generated outputs are deemed to be more helpful than the human answers in slightly more than half of the cases. This suggests that the strong results from OpenAI’s own benchmarks regarding the capabilities to generate free-text answers generalize beyond the benchmarks.

There are, however, some indicators that the benchmarks may be overly optimistic in their assessment of the model’s capabilities. For example, Kortemeyer 32 conducts a case study to assess how well ChatGPT-3 would perform in a physics class, simulating the tasks that students need to complete as part of the course: answer multiple-choice questions, do homework assignments, ask questions during a lesson, complete programming exercises, and write exams with free-text questions. Notably, ChatGPT-3 was allowed to interact with the instructor for many of the tasks, allowing for multiple attempts as well as feedback on preliminary solutions. The experiment shows that ChatGPT-3’s performance is in many aspects similar to that of the beginning learners and that the model makes similar mistakes, such as omitting units or simply plugging in results from equations. Overall, the AI would have passed the course with a low score of 1.5 out of 4.0. Similarly, Kung et al. 33 study the performance of ChatGPT-3 in the United States Medical Licensing Exam (USMLE) and find that the model performs at or near the passing threshold. Their assessment is a bit more optimistic than Kortemeyer’s as they state that this level of performance, comprehensible reasoning and valid clinical insights suggest that models such as ChatGPT may potentially assist human learning in clinical decision making.

Frieder et al. 34 evaluate the capabilities of ChatGPT-3 in solving graduate-level mathematical tasks. They find that while ChatGPT-3 seems to have some mathematical understanding, its level is well below that of an average student and in most cases is not sufficient to pass exams. Yuan et al. 35 consider the arithmetic abilities of language models, including ChatGPT-3 and ChatGPT-4. They find that they exhibit the best performance among other currently available language models (incl. Llama 36 , FLAN-T5 37 , and Bloom 38 ). However, the accuracy of basic arithmetic tasks is still only at 83% when considering correctness to the degree of \(10^{-3}\) , i.e. such models are still not capable of functioning reliably as calculators. In a slightly satiric, yet insightful take, Spencer et al. 39 assess how a scientific paper on gamma-ray astrophysics would look if it were written largely with the assistance of ChatGPT-3. They find that while the language capabilities are good and the model is capable of generating equations, the arguments are often flawed and the references to scientific literature are full of hallucinations.

The general reasoning skills of the models may also not be at the level expected from the benchmarks. For example, Cherian et al. 40 evaluate how well ChatGPT-3 performs on eleven puzzles that second graders should be able to solve and find that ChatGPT is only able to solve them on average in 36.4% of attempts, whereas the second graders achieve a mean of 60.4%. However, their sample size is very small and the problem was posed as a multiple-choice question answering problem, which cannot be directly compared to the NLG we consider.

Research gap

Within this article, we address an important part of the current research gap regarding the capabilities of ChatGPT (and similar technologies), guided by the following research questions:

RQ1: How good is ChatGPT based on GPT-3 and GPT-4 at writing argumentative student essays?

RQ2: How do AI-generated essays compare to essays written by students?

RQ3: What are linguistic devices that are characteristic of student versus AI-generated content?

We study these aspects with the help of a large group of teaching professionals who systematically assess a large corpus of student essays. To the best of our knowledge, this is the first large-scale, independent scientific assessment of ChatGPT (or similar models) of this kind. Answering these questions is crucial to understanding the impact of ChatGPT on the future of education.

Materials and methods

The essay topics originate from a corpus of argumentative essays in the field of argument mining 41 . Argumentative essays require students to think critically about a topic and use evidence to establish a position on the topic in a concise manner. The corpus features essays for 90 topics from Essay Forum 42 , an active community for providing writing feedback on different kinds of text that is frequented by high-school students seeking feedback from native speakers on their essay-writing capabilities. Information about the age of the writers is not available, but the topics indicate that the essays were written in grades 11–13, suggesting that the authors were likely at least 16. Topics range from ‘Should students be taught to cooperate or to compete?’ to ‘Will newspapers become a thing of the past?’. In the corpus, each topic features one human-written essay uploaded and discussed in the forum. The students who wrote the essays are not native speakers. The average length of these essays is 19 sentences with 388 tokens (an average of 2,089 characters); these essays will be termed ‘student essays’ in the remainder of the paper.

For the present study, we use the topics from Stab and Gurevych 41 and prompt ChatGPT with ‘Write an essay with about 200 words on “[ topic ]”’ to receive automatically-generated essays from the ChatGPT-3 and ChatGPT-4 versions from 22 March 2023 (‘ChatGPT-3 essays’, ‘ChatGPT-4 essays’). No additional prompts for getting the responses were used, i.e. the data was created with a basic prompt in a zero-shot scenario. This is in contrast to the benchmarks by OpenAI, who used an engineered prompt in a few-shot scenario to guide the generation of essays. We note that we decided to ask for 200 words because we noticed a tendency to generate essays that are longer than the desired length by ChatGPT. A prompt asking for 300 words typically yielded essays with more than 400 words. Thus, using the shorter length of 200, we prevent a potential advantage for ChatGPT through longer essays, and instead err on the side of brevity. Similar to the evaluations of free-text answers by OpenAI, we did not consider multiple configurations of the model due to the effort required to obtain human judgments. For the same reason, our data is restricted to ChatGPT and does not include other models available at that time, e.g. Alpaca. We use the browser versions of the tools because we consider this to be a more realistic scenario than using the API. Table 1 below shows the core statistics of the resulting dataset. Supplemental material S1 shows examples for essays from the data set.
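Because the authors used the browser version of ChatGPT, the generation itself was manual. The prompt pattern is simple enough, however, that a batch over all 90 topics can be expressed in a few lines. The sketch below is only an illustration of the paper's zero-shot prompt template, assuming the openai Python package and a placeholder topic list; it is not the procedure used in the study.

```python
# Illustrative sketch of the paper's zero-shot prompt pattern applied over a topic list.
# The study used the ChatGPT browser interface; this API variant only makes the template concrete.
from openai import OpenAI

client = OpenAI()
topics = [
    "Should students be taught to cooperate or to compete?",
    "Will newspapers become a thing of the past?",
    # ... remaining topics from the argument mining corpus
]

essays = {}
for topic in topics:
    prompt = f'Write an essay with about 200 words on "{topic}"'
    response = client.chat.completions.create(
        model="gpt-4",  # the study compared the GPT-3.5 and GPT-4 web versions of 22 March 2023
        messages=[{"role": "user", "content": prompt}],
    )
    essays[topic] = response.choices[0].message.content
```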

Annotation study

Study participants.

The participants had registered for a two-hour online training entitled ‘ChatGPT – Challenges and Opportunities’ conducted by the authors of this paper as a means to provide teachers with some of the technological background of NLG systems in general and ChatGPT in particular. Only teachers permanently employed at secondary schools were allowed to register for this training. Focusing on these experts alone allows us to receive meaningful results as those participants have a wide range of experience in assessing students’ writing. A total of 139 teachers registered for the training, 129 of them teach at grammar schools, and only 10 teachers hold a position at other secondary schools. About half of the registered teachers (68 teachers) have been in service for many years and have successfully applied for promotion. For data protection reasons, we do not know the subject combinations of the registered teachers. We only know that a variety of subjects are represented, including languages (English, French and German), religion/ethics, and science. Supplemental material S5 provides some general information regarding German teacher qualifications.

The training began with an online lecture followed by a discussion phase. Teachers were given an overview of language models and basic information on how ChatGPT was developed. After about 45 minutes, the teachers received both a written and an oral explanation of the questionnaire at the core of our study (see Supplementary material S3 ) and were informed that they had 30 minutes to finish the study tasks. The explanation included information on how the data was obtained, why we collect the self-assessment, how we chose the criteria for the rating of the essays, the overall goal of our research, and a walk-through of the questionnaire. Participation in the questionnaire was voluntary and did not affect the awarding of a training certificate. We further informed participants that all data was collected anonymously and that we would have no way of identifying who participated in the questionnaire. We orally informed participants that they consent to the use of the provided ratings for our research by participating in the survey.

Once these instructions were provided orally and in writing, the link to the online form was given to the participants. The online form was running on a local server that did not log any information that could identify the participants (e.g. IP address) to ensure anonymity. As per instructions, consent for participation was given by using the online form. Due to the full anonymity, we could by definition not document who exactly provided the consent. This was implemented as further insurance that non-participation could not possibly affect being awarded the training certificate.

About 20% of the training participants did not take part in the questionnaire study, the remaining participants consented based on the information provided and participated in the rating of essays. After the questionnaire, we continued with an online lecture on the opportunities of using ChatGPT for teaching as well as AI beyond chatbots. The study protocol was reviewed and approved by the Research Ethics Committee of the University of Passau. We further confirm that our study protocol is in accordance with all relevant guidelines.

Questionnaire

The questionnaire consists of three parts: first, a brief self-assessment regarding the English skills of the participants which is based on the Common European Framework of Reference for Languages (CEFR) 43 . We have six levels ranging from ‘comparable to a native speaker’ to ‘some basic skills’ (see supplementary material S3 ). Then each participant was shown six essays. The participants were only shown the generated text and were not provided with information on whether the text was human-written or AI-generated.

The questionnaire covers the seven categories relevant for essay assessment shown below (for details see supplementary material S3 ):

Topic and completeness

Logic and composition

Expressiveness and comprehensiveness

Language mastery

Complexity

Vocabulary and text linking

Language constructs

These categories are used as guidelines for essay assessment 44 established by the Ministry for Education of Lower Saxony, Germany. For each criterion, a seven-point Likert scale with scores from zero to six is defined, where zero is the worst score (e.g. no relation to the topic) and six is the best score (e.g. addressed the topic to a special degree). The questionnaire included a written description as guidance for the scoring.

After rating each essay, the participants were also asked to self-assess their confidence in the ratings. We used a five-point Likert scale based on the criteria for the self-assessment of peer-review scores from the Association for Computational Linguistics (ACL). Once a participant finished rating the six essays, they were shown a summary of their ratings, as well as the individual ratings for each of their essays and the information on how the essay was generated.

Computational linguistic analysis

In order to further explore and compare the quality of the essays written by students and ChatGPT, we consider the following linguistic characteristics: lexical diversity, sentence complexity, nominalization, and the presence of modals, epistemic markers, and discourse markers. These are motivated by previous work: Weiss et al. 25 observe a correlation between measures of lexical, syntactic and discourse complexity and the essay grades of German high-school examinations, while McNamara et al. 45 explore cohesion (indicated, among other things, by connectives), syntactic complexity and lexical diversity in relation to essay scoring.

Lexical diversity

We identify vocabulary richness by using a well-established measure of textual, lexical diversity (MTLD) 46 which is often used in the field of automated essay grading 25 , 45 , 47 . It takes into account the number of unique words but unlike the best-known measure of lexical diversity, the type-token ratio (TTR), it is not as sensitive to the difference in the length of the texts. In fact, Koizumi and In’nami 48 find it to be least affected by the differences in the length of the texts compared to some other measures of lexical diversity. This is relevant to us due to the difference in average length between the human-written and ChatGPT-generated essays.
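To make the measure concrete, the sketch below implements a simplified, forward-only MTLD pass: a new ‘factor’ is counted whenever the running type-token ratio of the current stretch of text drops to the conventional 0.72 threshold, and MTLD is the number of tokens divided by the number of factors (with a partial factor for the remainder). The full measure averages a forward and a backward pass, so treat this as an illustration of the idea rather than a reference implementation.

```python
# Simplified, forward-only MTLD: count the "factors" (stretches whose type-token ratio
# stays above 0.72), then divide the total number of tokens by the factor count.
def mtld_forward(tokens, ttr_threshold=0.72):
    factors = 0.0
    types = set()
    window_len = 0
    for token in tokens:
        window_len += 1
        types.add(token.lower())
        if len(types) / window_len <= ttr_threshold:
            factors += 1.0      # a full factor is complete; start a new window
            types.clear()
            window_len = 0
    if window_len > 0:          # partial factor for the leftover text at the end
        ttr = len(types) / window_len
        factors += (1.0 - ttr) / (1.0 - ttr_threshold)
    return len(tokens) / factors if factors > 0 else float("nan")

print(mtld_forward("the quick brown fox jumps over the lazy dog".split()))
```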

Syntactic complexity

We use two measures in order to evaluate the syntactic complexity of the essays. One is based on the maximum depth of the sentence dependency tree which is produced using the spaCy 3.4.2 dependency parser 49 (‘Syntactic complexity (depth)’). For the second measure, we adopt an approach similar in nature to the one by Weiss et al. 25 who use clause structure to evaluate syntactic complexity. In our case, we count the number of conjuncts, clausal modifiers of nouns, adverbial clause modifiers, clausal complements, clausal subjects, and parataxes (‘Syntactic complexity (clauses)’). The supplementary material in S2 shows the difference between sentence complexity based on two examples from the data.

Nominalization is a common feature of a more scientific style of writing 50 and is used as an additional measure for syntactic complexity. In order to explore this feature, we count occurrences of nouns with suffixes such as ‘-ion’, ‘-ment’, ‘-ance’ and a few others which are known to transform verbs into nouns.
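A minimal sketch of these three counts is shown below, assuming spaCy with the small English model (en_core_web_sm) rather than the exact spaCy 3.4.2 setup of the study; the dependency labels and the suffix list follow the description above but are illustrative.

```python
# Sketch: dependency-tree depth, clause-based complexity, and nominalization counts,
# roughly following the description in the text (labels and suffixes are illustrative).
import spacy

nlp = spacy.load("en_core_web_sm")

CLAUSE_DEPS = {"conj", "acl", "advcl", "ccomp", "csubj", "parataxis"}
NOMINAL_SUFFIXES = ("ion", "ment", "ance", "ence", "ity")  # illustrative suffix list

def tree_depth(token):
    children = list(token.children)
    return 1 if not children else 1 + max(tree_depth(child) for child in children)

def complexity_features(text):
    doc = nlp(text)
    depths = [tree_depth(sent.root) for sent in doc.sents]
    return {
        "max_depth": max(depths) if depths else 0,
        "clause_count": sum(1 for tok in doc if tok.dep_ in CLAUSE_DEPS),
        "nominalizations": sum(1 for tok in doc
                               if tok.pos_ == "NOUN" and tok.text.lower().endswith(NOMINAL_SUFFIXES)),
    }

print(complexity_features("The implementation of the regulation was delayed because the committee, "
                          "which had raised objections, demanded a further assessment."))
```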

Semantic properties

Both modals and epistemic markers signal the commitment of the writer to their statement. We identify modals using the POS-tagging module provided by spaCy as well as a list of epistemic expressions of modality, such as ‘definitely’ and ‘potentially’, also used in other approaches to identifying semantic properties 51 . For epistemic markers we adopt an empirically-driven approach and utilize the epistemic markers identified in a corpus of dialogical argumentation by Hautli-Janisz et al. 52 . We consider expressions such as ‘I think’, ‘it is believed’ and ‘in my opinion’ to be epistemic.

Discourse properties

Discourse markers can be used to measure the coherence quality of a text. This has been explored by Somasundaran et al. 53 who use discourse markers to evaluate the story-telling aspect of student writing while Nadeem et al. 54 incorporated them in their deep learning-based approach to automated essay scoring. In the present paper, we employ the PDTB list of discourse markers 55 which we adjust to exclude words that are often used for purposes other than indicating discourse relations, such as ‘like’, ‘for’, ‘in’ etc.
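One plausible implementation of these marker counts is sketched below, using spaCy's POS tags for modals and short, purely illustrative word lists in place of the curated epistemic and PDTB discourse-marker lists used in the study.

```python
# Sketch: counting modals (Penn Treebank tag MD), epistemic expressions, and discourse markers.
# The word lists are short illustrative stand-ins, not the lists used in the study.
import spacy

nlp = spacy.load("en_core_web_sm")

EPISTEMIC = ["i think", "in my opinion", "it is believed", "definitely", "potentially"]
DISCOURSE = ["however", "therefore", "moreover", "in conclusion", "on the other hand"]

def marker_counts(text):
    doc = nlp(text)
    lowered = text.lower()
    return {
        "modals": sum(1 for tok in doc if tok.tag_ == "MD"),
        "epistemic": sum(lowered.count(phrase) for phrase in EPISTEMIC),
        "discourse": sum(lowered.count(phrase) for phrase in DISCOURSE),
    }

print(marker_counts("In my opinion, schools should definitely adapt. However, teachers may disagree."))
```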

Statistical methods

We use a within-subjects design for our study. Each participant was shown six randomly selected essays. Results were submitted to the survey system after each essay was completed, in case participants ran out of time and did not finish scoring all six essays. Cronbach’s \(\alpha\) 56 allows us to determine the inter-rater reliability for the rating criterion and data source (human, ChatGPT-3, ChatGPT-4) in order to understand the reliability of our data not only overall, but also for each data source and rating criterion. We use two-sided Wilcoxon-rank-sum tests 57 to confirm the significance of the differences between the data sources for each criterion. We use the same tests to determine the significance of the linguistic characteristics. This results in three comparisons (human vs. ChatGPT-3, human vs. ChatGPT-4, ChatGPT-3 vs. ChatGPT-4) for each of the seven rating criteria and each of the seven linguistic characteristics, i.e. 42 tests. We use the Holm-Bonferroni method 58 for the correction for multiple tests to achieve a family-wise error rate of 0.05. We report the effect size using Cohen’s d 59 . While our data is not perfectly normal, it also does not have severe outliers, so we prefer the clear interpretation of Cohen’s d over the slightly more appropriate, but less accessible non-parametric effect size measures. We report point plots with estimates of the mean scores for each data source and criterion, incl. the 95% confidence interval of these mean values. The confidence intervals are estimated in a non-parametric manner based on bootstrap sampling. We further visualize the distribution for each criterion using violin plots to provide a visual indicator of the spread of the data (see Supplementary material S4 ).
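As a rough illustration of this pipeline, the sketch below runs a two-sided Wilcoxon rank-sum test for one criterion, applies the Holm-Bonferroni correction across a set of p-values, and computes Cohen's d. The rating arrays are placeholders, and statsmodels stands in here for the correction step; the authors' actual analysis is available in their replication package and may differ in detail.

```python
# Sketch of the statistical pipeline: rank-sum test, Holm-Bonferroni correction, Cohen's d.
# The rating arrays and the extra p-values are placeholders, not data from the study.
import numpy as np
from scipy.stats import ranksums
from statsmodels.stats.multitest import multipletests

def cohens_d(a, b):
    # Cohen's d with the pooled standard deviation
    pooled = np.sqrt(((len(a) - 1) * np.var(a, ddof=1) + (len(b) - 1) * np.var(b, ddof=1))
                     / (len(a) + len(b) - 2))
    return (np.mean(a) - np.mean(b)) / pooled

human = np.array([3.5, 4.0, 3.0, 4.5, 3.5, 4.0])      # placeholder ratings for one criterion
chatgpt4 = np.array([5.0, 5.5, 4.5, 5.0, 5.5, 6.0])   # placeholder ratings for one criterion

stat, p = ranksums(human, chatgpt4)                    # two-sided by default
pvals = [p, 0.03, 0.2, 0.001]                          # p-values collected from all comparisons
reject, p_adjusted, _, _ = multipletests(pvals, alpha=0.05, method="holm")

print(f"d = {cohens_d(chatgpt4, human):.2f}, Holm-adjusted p-values = {p_adjusted.round(4)}")
```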

Further, we use the self-assessment of the English skills and confidence in the essay ratings as confounding variables. Through this, we determine if ratings are affected by the language skills or confidence, instead of the actual quality of the essays. We control for the impact of these by measuring Pearson’s correlation coefficient r 60 between the self-assessments and the ratings. We also determine whether the linguistic features are correlated with the ratings as expected. The sentence complexity (both tree depth and dependency clauses), as well as the nominalization, are indicators of the complexity of the language. Similarly, the use of discourse markers should signal a proper logical structure. Finally, a large lexical diversity should be correlated with the ratings for the vocabulary. Same as above, we measure Pearson’s r . We use a two-sided test for the significance based on a \(\beta\) -distribution that models the expected correlations as implemented by scipy 61 . Same as above, we use the Holm-Bonferroni method to account for multiple tests. However, we note that it is likely that all—even tiny—correlations are significant given our amount of data. Consequently, our interpretation of these results focuses on the strength of the correlations.

Our statistical analysis of the data is implemented in Python. We use pandas 1.5.3 and numpy 1.24.2 for the processing of data, pingouin 0.5.3 for the calculation of Cronbach's \(\alpha\) , scipy 1.10.1 for the Wilcoxon rank-sum tests and Pearson's r , and seaborn 0.12.2 for the generation of plots, incl. the calculation of error bars that visualize the confidence intervals.
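For instance, the inter-rater reliability step could look roughly like the sketch below, which feeds a wide table of ratings (essays as rows, raters as columns) to pingouin's cronbach_alpha. The toy data and the column layout are assumptions for illustration; in the study, essays were rated by varying subsets of raters.

```python
# Sketch: Cronbach's alpha for inter-rater reliability with pingouin.
# The toy DataFrame (essays as rows, raters as columns) is a placeholder layout.
import pandas as pd
import pingouin as pg

ratings = pd.DataFrame({
    "rater_1": [4, 5, 3, 6, 4, 5],
    "rater_2": [4, 6, 3, 5, 4, 5],
    "rater_3": [5, 5, 4, 6, 3, 5],
})

alpha, ci = pg.cronbach_alpha(data=ratings)
print(f"Cronbach's alpha = {alpha:.2f}, 95% CI = {ci}")
```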

Results

Out of the 111 teachers who completed the questionnaire, 108 rated all six essays, one rated five essays, one rated two essays, and one rated only one essay. This results in 658 ratings for 270 essays (90 topics for each essay type: human-, ChatGPT-3-, and ChatGPT-4-generated), with three ratings for 121 essays, two ratings for 144 essays, and one rating for five essays. The inter-rater agreement is consistently excellent ( \(\alpha >0.9\) ), with the exception of language mastery where we have good agreement ( \(\alpha =0.89\) , see Table  2 ). Further, the correlation analysis depicted in supplementary material S4 shows weak positive correlations ( \(r \in [0.11, 0.28]\) ) between the self-assessments for English skills and for confidence in the ratings, respectively, and the actual ratings. Overall, this indicates that our ratings are reliable estimates of the actual quality of the essays, with a potential small tendency that confidence in ratings and language skills yield better ratings, independent of the data source.

Table  2 and supplementary material S4 characterize the distribution of the ratings for the essays, grouped by the data source. We observe that for all criteria, we have a clear order of the mean values, with students having the worst ratings, ChatGPT-3 in the middle rank, and ChatGPT-4 with the best performance. We further observe that the standard deviations are fairly consistent and slightly larger than one, i.e. the spread is similar for all ratings and essays. This is further supported by the visual analysis of the violin plots.

The statistical analysis of the ratings reported in Table  4 shows that differences between the human-written essays and the ones generated by both ChatGPT models are significant. The effect sizes for human versus ChatGPT-3 essays are between 0.52 and 1.15, i.e. a medium ( \(d \in [0.5,0.8)\) ) to large ( \(d \in [0.8, 1.2)\) ) effect. On the one hand, the smallest effects are observed for the expressiveness and complexity, i.e. when it comes to the overall comprehensiveness and complexity of the sentence structures, the differences between the humans and the ChatGPT-3 model are smallest. On the other hand, the difference in language mastery is larger than all other differences, which indicates that humans are more prone to making mistakes when writing than the NLG models. The magnitude of differences between humans and ChatGPT-4 is larger with effect sizes between 0.88 and 1.43, i.e., a large to very large ( \(d \in [1.2, 2)\) ) effect. Same as for ChatGPT-3, the differences are smallest for expressiveness and complexity and largest for language mastery. Please note that the difference in language mastery between humans and both GPT models does not mean that the humans have low scores for language mastery (M=3.90), but rather that the NLG models have exceptionally high scores (M=5.03 for ChatGPT-3, M=5.25 for ChatGPT-4).

When we consider the differences between the two GPT models, we observe that while ChatGPT-4 has consistently higher mean values for all criteria, only the differences for logic and composition, vocabulary and text linking, and complexity are significant. The effect sizes are between 0.45 and 0.5, i.e. small ( \(d \in [0.2, 0.5)\) ) and medium. Thus, while GPT-4 seems to be an improvement over GPT-3.5 in general, the only clear indicator of this is a better and clearer logical composition and more complex writing with a more diverse vocabulary.

We also observe significant differences in the distribution of linguistic characteristics between all three groups (see Table  3 ). Sentence complexity (depth) is the only category without a significant difference between humans and ChatGPT-3, as well as ChatGPT-3 and ChatGPT-4. There is also no significant difference in the category of discourse markers between humans and ChatGPT-3. The magnitude of the effects varies a lot and is between 0.39 and 1.93, i.e., between small ( \(d \in [0.2, 0.5)\) ) and very large. However, in comparison to the ratings, there is no clear tendency regarding the direction of the differences. For instance, while the ChatGPT models write more complex sentences and use more nominalizations, humans tend to use more modals and epistemic markers instead. The lexical diversity of humans is higher than that of ChatGPT-3 but lower than that of ChatGPT-4. While there is no difference in the use of discourse markers between humans and ChatGPT-3, ChatGPT-4 uses significantly fewer discourse markers.

We detect the expected positive correlations between the complexity ratings and the linguistic markers for sentence complexity ( \(r=0.16\) for depth, \(r=0.19\) for clauses) and nominalizations ( \(r=0.22\) ). However, we observe a negative correlation between the logic ratings and the discourse markers ( \(r=-0.14\) ), which counters our intuition that more frequent use of discourse indicators makes a text more logically coherent. However, this is in line with previous work: McNamara et al. 45 also find no indication that the use of cohesion indices such as discourse connectives correlates with high- and low-proficiency essays. Finally, we observe the expected positive correlation between the ratings for the vocabulary and the lexical diversity ( \(r=0.12\) ). All observed correlations are significant. However, we note that the strength of all these correlations is weak and that the significance itself should not be over-interpreted due to the large sample size.

Our results provide clear answers to the first two research questions that consider the quality of the generated essays: ChatGPT performs well at writing argumentative student essays and outperforms the quality of the human-written essays significantly. The ChatGPT-4 model has (at least) a large effect and is on average about one point better than humans on a seven-point Likert scale.

Regarding the third research question, we find that there are significant linguistic differences between humans and AI-generated content. The AI-generated essays are highly structured, which for instance is reflected by the identical beginnings of the concluding sections of all ChatGPT essays (‘In conclusion, [...]’). The initial sentences of each essay are also very similar starting with a general statement using the main concepts of the essay topics. Although this corresponds to the general structure that is sought after for argumentative essays, it is striking to see that the ChatGPT models are so rigid in realizing this, whereas the human-written essays are looser in representing the guideline on the linguistic surface. Moreover, the linguistic fingerprint has the counter-intuitive property that the use of discourse markers is negatively correlated with logical coherence. We believe that this might be due to the rigid structure of the generated essays: instead of using discourse markers, the AI models provide a clear logical structure by separating the different arguments into paragraphs, thereby reducing the need for discourse markers.

Our data also shows that hallucinations are not a problem in the setting of argumentative essay writing: the essay topics are not really about factual correctness, but rather about argumentation and critical reflection on general concepts which seem to be contained within the knowledge of the AI model. The stochastic nature of the language generation is well-suited for this kind of task, as different plausible arguments can be seen as a sampling from all available arguments for a topic. Nevertheless, we need to perform a more systematic study of the argumentative structures in order to better understand the difference in argumentation between human-written and ChatGPT-generated essay content. Moreover, we also cannot rule out that subtle hallucinations may have been overlooked during the ratings. There are also essays with a low rating for the criteria related to factual correctness, indicating that there might be cases where the AI models still have problems, even if they are, on average, better than the students.

One of the issues with evaluations of the recent large-language models is not accounting for the impact of tainted data when benchmarking such models. While it is certainly possible that the essays that were sourced by Stab and Gurevych 41 from the internet were part of the training data of the GPT models, the proprietary nature of the model training means that we cannot confirm this. However, we note that the generated essays did not resemble the corpus of human essays at all. Moreover, the topics of the essays are general in the sense that any human should be able to reason and write about these topics, just by understanding concepts like ‘cooperation’. Consequently, a taint on these general topics, i.e. the fact that they might be present in the data, is not only possible but is actually expected and unproblematic, as it relates to the capability of the models to learn about concepts, rather than the memorization of specific task solutions.

While we did everything to ensure a sound construct and a high validity of our study, there are still certain issues that may affect our conclusions. Most importantly, neither the writers of the essays, nor their raters, were English native speakers. However, the students purposefully used a forum for English writing frequented by native speakers to ensure the language and content quality of their essays. This indicates that the resulting essays are likely above average for non-native speakers, as they went through at least one round of revisions with the help of native speakers. The teachers were informed that part of the training would be in English to prevent registrations from people without English language skills. Moreover, the self-assessment of the language skills was only weakly correlated with the ratings, indicating that the threat to the soundness of our results is low. While we cannot definitively rule out that our results would not be reproducible with other human raters, the high inter-rater agreement indicates that this is unlikely.

However, our reliance on essays written by non-native speakers affects the external validity and the generalizability of our results. It is certainly possible that native speaking students would perform better in the criteria related to language skills, though it is unclear by how much. However, the language skills were particular strengths of the AI models, meaning that while the difference might be smaller, it is still reasonable to conclude that the AI models would have at least comparable performance to humans, but possibly still better performance, just with a smaller gap. While we cannot rule out a difference for the content-related criteria, we also see no strong argument why native speakers should have better arguments than non-native speakers. Thus, while our results might not fully translate to native speakers, we see no reason why aspects regarding the content should not be similar. Further, our results were obtained based on high-school-level essays. Native and non-native speakers with higher education degrees or experts in fields would likely also achieve a better performance, such that the difference in performance between the AI models and humans would likely also be smaller in such a setting.

We further note that the essay topics may not be an unbiased sample. While Stab and Gurevych 41 randomly sampled the essays from the writing feedback section of an essay forum, it is unclear whether the essays posted there are representative of the general population of essay topics. Nevertheless, we believe that the threat is fairly low because our results are consistent and do not seem to be influenced by certain topics. Further, we cannot with certainty conclude how our results generalize beyond ChatGPT-3 and ChatGPT-4 to similar models like Bard ( https://bard.google.com/?hl=en ), Alpaca, and Dolly. Especially the results for linguistic characteristics are hard to predict. However, since, to the best of our knowledge and given the proprietary nature of some of these models, the general approach to how these models work is similar, the trends for essay quality should hold for models with comparable size and training procedures.

Finally, we want to note that the current speed of progress with generative AI is extremely fast and we are studying moving targets: ChatGPT 3.5 and 4 today are already not the same as the models we studied. Due to a lack of transparency regarding the specific incremental changes, we cannot know or predict how this might affect our results.

Our results provide a strong indication that the fear many teaching professionals have is warranted: the way students do homework and teachers assess it needs to change in a world of generative AI models. For non-native speakers, our results show that when students want to maximize their essay grades, they could easily do so by relying on results from AI models like ChatGPT. The very strong performance of the AI models indicates that this might also be the case for native speakers, though the difference in language skills is probably smaller. However, this is not and cannot be the goal of education. Consequently, educators need to change how they approach homework. Instead of just assigning and grading essays, we need to reflect more on the output of AI tools regarding their reasoning and correctness. AI models need to be seen as an integral part of education, but one which requires careful reflection and training of critical thinking skills.

Furthermore, teachers need to adapt strategies for teaching writing skills: as with the use of calculators, it is necessary to critically reflect with the students on when and how to use those tools. For instance, constructivists 62 argue that learning is enhanced by the active design and creation of unique artifacts by students themselves. In the present case this means that, in the long term, educational objectives may need to be adjusted. This is analogous to teaching good arithmetic skills to younger students and then allowing and encouraging students to use calculators freely in later stages of education. Similarly, once a sound level of literacy has been achieved, strongly integrating AI models in lesson plans may no longer run counter to reasonable learning goals.

In terms of shedding light on the quality and structure of AI-generated essays, this paper makes an important contribution by offering an independent, large-scale and statistically sound account of essay quality, comparing human-written and AI-generated texts. By comparing different versions of ChatGPT, we also offer a glance into the development of these models over time in terms of their linguistic properties and the quality they exhibit. Our results show that while the language generated by ChatGPT is considered very good by humans, there are also notable structural differences, e.g. in the use of discourse markers. This demonstrates that an in-depth consideration not only of the capabilities of generative AI models is required (i.e. which tasks can they be used for), but also of the language they generate. For example, if we read many AI-generated texts that use fewer discourse markers, it raises the question if and how this would affect our human use of discourse markers. Understanding how AI-generated texts differ from human-written ones enables us to look for these differences, to reason about their potential impact, and to study and possibly mitigate this impact.

Data availability

The datasets generated during and/or analysed during the current study are available in the Zenodo repository, https://doi.org/10.5281/zenodo.8343644

Code availability

All materials are available online in form of a replication package that contains the data and the analysis code, https://doi.org/10.5281/zenodo.8343644 .


Funding

Open Access funding enabled and organized by Projekt DEAL.

Author information

Authors and Affiliations

Faculty of Computer Science and Mathematics, University of Passau, Passau, Germany

Steffen Herbold, Annette Hautli-Janisz, Ute Heuer, Zlata Kikteva & Alexander Trautsch


Contributions

S.H., A.HJ., and U.H. conceived the experiment; S.H., A.HJ., and Z.K. collected the essays from ChatGPT; U.H. recruited the study participants; S.H., A.HJ., U.H. and A.T. conducted the training session and questionnaire; all authors contributed to the analysis of the results, the writing of the manuscript, and review of the manuscript.

Corresponding author

Correspondence to Steffen Herbold .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

  • Supplementary Information 1
  • Supplementary Information 2
  • Supplementary Information 3
  • Supplementary Tables
  • Supplementary Figures

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Herbold, S., Hautli-Janisz, A., Heuer, U. et al. A large-scale comparison of human-written versus ChatGPT-generated essays. Sci Rep 13 , 18617 (2023). https://doi.org/10.1038/s41598-023-45644-9

Download citation

Received: 01 June 2023

Accepted: 22 October 2023

Published: 30 October 2023

DOI: https://doi.org/10.1038/s41598-023-45644-9






PrepScholar

Can You Use ChatGPT for Your College Essay?

ChatGPT has become a popular topic of conversation since its official launch in November 2022. The artificial intelligence (AI) chatbot can be used for all sorts of things, like having conversations, answering questions, and even crafting complete pieces of writing.

If you’re applying to college, you might be wondering about ChatGPT’s potential role in college admissions. Should you use a ChatGPT college essay in your application?

By the time you finish reading this article, you’ll know much more about ChatGPT, including how students can use it responsibly and if it’s a good idea to use ChatGPT on college essays . We’ll answer all your questions, like:

  • What is ChatGPT and why are schools talking about it?
  • What are the good and bad aspects of ChatGPT?
  • Should you use ChatGPT for college essays and applications?
  • Can colleges detect ChatGPT?
  • Are there other tools and strategies that students can use, instead?

We’ve got a lot to cover, so let’s get started!


Schools and colleges are worried about how new AI technology affects how students learn. (Don't worry. Robots aren't replacing your teachers...yet.)

What Is ChatGPT and Why Are Schools Talking About It?

ChatGPT (short for “Chat Generative Pre-trained Transformer”) is a chatbot created by OpenAI , an artificial intelligence research company. ChatGPT can be used for various tasks, like having human-like conversations, answering questions, giving recommendations, translating words and phrases—and writing things like essays. 

In order to do this, ChatGPT uses a neural network that’s been trained on thousands of resources to predict relationships between words. When you give ChatGPT a task, it uses that knowledge base to interpret your input or query. It then analyzes its data banks to predict the combinations of words that will best answer your question. 

So while ChatGPT might seem like it’s thinking, it’s actually pulling information from hundreds of thousands of resources , then answering your questions by looking for patterns in that data and predicting which words come next.  
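To make that idea concrete, here is a deliberately tiny sketch of our own (not OpenAI’s code) that “learns” which word tends to follow which in a miniature training text and then predicts the most frequent continuation. ChatGPT’s neural network is vastly more sophisticated, but the core move, predicting the next word from patterns in existing text, is the same.

```python
from collections import Counter, defaultdict

# Toy illustration only: count which word follows which in a tiny "training" corpus,
# then predict the most frequent follower. Real models use neural networks over vast
# datasets, but the basic idea -- predict the next word from patterns in past text --
# is the same.
corpus = "the essay was long . the essay was boring . the lecture was long".split()

followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word`, or '?' if it was never seen."""
    if word not in followers:
        return "?"
    return followers[word].most_common(1)[0][0]

print(predict_next("essay"))  # 'was'   -- the only continuation seen after "essay"
print(predict_next("the"))    # 'essay' -- seen twice after "the", vs. 'lecture' once
```

Swap in a much bigger corpus and the predictions quickly start to resemble the autocomplete on your phone; scale the same idea up by many orders of magnitude, with a neural network instead of a lookup table, and you get something closer to ChatGPT.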

Why Schools Are Concerned About ChatGPT

Unsurprisingly, schools are worried about ChatGPT and its misuse, especially in terms of academic dishonesty and plagiarism . Most schools, including colleges, require students’ work to be 100% their own. That’s because taking someone else’s ideas and passing them off as your own is stealing someone else’s intellectual property and misrepresenting your skills. 

The problem with ChatGPT from schools’ perspective is that it does the writing and research for you, then gives you the final product. In other words, you’re not doing the work it takes to complete an assignment when you’re using ChatGPT , which falls under schools’ plagiarism and dishonesty policies.  

Colleges are also concerned with how ChatGPT will negatively affect students’ critical thinking, research, and writing skills . Essays and other writing assignments are used to measure students’ mastery of the material, and if students submit ChatGPT college essays, teachers will just be giving feedback on an AI’s writing…which doesn’t help the student learn and grow. 

Beyond that, knowing how to write well is an important skill people need to be successful throughout life. Schools believe that if students rely on ChatGPT to write their essays, they’re doing more than just plagiarizing—they’re impacting their ability to succeed in their future careers. 

Many Schools Have Already Banned ChatGPT

Schools have responded surprisingly quickly to AI use, including ChatGPT. Worries about academic dishonesty, plagiarism, and mis/disinformation have led many high schools and colleges to ban the use of ChatGPT . Some schools have begun using AI-detection software for assignment submissions, and some have gone so far as to block students from using ChatGPT on their internet networks. 

It’s likely that schools will begin revising their academic honesty and plagiarism policies to address the use of AI tools like ChatGPT. You’ll want to stay up-to-date with your schools’ policies. 


ChatGPT is pretty amazing...but it's not a great tool for writing college essays. Here's why.

ChatGPT: College Admissions and Entrance Essays

College admissions essays—also called personal statements—ask students to explore important events, experiences, and ideas from their lives. A great entrance essay will explain what makes you you !  

ChatGPT is a machine that doesn’t know and can’t understand your experiences. That means using ChatGPT to write your admissions essays isn’t just unethical. It actually puts you at a disadvantage because ChatGPT can’t adequately showcase what it means to be you. 

Let’s take a look at four ways ChatGPT negatively impacts college admissions essays.

#1: ChatGPT Lacks Insight

We recommend students use unexpected or slightly unusual topics because they help admissions committees learn more about you and what makes you unique. The chatbot doesn’t know any of that, so nothing ChatGPT writes can accurately reflect your experience, passions, or goals for the future.

Because ChatGPT will make guesses about who you are, it won’t be able to share what makes you unique in a way that resonates with readers. And since that’s what admissions counselors care about, a ChatGPT college essay could negatively impact an otherwise strong application.  

#2: ChatGPT Might Plagiarize 

Writing about experiences that many other people have had isn’t a very strong approach to take for entrance essays . After all, you don’t want to blend in—you want to stand out! 

If you write your essay yourself and include key details about your past experiences and future goals, there’s little risk that you’ll write the same essay as someone else. But if you use ChatGPT—who’s to say someone else won’t, too? Since ChatGPT uses predictive guesses to write essays, there’s a good chance the text it uses in your essay already appeared in someone else’s.  

Additionally, ChatGPT learns from every single interaction it has. So even if your essay isn’t plagiarized, it’s now in the system. That means the next person who uses ChatGPT to write their essay may end up with yours. You’ll still be on the hook for submitting a ChatGPT college essay, and someone else will be in trouble, too.

#3: ChatGPT Doesn’t Understand Emotion 

Keep in mind that ChatGPT can’t experience or imitate emotions, and so its writing samples lack, well, a human touch ! 

A great entrance essay will explore experiences or topics you’re genuinely excited about or proud of . This is your chance to show your chosen schools what you’ve accomplished and how you’ll continue growing and learning, and an essay without emotion would be odd considering that these should be real, lived experiences and passions you have!

#4: ChatGPT Produced Mediocre Results

If you’re still curious what would happen if you submitted a ChatGPT college essay with your application, you’re in luck. Both Business Insider and Forbes asked ChatGPT to write a couple of college entrance essays, and then they sent them to college admissions readers to get their thoughts. 

The readers agreed that the essays would probably pass as being written by real students—assuming admissions committees didn’t use AI detection software—but that they both were about what a “very mediocre, perhaps even a middle school, student would produce.” The admissions professionals agreed that the essays probably wouldn’t perform very well with entrance committees, especially at more selective schools.  

That’s not exactly the reaction you want when an admission committee reads your application materials! So, when it comes to ChatGPT college admissions, it’s best to steer clear and write your admission materials by yourself. 


Can Colleges Detect ChatGPT?

We’ve already explained why it’s not a great idea to use ChatGPT to write your college essays and applications , but you may still be wondering: can colleges detect ChatGPT? 

In short, yes, they can! 

Software Can Detect ChatGPT

As technology improves and increases the risk of academic dishonesty, plagiarism, and mis/disinformation, software that can detect such technology is improving, too. For instance, OpenAI, the same company that built ChatGPT, is working on a text classifier that can tell the difference between AI-written text and human-written text .  

Turnitin, one of the most popular plagiarism detectors used by high schools and universities, also recently developed the AI Innovation Lab —a detection software designed to flag submissions that have used AI tools like ChatGPT. Turnitin says that this tool works with 98% confidence in detecting AI writing. 

Plagiarism and AI companies aren’t the only ones interested in AI-detection software. A 22-year-old computer science student at Princeton created an app to detect ChatGPT writing, called GPTZero. This software works by measuring the complexity of ideas and variety of sentence structures.
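To get a rough feel for what “variety of sentence structures” means in practice, here is a toy sketch of our own (not GPTZero’s or Turnitin’s actual method) that measures one crude signal: how much sentence length varies across a passage. Human writing tends to mix short and long sentences more than AI-generated text typically does.

```python
import re
import statistics

# Crude illustration only (not any real detector's algorithm): one signal detection
# tools describe is how much sentence length and structure vary across a passage.
def sentence_length_variation(text: str) -> float:
    """Return the standard deviation of sentence lengths, measured in words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) if lengths else 0.0

human_like = "I froze. The shelter was loud, chaotic, and strangely hopeful, and I had no idea where to start."
uniform = "I worked at the shelter every week. I learned many useful skills there. I enjoyed helping the animals a lot."

print(sentence_length_variation(human_like))  # larger value: a very short sentence next to a much longer one
print(sentence_length_variation(uniform))     # smaller value: all three sentences are about the same length
```

Real detectors combine many such signals, which is one reason their verdicts are probabilistic rather than definitive.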

Human Readers Can Detect ChatGPT 

It’s also worth keeping in mind that teachers can spot the use of ChatGPT themselves, even if it isn’t confirmed by a software detector. For example, if you’ve turned in one or two essays to your teacher already, they’re probably familiar with your unique writing style. If you submit a college essay draft that uses totally different vocabulary, sentence structures, and figures of speech, your teacher will likely take note.

Additionally, admissions committees and readers may be able to spot ChatGPT writing, too. ChatGPT (and AI writing in general) tends to use simpler sentence structures with less variation, which can make it easier to tell if you’ve submitted a ChatGPT college essay. These professionals also read thousands of essays every year, which means they know what a typical essay reads like. You want your college essay to catch their attention…but not because you used AI software!


If you use ChatGPT responsibly, you can be as happy as these kids.

Pros and Cons of ChatGPT: College Admissions Edition

ChatGPT is a brand new technology, which means we’re still learning about the ways it can benefit us. It’s important to think about the pros and cons of any new tool…and that includes artificial intelligence!

Let’s look at some of the good—and not-so-good—aspects of ChatGPT below. 

ChatGPT: The Good

It may seem like we’re focused on just the negatives of using ChatGPT in this article, but we’re willing to admit that the chatbot isn’t all bad. In fact, it can be a very useful tool for learning if used responsibly !

Like we already mentioned, students shouldn’t use ChatGPT to write entire essays or assignments. They can use it, though, as a learning tool alongside their own critical thinking and writing skills.

Students can use ChatGPT responsibly to:

  • Learn more about a topic . It’s a great place to get started for general knowledge and ideas about most subjects.
  • Find reputable and relevant sources on a topic. Students can ask ChatGPT for names and information about leading scholars, relevant websites and databases, and more. 
  • Brainstorm ideas for assignments. Students can share the ideas they already have with ChatGPT, and in return, the chatbot can suggest ideas for further exploration and even organization of their points.
  • Check work (that they’ve written themselves!) for errors or clarity. This is similar to how spell- and grammar-checking software is used. ChatGPT may be even better than some competitors for this, because students can actually ask ChatGPT to explain the errors and their solutions—not just to fix them.

Before you use ChatGPT—even for the tasks mentioned above—you should talk to your teacher or school about their AI and academic dishonesty policies. It’s also a good idea to include an acknowledgement that you used ChatGPT with an explanation of its use. 


This guy made some bad decisions using ChatGPT. Don't be this guy.

ChatGPT: The Bad

The first model of ChatGPT (GPT-3.5) was formally introduced to the public in November 2022, and the newer model (GPT-4) in March 2023. So, it’s still very new and there’s a lot of room for improvement .  

There are many misconceptions about ChatGPT. One of the most extreme is that the AI is all-knowing and can make its own decisions. Another is that ChatGPT is a search engine that, when asked a question, can just surf the web for timely, relevant resources and give you all of that information. Both of these beliefs are incorrect because ChatGPT is limited to the information it’s been given by OpenAI . 

Remember how the ‘PT’ in ChatGPT stands for “Pre-trained”? That means that every time OpenAI gives ChatGPT an update, it’s given more information to work with (and so it has more information to share with you). In other words, it’s “trained” on information so it can give you the most accurate and relevant responses possible—but that information can be limited and biased . Ultimately, humans at OpenAI decide what pieces of information to share with ChatGPT, so it’s only as accurate and reliable as the sources it has access to.

For example, if you were to ask ChatGPT-3.5 what notable headlines made the news last week, it would respond that it doesn’t have access to that information because its most recent update was in September 2021!

You’re probably already familiar with how easy it can be to come across misleading and untrue information on the internet. Since ChatGPT can’t tell the difference between what is true and what isn’t, it’s up to the humans at OpenAI to make sure only accurate and true information is given to the chatbot. This leaves room for human error, and users of ChatGPT have to keep that in mind when using and learning from the chatbot.

These are just the most obvious problems with ChatGPT. Some other problems with the chatbot include:

  • A lack of common sense. ChatGPT can create seemingly sensical responses to many questions and topics, but it doesn’t have common sense or complete background knowledge.
  • A lack of empathy. ChatGPT doesn’t have emotions, so it can’t understand them, either. 
  • An inability to make decisions or problem solve . While the chatbot can complete basic tasks like answering questions or giving recommendations, it can’t solve complex tasks. 

While there are some great uses for ChatGPT, it’s certainly not without its flaws.


Our bootcamp can help you put together amazing college essays that help you get into your dream schools—no AI necessary.

What Other Tools and Strategies Can Help Students Besides ChatGPT?

While it’s not a good idea to use ChatGPT for college admissions materials, it’s not the only tool available to help students with college essays and assignments. 

One of the best strategies students can use to write good essays is to make sure they give themselves plenty of time for the assignment. The writing process includes much more than just drafting! Having time to brainstorm ideas, write out a draft, revise it for clarity and completeness, and polish it makes for a much stronger essay. 

Teachers are another great resource students can use, especially for college application essays. Asking a teacher (or two!) for feedback can really help students improve the focus, clarity, and correctness of an essay. It’s also a more interactive way to learn—being able to sit down with a teacher to talk about their feedback can be much more engaging than using other tools. 

Using expert resources during the essay writing process can make a big difference, too. Our article outlines a complete list of strategies for students writing college admission essays. It breaks down what the Common Application essay is, gives tips for choosing the best essay topic, offers strategies for staying focused and being specific, and more.  

You can also get help from people who know the college admissions process best, like former admissions counselors. PrepScholar’s Admissions Bootcamp guides you through the entire application process , and you’ll get insider tips and tricks from real-life admissions counselors that’ll make your applications stand out. Even better, our bootcamp includes step-by-step essay writing guidance , so you can get the help you need to make sure your essay is perfect.  

If you’re hoping for more technological help, Grammarly is another AI tool that can check writing for correctness. It can correct things like misused and misspelled words and grammar mistakes, and it can improve your tone and style. 

It’s also widely available across multiple platforms through a Windows desktop app, an Android and iOS app, and a Google Chrome extension. And since Grammarly just checks your writing without doing any of the work for you, it’s totally safe to use on your college essays. 

The Bottom Line: ChatGPT College Admissions and Essays

ChatGPT will continue to be a popular discussion topic as it continues evolving. You can expect your chosen schools to address ChatGPT and other AI tools in their academic honesty and plagiarism policies in the near future—and maybe even to restrict or ban the use of the chatbot for school admissions and assignments.

As AI continues transforming, so will AI-detection. The goal is to make sure that AI is used responsibly by students so that they’re avoiding plagiarism and building their research, writing, and critical thinking skills. There are some great uses for ChatGPT when used responsibly, but you should always check with your teachers and schools beforehand.

ChatGPT’s “bad” aspects still need improving, and that’s going to take some time. Be aware that the chatbot isn’t even close to perfect, and it needs to be fact-checked just like other sources of information.

Similarly to other school assignments, don’t submit a ChatGPT college essay for college applications, either. College entrance essays should outline unique and interesting personal experiences and ideas, and those can only come from you.  

Just because ChatGPT isn’t a good idea doesn’t mean there aren’t resources to help you put together a great college essay. There are many other tools and strategies you can use instead of ChatGPT , many of which have been around for longer and offer better feedback. 


What’s Next?

Ready to write your college essays the old-fashioned way? Start here with our comprehensive guide to the admissions essays. 

Most students have to submit essays as part of their Common Application . Here's a complete breakdown of the Common App prompts —and how to answer them. 

The most common type of essay answers the "why this college?" prompt. We've got an expert breakdown that shows you how to write a killer response , step by step. 

Want to write the perfect college application essay?   We can help.   Your dedicated PrepScholar Admissions counselor will help you craft your perfect college essay, from the ground up. We learn your background and interests, brainstorm essay topics, and walk you through the essay drafting process, step-by-step. At the end, you'll have a unique essay to proudly submit to colleges.   Don't leave your college application to chance. Find out more about PrepScholar Admissions now:

Ashley Sufflé Robinson has a Ph.D. in 19th Century English Literature. As a content writer for PrepScholar, Ashley is passionate about giving college-bound students the in-depth information they need to get into the school of their dreams.


The End of High-School English

I’ve been teaching English for 12 years, and I’m astounded by what ChatGPT can produce.


Teenagers have always found ways around doing the hard work of actual learning. CliffsNotes dates back to the 1950s, “No Fear Shakespeare” puts the playwright into modern English, YouTube offers literary analysis and historical explication from numerous amateurs and professionals, and so on. For as long as those shortcuts have existed, however, one big part of education has remained inescapable: writing. Barring outright plagiarism, students have always arrived at that moment when they’re on their own with a blank page, staring down a blinking cursor, the essay waiting to be written.

Now that might be about to change. The arrival of OpenAI’s ChatGPT, a program that generates sophisticated text in response to any prompt you can imagine, may signal the end of writing assignments altogether—and maybe even the end of writing as a gatekeeper, a metric for intelligence, a teachable skill.

If you’re looking for historical analogues, this would be like the printing press, the steam drill, and the light bulb having a baby, and that baby having access to the entire corpus of human knowledge and understanding. My life—and the lives of thousands of other teachers and professors, tutors and administrators—is about to drastically change.

I teach a variety of humanities classes (literature, philosophy, religion, history) at a small independent high school in the San Francisco Bay Area. My classes tend to have about 15 students, their ages ranging from 16 to 18. This semester I am lucky enough to be teaching writers like James Baldwin, Gloria Anzaldúa, Herman Melville, Mohsin Hamid, Virginia Held. I recognize that it’s a privilege to have relatively small classes that can explore material like this at all. But at the end of the day, kids are always kids. I’m sure you will be absolutely shocked to hear that not all teenagers are, in fact, so interested in having their mind lit on fire by Anzaldúa’s radical ideas about transcending binaries, or Ishmael’s metaphysics in Moby-Dick .

To those students, I have always said: You may not be interested in poetry or civics, but no matter what you end up doing with your life, a basic competence in writing is an absolutely essential skill—whether it’s for college admissions, writing a cover letter when applying for a job, or just writing an email to your boss.


I’ve also long held, for those who are interested in writing, that you need to learn the basic rules of good writing before you can start breaking them—that, like Picasso, you have to learn how to reliably fulfill an audience’s expectations before you get to start putting eyeballs in people’s ears and things.

I don’t know if either of those things is true anymore. It’s no longer obvious to me that my teenagers actually will need to develop this basic skill, or if the logic still holds that the fundamentals are necessary for experimentation.

Let me be candid (with apologies to all of my current and former students): What GPT can produce right now is better than the large majority of writing seen by your average teacher or professor. Over the past few days, I’ve given it a number of different prompts. And even if the bot’s results don’t exactly give you goosebumps, they do a more-than-adequate job of fulfilling a task.

I mean, look at this: I asked the program to write me a playful, sophisticated, emotional 600-word college-admissions essay about how my experience volunteering at my local SPCA had prepared me for the academic rigor of Stanford. Here’s an excerpt from its response:

In addition to cleaning, I also had the opportunity to interact with the animals. I was amazed at the transformation I saw in some of the pets who had been neglected or abused. With patience and care, they blossomed into playful and affectionate companions who were eager to give and receive love. I was also able to witness firsthand the process of selecting the right pet for the right family. Although it was bittersweet to see some animals leave the shelter, I knew that they were going to a loving home, and that was the best thing for them.

It also managed to compose a convincing 400-word “friendly” cover letter for an application to be a manager at Starbucks. But most jaw-dropping of all, on a personal level: It made quick work out of an assignment I’ve always considered absolutely “unhackable.” In January, my junior English students will begin writing an independent research paper, 12 to 18 pages, on two great literary works of their own choosing—a tradition at our school. Their goal is to place the texts in conversation with each other and find a thread that connects them. Some students will struggle to find any way to bring them together. We spend two months on the paper, putting it together piece by piece.

I’ve fed GPT a handful of pairs that students have worked with in recent years: Beloved and Hamlet, The Handmaid’s Tale and Parable of the Sower, Homer’s The Odyssey and Dante’s Inferno. GPT brought them together instantly, effortlessly, uncannily: memory, guilt, revenge, justice, the individual versus the collective, freedom of choice, societal oppression. The technology doesn’t go much beyond the surface, nor does it successfully integrate quotations from the original texts, but the ideas presented were on-target—more than enough to get any student rolling without much legwork.

It goes further. Last night, I received an essay draft from a student. I passed it along to OpenAI’s bots. “Can you fix this essay up and make it better?” Turns out, it could. It kept the student’s words intact but employed them more gracefully; it removed the clutter so the ideas were able to shine through. It was like magic.

I’ve been teaching for about 12 years: first as a TA in grad school, then as an adjunct professor at various public and private universities, and finally in high school. From my experience, American high-school students can be roughly split into three categories. The bottom group is learning to master grammar rules, punctuation, basic comprehension, and legibility. The middle group mostly has that stuff down and is working on argument and organization—arranging sentences within paragraphs and paragraphs within an essay. Then there’s a third group that has the luxury of focusing on things such as tone, rhythm, variety, mellifluence.

Whether someone is writing a five-paragraph essay or a 500-page book, these are the building blocks not only of good writing but of writing as a tool, as a means of efficiently and effectively communicating information. And because learning writing is an iterative process, students spend countless hours developing the skill in elementary school, middle school, high school, and then finally (as thousands of underpaid adjuncts teaching freshman comp will attest) college. Many students (as those same adjuncts will attest) remain in the bottom group, despite their teachers’ efforts; most of the rest find some uneasy equilibrium in the second category.

Working with these students makes up a large percentage of every English teacher’s job. It also supports a cottage industry of professional development, trademarked methods buried in acronyms (ICE! PIE! EDIT! MEAT!), and private writing tutors charging $100-plus an hour. So for those observers who are saying, Well, good, all of these things are overdue for change—“this will lead to much-needed education reform,” a former colleague told me—this dismissal elides the heavy toll this sudden transformation is going to take on education, extending along its many tentacles (standardized testing, admissions, educational software, etc.).

Perhaps there are reasons for optimism, if you push all this aside. Maybe every student is now immediately launched into that third category: The rudiments of writing will be considered a given, and every student will have direct access to the finer aspects of the enterprise. Whatever is inimitable within them can be made conspicuous, freed from the troublesome mechanics of comma splices, subject-verb disagreement, and dangling modifiers.

But again, the majority of students do not see writing as a worthwhile skill to cultivate—just like I, sitting with my coffee and book , rereading Moby-Dick , do not consider it worthwhile to learn, say, video editing. They have no interest in exploring nuance in tone and rhythm; they will forever roll their eyes at me when I try to communicate the subtle difference, when writing an appositive phrase, between using commas, parentheses, or (the connoisseur’s choice) the em dash.

Which is why I wonder if this may be the end of using writing as a benchmark for aptitude and intelligence. After all, what is a cover letter? Its primary purpose isn’t to communicate “I already know how to do this job” (because of course I don’t) but rather “I am competent and trustworthy and can clearly express to you why I would be a good candidate for this job.” What is a written exam? Its primary signal isn’t “I memorized a bunch of information” but rather “I can express that information clearly in writing.” Many teachers have reacted to ChatGPT by imagining how to give writing assignments now—maybe they should be written out by hand, or given only in class—but that seems to me shortsighted. The question isn’t “How will we get around this?” but rather “Is this still worth doing?”

I believe my most essential tasks, as a teacher, are helping my students think critically, disagree respectfully, argue carefully and flexibly, and understand their mind and the world around them. Unconventional, improvisatory, expressive, meta-cognitive writing can be an extraordinary vehicle for those things. But if most contemporary writing pedagogy is necessarily focused on helping students master the basics, what happens when a computer can do it for us? Is this moment more like the invention of the calculator, saving me from the tedium of long division, or more like the invention of the player piano, robbing us of what can be communicated only through human emotion?

Maybe the moment when I felt most caught between these two thoughts was when I asked GPT a question from an exam I give in my Eastern-philosophy course, something I’ve never imagined a student could find outside assistance to answer: “Explain the Madhyamaka Buddhist idea that all phenomena lack inherent existence, using a funny, specific example.”

ChatGPT wrote back, as ever, instantly:

The Madhyamaka Buddhist idea that all phenomena lack inherent existence can be illustrated with a funny example: let’s say you have a pet hamster. You might think that your hamster has its own inherent existence, but in reality, your hamster is made up of many different elements such as its fur, its bones, its muscles, and its brain. None of these elements exist independently or inherently on their own, so your hamster itself does not have any inherent existence either. The same goes for all phenomena—everything is made up of various components that don’t have inherent existence, so nothing has any inherent existence either.

Everything is made up; it’s true. The essay as a literary form? Made up. Grammatical rules as markers of intelligence? Writing itself as a technology? Made up. Starting now, OpenAI is forcing us to ask foundational questions about whether any of those things are worth keeping around.

New bot ChatGPT will force colleges to get creative to prevent cheating, experts say

After its viral launch last week, the chatbot ChatGPT was lauded online by some as a dramatic step forward for artificial intelligence and the potential future of web search.

But with such praise also came concern regarding its potential usage in academic settings. Could the chatbot, which provides coherent, quirky and conversational responses to simple language inquiries, inspire more students to cheat?

Students have been able to cheat on assignments using the internet for decades, giving rise to tools meant to check if their work was original. But the fear now is that ChatGPT could render those resources obsolete.

Already, some people online have tested out whether it's possible to have the bot complete an assignment. "holyyyy, solved my computer networks assignment using chatGPT," one person, who later clarified the assignment was old, tweeted . Others suggested that its existence could result in the death of the college essay. One technologist went as far as saying that with ChatGPT, "College as we know it will cease to exist."

Artificial intelligence company OpenAI, which developed ChatGPT , did not immediately respond to a request for comment regarding cheating concerns.

However, several experts who teach in the field of AI and humanities said the chatbot, while impressive, is not something they’re ready to sound the alarm about when it comes to possible widespread student cheating.

"We’re not there, but we’re also not that far away," said Andrew Piper, a professor of language, literatures and culture and a professor of AI and storytelling at McGill University. "We’re definitely not at the stage of like, out-of-the-box, it’ll write a bunch of student essays and no one will be able to tell the difference."

Piper and other experts who spoke with NBC News likened the fear around cheating and ChatGPT to concerns that arose when the calculator was invented, when people thought it would be the death of humans learning math.

Lauren Klein, an associate professor in the Departments of English and Quantitative Theory and Methods at Emory University, even compared the panic to the philosopher Plato’s fears that writing would dissolve human memory.

“There’s always been this concern that technologies will do away with what people do best, and the reality is that people have had to learn how to use these technologies to enhance what they do best,” Klein said.


Academic institutions will need to get creative and find ways to integrate new technologies like ChatGPT into their curriculum just like they did during the rise of the calculator, Piper noted.

In reality, AI tools like ChatGPT could actually be used to enhance education, according to Paul Fyfe, an associate professor of English at North Carolina State University.

He said there’s plenty of room for collaboration between AI and educators.

“It’s important to be talking about this right now and to bring students into the conversation," Fyfe said. "Rather than try to legislate from the get-go that this is strange and scary, therefore we need to shut it down."

And some teachers are already embracing AI programs in the classroom.

Piper, who runs .txtlab, a research laboratory for artificial intelligence and storytelling, said he’s had students analyze AI writing and found they can often tell which papers were written by a machine and which were written by a human.

As for educators who are concerned about the rise of AI, Fyfe and Piper said the technology is already used in many facets of education.

Computer-assisted writing tools, such as Grammarly or Google Doc’s Smart Compose, already exist — and have long been utilized by many students. Platforms like Grammarly and Chegg also offer plagiarism checker tools, so both students and teachers can assess if an essay has been, in part or in total, lifted from somewhere else. A spokesperson for Grammarly did not return a request for comment. A spokesperson for Chegg declined to comment.

Those who spoke with NBC News said they're not aware of any technology that detects if an AI wrote an essay, but they predict that someone will soon capitalize on building that technology.

As of right now, Piper said the best defense against AI essays is teachers getting to know their students and how they write in order to catch a discrepancy in the work they're turning in.

When an AI does reach the level of meeting all the requirements of academic assignments and if students use that technology to coast through college, Piper warned that could be a major detriment to students' education.

For now, he suggested an older technology to combat fears of students using ChatGPT to cheat.

"It will reinvigorate the love of pen and paper," he said.


Kalhan Rosenblatt is a reporter covering youth and internet culture for NBC News, based in New York.


ChatGPT essay cheats are a menace to us all


Pilita Clark


The other day I met a British academic who said something about artificial intelligence that made my jaw drop.

The number of students using AI tools like ChatGPT to write their papers was a much bigger problem than the public was being told, this person said.

AI cheating at their institution was now so rife that large numbers of students had been expelled for academic misconduct — to the point that some courses had lost most of a year’s intake. “I’ve heard similar figures from a few universities,” the academic told me. 

Spotting suspicious essays could be easy, because when students were asked why they had included certain terms or data sources not mentioned on the course, they were baffled. “They have clearly never even heard of some of the terms that turn up in their essays.” 

But detection is only half the battle. Getting administrators to address the problem can be fraught, especially when the cheaters are international students who pay higher fees than locals. Because universities rely heavily on those fees, some administrators take a dim view of efforts to expose the problem. Or as this person put it, “whistleblowing is career-threatening”.

There is more at stake here than the injustice of cheats getting an advantage over honest students. Consider the prospect of allegedly expert graduates heading out into the world and being recruited into organisations, be it a health service or a military, where they are put into positions for which they are underqualified. 

So how widespread is the cheating problem?

Panic about ChatGPT transforming educational landscapes took off as soon as the tool was launched in November 2022 and since then, the technology has only advanced. As I type these words, colleagues at the Financial Times have reported that OpenAI, which created ChatGPT, and Meta are set to release souped-up AI models capable of reasoning and planning.

But AI’s exact impact on classrooms is unclear. 

In the US, Stanford University researchers said last year that cheating rates did not appear to have been affected by AI. Up to 70 per cent of high school students have long confessed to some form of cheating and nearly a year after ChatGPT’s arrival that proportion had not changed.

At universities, research shows half of students are regular generative AI users — not necessarily to cheat — but only about 12 per cent use it daily.

When it comes to the number of student essays written with the help of AI, rates appear relatively steady, says Turnitin, a plagiarism detection software group that has a tool for checking generative AI use.

It said that students have submitted more than 22mn papers in the past 12 months that show signs of AI help, which was 11 per cent of the total it reviewed. More than 6mn papers, or 3 per cent of the total, contained at least 80 per cent of AI writing.

That is a lot of papers. But the percentage of AI writing is virtually the same as what Turnitin found last year when it carried out a similar assessment.

“AI usage rates have been stable,” says Chris Caren, Turnitin’s chief executive. And as he told me last week, just because you are using ChatGPT does not necessarily mean you are cheating.

“Some teachers and faculty allow some level of AI assistance in writing an essay, but they also want that properly cited,” he says. “AI can be incredibly useful for doing research and brainstorming ideas.”

I’m sure this is correct. It is also true that university faculty are increasingly using AI to help write lesson plans and I know of some who have tested it to mark essays — unsuccessfully.

But I still find it worrying to think a sizeable number of students are using tools like ChatGPT in a way that is potentially risky for employers and wider society.

Some universities are already increasing face-to-face assessments to detect and discourage AI cheating. I am sure that will continue, but it would also be useful if academics were encouraged to expose the problem, not deterred from trying to fix it. As the scholar I spoke to put it, the purpose of going to university is to learn how to learn. These institutions are supposed to teach you to think for yourself and evaluate evidence, not just recite facts and figures.

Anyone who outsources their thinking to a machine is ultimately going to hurt themselves the most. 


How to Make ChatGPT Prompts Work for You

ChatGPT can be a useful tool for queries, so find out some of the best prompts you can use professionally or personally to get a response from ChatGPT.


ChatGPT is a form of artificial intelligence trained to answer users’ questions in a chat-like manner. Find out more about ChatGPT and how to prompt it to optimise the answers you need.

What is ChatGPT?

ChatGPT is a chatbot that allows you to interact with an artificial intelligence interface in a chat-like way to create a dialogue of responses to your questions. It can talk to you in a way that sounds like natural language to develop answers to simple prompts like a question or those that are more complex, such as asking it to write an essay or a fictional story, helping you brainstorm ideas, or translating from one language to another.

OpenAI is the AI research and deployment company that created ChatGPT. Its interface allows you to ask questions easily to get a response from the website’s artificial intelligence software.

If you prefer another app instead of ChatGPT, you can explore other artificial intelligence apps, such as Bard by Google or Microsoft’s Bing Chat, to try different options.

How can you tailor ChatGPT prompts to work for you?

Optimising your interactions with ChatGPT by knowing which questions to ask or how to ask them will help you get the most out of the chatbot’s responses. Prompts should be clear to get the best information from ChatGPT, and you’ll want to include as much information as possible to focus the AI interface on returning relevant information for your query.

You’ll also want to spell out the key ingredients of the answer you’re after: the audience the response should address, the context around the question you’re asking, and how long you want the response to be.
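As a rough illustration of assembling those ingredients (the helper below is a hypothetical sketch, not part of any ChatGPT product), a prompt that names the task, audience, context, and length might be built like this in Python:

    def build_prompt(task: str, audience: str, context: str, length: str) -> str:
        """Assemble a clear, focused ChatGPT prompt from its key ingredients."""
        return (
            f"{task}\n"
            f"Audience: {audience}\n"
            f"Context: {context}\n"
            f"Desired length: {length}\n"
            "Keep the response focused on the context above."
        )

    # Example usage
    print(build_prompt(
        task="Write an email welcoming new customers to our bakery.",
        audience="First-time customers who joined our mailing list this week",
        context="Family-run bakery known for sourdough; a 10% welcome discount applies",
        length="About 120 words",
    ))

The more of those fields you fill in, the less the model has to guess about what you want.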

ChatGPT prompts to get you started

The best way to get the results you want is to ask good questions or give clear statements that elicit the response you’re looking for, whether you need ChatGPT prompts for business, school, or personal help. Here are some prompt openers that can help you focus your question so you get the answers you need.

Tell me the best or worst…

ChatGPT will generate a response to almost any question, but it sometimes needs guardrails to keep those responses focused.

Try prompts like “Tell me what the best restaurants are in [city]” or “What are the worst sales tactics that I should avoid?”

These prompts can narrow ChatGPT's focus and give you a useful response. You can refine your prompt by asking for the type of restaurant or food you prefer and specifying the industry, customer base, or products in your sales role.

Write me a…

One way to use ChatGPT is to brainstorm ideas you can build on. The ChatGPT response can give you a starting point for the next steps in an idea.

You can prompt ChatGPT with “Write me a sales pitch for [my product]” or “Write me an email to welcome new customers.”

It’s important to review what ChatGPT generates, since you often won’t be able to use its responses verbatim, but they can give you a good starting point for your work.

Pretend you’re a…

ChatGPT can pretend to be someone else and assume that person’s role when it responds to your query.

These types of prompts can help you compare the points of view of different kinds of people, so you can get a better handle on a particular subject or on how someone is likely to react.

Try a prompt like “Pretend you’re a journalist and ask me about [my company]” to prepare for a media interview. You could also ask, “Act like a manager responding to [this customer complaint],” to get ideas on responding to client issues.

Help me…

You might know where you’re starting, but you need a little boost from ChatGPT to move you along.

Ask ChatGPT to help you with a project that requires further steps. Try prompts like “Help me create a plan for my new business” or “Help me find sources for my essay.”

The “Help me” prompt builds on a base you already have, or a goal you’ve already set, to keep you on track and generate tasks that push you forward.

Review my…

You can also ask ChatGPT to review your work and give you feedback that you may find helpful in refining a sales pitch or strengthening your resume.

Give ChatGPT prompts like “Review my resume for a computer programmer position.” You can also be more specific with your prompt, such as “Can you edit my essay for grammar and tone?”

These prompts let ChatGPT review your work and suggest improvements, acting as a stand-in for a human reader when a computer is all you have on hand.
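If you would rather script this kind of review than paste text into the web interface, here is a minimal sketch using OpenAI’s Python library (assumptions: the openai package, version 1 or later, is installed, an API key is set in the OPENAI_API_KEY environment variable, and the model name is one you choose yourself):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    essay = open("essay.txt", encoding="utf-8").read()

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # any chat model you have access to
        messages=[
            {"role": "system", "content": "You are a careful writing tutor."},
            {"role": "user", "content": (
                "Can you edit my essay for grammar and tone? "
                f"List specific suggestions.\n\n{essay}"
            )},
        ],
    )

    print(response.choices[0].message.content)

Treat the output as feedback to weigh, not as edits to accept blindly.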

ChatGPT prompt limits

ChatGPT can be a helpful tool in your professional or personal life, but there are some limits you need to be aware of before using it.

For example, one of ChatGPT's most significant issues is “hallucinations,” which cause it to generate answers that are fabricated or factually incorrect. You have to independently fact-check any answers ChatGPT generates to confirm their accuracy, particularly before you pass the information along as factual.

ChatGPT also has limits regarding what it can create for you. The app is good at producing typical written responses in natural language, and questions about coding languages also work well. It doesn’t do as well with math, however, so that may be a topic you want to avoid when using ChatGPT.

ChatGPT is also based on a data set that goes up to 2021, so it doesn’t have the most up-to-date information on topics you may ask about.

It also can’t connect to the internet, so you won’t be able to ask it to search for results the way you can ask a typical search engine for information.

ChatGPT is programmed to respond to questions that elicit answers with educational and informative value. Because of this, it is also programmed to not respond to inappropriate or harmful queries, give advice on politics or investments, or answer questions containing confidential or proprietary information.

Getting started with Coursera

You can learn more about ChatGPT and how you can make it work for you on Coursera.

Check out ChatGPT for Beginners: Save Time With Microsoft Excel on the Coursera Project Network to learn how to make ChatGPT work with Excel to help you generate data for different projects.

Look into Prompt Engineering for ChatGPT from Vanderbilt University on Coursera to learn more about prompts. The course includes information on writing effective prompts that can maximise your productivity and help you better create ChatGPT prompts that work for you.



ChatGPT Examples – A Detailed Guide to Practical Applications

by Sam McKay, CFA


Imagine having a virtual assistant that can understand your questions, write you a song, help you write code, and even help you create entire meal plans. Well, you don’t have to imagine anymore, because ChatGPT does all that and so much more!

Developed by OpenAI, ChatGPT is where artificial intelligence meets practical applications. Its multifaceted uses, from content generation to coding to customer support, make this groundbreaking language model a must-have in anybody’s toolkit, and the future of how we interact with AI in our everyday lives.


This article goes through a list of ChatGPT examples to show you what ChatGPT can do, what it can’t do, and how you can use it to make your life easier today.

Let’s get started!


How Does ChatGPT Work?


ChatGPT is a breakthrough in natural language processing. But how exactly does it work?

Well, it’s actually pretty simple. The machine learning model is fed an enormous amount of text from a wide variety of sources, like books, articles, and websites, and it is this massive dataset that allows the model to learn the patterns, nuances, and intricacies of human language.


What’s more, ChatGPT’s underlying model was trained in a largely self-supervised way: it learned to generate responses by predicting what comes next in a sentence, which is how it produces coherent text.

So that’s how ChatGPT works. Now let’s bring the conversation into the here and now and talk about the ChatGPT model of today: ChatGPT-3.5.

What’s So Great About ChatGPT 3.5?


GPT 3.5 is a significant leap forward in language modeling and builds upon the successes of its predecessors.

But what about GPT 3.5 is generating so much excitement?

  • Unlike the ChatGPT that came before it, GPT 3.5 has an impressive capacity for understanding context and producing relevant responses. The newly released GPT 4 model is even more advanced in accomplishing this.
  • The sheer size of the training data used on ChatGPT is also impressive. It was trained on an enormous dataset containing an unfathomable amount of text from diverse sources. This allows it to better understand the intricacies of human language.
  • The GPT 3.5 model has the ability to ask clarifying questions. If you ask ambiguous questions, it can seek further information from you to provide more accurate and relevant responses.
  • GPT 3.5 exhibits impressive fluency and naturalness in its generated text. It can produce coherent and human-like responses that feel as though they were written by a person.


These are just some of the advancements that have come with the ChatGPT 3.5 model. But before we get to the ChatGPT examples, let’s get to the not-so-rosy side of the picture. Let’s talk about limitations.

What Are the Limitations of ChatGPT 3.5?

While ChatGPT 3.5 is an impressive technological advancement, it’s important to recognize that it does have its limitations.

Here are two of the biggest limitations:

1. Lack of Common Sense

ChatGPT 3.5 lacks real-world experiences and common sense knowledge. It relies solely on the information it has been trained on and may provide incorrect or nonsensical answers in situations where common sense is required.

So it won’t tell you whether you should get your wife the craving she wants or not. The common sense answer for that is yes, by the way.


2. Limited Factual Accuracy

ChatGPT 3.5 only has access to information pre-September 2021. It may therefore occasionally provide inaccurate or outdated information. It doesn’t even know that Queen Elizabeth II passed away.

Keeping these limitations in mind, we can now look at some ChatGPT examples. What can ChatGPT do for you?

ChatGPT’s Practical Applications


ChatGPT’s website reportedly draws well over a billion visits every month, and those users come to the platform to fulfill a variety of needs.

From content generation to travel planning, here are some of the common uses of ChatGPT.

1. ChatGPT for Content Writing & Editing


One of ChatGPT’s most popular uses is as a content generation tool.

Here are just some of the popular ChatGPT examples of what’s possible in the realm of content generation.

1. Articles – Whether it’s a news article or an article for your website, ChatGPT can write articles that are thought-provoking and research-based.

Just remember that if it’s producing a researched piece, it’s only privy to information before September 2021, so it might not be working with the most up-to-date research.


2. Blog Posts – Do you own a blog, but don’t know where you’ll find the time to write your next blog post? ChatGPT can write it for you.


ChatGPT can help you create engaging and informative blog posts on a wide range of topics, catering to different niches and target audiences. It will even help you generate ideas for new articles.

3. Marketing Content – This is another popular use of ChatGPT. You can use the platform to create compelling copy for your advertising campaigns, persuasive ad copy, attention-grabbing headlines, or calls-to-action that drive customer engagement.


Other ChatGPT examples in this area include email newsletters, product descriptions, and web copy for your pages, just to name a few.

4. Social Media Captions – The social media industry is booming right now, and a boring caption might be enough to stop someone from hitting that ‘like’ or ‘follow’ button. And for content creators, that might mean less income.

ChatGPT can help you create catchy and attention-grabbing captions for each of your social media posts and for each of your social media platforms.

5. Other Cool ChatGPT Examples – Along with the listed content generation uses, ChatGPT can also be used to write songs, poems, scripts, and even generate recipe ideas.


However, you have to remember that this is an AI. So while it may create very human-sounding content, you may want to go over the content to fine-tune it and give it some personal flair and human nuances.

But regardless of its content generation limitations, it’s clear that ChatGPT is revolutionizing the way we generate content, and we’re all the better for it.

2. ChatGPT for Coding


OK, so we’ve talked about how ChatGPT can be used to generate text. But what about using ChatGPT to generate code? People are now increasingly using ChatGPT for coding purposes.

Here are just some of the ChatGPT examples of how you can use this machine-learning model to help with writing code:

1. Code Completion – ChatGPT can assist in code completion by providing suggestions for the next line of code as you type. It can save you time and reduce errors by offering contextually relevant code snippets.


2. Debugging Code – When you encounter an error or bug in your code, ChatGPT can help you troubleshoot by providing insights and suggestions on potential solutions or pointing out common mistakes (see the sketch at the end of this section).

3. Syntax Help – If you’re unsure about the correct syntax or usage of a programming language or framework, ChatGPT can provide code examples and quick explanations to clarify any confusion.


4. Language Translation – ChatGPT can assist in translating code snippets or error messages between different programming languages, enabling you to work with code in unfamiliar languages more effectively.

5. Code Review – ChatGPT can aid in code review by analyzing code snippets, identifying potential issues, and offering suggestions for improvement or adherence to coding best practices.

Remember, while ChatGPT can provide guidance and support in coding, you should definitely validate and verify the suggestions it offers. Human judgment and expertise are vital for ensuring the correctness and quality of your code.
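As a rough sketch of how you might wire that debugging help into a script (a hypothetical helper using OpenAI’s Python library, assuming an OPENAI_API_KEY is set and a chat model of your choosing), you could capture a traceback and ask the model for likely causes:

    import traceback
    from openai import OpenAI

    client = OpenAI()

    def ask_chatgpt_about_error(code: str, error_text: str) -> str:
        """Send a failing snippet and its traceback to a chat model for debugging hints."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "You are a helpful debugging assistant."},
                {"role": "user", "content": (
                    "This Python code raises an error. Explain the likely cause "
                    f"and suggest a fix.\n\nCode:\n{code}\n\nError:\n{error_text}"
                )},
            ],
        )
        return response.choices[0].message.content

    buggy_code = "total = sum(['1', '2', '3'])"  # TypeError: can't sum strings

    try:
        exec(buggy_code)
    except Exception:
        print(ask_chatgpt_about_error(buggy_code, traceback.format_exc()))

As noted above, treat whatever comes back as a suggestion to verify, not a guaranteed fix.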

Speaking of expertise, let’s go over some ChatGPT examples for education and learning in the next section.

3. ChatGPT for Education


Now, let’s talk about ChatGPT as an education tool. It offers a lot of support for learner-related tasks.

Here are just a few ways ChatGPT lends itself to the education sector:

1. Learning Assistant – ChatGPT can act as a virtual tutor, providing in-depth explanations, clarifications, and examples to help students grasp complex concepts in different subjects.


2. Homework Helper – Students can use ChatGPT to seek assistance with their homework assignments, or get guidance and suggestions for problem-solving and writing assignments.

3. Language Learning – ChatGPT can help language learners by engaging in conversations, providing vocabulary suggestions, offering grammar explanations, and even acting as a language practice partner.


4. Test Preparation – You can use ChatGPT to simulate exam scenarios. The platform can generate multiple-choice questions and give you feedback on your answers to improve your test-taking skills.

The ability to create multiple-choice questions is also incredibly useful for educators. As an educator, you can generate quiz and test questions, along with their answers, from only one or two prompts (a minimal sketch of this appears at the end of this section).

5. Research Assistant – ChatGPT can be used to find relevant sources, summarize research, or help in generating ideas for research topics.

6. Writing Feedback – Students can receive feedback on their essays, reports, or other written assignments from ChatGPT, which helps them improve their writing skills and enhance the structure, coherence, and clarity of their work.


7. Career Guidance – ChatGPT can provide career advice and offer insights on different career paths. It can also provide information about job prospects, educational requirements, and skills needed in specific fields.

In this vein, the platform can also help you construct an effective college admissions essay, cover letter, and resume, and help you with your job interview preparation.

We must mention that while ChatGPT is a great educational helper, it can’t take the place of the education system.

Teachers and educators play a significant role in guiding students’ learning journeys and should provide oversight and context to ensure a well-rounded educational experience for learners.
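To make the test-preparation idea above concrete, here is a minimal sketch (a hypothetical prompt and helper using OpenAI’s Python library, assuming an OPENAI_API_KEY is set) that asks the model for multiple-choice questions in JSON so a script can display or grade them:

    import json
    from openai import OpenAI

    client = OpenAI()

    def generate_quiz(topic: str, n_questions: int = 3) -> list[dict]:
        """Ask a chat model for multiple-choice questions in a machine-readable format."""
        prompt = (
            f"Write {n_questions} multiple-choice questions about {topic}. "
            "Respond with only a JSON array; each item must have the keys "
            "'question', 'choices' (a list of 4 strings), and 'answer'."
        )
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        # In practice you should validate this: the model will not always return clean JSON.
        return json.loads(response.choices[0].message.content)

    for q in generate_quiz("photosynthesis"):
        print(q["question"])
        for choice in q["choices"]:
            print("  -", choice)
        print("Answer:", q["answer"], "\n")

As with any generated quiz, an educator should still check that the questions and answer keys are actually correct before using them.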

In the next section, we’ll go over some ChatGPT examples for the customer service field.

4. ChatGPT for Customer Service


Another very popular use of ChatGPT is as a customer service tool.

Here’s how and why this platform is well-suited for this function:

1. Instant Responses – ChatGPT can provide quick and automated responses to frequently asked questions, enabling customers to get immediate answers to common inquiries.

2. 24/7 Availability – With ChatGPT, businesses can offer round-the-clock customer support, ensuring that customers can seek assistance at any time, even outside of regular business hours.


3. Troubleshooting Support – ChatGPT can assist customers in troubleshooting common technical issues with products or services by providing step-by-step instructions or suggesting potential solutions.

4. Complaint Resolution – ChatGPT can handle the initial stages of complaint resolution by acknowledging customer concerns, gathering relevant information, and escalating issues to appropriate personnel if necessary.

5. Multilingual Support – ChatGPT’s language capabilities enable businesses to offer customer service in multiple languages, expanding their reach and ensuring efficient communication with a diverse customer base.


6. Feedback Collection – ChatGPT can engage in conversations with customers to gather feedback, suggestions, and ratings, helping businesses gather valuable insights to improve their products or services.

While ChatGPT can enhance customer service experiences, as a business owner you need to remember to maintain a balance between automated support and human interaction.

Some complex or sensitive situations may still require human intervention to ensure personalized care and empathetic understanding.
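As an illustration of that balance between automation and human hand-off (a hypothetical, minimal sketch using OpenAI’s Python library with an OPENAI_API_KEY set; a real deployment would need far more), a support loop might answer routine questions but route the customer to a person on request:

    from openai import OpenAI

    client = OpenAI()

    SYSTEM_PROMPT = (
        "You are a polite customer-support assistant for a small online shop. "
        "Answer common questions about shipping, returns, and orders. "
        "If you are unsure, or the customer is upset, suggest talking to a human agent."
    )

    history = [{"role": "system", "content": SYSTEM_PROMPT}]

    while True:
        user_msg = input("Customer: ").strip()
        if user_msg.lower() in {"agent", "human", "quit"}:
            print("Connecting you to a human agent...")  # escalation path
            break
        history.append({"role": "user", "content": user_msg})
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=history,
        ).choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        print("Bot:", reply)

Keeping the full conversation history in the messages list is what lets the bot follow the thread of a complaint rather than treating each message in isolation.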

Next, we’ll go over some examples of ChatGPT working as a virtual assistant.

5. ChatGPT as a Virtual Assistant


We would probably all agree that we all seem to be busier and more stressed out than ever. And there just doesn’t seem to be enough time in the day to get everything done.

And for most of us, a virtual assistant is just what we would need to get some of those annoying tasks off our hands so we can concentrate on what’s left. Luckily, there are multiple ChatGPT examples of how the platform can help you do just that.

Here are a few of them:

1. Schedule Management – ChatGPT can help manage your calendar and send you reminders for important events like your niece’s upcoming birthday party.

This helps you stay organized and on top of your commitments.

2. Email Management – ChatGPT can assist in sorting, organizing, and responding to emails, prioritizing messages, and providing summaries of important information, helping you stay on top of your inbox.


3. Task and To-Do Lists – ChatGPT can act as a task manager, allowing you to create, organize, and track your to-do lists. It can remind you of pending tasks and help you prioritize your workload.

4. Travel Planning – ChatGPT can be your own personal travel agent. It can help you with everything from finding flights and accommodations to suggesting popular destinations, historical and cultural attractions, and local restaurants.

It can also help you find the best deals and create itineraries based on your preferences.


5. Personalized Recommendations – ChatGPT can act as your health assistant, shopping assistant, or even a cinema concierge that gives you customized movie recommendations tailored to your specific likes and dislikes.

These recommendations are made based on your preferences and previous interactions.

6. Language Translation – ChatGPT’s language capabilities allow it to translate text, helping you understand and communicate in different languages, and bridging communication gaps.

This is especially useful for avid travelers and those working in linguistically diverse workplaces.

7. Data Processor – If you tell ChatGPT you’re looking for a dataset on global weather patterns (or practically any other topic!), it can point you toward well-known public databases and datasets that are likely to have what you need. Because it cannot actually browse the web, though, you should confirm that the sources it names exist.

You can also prompt ChatGPT to help you structure and create your own data for research, business intelligence, or even training purposes.


Remember, while ChatGPT can perform many tasks, it’s important to exercise discretion and verify critical information independently.

Also, remember ChatGPT’s information limitation. Because it only has information up to September 2021, you may not get the most up-to-date travel information or an up-to-date list of the best restaurants to visit in 2023.

Final Thoughts

In the ever-evolving landscape of artificial intelligence, ChatGPT has emerged as a versatile and powerful tool with a myriad of practical applications.

From enhancing customer service experiences to revolutionizing content creation, ChatGPT showcases its potential to streamline tasks, boost productivity, and inspire innovation across various domains.


While ChatGPT demonstrates its prowess in practical applications, it’s important to acknowledge its limitations.

As an AI model, it relies on the data it has been trained on, and its responses may not always be perfect or error-free. Human oversight and intervention are essential to ensure accuracy, quality, and the infusion of human creativity.

So, is ChatGPT the best AI for practical applications? It certainly stands among the frontrunners, showcasing its potential to transform industries, simplify workflows, and augment human capabilities.

However, it’s crucial to consider the specific requirements of each use case and explore the wide range of AI solutions available to determine the best fit. It’s an exciting time to be part of this revolution, and we can’t wait to see what new innovative applications the future holds. The possibilities are virtually limitless!


ChatGPT: A GPT-4 Turbo Upgrade and Everything Else to Know

It started as a research project. But ChatGPT has swept us away with its mind-blowing skills. Now, GPT-4 Turbo has improved in writing, math, logical reasoning and coding.




In 2022, OpenAI wowed the world when it introduced ChatGPT and showed us a chatbot with an entirely new level of power, breadth and usefulness, thanks to the generative AI technology behind it. Since then, ChatGPT has continued to evolve, including its most recent development: access to its latest GPT-4 Turbo model for paid users.

ChatGPT and generative AI aren't a novelty anymore, but keeping track of what they can do can be a challenge as new abilities arrive. Most notably, OpenAI now provides easier access to anyone who wants to use it. It also lets anyone write custom AI apps called GPTs and share them on its own app store, while on a smaller scale ChatGPT can now speak its responses to you. OpenAI has been leading the generative AI charge , but it's hotly pursued by Microsoft, Google and startups far and wide.


Generative AI still hasn't shaken a core problem -- it makes up information that sounds plausible but isn't necessarily correct. But there's no denying AI has fired the imaginations of computer scientists, loosened the purse strings of venture capitalists and caught the attention of everyone from teachers to doctors to artists and more, all wondering how AI will change their work and their lives. 

If you're trying to get a handle on ChatGPT, this FAQ is for you. Here's a look at what's up.


What is ChatGPT?

ChatGPT is an online chatbot that responds to "prompts" -- text requests that you type. ChatGPT has countless uses. You can request relationship advice, a summarized history of punk rock or an explanation of the ocean's tides. It's particularly good at writing software, and it can also handle some other technical tasks, like creating 3D models.

ChatGPT is called a generative AI because it generates these responses on its own. But it can also display more overtly creative output like screenplays, poetry, jokes and student essays. That's one of the abilities that really caught people's attention.

Much of AI has been focused on specific tasks, but ChatGPT is a general-purpose tool. This puts it more into a category like a search engine.

That breadth makes it powerful but also hard to fully control. OpenAI has many mechanisms in place to try to screen out abuse and other problems, but there's an active cat-and-mouse game afoot by researchers and others who try to get ChatGPT to do things like offer bomb-making recipes.

ChatGPT really blew people's minds when it began passing tests. For example, AnsibleHealth researchers reported in 2023 that "ChatGPT performed at or near the passing threshold" for the United States Medical Licensing Exam, suggesting that AI chatbots "may have the potential to assist with medical education, and potentially, clinical decision-making."

We're a long way from fully fledged doctor-bots you can trust, but the computing industry is investing billions of dollars to solve the problems and expand AI into new domains like visual data too. OpenAI is among those at the vanguard. So strap in, because the AI journey is going to be a sometimes terrifying, sometimes exciting thrill.

What's ChatGPT's origin?

Artificial intelligence algorithms had been ticking away for years before ChatGPT arrived. These systems were a big departure from traditional programming, which follows a rigid if-this-then-that approach. AI, in contrast, is trained to spot patterns in complex real-world data. AI has been busy for more than a decade screening out spam, identifying our friends in photos, recommending videos and translating our Alexa voice commands into computerese.

A Google technology called transformers helped propel AI to a new level, leading to a type of AI called a large language model, or LLM. These AIs are trained on enormous quantities of text, including material like books, blog posts, forum comments and news articles. The training process internalizes the relationships between words, letting chatbots process input text and then generate what it believes to be appropriate output text. 
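To make that "predict what comes next" idea concrete at toy scale (this is an illustrative word-frequency sketch, nothing like a real transformer), here is next-word prediction learned from a tiny, made-up corpus:

    import random
    from collections import Counter, defaultdict

    # A tiny stand-in for the web-scale text an LLM is trained on.
    corpus = (
        "the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog . the dog chased the ball ."
    ).split()

    # "Training": count which word tends to follow which.
    follows: dict[str, Counter] = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        follows[current][nxt] += 1

    def generate(start: str, length: int = 8) -> str:
        """Generate text by repeatedly picking a likely next word."""
        words = [start]
        for _ in range(length):
            options = follows[words[-1]]
            if not options:
                break
            nxt = random.choices(list(options), weights=list(options.values()))[0]
            words.append(nxt)
        return " ".join(words)

    random.seed(1)
    print(generate("the"))

A real LLM learns vastly richer statistics over far longer stretches of text, but the core move, predicting a plausible next token from patterns in training data, is the same.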

A second phase of building an LLM is called reinforcement learning from human feedback, or RLHF. That's when people review the chatbot's responses and steer it toward good answers or away from bad ones. That significantly alters the tool's behavior and is one important mechanism for trying to stop abuse.

OpenAI's LLM is called GPT, which stands for "generative pretrained transformer." Training a new model is expensive and time consuming, typically taking weeks and requiring a data center packed with thousands of expensive AI acceleration processors. OpenAI's latest LLM is called GPT-4 Turbo. Other LLMs include Google's Gemini (formerly called Bard), Anthropic's Claude and Meta's Llama.

ChatGPT is an interface that lets you easily prompt GPT for responses. When it arrived as a free tool in November 2022, its use exploded far beyond what OpenAI expected.

When OpenAI launched ChatGPT, the company didn't even see it as a product. It was supposed to be a mere "research preview," a test that could draw some feedback from a broader audience, said ChatGPT product leader Nick Turley. Instead, it went viral, and OpenAI scrambled to just keep the service up and running under the demand.

"It was surreal," Turley said. "There was something about that release that just struck a nerve with folks in a way that we certainly did not expect. I remember distinctly coming back the day after we launched and looking at dashboards and thinking, something's broken, this couldn't be real, because we really didn't make a very big deal out of this launch."


ChatGPT, a name only engineers could love, was launched as a research project in November 2022, but quickly caught on as a consumer product.

How do I use ChatGPT?

The ChatGPT website is the most obvious method. Open it up, select the LLM version you want from the drop-down menu in the upper left corner, and type in a query.

As of April 1, OpenAI is allowing consumers to use ChatGPT without first signing up for an account. According to a blog post, the move was meant to make the tool more accessible. OpenAI also said in the post that as part of the move, it's introducing added content safeguards, blocking prompts in a wider range of categories.

However, users with accounts will be able to do more with the tool, such as save and review their history, share conversations and tap into features like voice conversations and custom instructions.

OpenAI in 2023 released a ChatGPT app for iPhones and for Android phones. In February, ChatGPT for Apple Vision Pro arrived, too, adding the chatbot's abilities to the "spatial computing" headset. Be careful to look for the genuine article, because other developers can create their own chatbot apps that link to OpenAI's GPT.

In January, OpenAI opened its GPT Store, a collection of custom AI apps that focus ChatGPT's all-purpose design to specific jobs. A lot more on that later, but in addition to finding them through the store, you can invoke them with the @ symbol in a prompt, the way you might tag a friend on Instagram.

Microsoft uses GPT for its Bing search engine, which means you can also try out ChatGPT there.

ChatGPT is sprouting up in various hardware devices, including Volkswagen EVs, Humane's voice-controlled AI pin and the squarish Rabbit R1 device.

How much does ChatGPT cost?

It's free, though you have to set up an account to take advantage of all of its features.

For more capability, there's also a $20-per-month subscription called ChatGPT Plus, which offers a variety of advantages: it responds faster, particularly during busy times when the free version is slow or sometimes tells you to try again later, and it offers access to newer AI models, including GPT-4 Turbo. OpenAI said it has improved capabilities in writing, math, logical reasoning and coding in this model.

The free ChatGPT uses the older GPT-3.5, which doesn't do as well on OpenAI's benchmark tests but which is faster to respond. The newest variation, GPT-4 Turbo, arrived in late 2023 with more up-to-date responses and an ability to ingest and output larger blocks of text.

ChatGPT is growing beyond its language roots. With ChatGPT Plus, you can upload images, for example, to ask what type of mushroom is in a photo.

Perhaps most importantly, ChatGPT Plus lets you use GPTs.

What are these GPTs?

GPTs are custom versions of ChatGPT from OpenAI, its business partners and thousands of third-party developers who created their own GPTs.

Sometimes when people encounter ChatGPT, they don't know where to start. OpenAI calls it the "empty box problem." Discovering that led the company to find a way to narrow down the choices, Turley said.

"People really benefit from the packaging of a use case -- here's a very specific thing that I can do with ChatGPT," like travel planning, cooking help or an interactive, step-by-step tool to build a website, Turley said.


OpenAI CEO Sam Altman announces custom AI apps called GPTs at a developer event in November 2023.

Think of GPTs as OpenAI trying to make the general-purpose power of ChatGPT more refined the same way smartphones have a wealth of specific tools. (And think of GPTs as OpenAI's attempt to take control over how we find, use and pay for these apps, much like Apple has a commanding role over iPhones through its App Store.)

What GPTs are available now?

OpenAI's GPT Store now offers millions of GPTs, though as with smartphone apps, you'll probably not be interested in most of them. A range of custom GPT apps are available, including AllTrails personal trail recommendations, a Khan Academy programming tutor, a Canva design tool, a book recommender, a fitness trainer, a Laundry Buddy clothes-washing label decoder, a music theory instructor, a haiku writer and Pearl for Pets, a vet advice bot.

One person excited by GPTs is Daniel Kivatinos, co-founder of financial services company JustPaid. His team is building a GPT designed to take a spreadsheet of financial data as input and then let executives ask questions. How fast is a startup going through the money investors gave it? Why did that employee just file a $6,000 travel expense?

JustPaid hopes that GPTs will eventually be powerful enough to accept connections to bank accounts and financial software, which would mean a more powerful tool. For now, the developers are focusing on guardrails to avoid problems like hallucinations -- those answers that sound plausible but are actually wrong -- or making sure the GPT is answering based on the users' data, not on some general information in its AI model, Kivatinos said.

Anyone can create a GPT, at least in principle. OpenAI's GPT editor walks you through the process with a series of prompts. Just like the regular ChatGPT, your ability to craft the right prompt will generate better results.

Another notable difference from regular ChatGPT: GPTs let you upload extra data that's relevant to your particular GPT, like a collection of essays or a writing style guide.

Some of the GPTs draw on OpenAI's Dall-E tool for turning text into images, which can be useful and entertaining. For example, there is a coloring book picture creator, a logo generator and a tool that turns text prompts into diagrams like company org charts. OpenAI calls Dall-E a GPT.

How up to date is ChatGPT?

Not very, and that can be a problem. For example, a Bing search using ChatGPT to process results said OpenAI hadn't yet released its ChatGPT Android app. Search results from traditional search engines can help to "ground" AI results, and indeed that's part of the Microsoft-OpenAI partnership that can tweak ChatGPT Plus results.

GPT-4 Turbo, announced in November, is trained on data up through April 2023. But it's nothing like a search engine whose bots crawl news sites many times a day for the latest information.

Can you trust ChatGPT responses?

No. Well, sometimes, but you need to be wary.

Large language models work by stringing words together, one after another, based on what's probable at each step of the way. But it turns out that this generative AI works better and sounds more natural with a little spice of randomness added to the word-selection recipe. That's the basic statistical nature that underlies the criticism that LLMs are mere "stochastic parrots" rather than sophisticated systems that in some way understand the world's complexity.
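A toy example makes the point (this is an illustrative sketch of temperature sampling over a made-up word distribution, not how any particular model is implemented):

    import math
    import random

    # Made-up probabilities for the next word after "The cat sat on the"
    next_word_probs = {"mat": 0.55, "sofa": 0.25, "roof": 0.15, "moon": 0.05}

    def sample_next_word(probs: dict[str, float], temperature: float) -> str:
        """Pick the next word; low temperature is nearly greedy, high temperature is more random."""
        weights = [math.exp(math.log(p) / temperature) for p in probs.values()]
        return random.choices(list(probs.keys()), weights=weights, k=1)[0]

    random.seed(0)
    for t in (0.2, 1.0, 2.0):
        words = [sample_next_word(next_word_probs, t) for _ in range(8)]
        print(f"temperature {t}: {words}")

Low temperatures almost always pick the most likely word; higher ones produce more varied, and occasionally stranger, continuations, which is part of why answers can sound fluent without being correct.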

The result of this system, combined with the steering influence of the human training, is an AI that produces results that sound plausible but that aren't necessarily true. ChatGPT does better with information that's well represented in training data and undisputed -- for instance, red traffic signals mean stop, Plato was a philosopher who wrote the Allegory of the Cave , an Alaskan earthquake in 1964 was the largest in US history at magnitude 9.2.


We humans interact with AI chatbots by writing prompts -- questions or statements that seek an answer from the information stored in the chatbot's underlying large language model. 

When facts are more sparsely documented, controversial or off the beaten track of human knowledge, LLMs don't work as well. Unfortunately, they sometimes produce incorrect answers with a convincing, authoritative voice. That's what tripped up a lawyer who used ChatGPT to bolster his legal case, only to be reprimanded when it emerged that ChatGPT had fabricated some cases that appeared to support his arguments. "I did not comprehend that ChatGPT could fabricate cases," he said, according to The New York Times.

Such fabrications are called hallucinations in the AI business.

That means when you're using ChatGPT, it's best to double check facts elsewhere.

But there are plenty of creative uses for ChatGPT that don't require strictly factual results.

Want to use ChatGPT to draft a cover letter for a job hunt or give you ideas for a themed birthday party? No problem. Looking for hotel suggestions in Bangladesh? ChatGPT can give useful travel itineraries, but confirm the results before booking anything.

Is the hallucination problem getting better?

Yes, but we haven't seen a breakthrough.

"Hallucinations are a fundamental limitation of the way that these models work today," Turley said. LLMs just predict the next word in a response, over and over, "which means that they return things that are likely to be true, which is not always the same as things that are true," Turley said.

But OpenAI has been making gradual progress. "With nearly every model update, we've gotten a little bit better on making the model both more factual and more self aware about what it does and doesn't know," Turley said. "If you compare ChatGPT now to the original ChatGPT, it's much better at saying, 'I don't know that' or 'I can't help you with that' versus making something up."

Hallucinations are so much a part of the zeitgeist that Dictionary.com touted it as a new word it added to its dictionary in 2023.

Can you use ChatGPT for wicked purposes?

You can try, but lots of it will violate OpenAI's terms of use, and the company tries to block it too. The company prohibits use that involves sexual or violent material, racist caricatures, and personal information like Social Security numbers or addresses.

OpenAI works hard to prevent harmful uses. Indeed, its basic sales pitch is trying to bring the benefits of AI to the world without the drawbacks. But it acknowledges the difficulties, for example in its GPT-4 "system card" that documents its safety work.

"GPT-4 can generate potentially harmful content, such as advice on planning attacks or hate speech. It can represent various societal biases and worldviews that may not be representative of the user's intent, or of widely shared values. It can also generate code that is compromised or vulnerable," the system card says. It also can be used to try to identify individuals and could help lower the cost of cyberattacks.

Through a process called red teaming, in which experts try to find unsafe uses of its AI and bypass protections, OpenAI identified lots of problems and tried to nip them in the bud before GPT-4 launched. For example, a prompt to generate jokes mocking a Muslim boyfriend in a wheelchair was diverted so its response said, "I cannot provide jokes that may offend someone based on their religion, disability or any other personal factors. However, I'd be happy to help you come up with some light-hearted and friendly jokes that can bring laughter to the event without hurting anyone's feelings."

Researchers are still probing LLM limits. For example, Italian researchers discovered they could use ChatGPT to fabricate fake but convincing medical research data. And Google DeepMind researchers found that telling ChatGPT to repeat the same word forever eventually caused a glitch that made the chatbot blurt out training data verbatim. That's a big no-no, and OpenAI barred the approach.

LLMs are still new. Expect more problems and more patches.

And there are plenty of uses for ChatGPT that might be allowed but ill-advised. The website of Philadelphia's sheriff published more than 30 bogus news stories generated with ChatGPT.

What about ChatGPT and cheating in school?

ChatGPT is well suited to short essays on just about anything you might encounter in high school or college, to the chagrin of many educators who fear students will type in prompts instead of thinking for themselves.


Microsoft CEO Satya Nadella touted his company's partnership with OpenAI at a November 2023 event for OpenAI developers. Microsoft uses OpenAI's GPT large language model for its Bing search engine, Office productivity tools and GitHub Copilot programming assistant.

ChatGPT also can solve some math problems, explain physics phenomena, write chemistry lab reports and handle all kinds of other work students are supposed to handle on their own. Companies that sell anti-plagiarism software have pivoted to flagging text they believe an AI generated.

But not everyone is opposed, seeing it more like a tool akin to Google search and Wikipedia articles that can help students.

"There was a time when using calculators on exams was a huge no-no," said Alexis Abramson, dean of Dartmouth's Thayer School of Engineering. "It's really important that our students learn how to use these tools, because 90% of them are going into jobs where they're going to be expected to use these tools. They're going to walk in the office and people will expect them, being age 22 and technologically savvy, to be able to use these tools."

ChatGPT also can help kids get past writer's block and can help kids who aren't as good at writing, perhaps because English isn't their first language, she said.

So for Abramson, using ChatGPT to write a first draft or polish their grammar is fine. But she asks her students to disclose that fact.

"Anytime you use it, I would like you to include what you did when you turn in your assignment," she said. "It's unavoidable that students will use ChatGPT, so why don't we figure out a way to help them use it responsibly?"

Is ChatGPT coming for my job?

The threat to employment is real as managers seek to replace expensive humans with cheaper automated processes. We've seen this movie before: elevator operators were replaced by buttons, bookkeepers were replaced by accounting software, welders were replaced by robots. 

ChatGPT has all sorts of potential to blitz white-collar jobs. Paralegals summarizing documents, marketers writing promotional materials, tax advisers interpreting IRS rules, even therapists offering relationship advice.

But so far, in part because of problems with things like hallucinations, AI companies present their bots as assistants and "copilots," not replacements.

And so far, sentiment is more positive than negative about chatbots, according to a survey by consulting firm PwC. Of 53,912 people surveyed around the world, 52% expressed at least one good expectation about the arrival of AI, for example that AI would increase their productivity. That compares with 35% who had at least one negative thing to say, for example that AI will replace them or require skills they're not confident they can learn.

How will ChatGPT affect programmers?

Software development is a particular area where people have found ChatGPT and its rivals useful. Trained on millions of lines of code, it internalized enough information to build websites and mobile apps. It can help programmers frame up bigger projects or fill in details.

One of the biggest fans is Microsoft's GitHub, a site where developers can host projects and invite collaboration. Nearly a third of people maintaining GitHub projects use its GPT-based assistant, called Copilot, and 92% of US developers say they're using AI tools.

"We call it the industrial revolution of software development," said GitHub Chief Product Officer Inbal Shani. "We see it lowering the barrier for entry. People who are not developers today can write software and develop applications using Copilot."

It's the next step in making programming more accessible, she said. Programmers used to have to understand bits and bytes, then higher-level languages gradually eased the difficulties. "Now you can write coding the way you talk to people," she said.

And AI programming aids still have a lot to prove. Researchers from Stanford and the University of California-San Diego found in a study of 47 programmers that those with access to an OpenAI programming assistant "wrote significantly less secure code" than those without access.

And they raise a variation of the cheating problem that some teachers are worried about: copying software that shouldn't be copied, which can lead to copyright problems. That's why Copyleaks, a maker of plagiarism detection software, offers a tool called the Codeleaks Source Code AI Detector, designed to spot AI-generated code from ChatGPT, Google Gemini and GitHub Copilot. AIs could inadvertently copy code from other sources, and the latest version is designed to spot copied code based on its semantic structures, not just verbatim software.

At least in the next five years, Shani doesn't see AI tools like Copilot as taking humans out of programming.

"I don't think that it will replace the human in the loop. There's some capabilities that we as humanity have -- the creative thinking, the innovation, the ability to think beyond how a machine thinks in terms of putting things together in a creative way. That's something that the machine can still not do."







  16. ChatGPT can generate an essay. But could it generate an "A"?

    Lauren Klein, an associate professor in the Departments of English and Quantitative Theory and Methods at Emory University, even compared the panic to the philosopher Plato's fears that writing ...

  17. Introducing ChatGPT

    In the following sample, ChatGPT asks the clarifying questions to debug code. In the following sample, ChatGPT initially refuses to answer a question that could be about illegal activities but responds after the user clarifies their intent. In the following sample, ChatGPT is able to understand the reference ("it") to the subject of the previous question ("fermat's little theorem").

  18. How To Use ChatGPT to Write an Essay in 2024

    Here are some of the ways you can take assistance from ChatGPT as a student and write an essay in 2024: 1. ChatGPT for Brainstorming and Generating Essay Ideas. The first step for writing an essay is brainstorming and idea generation. ChatGPT can be the best choice for getting help.

  19. PDF Is ChatGPT Transforming Academics' Writing Style?

    Abstract. Based on one million arXiv papers submitted from May 2018 to January 2024, we assess the textual density of ChatGPT's writing style in their abstracts by means of a statistical analysis of word frequency changes. Our model is calibrated and validated on a mixture of real abstracts and ChatGPT-modified abstracts (simulated data) af ...

  20. ChatGPT essay cheats are a menace to us all

    The other day I met a British academic who said something about artificial intelligence that made my jaw drop. The number of students using AI tools like ChatGPT to write their papers was a much ...

  21. ChatGPT

    Chat with images You can now show ChatGPT images and start a chat. ... Access to GPT-4 (our most capable model) Chat with images, voice and create images; ... Do more with GPTs. You can choose from hundreds of GPTs that are customized for a single purpose—Creative Writing, Marathon Training, Trip Planning or Math Tutoring. Building a GPT ...

  22. A Study on ChatGPT-4 as an Innovative Approach to Enhancing English as

    The field of computer-assisted language learning has recently brought about a notable change in English as a Foreign Language (EFL) writing. Starting from October 2022, students across different academic fields have increasingly depended on ChatGPT-4 as a helpful resource for addressing particular challenges in EFL writing.

  23. How to Make ChatGPT Prompts Work for You

    The ChatGPT response can give you a starting point for the next steps in an idea. You can prompt ChatGPT with "Write me a sales pitch for [my product]" or "Write me an email to welcome new customers.". It's important to review what ChatGPT generates as you may not be able to use the direct responses from ChatGPT, but it can give you a ...

  24. ChatGPT Examples

    People are now increasingly using ChatGPT for coding purposes. Here are just some of the ChatGPT examples of how you can use this machine-learning model to help with writing code: 1. Code Completion - ChatGPT can assist in code completion by providing suggestions for the next line of code as you type.

  25. ChatGPT: A GPT-4 Turbo Upgrade and Everything Else to Know

    The newest variation, GPT-4 Turbo, arrived in late 2023 with more up-to-date responses and an ability to ingest and output larger blocks of text. ChatGPT is growing beyond its language roots. With ...

  26. ChatGPT

    Essay Writer. By MOUJALIS IKBAL. Sign up to chat. Requires ChatGPT Plus. ChatGPT is a free-to-use AI system. Use it for engaging conversations, gain insights, automate tasks, and witness the future of AI, all in one place.

  27. ChatGPT essay cheats are a menace to us all

    In the US, Stanford University researchers said last year that cheating rates did not appear to have been affected by AI. Up to 70 per cent of high school students have long confessed to some form ...