Can ChatGPT get into Harvard? We tested its admissions essay.

ChatGPT’s release a year ago triggered a wave of panic among educators. Now, universities are in the midst of college application season, concerned that students might use the artificial intelligence tool to forge admissions essays.

But is a chatbot-created essay good enough to fool college admissions counselors?

To find out, The Washington Post asked a prompt engineer — an expert at directing AI chatbots — to create college essays using ChatGPT. The chatbot produced two essays: one responding to a question from the Common Application, which thousands of colleges use for admissions, and one answering a prompt used solely for applicants to Harvard University.

We presented these essays to a former Ivy League college admissions counselor, Adam Nguyen, who previously advised students at Harvard University and read admissions essays at Columbia University. As a control, we also gave him a set of real college admissions essays penned by Jasmine Green, a Post intern who used them to get into Harvard, where she is now a senior.

We asked Nguyen to read the essays and spot which ones were produced by AI. The results were illuminating.

Can you figure out which one was written by a human?

Who wrote this?

Since kindergarten, I have evaluated myself from the reflection of my teachers. I was the clever, gifted child. I was a pleasure to have in class. I was driven and tenacious - but lazy? Unmotivated? No instructor had ever directed those harsh words at me. My identity as a stellar student had been stripped of its luster; I was destroyed.

Computer science and college admissions experts say that AI-created essays have some easy tells — helpful for admissions officers who are prepping for an uptick in ChatGPT-written essays.

Responses written by ChatGPT often lack specific details, leading to essays that lack supporting evidence for their points. The writing is trite and uses platitudes to explain situations, rather than delving into the emotional experience of the author. The essays are often repetitive and predictable, leaving readers without surprise or a sense of the writer’s journey. If chatbots produce content on issues of race, sex or socioeconomic status, they often employ stereotypes.

At first, Nguyen was impressed by the AI-generated essays: They were readable and mostly free of grammatical errors. But if he were reviewing the essay as part of an application package, he would have stopped reading.

“The essay is such a mediocre essay that it would not help the candidate’s application or chances,” he said in an interview. “In fact, it would probably diminish it.”

Here is how Nguyen evaluated ChatGPT’s essay.

Nguyen said that while AI may be sufficient for everyday writing, it is particularly unhelpful in creating college admissions essays. To start, he said, admissions offices are using AI screening tools to filter out computer-generated essays. (This technology can be inaccurate and falsely implicate students, a Post analysis found.)

But more importantly, admissions essays are a unique type of writing, he said. They require students to reflect on their life and craft their experiences into a compelling narrative that quickly provides college admissions counselors with a sense of why that person is unique.

“ChatGPT is not there,” he said.

Nguyen understands why AI might be appealing. College application deadlines often fall around the busiest time of the year, near winter holidays and end-of-semester exams. “Students are overwhelmed,” Nguyen said.

But Nguyen isn’t entirely opposed to using AI in the application process. In his current business, Ivy Link, he helps students craft college applications. For those who are weak in writing, he sometimes suggests they use AI chatbots to start the brainstorming process, he said.

For those who can’t resist the urge to use AI for more than just inspiration, there may be consequences.

“Their essays will be terrible,” he said, “and might not even reflect who they are.”

About this story

Jasmine Green contributed to this report.

The Washington Post worked with Benjamin Breen, an associate professor of history at the University of California, Santa Cruz, who studies the impact of technological change, to create the AI-generated essays.

Editing by Karly Domb Sadof, Betty Chavarria and Alexis Sobel Fitts.


Should I Use ChatGPT to Write My Essays?

Everything high school and college students need to know about using — and not using — ChatGPT for writing essays.

Jessica A. Kent

ChatGPT is one of the most buzzworthy technologies today.

Along with other generative artificial intelligence (AI) models, it is expected to change the world. In academia, students and professors are preparing for the ways that ChatGPT will shape education, especially how it will impact a fundamental element of any course: the academic essay.

Students can use ChatGPT to generate full essays from a few simple prompts. But can AI actually produce high-quality work, or is the technology not yet able to deliver on its promise? Students may also be asking themselves whether they should use AI to write their essays for them, and what they might lose if they did.

AI is here to stay, and it can either be a help or a hindrance depending on how you use it. Read on to become better informed about what ChatGPT can and can’t do, how to use it responsibly to support your academic assignments, and the benefits of writing your own essays.

What is Generative AI?

Artificial intelligence isn’t a twenty-first-century invention. Beginning in the 1950s, data scientists started programming computers to solve problems and understand spoken language. AI’s capabilities grew as computer speeds increased, and today we use AI for data analysis, finding patterns, and providing insights on the data it collects.

But why the sudden popularity of recent applications like ChatGPT? This new generation of AI goes further than data analysis: generative AI creates new content. It does this by analyzing large amounts of data — GPT-3 was trained on 45 terabytes of data, or a quarter of the Library of Congress — and then generating new content based on the patterns it sees in the original data.

It’s like the predictive text feature on your phone; as you start typing a new message, predictive text makes suggestions of what should come next based on data from past conversations. Similarly, ChatGPT creates new text based on past data. With the right prompts, ChatGPT can write marketing content, code, business forecasts, and even entire academic essays on any subject within seconds.

But is generative AI as revolutionary as people think it is, or is it lacking in real intelligence?

The Drawbacks of Generative AI

It seems simple. You’ve been assigned an essay to write for class. You go to ChatGPT and ask it to write a five-paragraph academic essay on the topic you’ve been assigned. You wait a few seconds and it generates the essay for you!

But ChatGPT is still in its early stages of development, and that essay is likely not as accurate or well-written as you’d expect it to be. Be aware of the drawbacks of having ChatGPT complete your assignments.

It’s not intelligence, it’s statistics

One of the misconceptions about AI is that it has a degree of human intelligence. In reality, what looks like intelligence is statistical analysis: the model can only generate “original” content based on the patterns it sees in already existing data and work.
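To make the "statistics, not intelligence" point concrete, here is a minimal sketch of the idea, not ChatGPT's actual architecture: a tiny bigram model that "writes" by sampling the next word from frequencies observed in existing text. The corpus, names, and numbers below are illustrative; real large language models work at vastly greater scale, but the principle of generating new text from statistical patterns in old text is the same.

```python
import random
from collections import defaultdict

# Toy training data (illustrative only).
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Record which words follow which (a bigram table).
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length, seed=0):
    """Sample a word sequence by repeatedly picking a statistically
    plausible next word. No understanding involved, only counts."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = following.get(words[-1])
        if not options:  # dead end: no observed continuation
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 6))
```

Every word the sketch emits was seen in its training text; nothing is "thought up." That is why such models can sound fluent while having no way to verify anything they say.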

It “hallucinates”

Generative AI models often provide false information — so much so that there’s a term for it: “AI hallucination.” OpenAI even has a warning on its home screen, saying that “ChatGPT may produce inaccurate information about people, places, or facts.” This may be due to gaps in its data, or because it lacks the ability to verify what it’s generating.

It doesn’t do research  

If you ask ChatGPT to find and cite sources for you, it will do so, but they could be inaccurate or even made up.

This is because AI doesn’t know how to look for relevant research that can be applied to your thesis. Instead, it generates content based on past content, so if a number of papers cite certain sources, it will generate citations that sound credible — except they may not exist.

There are data privacy concerns

When you input your data into a public generative AI model like ChatGPT, where does that data go and who has access to it? 

Prompting ChatGPT with original research should be a cause for concern — especially if you’re inputting study participants’ personal information into the third-party, public application. 

JPMorgan has restricted use of ChatGPT due to privacy concerns, Italy temporarily blocked ChatGPT in March 2023 after a data breach, and Security Intelligence advises that “if [a user’s] notes include sensitive data … it enters the chatbot library. The user no longer has control over the information.”

It is important to be aware of these issues and take steps to ensure that you’re using the technology responsibly and ethically. 

It skirts the plagiarism issue

AI creates content by drawing on a large library of information that’s already been created, but is it plagiarizing? Could there be instances where ChatGPT “borrows” from previous work and places it into your work without citing it? Schools and universities today are wrestling with this question of what’s plagiarism and what’s not when it comes to AI-generated work.

To demonstrate this, one Elon University professor gave his class an assignment: Ask ChatGPT to write an essay for you, and then grade it yourself. 

“Many students expressed shock and dismay upon learning the AI could fabricate bogus information,” he writes, adding that he expected some essays to contain errors, but all of them did. 

His students were disappointed that “major tech companies had pushed out AI technology without ensuring that the general population understands its drawbacks” and were concerned about how many embraced such a flawed tool.


How to Use AI as a Tool to Support Your Work

As more students are discovering, generative AI models like ChatGPT just aren’t as advanced or intelligent as they may believe. While AI may be a poor option for writing your essay, it can be a great tool to support your work.

Generate ideas for essays

Have ChatGPT help you come up with ideas for essays. For example, input specific prompts, such as, “Please give me five ideas for essays I can write on topics related to WWII,” or “Please give me five ideas for essays I can write comparing characters in twentieth century novels.” Then, use what it provides as a starting point for your original research.

Generate outlines

You can also use ChatGPT to help you create an outline for an essay. Ask it, “Can you create an outline for a five-paragraph essay based on the following topic?” and it will create an outline with an introduction, body paragraphs, conclusion, and a suggested thesis statement. Then, you can expand upon the outline with your own research and original thought.

Generate titles for your essays

Titles should draw a reader into your essay, yet they’re often hard to get right. Have ChatGPT help you by prompting it with, “Can you suggest five titles that would be good for a college essay about [topic]?”

The Benefits of Writing Your Essays Yourself

Asking a robot to write your essays for you may seem like an easy way to get ahead in your studies or save some time on assignments. But outsourcing your work to ChatGPT can negatively impact not just your grades, but your ability to communicate and think critically as well. Writing your essays yourself is always the best approach.

Create your own ideas

Writing an essay yourself means that you’re developing your own thoughts, opinions, and questions about the subject matter, then testing, proving, and defending those thoughts. 

When you complete school and start your career, projects aren’t simply about getting a good grade or checking a box, but can instead affect the company you’re working for — or even impact society. Being able to think for yourself is necessary to create change and not just cross work off your to-do list.

Building a foundation of original thinking and ideas now will help you carve your unique career path in the future.

Develop your critical thinking and analysis skills

In order to test or examine your opinions or questions about a subject matter, you need to analyze a problem or text, and then use your critical thinking skills to determine the argument you want to make to support your thesis. Critical thinking and analysis skills aren’t just necessary in school — they’re skills you’ll apply throughout your career and your life.

Improve your research skills

Writing your own essays will train you in how to conduct research, including where to find sources, how to determine if they’re credible, and their relevance in supporting or refuting your argument. Knowing how to do research is another key skill required throughout a wide variety of professional fields.

Learn to be a great communicator

Writing an essay involves communicating an idea clearly to your audience, structuring an argument that a reader can follow, and making a conclusion that challenges them to think differently about a subject. Effective and clear communication is necessary in every industry.

Be impacted by what you’re learning about

Engaging with the topic, conducting your own research, and developing original arguments allows you to really learn about a subject you may not have encountered before. Maybe a simple essay assignment around a work of literature, historical time period, or scientific study will spark a passion that can lead you to a new major or career.

Resources to Improve Your Essay Writing Skills

While there are many rewards to writing your essays yourself, the act of writing an essay can still be challenging, and the process may come easier for some students than others. But essay writing is a skill that you can hone, and students at Harvard Summer School have access to a number of on-campus and online resources to assist them.

Students can start with the Harvard Summer School Writing Center, where writing tutors can offer you help and guidance on any writing assignment in one-on-one meetings. Tutors can help you strengthen your argument, clarify your ideas, improve the essay’s structure, and lead you through revisions.

The Harvard libraries are a great place to conduct your research, and their librarians can help you define your essay topic, plan and execute a research strategy, and locate sources.

Finally, review “The Harvard Guide to Using Sources,” which can guide you on what to cite in your essay and how to do it. Be sure to review the “Tips For Avoiding Plagiarism” on the “Resources to Support Academic Integrity” webpage as well to help ensure your success.


The Future of AI in the Classroom

ChatGPT and other generative AI models are here to stay, so it’s worthwhile to learn how you can leverage the technology responsibly and wisely so that it can be a tool to support your academic pursuits. However, nothing can replace the experience and achievement gained from communicating your own ideas and research in your own academic essays.

About the Author

Jessica A. Kent is a freelance writer based in Boston, Mass. and a Harvard Extension School alum. Her digital marketing content has been featured on Fast Company, Forbes, Nasdaq, and other industry websites; her essays and short stories have been featured in North American Review, Emerson Review, Writer’s Bone, and others.


ChatGPT and AI Text Generators: Should Academia Adapt or Resist?


Those in the education sector have long expressed divergent opinions about technology innovations and their use in classrooms. The debate may be around something as simple as allowing students to use their laptops or smartphones during class, or it may center around an emerging artificial intelligence (AI) technology like ARISTO, a system that can pass (with a score of 90 percent) an eighth-grade-level science exam.

The COVID-19 pandemic forced even the most reluctant academic institutions to embrace the use of technology; most gradually understood that their mission was to educate new generations on the responsible use of technology rather than excluding it completely from the classroom.

Now, academia faces another technological evolution in the form of artificial intelligence text generators (AITGs). At the beginning of this year, we heard about an AITG called ChatGPT that generates text in response to questions (like a chatbot). But users quickly discovered that this AI is not just a chatbot; it can generate articles, summaries, essays, or code in seconds. The education-related use cases abound, especially for students, who can use ChatGPT to do their school assignments and examinations.

If used unethically, some have argued, this technology could diminish students’ learning process and objectives. As its implications for education became clear, academia took a step back and started asking: Do we adapt or resist?

Since the pandemic, we have contributed to the technology debate by suggesting the adoption of a constructive and positive approach to pedagogy, in which it is better to concentrate on what innovations could bring to education rather than resisting change.

Here, we’ll offer our own recommendations for approaching AITGs. But first, let’s look more generally at some of the advantages and concerns surrounding this AI technology.

WHAT IS CHATGPT?

OpenAI released its first GPT language models in 2018 and 2019, and then launched ChatGPT, a conversational chatbot refined with Reinforcement Learning from Human Feedback (RLHF), in late November 2022.

ChatGPT is trained to generate human-like text for use in conversation. It can answer questions, provide information, and engage in natural-language discussions with users. Some potential use cases include customer service chatbots, virtual assistants, and conversational interfaces for websites or mobile apps. It can also generate content for social media or chatbot scripts for marketing or entertainment purposes. Overall, ChatGPT is a powerful tool for creating chatbots that can have intelligent and engaging conversations with users.

This new technology has created quite a buzz since its latest release, but it’s not the only AITG; other tools similarly interact with users to produce images, text, speech, stories, and even creative writing, including Stable Diffusion, DALL-E, LaMDA, Whisper, DreamFusion, and AlphaCode.

According to the ChatGPT website , some of the tool’s capabilities include the following:

It can record and learn from previous conversations.

It allows users to provide follow-up corrections.

It is trained to decline inappropriate requests.

And some of its limitations include the following:

It may occasionally generate incorrect information.

It may produce harmful instructions or biased content.

It has limited “knowledge” of events after 2021.

The ChatGPT website also warns users about how it collects data for training the model. To improve the system’s accuracy, AI trainers may review the information posted by users; so, for data protection, users are advised not to share sensitive or personal information.

Our warning to users is that just because this tool can generate a coherent response that follows the syntactic, grammatical, and structural rules of many different languages does not mean it is true or complete.

As humans, we tend to take a basic approach to accepting an argument: the more coherent a speech or text is, the more likely we are to take it as valid. Confusing coherence with truth can lead to disastrous mistakes. Since some of the information in the original sources can be biased, invalid, or unreliable, and since it is not reviewed for validation, AITGs can generate coherent answers that are nonetheless wrong or biased.

Opportunities ChatGPT May Bring to Education

The potential of ChatGPT presents intriguing opportunities for education. As most of us would agree, academia must continue to shift from unidirectional knowledge transfer to an active development of competencies, experience, social abilities, and technological agility. Access to information has democratized knowledge acquisition, and ChatGPT may synthesize the use of various sources, albeit not without a potential bias.

On January 21, the Financial Times published an article commenting on a paper by Wharton professor Christian Terwiesch, who describes testing ChatGPT on an exam in a core course of his MBA program. The bot earned a solid grade (between B and B-) and outperformed most of the actual students in the course.

Obviously, these kinds of situations represent a challenge for universities and business schools. But instead of treating ChatGPT as a threat, educators and their institutions should analyze its advantages and use them in favor of the student to achieve their educational objectives.

Here are five ways AITGs can be used in a participant-centered learning process.

Text generators like ChatGPT can boost our collective familiarity with AI and how to use it, a critical competency for our students and their futures. Industries move quickly, and new technologies push them to continue innovating. We must be the first facilitators of new technologies as they emerge and teach our students how to use them appropriately (technically and ethically). If not us, who?

AITGs can assist educators in preparing and reviewing sessions by providing additional resources or helping them create engaging educational content, leading to a better learning experience for students. Educators and students already use search engines, citation and research management apps, spellcheckers, and data collection tools; tools like ChatGPT can additionally help produce written materials (validated by the educator beforehand), including scripts, examples, test questions, or even case analyses to be discussed in class.

AITGs can save educators time by automatically grading students’ assignments or doing educators’ repetitive work—for example, preparing announcements and instructions for assignments or exams; or providing feedback to students when making “recurrent” or common mistakes in solving their exercises; or preparing basic but customized guidelines for activities, such as how to structure a research thesis or how to solve an exercise. AITGs can also be effectively used to give students automatic feedback on their essays and texts.

AITGs can be used for training purposes. For example, students can use ChatGPT to emulate conversations and develop their language skills and abilities through conversational interactions with the chatbot.

ChatGPT could be used to improve engagement in online learning by increasing students’ motivation in asynchronous sessions or activities. Students will find using an innovative tool to be exciting while discovering its potential use in day-to-day tasks, for example to write emails, send text messages, or even prepare a draft of a contract. It can also send automatic but customized feedback and instructions in online courses to help students stay on track (further increasing their engagement in the course).

Early Concerns About Using ChatGPT in Education

Educators understandably have concerns about the risks of this new technology on teaching and evaluating students. Some example concerns include the following:

Plagiarism or fraudulent use of AITGs in written content reflected in papers, essays, tests, and quizzes; identifying and handling unethical behavior in student content; and promoting ethical research with the use of reliable and valid sources of information.

Bias, limitations, and inaccuracy of AITGs in developing responses on certain topics, which could negatively affect the learning process and objectives for students and perpetuate stereotypes and other misinformation.

Students’ overreliance on AITGs, causing them to miss out on important learning opportunities such as critical thinking, problem-solving, proper research techniques, and interaction with educators and peers.

Unfair or inaccurate evaluation of students’ work using AITGs due to its inability to estimate creativity and originality, among other soft skills.

Should We Adapt or Resist? Recommendations for Academia

In the face of these abundant opportunities and risks, education institutions can respond to this new technology by either resisting or adapting. The resistors will treat AI as an enemy and ban it—as has happened with some educators prohibiting the use of laptops, mobile phones, and other technologies in the classroom.

But taking an adaptive response provides an opportunity to raise awareness among faculty about the importance of knowing and using technology in education. Further, we can step in to more adequately train our future generations of students to use AI and be better prepared for the challenges they will face in their careers.

“Instead of taking ChatGPT as a threat, educators and their institutions should analyze its advantages and use them in favor of the student to achieve their educational objectives.”

Here are several recommendations for educators and academic institutions to adapt to this technology rather than resist it.

Help students develop their critical thinking skills by asking them to compare AI-produced content with reliable, valid sources of information.

Invest energy, resources, and creativity into providing an attractive and engaging learning environment for students—either in physical spaces or virtual settings.

Introduce more oral assignments and in-person exams into your curricula. Many Australian universities, for example, have already announced a return to in-person exams to protect the integrity of student assessments.

Increase the importance of in-class activities and participation as a significant element of the evaluation process and a relevant component of the final grade. In other words, shift the object of evaluation from the “product” to the “process” of student learning.

Adopt new or more ways to get students to develop and showcase their knowledge or learning: for example, through debates, in which students take a formalized position on a specific topic and have to defend their idea, perspective, or point of view; or by introducing Art-Thinking methodology, which highlights the ways artists create to help students develop specific skills they need to enter the workforce and face ever-growing uncertainty.

Emphasize ethical use of AI and the importance of proper research and citation practices. For example, some universities include (or are reintroducing) mandatory courses in academic ethics, research integrity, and critical thinking for all students. Developing in future generations a sensibility on these topics is crucial for all levels of education.

At the institution level, train educators in the use and identification of AI-produced content despite the challenge it represents as an emergent technology.

While we are advocating for a more adaptive approach with these recommendations, some educators are choosing a more resistant one. Examples of these resistant reactions to ChatGPT include the following:

Trying to render AITG useless by asking more complex questions in examinations and assignments.

Asking for anecdotal or personal information that AI technology cannot provide—at the expense of learning from public knowledge and existing data on a topic.

Using software that blocks students’ access to the rest of their system when working on examinations or assignments.

Fighting fire with fire: another AI tool, GPTZero, specializes in detecting AI-generated text.

ChatGPT Can Complement Learning, Not Replace It

ChatGPT has academia buzzing, and many institutions are feeling compelled to “do something” about it quickly, perhaps even implementing policy changes or abrupt bans that may prove disruptive when a term is already in progress. But we’re advocating a more adaptive and inclusive approach.

It is important for educators to consider how to address the use of AITGs in the classroom, ensuring that these tools are used as a complement to, rather than a substitute for, traditional teaching methods. As facilitators of tools that can help students optimize their learning processes, we need to use and understand these new technologies to ensure students have a better learning experience.

It is still too early to know whether this AITG is the next tipping point that will disrupt human-digital interaction, as Google did some years ago or as the introduction of the Gutenberg printing press did centuries ago. We do know, however, that the adaptation process is unfolding over a few weeks rather than decades, while questions about this type of AI remain, such as whether it will make human life better by significantly reducing the time invested in producing written content.

Educators will need to think creatively about how to adapt their content, as well as the skills we give (and want to give) our students. We can and should embrace AITGs like ChatGPT as partners that help us learn more and work smarter and faster.

Yvette Mucharraz y Cano

Yvette Mucharraz y Cano is a human resources and communication professor at IPADE Business School in Mexico. She is also the director and board member of the Women’s Research Centre (CIMAD) at IPADE. Her practitioner’s experience has been as a Human Resources and Organizational Development executive for more than 20 years. Her research is focused on organizational disaster resilience and sustainability.

Francesco Venuti

Francesco Venuti is the associate dean for the Executive MBA and the General Management Programme (GMP) at ESCP Business School in Paris, France. He is a full-time associate professor of accounting at ESCP’s Turin campus (Italy), where he teaches all levels. He is also the academic director of the MSc in International Food & Beverage Management (IFBM) and of the EMBA/GMP for ESCP Turin. He also teaches at the University of Torino, Politecnico of Torino, and ESA Business School in Beirut, Lebanon. Among his fields of research, he has developed specific expertise in pedagogy, teaching innovation, and business education.


Ricardo Herrera Martinez is an analysis and decision-making professor at IPADE Business School in Mexico. He has been a data scientist for over seven years and has provided consulting services on machine learning and artificial intelligence strategies for value creation. His research is focused on data-driven decision-making processes.


AI Guidance & FAQs

Harvard supports responsible experimentation with generative AI tools, but there are important considerations to keep in mind when using these tools, including information security and data privacy, compliance, copyright, and academic integrity. The Office of Undergraduate Education has compiled the following resources for instructors regarding appropriate use of generative AI in courses.

Generative AI Event Recordings

In August 2023, Amanda Claybaugh, Dean of Undergraduate Education, and Christopher Stubbs, Dean of Science, hosted informational sessions on the use of generative AI in courses. In each session, faculty presented examples of new assignments they have developed, as well as advice on how to “AI-proof” familiar assignments, and shared thoughts about how to guide students in using these technologies responsibly.

Generative AI in Your STEM Course - August 8, 2023

Generative AI in Your Writing Course - August 9, 2023

Policies for the Use of AI in Courses

We encourage all instructors to include a policy in course syllabi regarding the use and misuse of generative AI. Whether students in your course are forbidden from using ChatGPT or expected to explore its limits, a clear policy ensures that students know what is expected of them. Once you decide on a policy, articulate it clearly and post it on your Canvas site.

Below is sample language you may adopt for your own policy. Feel free to modify it or create your own to suit the needs of your course.

A mixed draft policy, for example (the samples range from maximally restrictive to fully encouraging):

Certain assignments in this course will permit or even encourage the use of generative artificial intelligence (GAI) tools such as ChatGPT. The default is that such use is disallowed unless otherwise stated. Any such use must be appropriately acknowledged and cited. It is each student’s responsibility to assess the validity and applicability of any GAI output that is submitted; the student bears final responsibility for it. Violations of this policy will be considered academic misconduct. Note that different classes at Harvard may implement different AI policies, and it is each student’s responsibility to conform to expectations for each course.

Additional AI Resources

Ai pedagogy project.

Visit the AI Pedagogy Project (AIPP), developed by the metaLAB at Harvard, for an introductory guide to AI tools, an LLM tutorial, additional AI resources, and curated assignments to use in your own classroom. The metaLAB has also published a quick-start guide, Getting Started with ChatGPT.

Teaching and Artificial Intelligence

The Bok Center for Teaching and Learning has created a Canvas module for instructors teaching in the age of AI, which includes information on creating syllabus statements, writing assignments, and in-class assessments. The Bok Center also offers advice and consultations for faculty seeking to respond to the challenges and opportunities posed by AI.

Teaching at the Faculty of Arts and Sciences

The Teaching at FAS website, a collaborative project between several college and university offices, offers a list of resources for Harvard faculty related to designing and teaching courses.

Frequently-Asked Questions about ChatGPT and Generative AI

What is ChatGPT?

Generative artificial intelligence (GAI) tools such as ChatGPT represent a significant advance in natural-language interaction with computers. On the basis of a ‘prompt,’ a GAI system can produce surprisingly human-like responses, including narrative passages and answers to technical questions, and an iterative exchange with the system can refine and tune those responses. The technology is evolving very rapidly. GAI systems have demonstrated the ability to pass the medical licensing exam and the bar exam, generate art and music, and answer graduate-level problems from physics courses. These systems are far from perfect, however, and some of the material they present as factual is incorrect. GAI is a disruptive and rapidly changing technology that will affect many aspects of our lives.

Weaknesses of many GAIs at present include an inability to perform basic arithmetic and a propensity for ‘hallucinations.’ GAI responses also reflect the biases and inaccuracies contained in the training data. It is important to realize that most GAI systems include an intentional ‘random’ element in their responses: the same input does not always produce the same output. In addition, the provenance of the information used in a response does not flow through to the output, which limits our ability to validate it. But GAI capabilities are changing rapidly, and we should anticipate ongoing refinement and progress. We don’t think twice about using spell-checking and grammar-checking tools in word processors, picking a suggested next word when composing a text message, or using electronic calculators and spreadsheets. It will be interesting to observe whether GAI tools become similarly integrated into everyday workflows.
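The intentional ‘random’ element described above is usually implemented as a sampling temperature applied to the model’s output distribution. A minimal sketch (toy scores standing in for a real model’s output; the function name and numbers are invented for illustration) of how temperature controls that randomness:

```python
import math
import random

def sample_next_token(logits, temperature, rng):
    """Sample a token index from temperature-scaled softmax weights.

    Low temperature sharpens the distribution toward the top-scoring
    token (near-deterministic); high temperature flattens it, so
    repeated calls with identical input diverge.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

# Toy scores for four candidate next tokens.
logits = [2.0, 1.5, 0.5, 0.1]
rng = random.Random(0)

# Same "prompt" (same logits), 50 samples each.
cold = {sample_next_token(logits, 0.01, rng) for _ in range(50)}
warm = {sample_next_token(logits, 2.0, rng) for _ in range(50)}
# At temperature 0.01 every draw picks the top token; at 2.0 they vary.
```

This is why two students submitting the same prompt generally receive different essays: production systems sample at a nonzero temperature rather than always emitting the single most probable continuation.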

How does this affect our teaching?

Generative AI systems can produce responses to homework problem sets, to essay assignments, and to take-home exam questions. We should assume that all our students are proficient with these tools and should adjust our expectations accordingly. While it’s true that our students could previously draw upon various resources to avoid doing assignments themselves, the ease-of-use, free access, and high performance of GAI systems have raised this to a new level. Our challenge is to incentivize the level of engagement that leads to a deeper understanding and the development of the habits of mind we hope to instill in our students. 

A good first step is to feed representative assignments from your upcoming courses into a GAI tool such as ChatGPT and look at what it produces. Then assume that, given the opportunity, many of the students in your course are likely to do the same thing. Based on this insight, decide how best to adapt to, and where appropriate incorporate, GAI in your instructional plans. Then decide on and disseminate a course-by-course policy on student use of GAI tools. For more guidance about generative AI in teaching, visit the Bok Center’s AI resources.

As this situation evolves, we need to learn how to best use these tools to enhance learning. We also need to teach our students how to use these tools in an ethical and responsible manner. 

How can I get a ChatGPT account?

Go to chat.openai.com and register for an account. It’s free. You can find many quick-start guides online.

Be sure to review Harvard’s guidelines for use of these tools at https://huit.harvard.edu/ai.

Is there a technology that can detect unauthorized use of ChatGPT?

There are a variety of tools that claim various degrees of success in finding instances when GAI was used, but this is something of an arms race. It would be inadvisable to count on automated methods for GAI detection. FAS does not plan to provide/license such a tool for use in courses.

Is there a technology that can block students from accessing the internet so that they can use their laptops for in-class exams?

Is it appropriate to enter student work into ChatGPT to generate feedback, or for students to enter their own work into ChatGPT?

Confidential information must not be entered into GAI systems, since there is no expectation of privacy or confidentiality. Faculty must obtain documented permission from students before putting original student content into any generative AI tool, and students should be made aware of the risks of entering their original work into such tools.

ChatGPT’s terms of service allow the company to access any information fed into it.

What tools do we and our students have access to?

Harvard HUIT has compiled a list of available tools at https://huit.harvard.edu/ai/tool .

AI bot ChatGPT writes smart essays — should academics worry?

Sandra Wachter

Sandra Wachter, BKC Faculty Associate, discusses ChatGPT and the concerns it raises for academics and education.

The situation both worries and excites Sandra Wachter, who studies technology and regulation at the Oxford Internet Institute, UK. “I’m really impressed by the capability,” she says. But she’s concerned about the potential effect on human knowledge and ability: if students start to use ChatGPT, they will be outsourcing not only their writing but also their thinking.

She’s hopeful that education providers will adapt. “Whenever there’s a new technology, there’s a panic around it,” she says. “It’s the responsibility of academics to have a healthy amount of distrust — but I don’t feel like this is an insurmountable challenge.”

Read more in Nature.

You might also like

  • community Adobe’s ‘Ethical’ Firefly AI Was Trained on Midjourney Images
  • community How dating sites automate racism
  • community Making the Public Record Public
  • Share full article

Advertisement

Subscriber-only Newsletter

On Tech: A.I.

Applying to college? Here’s how A.I. tools might hurt, or help.

ChatGPT might change the application essay forever.

By Natasha Singer

I spent the last week talking with university officials, teachers and high school seniors about the dreaded college admissions essay.

I cover education technology at The Times. And I’ve been thinking a lot about how artificial intelligence tools like ChatGPT, which can manufacture school essays and other texts, might reshape the college application process.

I was particularly interested to learn whether admissions officials were rejiggering their essay questions — or even reconsidering personal essays altogether.

Amid a deluge of high school transcripts and teacher recommendations, admissions officers often use students’ writing samples to identify applicants with unique voices, experiences, ideas and potential. How might that change now that many students are using A.I. chatbots to brainstorm topics, generate rough drafts and hone their essays?

To find out, I contacted admissions officials at more than a dozen large state universities, Ivy League schools and small private colleges, including Juan Espinoza, the director of undergraduate admissions at Virginia Tech.

Right now, he told me, many universities are still trying to figure out how the A.I. technologies work and what they mean for the admissions process.

“But let’s be clear: Students are using it to answer these essay questions,” he added. “So we need to think about how they are using it.”

You can read more in my article today about the implications of A.I. tools for college applications.

The A.I. skeptics

I also gleaned some interesting insights into what admissions offices are thinking about ChatGPT by listening to podcasts from different universities. “Inside the Yale Admissions Office,” a podcast from Yale University, devoted an episode to A.I. tools this week.

The title of the episode — “A.I. and College Essays: Wrong Question, Wrong Answer” — is blunt about Yale’s viewpoint.

During the podcast, two Yale admissions officers discussed how using tools like ChatGPT to write college essays was a form of plagiarism. An applicant who submitted a chatbot-generated essay, they said, would violate the university’s admissions policy.

The Yale experts also argued that personal essays for college applications were “meant to be introspective and reflective.” And they said outsourcing that kind of personal thinking to an A.I. chatbot would not help.

“A.I.-generated content simply isn’t very good at the mode of communication that works in college essays,” Hannah Mendlowitz, senior associate director of admissions at Yale, said on the podcast.

But after doing a few basic A.I. experiments, I have a feeling that such views may not hold up for long.

This week, I used ChatGPT and other tools to manufacture responses to some of the short-answer questions from Yale, Harvard, Princeton and Dartmouth. Although the A.I. bots got some facts wrong, after several rounds of prompts and prodding they produced passable writing. It was easy to see how high school students might use these tools to generate first drafts and then rewrite the texts to reflect their own voices and experiences.

Ethical or not, the tools may help students who feel stuck, or who are not naturally drawn to essay writing, get started.

You can read the short-answer college essays that the chatbots generated in my article here.

A democratizing force

Espinoza of Virginia Tech and other admissions experts told me that they thought ChatGPT could be a democratizing force, especially for high school students whose parents have limited or no experience applying to colleges.

“I wonder what role this could play in simplifying this complex process,” Espinoza said, adding that he was a first-generation collegegoer himself. “If there’s a way this tool can help those that have a different starting point catch up, or narrow those discrepancies, I think that shows a lot of promise.”

To get some suggestions on how high school students applying to college might use A.I. tools, I interviewed Meg Scheid, the college outreach coordinator at the Gwinnett School of Mathematics, Science and Technology, a top public high school in the Atlanta area.

As ChatGPT gained traction in schools last winter, she worried that students across the country might use it to pass off A.I.-generated writing as their own work. She quickly changed her mind after she realized that high school students could use A.I. tools to conduct college searches to fit narrow criteria — like large public universities in the South that are known for their biostatistics research, or small private colleges that offer full scholarships to soccer players.

Scheid said A.I. chatbots might also be helpful for students who were trying to condense long lists of extracurricular activities into the limited spaces on college applications.

“It allows students to make sure that they are letting colleges know: ‘I wasn’t just a captain of the basketball team. I also did these other things that really point to me as a person,’” she said.

A.I. brainstorming

Khan Academy, the online education nonprofit, recently developed its own A.I. chatbot for schools, called Khanmigo. And now it includes a feature designed to help students come up with ideas for college admissions essays. I tried it this week.

Khanmigo can be a chatty, upbeat collaborator. It begins a session by advising students that college essays are not the “place to share a list of your achievements — rather, it’s a chance to reveal your unique personality and perspective.”

Then it offers students a list of common pitfalls to avoid when writing their essays. One is: “Don’t write about how the suffering of others made you realize how lucky you are, or how good your life is. This kind of essay makes college admissions officers cringe.”

I think the brainstorming feature could be useful, especially for students who are having trouble deciding on an essay topic. Khanmigo starts by asking users overview questions about their experiences. Then it homes in on themes that sound promising.

I tried out a real example from my teenage years: One summer I worked in a bagel store in the mornings and interned at a genetics lab in the afternoons.

Khanmigo zeroed in on that summer experience. It asked some follow-up questions to elicit details that might be useful for a college essay:

KHANMIGO: Wow, waking up at 5 a.m. to work at a bagel store so you could intern at a genetics lab in the afternoons — that’s dedication! 🙌 How did this experience change you as a person? What did you learn from it?

The software went on to ask for visual details about the bagel store and the lab. Those kinds of descriptions could clearly help enrich an opening scene in a college essay.

I am not arguing that A.I. tools are a perfect substitute for teachers, counselors, parents, friends and other humans who can help with college essays. But I appreciated the instant feedback from Khanmigo and the other A.I. chatbots I tried this week. It definitely made essay-writing more fun!

For next week, I’ll be looking into how educators are using A.I. tools to help generate recommendation letters for their students. If you are a college professor or high school teacher who wants to share your experiences using A.I. chatbots, please fill out this form. We may use your response in a future newsletter.

Natasha Singer writes about technology, business and society. She is currently reporting on the far-reaching ways that tech companies and their tools are reshaping public schools, higher education and job opportunities. More about Natasha Singer

“We’re dealing with an alien intelligence that’s capable of astonishing feats, but not in the manner of the human mind,” said psychologist Steven Pinker.

Will ChatGPT supplant us as writers, thinkers?

Alvin Powell

Harvard Staff Writer

Steven Pinker says it’s impressive; will have uses, limits; may offer insights into nature of human intelligence (once it ‘stops making stuff up’)

Steven Pinker thinks ChatGPT is truly impressive — and will be even more so once it “stops making stuff up” and becomes less error-prone. Higher education, indeed much of the world, was set abuzz in November when OpenAI unveiled its ChatGPT chatbot, capable of instantly answering questions (in fact, composing writing in various genres) across a range of fields in a conversational and ostensibly authoritative fashion. Utilizing a type of AI called a large language model (LLM), ChatGPT is able to continuously learn and improve its responses. But just how good can it get? Pinker, the Johnstone Family Professor of Psychology, has investigated, among other things, links between the mind, language, and thought in books like the award-winning bestseller “The Language Instinct,” and he has a few thoughts of his own on whether we should be concerned about ChatGPT’s potential to displace humans as writers and thinkers. This interview was edited for clarity and length.

Steven Pinker

GAZETTE: ChatGPT has gotten a great deal of attention, and a lot of it has been negative. What do you think are the important questions that it brings up?

PINKER: It certainly shows how our intuitions fail when we try to imagine what statistical patterns lurk in half a trillion words of text and can be captured in 100 billion parameters. Like most people, I would not have guessed that a system that did that would be capable of, say, writing the Gettysburg Address in the style of Donald Trump. There are patterns of patterns of patterns of patterns in the data that we humans can’t fathom. It’s impressive how ChatGPT can generate plausible prose, relevant and well-structured, without any understanding of the world — without overt goals, explicitly represented facts, or the other things we might have thought were necessary to generate intelligent-sounding prose.

And this appearance of competence makes its blunders all the more striking. It utters confident confabulations, such as that the U.S. has had four female presidents, including Luci Baines Johnson, 1973-77. And it makes elementary errors of common sense. For 25 years I’ve begun my introductory psychology course by showing how our best artificial intelligence still can’t duplicate ordinary common sense. This year I was terrified that that part of the lecture would be obsolete because the examples I gave would be aced by GPT. But I needn’t have worried. When I asked ChatGPT, “If Mabel was alive at 9 a.m. and 5 p.m., was she alive at noon?” it responded, “It was not specified whether Mabel was alive at noon. She’s known to be alive at 9 and 5, but there’s no information provided about her being alive at noon.” So, it doesn’t grasp basic facts of the world — like people live for continuous stretches of time and once you’re dead you stay dead — because it has never come across a stretch of text that made that explicit. (To its credit, it did know that goldfish don’t wear underpants.)

We’re dealing with an alien intelligence that’s capable of astonishing feats, but not in the manner of the human mind. We don’t need to be exposed to half a trillion words of text (which, at three words a second, eight hours a day, would take 15,000 years) in order to speak or to solve problems. Nonetheless, it is impressive what you can get out of very, very, very high-order statistical patterns in mammoth data sets.

GAZETTE: Open AI has said its goal is to develop artificial general intelligence. Is this advisable or even possible?

PINKER: I think it’s incoherent, like a “general machine” is incoherent. We can visualize all kinds of superpowers, like Superman’s flying and invulnerability and X-ray vision, but that doesn’t mean they’re physically realizable. Likewise, we can fantasize about a superintelligence that deduces how to make us immortal or bring about world peace or take over the universe. But real intelligence consists of a set of algorithms for solving particular kinds of problems in particular kinds of worlds. What we have now, and probably always will have, are devices that exceed humans in some challenges and not in others.

GAZETTE: Are you concerned about its use in your classroom?

PINKER: No more than about downloading term papers from websites. The College has asked us to remind students that the honor pledge rules out submitting work they didn’t write. I’m not naïve; I know that some Harvard students might be barefaced liars, but I don’t think there are many. Also, at least so far, a lot of ChatGPT output is easy to unmask because it mashes up quotations and references that don’t exist.

GAZETTE: There are a range of things that people are worried about with ChatGPT, including disinformation and jobs being at stake. Is there a particular thing that worries you?

PINKER: Fear of new technologies is always driven by scenarios of the worst that can happen, without anticipating the countermeasures that would arise in the real world. For large language models, this will include the skepticism that people will cultivate for automatically generated content (journalists have already stopped using the gimmick of having GPT write their columns about GPT because readers are onto it), the development of professional and moral guardrails (like the Harvard honor pledge), and possibly technologies that watermark or detect LLM output.

There are other sources of pushback. One is that we all have deep intuitions about causal connections to people. A collector might pay $100,000 for John F. Kennedy’s golf clubs even though they’re indistinguishable from any other golf clubs from that era. The demand for authenticity is even stronger for intellectual products like stories and editorials: The awareness that there’s a real human you can connect it to changes its status and its acceptability.

Another pushback will come from the forehead-slapping blunders, like the claims that crushed glass is gaining popularity as a dietary supplement or that nine women can make a baby in one month. As the systems are improved by human feedback (often from click farms in poor countries), there will be fewer of these clangers, but given the infinite possibilities, they’ll still be there. And, crucially, there won’t be a paper trail that allows us to fact-check an assertion. With an ordinary writer, you could ask the person and track down the references, but in an LLM, a “fact” is smeared across billions of tiny adjustments to quantitative variables, and it’s impossible to trace and verify a source.

Nonetheless, there are doubtless many kinds of boilerplate that could be produced by an LLM as easily as by a human, and that might be a good thing. Perhaps we shouldn’t be paying the billable hours of an expensive lawyer to craft a will or divorce agreement that could be automatically generated.

GAZETTE: We hear a lot about potential downsides. Is there a potential upside?

PINKER: One example would be its use as a semantic search engine, as opposed to our current search engines, which are fed strings of characters. Currently, if you have an idea rather than a string of text, there’s no good way to search for it. Now, a real semantic search engine would, unlike an LLM, have a conceptual model of the world. It would have symbols for people and places and objects and events, and representations of goals and causal relations, something closer to the way the human mind works. But for just a tool, like a search engine, where you just want useful information retrieval, I can see that an LLM could be tremendously useful — as long as it stops making stuff up.
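The contrast Pinker draws between searching by string and searching by idea can be sketched with embedding-based retrieval. Below, hand-made 3-dimensional vectors stand in for the high-dimensional embeddings a real model would produce; the document titles, vectors, and function names are invented for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hand-made "embeddings": nearby vectors represent related ideas.
docs = {
    "How to bake sourdough bread": [0.9, 0.1, 0.0],
    "A recipe for rye loaves":     [0.8, 0.2, 0.1],
    "Quarterly earnings report":   [0.0, 0.9, 0.3],
}

def semantic_search(query_vec, docs, k=1):
    """Rank documents by vector similarity, not keyword overlap."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]),
                    reverse=True)
    return ranked[:k]

# A query about "making bread at home" shares no words with the bread
# documents' titles, yet its vector lands nearest to them.
query = [0.85, 0.15, 0.05]
top = semantic_search(query, docs, k=1)
```

A real system would obtain the vectors from an embedding model rather than by hand; the ranking step is the same. As Pinker notes, this is still retrieval over statistical similarity, not a conceptual model of the world.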

GAZETTE: If we look down the road and these things get better — potentially exponentially better — are there impacts for humans on what it means to be learned, to be knowledgeable, even to be expert?

PINKER: I doubt it will improve exponentially, but it will improve. And, as with the use of computers to supplement human intelligence in the past — all the way back to calculation and record-keeping in the ’60s, search in the ’90s, and every other step — we’ll be augmenting our own limitations. Just as we had to acknowledge our own limited memory and calculation capabilities, we’ll acknowledge that retrieving and digesting large amounts of information is something that we can do well but artificial minds can do better.

Since LLMs operate so differently from us, they might help us understand the nature of human intelligence. They might deepen our appreciation of what human understanding does consist of when we contrast it with systems that superficially seem to duplicate it, exceed it in some ways, and fall short in others.

GAZETTE: So humans won’t be supplanted by artificial general intelligence? We’ll still be on top, essentially? Or is that the wrong framing?

PINKER: It’s the wrong framing. There isn’t a one-dimensional scale of intelligence that embraces all conceivable minds. Sure, we use IQ to measure differences among humans, but that can’t be extrapolated upward to an everything-deducer, if only because its knowledge about empirical reality is limited by what it can observe. There is no omniscient and omnipotent wonder algorithm: There are as many intelligences as there are goals and worlds.

The present and future of AI

Finale Doshi-Velez on how AI is shaping our lives and how we can shape AI

Finale Doshi-Velez, the John L. Loeb Professor of Engineering and Applied Sciences. (Photo courtesy of Eliza Grinnell/Harvard SEAS)

How has artificial intelligence changed and shaped our world over the last five years? How will AI continue to impact our lives in the coming years? Those were the questions addressed in the most recent report from the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted at Stanford University that will study the status of AI technology and its impacts on the world over the next 100 years.

The 2021 report is the second in a series that will be released every five years until 2116. Titled “Gathering Strength, Gathering Storms,” the report explores the various ways AI is increasingly touching people’s lives in settings that range from movie recommendations and voice assistants to autonomous driving and automated medical diagnoses.

Barbara Grosz, the Higgins Research Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), is a member of the standing committee overseeing the AI100 project, and Finale Doshi-Velez, Gordon McKay Professor of Computer Science, is part of the panel of interdisciplinary researchers who wrote this year’s report.

We spoke with Doshi-Velez about the report, what it says about the role AI is currently playing in our lives, and how it will change in the future.  

Q: Let's start with a snapshot: What is the current state of AI and its potential?

Doshi-Velez: Some of the biggest changes in the last five years have been how well AIs now perform in large data regimes on specific types of tasks.  We've seen [DeepMind’s] AlphaZero become the best Go player entirely through self-play, and everyday uses of AI such as grammar checks and autocomplete, automatic personal photo organization and search, and speech recognition become commonplace for large numbers of people.  

In terms of potential, I'm most excited about AIs that might augment and assist people.  They can be used to drive insights in drug discovery, help with decision making such as identifying a menu of likely treatment options for patients, and provide basic assistance, such as lane keeping while driving or text-to-speech based on images from a phone for the visually impaired.  In many situations, people and AIs have complementary strengths. I think we're getting closer to unlocking the potential of people and AI teams.

Q: Over the course of 100 years, these reports will tell the story of AI and its evolving role in society. Even though there have only been two reports, what's the story so far?

There's actually a lot of change even in five years.  The first report is fairly rosy.  For example, it mentions how algorithmic risk assessments may mitigate the human biases of judges.  The second has a much more mixed view.  I think this comes from the fact that as AI tools have come into the mainstream — both in higher stakes and everyday settings — we are appropriately much less willing to tolerate flaws, especially discriminatory ones. There's also been questions of information and disinformation control as people get their news, social media, and entertainment via searches and rankings personalized to them. So, there's a much greater recognition that we should not be waiting for AI tools to become mainstream before making sure they are ethical.

Q: What is the responsibility of institutes of higher education in preparing students and the next generation of computer scientists for the future of AI and its impact on society?

First, I'll say that the need to understand the basics of AI and data science starts much earlier than higher education!  Children are being exposed to AIs as soon as they click on videos on YouTube or browse photo albums. They need to understand aspects of AI such as how their actions affect future recommendations.
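The feedback loop described here, where a child's clicks shape what gets recommended next, can be sketched in a few lines. Everything below (the categories, the scoring rule, the boost value) is invented purely for illustration; real recommender systems are vastly more elaborate:

```python
# Toy sketch of click feedback shaping future recommendations.
# All names and numbers here are invented for illustration only.
from collections import defaultdict

def update_scores(scores, clicked_category, boost=1.0):
    """Raise the score of a category every time the user clicks on it."""
    scores[clicked_category] += boost
    return scores

def recommend(scores):
    """Recommend the category the user has clicked on most often."""
    return max(scores, key=scores.get)

scores = defaultdict(float, {"science": 0.0, "music": 0.0, "games": 0.0})
for click in ["games", "games", "music"]:
    update_scores(scores, click)

print(recommend(scores))  # the most-clicked category dominates future picks
```

The point of the sketch is the loop itself: each click nudges the scores, and the scores decide what is shown next, which in turn shapes the next click.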

But for computer science students in college, I think a key thing that future engineers need to realize is when to demand input and how to talk across disciplinary boundaries to get at often difficult-to-quantify notions of safety, equity, fairness, etc.  I'm really excited that Harvard has the Embedded EthiCS program to provide some of this education.  Of course, this is an addition to standard good engineering practices like building robust models, validating them, and so forth, which is all a bit harder with AI.


Q: Your work focuses on machine learning with applications to healthcare, which is also an area of focus of this report. What is the state of AI in healthcare? 

A lot of AI in healthcare has been on the business end, used for optimizing billing, scheduling surgeries, that sort of thing.  When it comes to AI for better patient care, which is what we usually think about, there are few legal, regulatory, and financial incentives to do so, and many disincentives. Still, there's been slow but steady integration of AI-based tools, often in the form of risk scoring and alert systems.

In the near future, two applications that I'm really excited about are triage in low-resource settings — having AIs do initial reads of pathology slides, for example, if there are not enough pathologists, or get an initial check of whether a mole looks suspicious — and ways in which AIs can help identify promising treatment options for discussion with a clinician team and patient.
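The "risk scoring and alert systems" pattern mentioned above can be sketched in a few lines. The features, weights, and threshold below are invented for illustration; real clinical risk scores are validated statistical models, not hand-picked numbers like these:

```python
# Toy sketch of a risk-scoring and alert system.
# Features, weights, and threshold are invented for illustration only.
def risk_score(patient):
    """Weighted sum of a few made-up indicators, in the range 0..1."""
    weights = {"age_over_65": 0.3, "abnormal_lab": 0.5, "readmission": 0.2}
    return sum(w for key, w in weights.items() if patient.get(key))

def alerts(patients, threshold=0.6):
    """Flag patients whose score crosses the threshold for clinician review."""
    return [p["id"] for p in patients if risk_score(p) >= threshold]

ward = [
    {"id": "A", "age_over_65": True, "abnormal_lab": True},  # score 0.8
    {"id": "B", "readmission": True},                        # score 0.2
]
print(alerts(ward))  # ['A']
```

Note that the system only flags cases for review; the decision stays with the clinician team, matching the "identify promising treatment options for discussion" framing above.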

Q: Any predictions for the next report?

I'll be keen to see where currently nascent AI regulation initiatives have gotten to. Accountability is such a difficult question in AI; it's tricky to nurture both innovation and basic protections. Perhaps the most important innovation will be in approaches for AI accountability.

Finale Doshi-Velez

Herchel Smith Professor of Computer Science


Research with Generative AI

Resources for scholars and researchers

Generative AI (GenAI) technologies offer new opportunities to advance research and scholarship. This resource page aims to provide Harvard researchers and scholars with basic guidance, information on available resources, and contacts. The content will be regularly updated as these technologies continue to evolve. Your feedback is welcome.

Leading the way

Harvard’s researchers are making strides not only in generative AI, but in the larger world of artificial intelligence and its applications. Learn more about key efforts.

The Kempner Institute

The Kempner Institute is dedicated to revealing the foundations of intelligence in both natural and artificial contexts, and to leveraging these findings to develop groundbreaking technologies.

Harvard Data Science Initiative

The Harvard Data Science Initiative is dedicated to understanding the many dimensions of data science and propelling it forward.

More AI @ Harvard

Generative AI is only part of the fascinating world of artificial intelligence. Explore Harvard’s groundbreaking and cross-disciplinary academic work in AI.

Funding opportunity

GenAI Research Program / Summer Funding for Harvard College Students 2024

The Office of the Vice Provost for Research, in partnership with the Office of Undergraduate Research and Fellowships, is pleased to offer an opportunity for collaborative research projects related to Generative AI between Harvard faculty and undergraduate students over the summer of 2024.

Learn more and apply

Frequently asked questions

Can I use generative AI to write and/or develop research papers?

Academic publishers have a range of policies on the use of AI in research papers. In some cases, publishers may prohibit the use of AI for certain aspects of paper development. You should review the specific policies of the target publisher to determine what is permitted.

Here is a sampling of policies available online:

  • JAMA and the JAMA Network
  • Springer Nature

How should AI-generated content be cited in research papers?

Guidance will likely develop as AI systems evolve, but some leading style guides have offered recommendations:

  • The Chicago Manual of Style
  • MLA Style Guide

Should I disclose the use of generative AI in a research paper?

Yes. Most academic publishers require researchers using AI tools to document this use in the methods or acknowledgements sections of their papers. You should review the specific guidelines of the target publisher to determine what is required.

Can I use AI in writing grant applications?

You should review the specific policies of potential funders to determine if the use of AI is permitted. For its part, the National Institutes of Health (NIH) advises caution: “If you use an AI tool to help write your application, you also do so at your own risk,” as these tools may inadvertently introduce issues associated with research misconduct, such as plagiarism or fabrication.

Can I use AI in the peer review process?

Many funders have not yet published policies on the use of AI in the peer review process. However, the National Institutes of Health (NIH) has prohibited such use “for analyzing and formulating peer review critiques for grant applications and R&D contract proposals.” You should carefully review the specific policies of funders to determine their stance on the use of AI.

Are there AI safety concerns or potential risks I should be aware of?

Yes. Some of the primary safety issues and risks include the following:

  • Bias and discrimination: The potential for AI systems to exhibit unfair or discriminatory behavior.
  • Misinformation, impersonation, and manipulation: The risk of AI systems disseminating false or misleading information, or being used to deceive or manipulate individuals.
  • Research and IP compliance: The necessity for AI systems to adhere to legal and ethical guidelines when utilizing proprietary information or conducting research.
  • Security vulnerabilities: The susceptibility of AI systems to hacking or unauthorized access.
  • Unpredictability: The difficulty in predicting the behavior or outcomes of AI systems.
  • Overreliance: The risk of relying excessively on AI systems without considering their limitations or potential errors.

See Initial guidelines for the use of Generative AI tools at Harvard for more information.


Generative AI tools

  • Explore Tools Available to the Harvard Community
  • Request API Access
  • Request a Vendor Risk Assessment
  • Questions? Contact HUIT

Copyright and intellectual property

  • Copyright and Fair Use: A Guide for the Harvard Community
  • Copyright Advisory Program
  • Intellectual Property Policy
  • Protecting Intellectual Property

Data security and privacy

  • Harvard Information Security and Data Privacy
  • Data Security Levels – Research Data Examples
  • Privacy Policies and Guidelines

Research support

  • University Research Computing and Data (RCD) Services
  • Research Administration and Compliance
  • Research Computing
  • Research Data and Scholarship
  • Faculty engaged in AI research
  • Centers and initiatives engaged in AI research
  • Degree and other education programs in AI


University News | 8.10.2023

Embracing AI

The dawn of the virtual teaching fellow.


“Thank you, weirdly informative robot,” wrote a student taking Harvard’s introductory computer science course this summer, after receiving help from an AI-powered avatar. The development of generative artificial intelligence, which can create and synthesize new material based on publicly available sources, has sparked widespread concern among educators that students will use it to complete assignments and write essays, subverting the process of teaching and learning. And many are responding by instituting new rules restricting the use of AI in classwork. But in “CS50: Introduction to Computer Science,” with its global reach, course leaders are instead embracing AI. This summer, when the class of 70 is much smaller than in the fall (when enrollment swells nearly tenfold), McKay professor of the practice of computer science David Malan and his teaching fellows have been testing an AI developed specifically for the course. Their goal is to prototype an intelligent subject matter expert who could help students at any time of day or night, answering questions and providing feedback far beyond normal teaching hours. Ultimately, says Malan, they aim to create an intelligent aid so adept that the result is “the approximation of a one-to-one teacher to student ratio.”

Tools like ChatGPT—the source of much anxiety about how to distinguish original student work from that generated using AI—are built on platforms like the one developed by OpenAI, a non-profit artificial intelligence research laboratory (with a for-profit subsidiary) based in the United States. The AI works by iteratively choosing the next word in any response that it gives through a probabilistic evaluation of existing material drawn from publicly available online sources, with a little bit of randomness thrown in. That’s an oversimplification, but regardless, Malan says, ChatGPT and other AI tools are already too good at writing code (and essays) to be useful for teaching beginning computer science students: it could just hand them the answers. The AI that he and his team have built uses the same Application Programming Interfaces as ChatGPT (APIs allow projects and databases to talk to each other in a common programming language), but with “pedagogical guardrails” in place, so that it helps students learn how to write their own code.
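The next-word loop described above can be illustrated with a toy bigram model. The tiny vocabulary and scores below are invented, and real systems score an enormous vocabulary with a neural network, but the sampling step, a probabilistic choice "with a little bit of randomness thrown in" (controlled here by a temperature parameter), is the same in spirit:

```python
# Toy sketch of next-word sampling. The bigram table is invented for
# illustration; real language models use neural networks, not lookup tables.
import math
import random

BIGRAMS = {
    "the": {"cat": 2.0, "dog": 1.5, "end": 0.1},
    "cat": {"sat": 2.0, "ran": 1.0, "the": 0.2},
    "dog": {"ran": 2.0, "sat": 0.5, "the": 0.2},
    "sat": {"the": 1.0, "end": 1.5},
    "ran": {"the": 1.0, "end": 1.5},
}

def sample_next(word, temperature=1.0, rng=random):
    """Softmax over candidate scores, then draw one word at random."""
    candidates = BIGRAMS.get(word, {"end": 1.0})
    weights = [math.exp(s / temperature) for s in candidates.values()]
    return rng.choices(list(candidates), weights=weights, k=1)[0]

def generate(start, max_words=8, temperature=1.0):
    """Repeat the sampling step until an end token or a length cap."""
    out = [start]
    while out[-1] != "end" and len(out) < max_words:
        out.append(sample_next(out[-1], temperature))
    return " ".join(out)

print(generate("the"))
```

Lowering the temperature makes the highest-scored word win almost every time; raising it flattens the distribution and the output gets more surprising. That single knob is the "little bit of randomness" in the description above.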

The AI in CS50 appears on-screen as a rubber duck. “For years we have given out actual rubber ducks to students,” Malan explains, “because in the world of programming, there’s this concept known as ‘rubber duck debugging.’ The idea is that if you’re working alone,” with no colleague, roommate or teaching fellow (TF) to check your logic, “you’re encouraged to talk to this inanimate object—this rubber duck”—because expressing your thoughts aloud can help uncover “illogical constructs” in the code. For the past few years, CS50 has used the digital duck in a limited way within its course messaging system. “Now we are bringing the duck to life” virtually at least, Malan notes with a grin.


The CS50 team plans to endow the AI with at least seven different capabilities, some of which have already been implemented. The AI can explain highlighted lines of code in plain English, just the way ChatGPT might. “These explanations are automatically generated,” notes Malan, and, line by line, they tell students exactly what the code is doing. The AI duck can also advise students how to improve their code, explain arcane error messages (which are written to be read by advanced programmers), and help students find bugs in their code via rhetorical questions of the kind that a human TF might pose (“you might want to take a look at lines 11 and 12”). Eventually, CS50’s AI will be able to assess the design of student programs, provide feedback, and help measure student understanding by administering oral exams—which can then be evaluated by the human course staff reviewing transcripts of the interaction.

CS50 TFs typically spend three to six hours per week providing qualitative feedback on students’ homework, and this is “hands down the most time-consuming aspect of being a TF.” Empirical measurement has revealed that “students typically spend zero to 14 seconds reading that same feedback,” Malan says, “so it has never been a good balance of resources—of supply and demand.” The hope is that AI, by providing personalized feedback, will help “reclaim some of that human time so that the TFs can spend more time in sections and office hours actually working with students.”

“One of the most impactful experiments we’ve been running this summer,” Malan continues, has been the use of a third-party tool called Ed (edstem.org), an online, software-driven question and answer forum. “A few months ago, they wonderfully added support for the ability to write code that can talk to outside servers like ours, so that when a student posts a question on this software (for which Harvard has a site license), we can relay the question over the Internet to our server, use those OpenAI APIs to try to answer the question, and then have the duck respond to the students within the same environment.”
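The relay flow Malan describes (forum post, server, model, duck reply) can be sketched with stand-ins. The function names and the guardrail prompt below are invented; only the overall shape comes from the article, and a real deployment would call an actual model API rather than the fake used here:

```python
# Sketch of the forum -> server -> model -> duck relay described above.
# The guardrail prompt and all names are invented for illustration.
GUARDRAIL_PROMPT = (
    "You are a friendly rubber duck teaching assistant. Guide the student "
    "with hints and rhetorical questions; do not hand over full solutions."
)

def relay_question(student_post, ask_model):
    """Wrap a forum post in the pedagogical guardrails, call the model,
    and return the text the duck should post back to the thread."""
    messages = [
        {"role": "system", "content": GUARDRAIL_PROMPT},
        {"role": "user", "content": student_post},
    ]
    return ask_model(messages)

# A fake model lets us exercise the plumbing without any API key.
def fake_model(messages):
    return f"Quack! About '{messages[-1]['content']}': what does line 11 do?"

print(relay_question("Why does my loop never end?", fake_model))
```

Passing the model in as a callable keeps the relay logic testable on its own; the "pedagogical guardrails" live in the system message that every student question gets wrapped in.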

The technology is also paired with new language outlining course expectations. Students in the summer program were told that the use of ChatGPT and other AIs is not allowed—but that using CS50’s own AI-based software developed specifically for the course is reasonable. Malan expects to roll out similar language in the course this fall. “That’s the balance…we’re trying to strike,” he says. The software presents “amazingly impactful positive opportunities,” but not “out of the box right now. So, we’re trying to get the best of both worlds.”

For some time, Malan has partnered with Harvard’s Embedded EthiCS program—and discussions about academic dishonesty are also woven into the curriculum so that students understand the course expectations and their purpose. Student participation in such conversations has become increasingly important as detecting work generated by ChatGPT and other AIs has become more difficult. For instance, a detection tool developed by OpenAI, the creators of ChatGPT, was released in February. It had been designed to distinguish between AI-written and human-written text. But in July, just five months after its introduction, the tool was quietly retired due to its “low rate of accuracy.” Says Malan, “As AI gets better, it probably will become indistinguishable from what a human might have written.” Soon, AI that is akin to having a 24/7 personal educational assistant who can answer any question will be widely available. “That’s wonderfully empowering if it’s used ethically and in a way that’s consistent with what we’re all here to do, which is presumably to learn.”


Artificial Intelligence

If you've been following the news, you've probably heard about ChatGPT, a language model that is able to draw upon artificial intelligence to compose snippets of essays, computer code, and other work that may be hard to tell apart from authentic student work. If you and/or your colleagues or teaching staff would like to have a conversation with someone at the Bok Center about how to approach assignments and academic integrity in this new era of artificial intelligence, please reach out!


In the meantime, here is some advice about how you might adapt your teaching to account for the challenges and opportunities posed by the new technology.

What do I need to learn about generative AI?

Instructors should experiment with generative AI tools to see how they respond to course assignments, consider what students can learn from using the tools, and anticipate the challenges they may pose. Specifically, we would suggest the following course of action:

  • Make sure you are acquainted with the university’s latest guidance about responsible generative AI use, including policy and privacy considerations.
  • Read through the Bok Center’s Introduction to Generative AI.
  • Create account(s) on a few of the most common AI platforms supported by the university and familiarize yourself with their user interfaces.
  • Visit the AI Pedagogy Project, created by the metaLAB at Harvard, for a guide to getting started with AI tools, an LLM Tutorial, additional resources, and curated assignments.
  • Try using AI tools to complete some of the most common/most important intellectual tasks that you expect of your students. (You may be able to enlist colleagues and/or other members of your teaching staff to participate, as well, as many instructors are interested in learning together what the capacities and affordances of these tools are.) This might include things like:
      • Explaining or illustrating a foundational course concept.
      • Identifying a good research question or topic.
      • Generating an argument or outline for a document (essay, lab report, etc.) of the kind that you expect students to write.
      • Drafting, or enhancing, fragments of prose of the kind that you expect students to write.
      • Imagining how someone holding a different identity or position from their own might react to a given argument or set of circumstances.
      • Solving a practice problem or drafting code (or obtaining models or hints about how to do this).
      • Producing multimedia presentations, images, etc. of the kind that you expect students to create.

How should generative AI affect my teaching and the work that students do in my course?

Once you have a sense of how a student might use AI to complete some of the tasks that you might ordinarily assign them, we suggest that you reconnect with your goals and values with regard to what you want students to learn in your course. Does AI leave some of them unchanged? Render some of them moot? Allow you to scale up or drill down on some?

Reviewing the Bok Center’s course design resources can help you get in touch with the goals for the course, and help you decide where you might wish to “defend” your objectives against AI vs. where you might be content to evolve your objectives, or even lean into AI, now that you have seen how students might use it.

Not all assignments are teaching and/or measuring the same skills. Do you assign your students an essay in order to see them produce a polished piece of scholarship, or because essay-writing happens to be one of our best technologies for externalizing our inner thought process? Is it more important for students to show you that they can complete a series of actions successfully, or is the point to see what they are thinking while they attempt to complete those actions? Depending on what you are trying to learn about your students' knowledge and abilities, you may be more or less comfortable allowing them to incorporate artificial intelligence into their research / writing / coding / composing / editing process.

If your course typically features a number of writing assignments, ranging from weekly reading responses to a final research paper, you may find helpful advice about how to augment or revise your writing prompts in the Bok Center's Guidance on Generative Artificial Intelligence and Writing Assignments, as well as the Harvard College Writing Program's Framework for Designing Assignments in the Age of AI.

If you are contemplating moving towards more in-class assessment (i.e. blue book exams or similar, seated assessments), you may find helpful advice about how to stage them in the Bok Center's Guidance for Instructors Moving to Seated Exams.

How do we develop a policy statement for the course syllabus?

With an eye to what you’ve learned through your experiments with generative AI, you should draft a course policy statement that speaks clearly to students about the ways in which AI may and may not be used in the context of your course. You may find it helpful to review the syllabus statement advice provided by the Bok Center and the Office of Undergraduate Education.

How will I know if my course policy is complete?

As you draft and revise your course policy, you may wish to use the Bok Center’s Illustrated Rubric for Syllabus Statements about Generative Artificial Intelligence as a checklist to make sure you are anticipating the full range of things that students may want to know about your policy.

How should we communicate the course policy to students?

While incorporating your policy statement into your syllabus is a good start, it is likely not the only way that you will want to share it with students. You may also wish to:

  • Post the statement prominently on your Canvas site
  • Reiterate the statement on each assignment prompt
  • Discuss the policy in course meetings
  • Designate occasional office hours for students to ask questions about how to interpret the policy with reference to specific assignments

Ideally, your statement of course policy will function less as the “last word” on AI use, and more as an invitation for students to come forward and share their questions and thoughts with you as we figure out, together, how to apply AI in responsible ways that enhance learning. You may wish to set the expectation early in the semester that the policy may be updated as new use cases are discovered.

How should we enact the policy and apply it to the in-class work and assignments in the course?

You should talk regularly with the teaching team and with students about how your policy will be applied throughout the course, making reference to how it might affect specific assignments and coursework. The other members of your teaching team (TFs, TAs, and/or CAs) should understand clearly the procedure/chain of command in the event that they are confronted by a question or possible policy violation. Should all student questions about appropriate AI use be escalated to the course head? Where should they go with their own questions? What should they communicate to students in the event that they are unsure of how to respond in the moment? Who will be responsible for issuing updates to course policies, if/when such updates become necessary?

May I use generative AI to assist me in grading and providing feedback on student work?

Probably not. Sharing student work with a third-party platform (e.g. by uploading student work into ChatGPT so that it can suggest feedback) without express, written consent from the student is forbidden under federal and university policy. With regard to other possible uses of AI for grading or providing feedback—i.e. ones that would not involve uploading student work—course heads and their teaching staff should discuss appropriate uses.

What else should we be thinking about?

The choices that you may make with regard to your approach to generative AI will have different repercussions for your course and your teaching team. Revising your assessments—whether to dissuade students from seeking assistance from AI, or to encourage them to reflect on what they can learn from it—may produce unanticipated, follow-on consequences. It may, for example, take longer to grade your new assignments, or be more complicated for students to complete and/or submit them. Here are some questions to consider:

If you opt to move more of your assessment to in-class modalities (e.g. oral or blue book exams), will you have the capacity to meet students’ accommodation needs? To grade oral presentations or handwritten exam books efficiently and equitably?

If you opt to encourage students to use AI tools in their assignments, their submissions may become longer or more difficult to assess—either because the marginal “cost” to students of generating additional material decreases, or because you, as the instructor, request that students incorporate more layers of meta analysis into the final product (e.g. generate a submission, and comment on what you learned in the process of generating the submission). You’ll likely want to consider how to manage workload issues for longer or more complicated assignment submissions.

One of the most effective ways to ensure that students are not over-relying on AI to complete their assignments is to ask them to produce reflective statements in which they unpack their process and describe how they’ve come to the conclusions at which they’ve arrived. Yet reflective statements can be challenging to grade, particularly if students are not taught ahead of time (e.g. through a rubric) how to make sure that their reflections are sufficiently analytical and evidence-based. You may wish to consider adding guidance to any assignments containing a reflective component that will ensure you are being equitable and transparent in your grading criteria.

To Speak to Someone about AI and Teaching...

For further assistance in adapting your teaching to the new possibilities and challenges posed by artificial intelligence, please contact:

  • To discuss pedagogical concerns: Adam Beaver, [email protected]
  • To explore the technological possibilities, contact the Bok Center’s Learning Lab: [email protected]
  • To speak with someone in the Office of Academic Integrity and Student Conduct of Harvard College (OAISC): Assistant Dean Qussay Al-Attabi, [email protected]


University students recruit AI to write essays for them. Now what?

Teachers need to work harder to get students to write and think for themselves.

Feature As word of students using AI to automatically complete essays continues to spread, some lecturers are beginning to rethink how they should teach their pupils to write.

Writing is a difficult task to do well. The best novelists and poets write furiously, dedicating their lives to mastering their craft. The creative process of stringing together words to communicate thoughts is often viewed as something complex, mysterious, and unmistakably human. No wonder people are fascinated by machines that can write too.

Unlike humans, language models don't procrastinate; they create content instantly with a little guidance. All you need to do is type a short description, or prompt, instructing the model on what it needs to produce, and it'll generate a text output in seconds. So it should come as no surprise that students are now beginning to use these tools to complete schoolwork.

Students are the perfect users: They need to write often, in large volumes, and are internet savvy. There are many AI-writing products to choose from that are easy to use and pretty cheap too. All of them lure new users with free trials, promising to make them better writers.


A subscription to the most popular platform, Jasper, costs $40 per month to generate 35,000 words. Others, like Writesonic or Sudowrite, are cheaper at $10 per month for 30,000 words. Students who think they can use these products and get away with doing zero work, however, will probably be disappointed.

And then there's ChatGPT...

Although AI can generate text with perfect spelling, great grammar and syntax, the content often isn't that good beyond a few paragraphs. The writing becomes less coherent over time with no logical train of thought to follow. Language models fail to get their facts right – meaning quotes, dates, and ideas are likely false. Students will have to inspect the writing closely and correct mistakes for their work to be convincing.

Prof: AI-assisted essays 'not good'

Scott Graham, associate professor at the Department of Rhetoric & Writing at the University of Texas at Austin, tasked his pupils with writing a 2,200-word essay about a campus-wide issue using AI. Students were free to lightly edit and format their work with the only rule being that most of the essay had to be automatically generated by software.

In an opinion article on Inside Higher Ed, Graham said the AI-assisted essays were "not good," noting that the best of the bunch would have earned a C or C-minus grade. To score higher, students would have had to rewrite more of the essay using their own words to improve it, or craft increasingly narrower and specific prompts to get back more useful content.

"You're not going to be able to push a button or submit a short prompt and generate a ready-to-go essay," he told The Register.

The limits of machine-written text force humans to read and edit the copy carefully. Some people may consider using these tools cheating, but Graham believes they can help people get better at writing.

Don't waste all your effort on the first draft...

"I think if students can do well with AI writing, it's not actually all that different from them doing well with their own writing. The main skills I teach and assess mostly happen after the initial drafting," he said.

"I think that's where people become really talented writers; it's in the revision and the editing process. So I'm optimistic about [AI] because I think that it will provide a framework for us to be able to teach that revision and editing better.

"Some students have a lot of trouble sometimes generating that first draft. If all the effort goes into getting them to generate that first draft, and then they hit the deadline, that's what they will submit. They don't get a chance to revise, they don't get a chance to edit. If we can use those systems to speed write the first draft, it might really be helpful," he opined.

Whether students can use these tools to get away with doing less work will depend on the assignment. A biochemistry student claimed on Reddit they got an A when they used an AI model to write "five good and bad things about biotech" in an assignment, Vice reported.

AI is more likely to excel at producing simple, generic text across common templates or styles.

Listicles, informal blog posts, or news articles will be easier to imitate than niche academic papers or literary masterpieces. Teachers will need to be thoughtful about the essay questions they set, making sure students' knowledge is really being tested, if they don't want them to cut corners.

Ask a silly question, you'll get a silly answer

"I do think it's important for us to start thinking about the ways that [AI] is changing writing and how we respond to that in our assignments -- that includes some collaboration with AI," Annette Vee, associate professor of English and director of the Composition Program at the University of Pittsburgh, told us.

"The onus now is on writing teachers to figure out how to get to the same kinds of goals that we've always had about using writing to learn. That includes students engaging with ideas, teaching them how to formulate thoughts, how to communicate clearly or creatively. I think all of those things can be done with AI systems, but they'll be done differently."

The line between using AI as a collaborative tool or a way to cheat, however, is blurry. None of the academics teaching writing who spoke to The Register thought students should be banned from using AI software. "Writing is fundamentally shaped by technology," Vee said.

"Students use spell check and grammar check. If I got a paper where a student didn't use these, it stands out. But it used to be, 50 years ago, writing teachers would complain that students didn't know how to spell so they would teach spelling. Now they don't."

Most teachers, however, told us they would support regulating the use of AI-writing software in education. Anna Mills, who teaches students how to write at a community college in the Bay Area, is part of a small group of academics beginning to rally teachers and professional organizations like the Modern Language Association into thinking about introducing new academic rules.

Critical thinking skills

Mills said she could see why students might be tempted to use AI to write their essays, and simply asking teachers to come up with more compelling assessments is not a convincing solution.


"We need policies. These tools are already pretty good now, and they're only going to get better. We need clear guidance on what's acceptable use and what's not. Where is the line between using it to automatically generate email responses and something that violates academic integrity?" she asked The Register.

"Writing is not just outputs. Writing and revising is a process that develops our thinking. If you skip that, you're going to be skipping practice that students need.

"It's too tempting to use it as a crutch, skip the thinking, and skip the frustrating moments of writing. Some of that is part of the process of going deeper and wrestling with ideas. There is a risk of learning loss if students become dependent and don't develop the writing skills they need."

Mills was particularly concerned about AI reducing the need for people to think for themselves, considering language models carry forward biases in their training data. "Companies have decided what to feed it and we don't know. Now, they are being used to generate all sorts of things from novels to academic papers, and they could influence our thoughts or even modify them. That is an immense power, and it's very dangerous."

Lauren Goodlad, professor of English and Comparative Literature at Rutgers University, agreed. If they parrot what AI comes up with, students may end up more likely to associate Muslims with terrorism or mention conspiracy theories, for example.

Computers are already interfering with and changing the ways we write. Goodlad referred to one incident in which Gmail suggested she change the word "importunate" to "impatient" in an email she wrote.

"It's hard to teach students how to use their own writing as a way to develop their critical thinking and as a way to express knowledge. They very badly need the practice of articulating their thoughts in writing and machines can rob them of this. If people really do end up using these things all the way through school, if that were to happen it could be a real loss not just for the writing quality but for the thinking quality of a whole generation," she said.

Rules and regulation

Academic policies tackling AI-assisted writing will be difficult to implement. Opinions are divided on whether sentences generated by machines count as plagiarism. There is also the problem of accurately detecting writing produced by these tools. Some teachers are alarmed at AI's growing technical capabilities, whilst others believe it's overhyped. Some are embracing the technology more than others.

Marc Watkins, lecturer, and Stephen Monroe, chair and assistant professor of writing and rhetoric, are working on building an AI writing pilot programme with the University of Mississippi's Academic Innovations Group. "As teachers, we are experimenting, not panicking," Monroe told The Register.

"We want to empower our students as writers and thinkers. AI will play a role… This is a time of exciting and frenzied development, but educators move more slowly and deliberately… AI will be able to assist writers at every stage, but students and teachers will need tools that are thoughtfully calibrated."


Teachers are getting together and beginning to think about these tools, Watkins added. "Before we have any policy about the use of language models, we need to have sustained conversations with students, faculty, and administration about what this technology means for teaching and learning."

"But academia doesn't move at the pace of Big Tech. We're taking our time and slowly exploring. I don't think faculty need to be frightened. It's possible that these tools will have a positive impact on student learning and advancing equity, so let's approach AI assistants cautiously, but with an open mind."

Regardless of what policies universities may decide to implement in the future, AI presents academia with an opportunity to improve education now. Teachers will need to adapt to the technology if they want to remain relevant, and incentivise students to learn and think on their own with or without assistance from computers. ®



Book News & Features

AI is contentious among authors. So why are some feeding it their own writing?

Chloe Veltman

The vast majority of authors don't use artificial intelligence as part of their creative process — or at least won't admit to it.

Yet according to a recent poll from the writers' advocacy nonprofit The Authors Guild, 13% said they do use AI, for activities like brainstorming character ideas and creating outlines.

The technology is a vexed topic in the literary world. Many authors are concerned about the use of their copyrighted material in generative AI models. At the same time, some are actively using these technologies — even attempting to train AI models on their own works.

These experiments, though limited, are teaching their authors new things about creativity.

Chris Anderson, best known as the author of technology and business-oriented non-fiction books like The Long Tail, has lately been trying his hand at fiction. He is working on his second novel, about drone warfare.

He says he wants to put generative AI technology to the test.

"I wanted to see whether in fact AI can do more than just help me organize my thoughts, but actually start injecting new thoughts," Anderson says.

Anderson says he fed parts of his first novel into an AI writing platform to help him write this new one. The system surprised him by moving his opening scene from a corporate meeting room to a karaoke bar.


"And I was like, you know? That could work!" Anderson says. "I ended up writing the scene myself. But the idea was the AI's."

Anderson says he didn't use a single actual word the AI platform generated. The sentences were grammatically correct, he says, but fell way short in terms of replicating his writing style. Although he admits to being disappointed, Anderson says ultimately he's OK with having to do some of the heavy lifting himself: "Maybe that's just the universe telling me that writing actually involves the act of writing."

Training an AI model to imitate style

It's very hard for off-the-shelf AI models like GPT and Claude to emulate contemporary literary authors' styles.

The authors NPR talked with say that's because these models are predominantly trained on content scraped from the Internet like news articles, Wikipedia entries and how-to manuals — standard, non-literary prose.

But some authors, like Sasha Stiles, say they have been able to make these systems suit their stylistic needs.

"There are moments where I do ask my machine collaborator to write something and then I use what's come out verbatim," Stiles says.

The poet and AI researcher says she wanted to make the off-the-shelf AI models she'd been experimenting with for years more responsive to her own poetic voice.

So she started customizing them by inputting her finished poems, drafts, and research notes.

"All with the intention to sort of mentor a bespoke poetic alter ego," Stiles says.

She has collaborated with this bespoke poetic alter ego on a variety of projects, including Technelegy (2021), a volume of poetry published by Black Spring Press; and "Repetae: Again, Again," a multimedia poem created last year for luxury fashion brand Gucci.

Stiles says working with her AI persona has led her to ask questions about whether what she's doing is in fact poetic, and where the line falls between the human and the machine.


"It's been really a provocative thing to be able to use these tools to create poetry," she says.

Potential issues come with these experiments

These types of experiments are also provocative in another way. Authors Guild CEO Mary Rasenberger says she's not opposed to authors training AI models on their own writing.

"If you're using AI to create derivative works of your own work, that is completely acceptable," Rasenberger says.


But building an AI system that responds fluently to user prompts requires vast amounts of training data. So the foundational AI models that underpin most of these investigations in literary style may contain copyrighted works.

Rasenberger pointed to the recent wave of lawsuits brought by authors alleging AI companies trained their models on unauthorized copies of articles and books.

"If the output does in fact contain other people's works, that creates real ethical concerns," she says. "Because that you should be getting permission for."

Circumventing ethical problems while being creative

Award-winning speculative fiction writer Ken Liu says he wanted to circumvent these ethical problems, while at the same time creating new aesthetic possibilities using AI.

So the former software engineer and lawyer attempted to train an AI model solely on his own output. He says he fed all of his short stories and novels into the system — and nothing else.

Liu says he knew this approach was doomed to fail.

That's because the entire life's work of any single writer simply doesn't contain enough words to produce a viable so-called large language model.

"I don't care how prolific you are," Liu says. "It's just not going to work."
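Liu's point holds up to back-of-envelope arithmetic. The figures below are illustrative assumptions, not measurements from his experiment; the only sourced number is the roughly 300 billion training tokens OpenAI reported for GPT-3:

```python
# Back-of-envelope arithmetic behind Liu's point. All figures are rough,
# illustrative assumptions, except the GPT-3 training-set size (~300B
# tokens), which comes from OpenAI's published description of the model.
author_words = 2_000_000           # a very prolific career, e.g. ~20 books x 100k words
tokens_per_word = 1.3              # common rough ratio for English text
author_tokens = author_words * tokens_per_word

gpt3_training_tokens = 300_000_000_000  # ~300 billion tokens

ratio = gpt3_training_tokens / author_tokens
print(f"Author corpus: ~{author_tokens / 1e6:.1f}M tokens")
print(f"GPT-3's training set is roughly {ratio:,.0f} times larger")
```

Even granting an author millions of words, a modern model's training corpus is five orders of magnitude bigger, which is why a single-author model produces little beyond fragments.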

Liu's AI system built only on his own writing produced predictable results.

"It barely generated any phrases, even," Liu says. "A lot of it was just gibberish."

Yet for Liu, that was the point. He put this gibberish to work in a short story. 50 Things Every AI Working With Humans Should Know, published in Uncanny Magazine in 2020, is a meditation on what it means to be human from the perspective of a machine.

"Dinoted concentration crusch the dead gods," is an example of one line in Liu's story generated by his custom-built AI model. "A man reached the torch for something darker perified it seemed the billboding," is another.

Liu continues to experiment with AI. He says the technology shows promise, but is still very limited. If anything, he says, his experiments have reaffirmed why human art matters.

"So what is the point of experimenting with AIs?" Liu says. "The point for me really is about pushing the boundaries of what is art."

Audio and digital stories edited by Meghan Collins Sullivan.


Students: AI is Part of Your World

  • Posted May 24, 2023
  • By Lory Hough
  • Learning Design and Instruction
  • Technology and Media


It would not be an overstatement to say that artificial intelligence (AI) has the potential to change pretty much every job. And students, says Charlotte Dungan, Ed.M.’16, should know this.

“There’s a great shift in the future of work and what jobs will be available, and they’re disproportionately affecting populations that are the least able to advocate for themselves,” says Dungan, the COO of a nonprofit called The AI Education Project . “For example, driving a semi-truck. Those jobs are at risk of automation because right now there are companies that are using self-driving semis in their facilities.” In warehouses at big companies like Amazon, people are being replaced with robots. At Target, instead of eight cashiers, you have two staffed by humans and half a dozen self-checkout options.

But automation’s impact on jobs isn’t the only reason students should be learning more about artificial intelligence, Dungan says. That’s where The AI Education Project comes in, with a mission to make sure all students have access to understanding more broadly how the world is being reshaped by this technology, especially in underserved schools and communities across the United States.

“AI is not just about jobs,” she says. “We need to understand how to interpret laws and craft policy, and how to advocate as citizens for our rights in the age of algorithms. We need laws that give individuals transparency into how these systems impact their lives, such as how an algorithm determines if someone should receive bail or how a recommended sentence is calculated in the justice system.”

The nonprofit, she says, is trying to “widen the computer science umbrella” to include awareness about the ethical and social impacts of technology.

“And that is driven by AI right now,” she says. “Like when computers went from the office to home, everybody was very aware. And when everyone started carrying a Star Trek communicator in their pocket, everyone was aware. And we had conversations about what was happening with youth, with these emerging technologies. But it’s just as important a revolution in AI of what happens when your news feed is curated or what happens when there’s an algorithm that’s deciding whether or not you are able to get credit and is taking into account factors like your gender or your zip code to decide on your rates. And you don’t have any control over those policies.”

Everyone, she says, deserves to be aware of the impacts of AI. For students, this can be done, in part, through curriculum, which the nonprofit provides open source for free to schools. They offer longer, multiweek units that teachers can download and modify online as needed. There are also quick conversation starters for grades 7–12, what they call AI snapshots.

“It’s a bell ringer,” she says. “Five-minute discussions that you can host in other core classes. We’re not assuming that teachers or schools have space to create a whole new course” around artificial intelligence or even computer science. The AI Education Project designed the snapshots to fit into four core courses: math, science, social studies, and English. “If you are a core teacher, you can still incorporate these discussions into your own classroom. So, for example, a math class might be talking about statistics related to artificial intelligence because the backbone of AI is math. What patterns can we see in data? In terms of science, there are amazing innovations that have happened as a result of AI, like how do you use AI plus a human to get better results for breast cancer screening?”


So far, in addition to working with schools to incorporate AI into coursework, the project is partnering with the Boys & Girls Club of America on summer program material and with a few museums that teach programming.

“That’s really exciting,” Dungan says, “because it reaches more students that way.” Recently, artificial intelligence and education has become a hot topic because of a new language processing bot called ChatGPT. As a New York Times story noted, about a month after its debut, ChatGPT had “already sent many educators into a panic. Students are using it to write their assignments, passing off AI-generated essays and problem sets as their own. Teachers and school administrators have been scrambling to catch students using the tool to cheat, and they are fretting about the havoc ChatGPT could wreak on their lesson plans.”

Dungan says ChatGPT is on everyone’s mind “because it’s so accessible to everyone,” but it’s not time to panic. “The debates on using tools like this are important, but we’ve been here before, notably, when calculators invaded the mathematics classroom.”

In fact, Dungan actually sees an upside to these kinds of bots.

“I may have an unusual perspective, but I think the possibilities for ChatGPT to remove rote work from the classroom and empower deep learning experiences are exciting,” she says. “If anyone can dash off a paper written by AI, perhaps this will push classrooms to revive other ways of communicating knowledge, including project-based learning, Socratic seminars, writing papers with ChatGPT as a starting point where students take on the role of critical editor, and other assessment tools that aren’t so easily hacked,” like video projects and live-action play. “The fastest, cheapest way to ensure the work is done by the student is to use pencil and paper instead of typed papers.”

Asked what excites her the most about being involved in this work, Dungan says it’s what education already offers to others.

“Our North Star is to create educational experiences that excite and empower learners everywhere with AI literacy. I think what excites me the most is that when people know about artificial intelligence, they’re able to make better decisions for themselves and for their communities,” she says. “They don’t have to be a programmer to benefit from learning about AI and I think everyone deserves access to that information. I’m excited that I get to work in that space because there’s so much work to do.”

Ed Magazine Logo

Ed. Magazine

The magazine of the Harvard Graduate School of Education

Related Articles

Avriel Epps

Exploring Structural Oppression in Digital Spaces

Ph.D. student Avriel Epps studies how bias in the digital world impacts users across diverse backgrounds

Iman Usman

A Personalized Learning App Helps Close the Divide

Paulina Haduong

Computing with Creativity

Paulina Haduong, Ph.D.'23, on understanding the challenges that K–12 teachers and students face in learning computing together

April 17, 2024

AI Can Transform the Classroom Just Like the Calculator

AI can better education, not threaten it, if we learn some lessons from the adoption of the calculator into the classroom

By Michael M. Crow, Nicole K. Mayberry, Ted Mitchell & Derrick Anderson


The rapidly expanding use of ChatGPT and other artificial intelligence tools has fired up a fervent debate in academia. On one side of the debate, professors and teachers are concerned over the future of postsecondary learning and threats to traditional disciplines, especially within the humanities, as headlines warn of “The End of the English Major.”

Nevertheless, AI is here and about a third of teachers, from kindergarten through high school, report using it in the classroom, according to a recent survey. While many of our colleagues in higher education policy, science policy, and university design criticize or dismiss generative AI, we are instead decidedly optimistic it will follow a pattern seen in other technologies that have enhanced educational access and success. We believe that when new technologies are embraced, core aspects of learning, including curriculum, instruction and assessment, can be revolutionized. We are optimistic about AI, but we don’t see it as a hero. Students and instructors are still the heroes of human learning, even when AI is involved.

History supports this view. From the Gutenberg press to online math classes, technologies that improve access to quality learning opportunities are routinely dismissed by critics and skeptics, especially by those who hold the reins in the classroom.


Consider the calculator. A survey in the mid-1970s carried out by Mathematics Teacher magazine found that 72 percent of respondents—mainly teachers and mathematicians—opposed equipping seventh graders with calculators. Highlighted in 1975 in Science News, this survey mirrored the broader discourse of the Sesame Street era concerning the introduction of calculators into classrooms, just when costs were approaching the point that some schools could afford up to one calculator per student.

Calculators met resistance from educators who feared an overdependence on technology would erode students’ math skills. As one professor observed of students and calculators, “I have yet to be convinced that handing them a machine and teaching them how to push the button is the right approach. What do they do when the battery runs out?”

It is easy to see how the case of the calculator mirrors current concerns about generative AI. The College Board made a similar argument in an article published last spring that mused about the “Great Calculator Panic of the 1980s and ’90s.” Critics of AI in the classroom argue that students might never learn to write or respond to written prompts independently if they can simply ask an AI to do it for them. The hypothetical scenario where the Internet or servers are down raises fears that students would be unable to write a simple sentence or compose a basic five-paragraph essay.

Narrow arguments over essay integrity and potential declines in learning quality miss the broader perspective on how this technology could positively reshape curriculum, instruction and assessment.

In classrooms, technology, curriculum, instruction, and assessment evolve together to reshape education. We see this historically with calculators and are now witnessing it unfold in real time with the emergence of generative AI tools.

The introduction of calculators into classrooms didn't set in motion the demise of mathematics education; instead, it significantly broadened its scope while inspiring educators and academics to rethink the educational limits of mathematics. This shift fostered a climate ripe for innovation. Looking at today’s math landscape and what existed in the 1970s, we would be hard-pressed to consider the past superior to the present, to say nothing of the future. Today, high school students use (and more importantly, comprehend) graphing calculators and computers better than undergraduate engineering students in university labs could only a generation ago. Today’s math learning environment is observably more dynamic, inclusive and creative than it was before ubiquitous access to calculators.

In a parallel vein, generative AI promises to extend this kind of innovation in critical thinking and the humanities, making it easier for students to grasp foundational concepts and explore advanced topics with confidence. AI could allow for customized learner support —adapting to the individual pace and learning style of each student, helping to make education more inclusive and tailored to specific needs. Generative AI can better the humanities by making reading and writing more accessible to diverse students, including those with learning disabilities or challenges with traditional writing methods.

Just as calculators led us to reevaluate legacy teaching methods and embrace more effective pedagogical approaches, generative AI calls for a similar transformation in how we approach assignments, conduct classes and assess learning. It will shift us from viewing the college essay as the pinnacle of learning to embracing wider creative and analytical exercises, ones facilitated by AI tools.

The successful integration of calculators into math education serves as a blueprint for the adoption of generative AI across the curriculum. By designing assignments with the expectation that generative AI will enhance rather than shortcut them, educators can foster learning that values creativity, critical thinking and efficient study. This shift necessitates a broader, more adaptable approach to teaching and learning, one that recognizes the potential of technology to elevate educational standards and broaden access to knowledge.

This history points to broader questions over the efficiency and fairness of long-standing educational mechanisms. Take, for example, college admissions essays, which are known to perpetuate bias in university admissions. What if AI allowed us to reconceptualize the tools for students to demonstrate their aptitude and college preparedness? What if AI could allow students to match their intended college major more accurately to the most supportive and corresponding place of higher learning? In academia, we shouldn’t focus solely on AI’s potential for misuse but also on its capability to revolutionize curricula and approaches to learning and teaching.

Far from fearing technological progress, history teaches us to embrace it to broaden and democratize learning. The greater challenge lies not in resisting change, but in leveraging these innovations to develop curricula that address the needs of all learners, paving the way for a more equal and effective education for everyone. Looking ahead, generative AI is not so much a problem to be solved, but instead a powerful ally in our efforts to make education meaningfully universal.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.

Harvard University Graduate School of Design


Write and Cite


When to Cite


Reasons for citing sources are based on academic, professional, and cultural values. At the GSD, we cite to promote

  • Integrity and honesty by acknowledging the creative and intellectual work of others.
  • The pursuit of knowledge by enabling others to locate the materials you used.
  • The development of design excellence through research into scholarly conversations related to your subject.

Cite your source whenever you quote, summarize, paraphrase, or otherwise include someone else's

  • Words 
  • Opinions, thoughts, interpretations, or arguments
  • Original research, designs, images, video, etc.

How to Cite 

Citations follow different rules for structure and content depending on which style you use. At the GSD, mostly you will use Chicago or APA style. Often you can choose the style you prefer, but it's good to ask your professor or TA/TF. Whichever style you use, be consistent. We recommend using Zotero, a citation-management tool, to structure your citations for you, but you should always check to make sure the tool captures the correct information in the correct place.

Chicago Style

Citing Print Sources

Footnote - long (first time citing the source)

1. Joseph Rykwert, The Idea of a Town: The Anthropology of Urban Form in Rome, Italy and the Ancient World (Princeton, NJ: Princeton University Press, 1976), 35.

Footnote - short (citing the source again)

1. Rykwert, The Idea of a Town, 35.

In-text citation (alternative to footnotes)

(Rykwert 1976, 35)

Bibliography (alphabetical order and hanging indentation)

Rykwert, Joseph. The Idea of a Town: The Anthropology of Urban Form in Rome, Italy and the Ancient World. New Jersey: Princeton University Press, 1976.
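The long footnote and bibliography patterns above follow a fixed structure, which can be sketched as a toy formatter. This is illustrative only; in practice, let Zotero or a proper citation processor handle formatting:

```python
def chicago_footnote(author, title, city, publisher, year, page):
    """Chicago long-form footnote for a print book (first citation)."""
    return f"{author}, {title} ({city}: {publisher}, {year}), {page}."

def chicago_bibliography(last, first, title, city, publisher, year):
    """Chicago bibliography entry: inverted author name, periods between parts."""
    return f"{last}, {first}. {title}. {city}: {publisher}, {year}."

print(chicago_footnote("Joseph Rykwert", "The Idea of a Town",
                       "New Jersey", "Princeton University Press", 1976, 35))
# Joseph Rykwert, The Idea of a Town (New Jersey: Princeton University Press, 1976), 35.
```

Note how the same record is reordered and repunctuated between the two forms; that is exactly the kind of mechanical transformation a citation manager automates.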

Chapter 

1. Diane Favro, “The Street Triumphant: The Urban Impact of Roman Triumphal Parades,” in Streets: Critical Perspectives on Public Space, ed. Zeynep Çelik, Diane Favro, and Richard Ingersoll (Berkeley: University of California Press, 1994), 153.

1. Favro, “The Street Triumphant,” 156.

In-text citation (called "author-date," an alternative to footnotes)

(Favro 1994, 153)

Bibliography  (alphabetical order and hanging indentation)

Favro, Diane. “The Street Triumphant: The Urban Impact of Roman Triumphal Parades.” In Streets: Critical Perspectives on Public Space, edited by Zeynep Çelik, Diane G. Favro, and Richard Ingersoll, 151-164. Berkeley: University of California Press, 1994.

Journal Article 

1. Hendrik Dey, “From ‘Street’ to ‘Piazza’: Urban Politics, Public Ceremony, and the Redefinition of platea in Communal Italy and Beyond,” Speculum 91, no. 4 (October 2016): 919.

1. Dey, “From ‘Street’ to ‘Piazza,’” 932.

Dey, Hendrik. “From ‘Street’ to ‘Piazza’: Urban Politics, Public Ceremony, and the Redefinition of platea in Communal Italy and Beyond.” Speculum 91, no. 4 (October 2016): 919-44.

Citing Visual Sources 

Visual representations created by other people, including photographs, maps, drawings, models, graphs, tables, and blueprints, must be cited.  Citations for visual material may be included at the end of a caption or in a list of figures, similar to but usually separate from the main bibliography.

When they are not merely background design, images are labeled as figures and numbered. In-text references to them refer to the figure number. Sometimes you will have a title after the figure number and a brief descriptive caption below it. 

If you choose to include the citation under the caption, format it like a footnote entry. If you would prefer to have a list of figures for citation information, organize them by figure number and use the format of a bibliographic entry. 

[Example image: a 1935 map of Harvard campus labeled “Figure One,” with a caption identifying it and a footnote-style citation below: Edwin J. Schruers, cartographer, Tercentenary map of Harvard, 1935, color map, 86x64 cm, Harvard University Archives, followed by a permalink.]

The construction of citations for artwork and illustrations is more flexible and variable than textual sources. Here we have provided an example with full bibliographic information. Use your best judgment and remember that the goals are to be consistent and to provide enough information to credit your source and for someone else to find your source.

Some borrowed material in collages may also need to be cited, but the rules are vague and hard to find. Check with your professor about course standards. 

Citing Generative AI

The rules for citing the use of generative AI, both textual and visual, are still evolving. For guidelines on when to cite the use of AI, please refer to the section on Academic Integrity. Here, we offer suggestions for how to cite based on what the style guides say and what Harvard University encourages. We again recommend that you ask your instructors about their expectations for use and citation, and that you remain consistent in your formatting.

The Chicago Manual of Style currently states that "for most types of writing, you can simply acknowledge the AI tool in your text" with a parenthetical comment stating the use of a specific tool. For example: (Image generated by Midjourney). 

For academic papers or research articles, you should use a numbered footnote or endnote.

Footnote - prompt not included in the text of the paper

1. ChatGPT, response to "Suggest three possible responses from community stakeholders to the proposed multi-use development project," OpenAI, March 28, 2024, https://chat.openai.com/chat.

Footnote - prompt included in the text of the paper

1. Text generated by ChatGPT, OpenAI, March 28, 2024, https://chat.openai.com/chat.

Footnote - edited AI-generated text

1. Text generated by ChatGPT, OpenAI, March 28, 2024, edited for clarity, https://chat.openai.com/chat.

In-text citation  (called "author-date," an alternative to footnotes)

(Text generated by ChatGPT, OpenAI) or (Text generated by ChatGPT, OpenAI, edited for clarity)

Chicago does not encourage including generative AI in a bibliography unless the tool also generates a direct link to the same generated content.

https://www.chicagomanualofstyle.org/qanda/data/faq/topics/Documentation/faq0422.html

 APA Style 

In-text citation  

(Rykwert, 1976, p. 35)

Footnote  (for supplemental information)

1. From The idea of a town: The anthropology of urban form in Rome, Italy and the ancient world by Joseph Rykwert, 1976, New Jersey: Princeton University Press.

Bibliography/Reference  (alphabetical order and hanging indentation)

Rykwert, J. (1976). The idea of a town: The anthropology of urban form in Rome, Italy and the ancient world. Princeton University Press.

In-Text Citation

(Favro, 1994, p. 153)

Footnote (for supplemental information)

1. From the chapter “The street triumphant: The urban impact of Roman triumphal parades” in Streets: Critical perspectives on public space, edited by Zeynep Çelik, Diane Favro, and Richard Ingersoll, 1994, Berkeley: University of California Press.

Favro, D. (1994). The street triumphant: The urban impact of Roman triumphal parades. In Z. Çelik, D. G. Favro, & R. Ingersoll (Eds.), Streets: Critical perspectives on public space (pp. 151-164). University of California Press.

(Dey, 2016, p. 919)

Footnote  (for supplemental material)

1. From the article “From ‘street’ to ‘piazza’: Urban politics, public ceremony, and the redefinition of platea in communal Italy and beyond” by Hendrik Dey in Speculum, 91(4), 919. www.journals.uchicago.edu/toc/spc/2016/91/4

Dey, H. (2016). From “street” to “piazza”: Urban politics, public ceremony, and the redefinition of platea in communal Italy and beyond. Speculum, 91(4), 919-944. www.journals.uchicago.edu/toc/spc/2016/91/4

Visual representations created by other people, including photographs, maps, drawings, models, graphs, tables, and blueprints, must be cited. In APA style, tables are their own category, and all other visual representations are considered figures. Tables and figures both follow the same basic setup. 

When they are not merely background design, images are labeled as figures and are numbered and titled above the image. If needed to clarify the meaning or significance of the figure, a note may be placed below it. In-text references to visual sources refer to the figure number (e.g., “As shown in Figure 1...”).

Citations for visual material created by other people may either be included under the figure or note or compiled in a list of figures, similar to but usually separate from the main bibliography.

Figures may take up a whole page or be placed at the top or bottom of the page with a blank double-space below or above it.

If you choose to include the citation under the figure, format it like a bibliographic entry. If you would prefer to have a list of figures for citation information, organize them by figure number and use the format of a bibliographic entry. Here is a detailed example. Some figures will require less bibliographic information, but it is a good practice to include as much as you can.


The construction of citations for artwork and illustrations is more flexible and variable than for textual sources. Here we have provided an example with full bibliographic information. Use your best judgment and remember that the goals are to be consistent and to provide enough information to credit your source and for someone else to find your source.

The APA style team currently says to "describe how you used the tool in your Methods section or in a comparable section of your paper," perhaps the introduction for literature reviews and response papers. In your paper, state the prompt followed by the resulting generated text. Cite generative AI use according to the rules you would use for citing an algorithm. Include the URL if it leads directly to the same generated material; otherwise, the URL is optional.

(OpenAI, 2024) 

Footnote   (for supplemental material)

APA does not yet provide a structure or example for a footnote. If you need to mention generative AI in a footnote, stay as consistent with formatting as possible.

OpenAI. (2024). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat

These links take you to external resources for further research on citation styles.

  • Chicago Manual of Style 17th Edition Online access to the full manual through Hollis with a quick guide, Q&A, video tutorials, and more.
  • CMOS Shop Talk: How Do I Format a List of Figures? A brief description of how to format a list of figures with an attached sample document.
  • Documenting and Citing Images in Chicago A Research guide from USC with nice examples of images with citations.
  • Harvard Guide to Citing Sources A guide from Harvard Libraries on citing sources in Chicago style.
  • A Manual for Writers of Research Papers, Theses, and Dissertations: Chicago Style for Students and Researchers A Chicago manual specifically for students with clear and detailed information about citing for papers rather than publications.
  • Chicago Manual of Style Q&A Citing Generative Artificial Intelligence
  • APA Style Common Reference Examples A list of sample references organized by type.
  • APA Style Manual 7th Edition Online access to the full APA Style Manual (scanned) through Hollis.
  • APA Style Sample Papers Links to sample papers that model how to create citations in APA.
  • Formatting Checklist This page is a quick guide to all kinds of formatting, from the title page to the bibliography, with links to more detailed instructions.
  • Harvard Guide to Citing Sources A guide from Harvard Libraries on citing sources in APA style.
  • Journal Article References This page contains reference examples for journal articles.
  • In-Text Citations in APA Style A place to learn more about rules for citing sources in your text.
  • Tables and Figures This page leads to explanations about how to format tables and figures as well as examples of both.
  • How to Cite ChatGPT Here are the APA's current rules for citing generative AI and ChatGPT in particular.
  • MetaLAB AI Code of Conduct A proposed code of conduct generated by a collaborative of Harvard faculty and students.
  • Last Updated: Apr 30, 2024 4:28 PM
  • URL: https://guides.library.harvard.edu/gsd/write


5 AI tools for students: Use AI to help you study, summarize content, and edit papers

Ace your classes with an AI assist.


I wish these AI tools for students were around when I was in school. Sure, AI tools can't do your homework, write your papers, or take your exams, but they can make your life a lot easier.

With AI, long book chapters can be summarized into quick, easy-to-study bullet points, classes can be recorded and transcribed so you can be laser-focused, and weirdly worded paragraphs can be revised with AI-generated text — and that's just scratching the surface of how AI can help you as a student.

The number of AI-powered services available for students can be overwhelming, so we've rounded up the 5 best AI tools for students.

Best AI tool for editing and summarizing text: Grammarly


Grammarly is easily one of the best AI tools for students because of the wide variety of tasks it can help with. 

The most obvious way Grammarly can help you is with text generation and revision suggestions for emails, cover letters, resumes, and even school assignments. The critical warning: you should never use AI to write a paper from scratch. This is a homework assistant, not something that does your homework for you. Grammarly's AI features are best used as supplemental tools that can help you get started with a tough paragraph, find a new way to say something, or edit your papers after they're written.

A few lesser-known AI features Grammarly offers include summarizing big blocks of text, generating ideas for projects, adjusting your writing tone, and providing helpful writing prompts. You can even use Grammarly to help caption your Instagram posts when you're done with homework.

Grammarly's free account lets you generate text with 100 AI prompts every month, but the premium $12/month (billed annually) option can rewrite full sentences, adjust writing tone, and generate text with 1,000 AI prompts monthly.


Best AI tool for intuitive studying: Quizlet Q-Chat


Most students are already familiar with Quizlet and its virtual sets of flashcards to help you master subjects. But to take your studying game to the next level, you need to check out its AI-powered Q-Chat tutor.

There are multiple ways Q-Chat can help you test your knowledge, including AI-generated quizzes, lessons, conversations, and fun games like two truths and a lie. You can also use Q-Chat to help you learn a new language, but Duolingo is a better AI tool for that purpose.

Quizlet lets you try out Q-Chat conversations with a free account, but to use its full features, you'll need a Quizlet Plus account for $7.99/month or $35.99/year. 

Best AI tool for recording and summarizing classes: Otter.ai


When you try to take notes and listen to your professor at the same time, you can sometimes miss important information. With Otter.ai , you can record the class, get transcripts and summaries, and put all your attention into listening to your teacher.

Otter is an incredibly helpful AI tool for students with ADHD or anyone else who finds it difficult to multitask in class and pay attention. However, it's worth noting that you should get permission from your teacher before recording them with Otter.

If you're an online student, Otter works with Zoom, Google Meet, and Microsoft Teams to record, transcribe, and summarize virtual classes. You can also use Otter to record meetings for group projects to easily keep track of what was talked about and what actions were assigned to everyone.

Best AI tool for explaining concepts: Google Socratic


Socratic by Google is a free AI tool available for Android and iOS that helps explain complex concepts to students with helpful visuals, AI-generated answers to questions, and links to relevant YouTube videos.

The app can help high school and university-level students with basic subjects, including algebra, geometry, trigonometry, biology, chemistry, physics, history, and literature.

Socratic can solve math problems and answer questions, but it shouldn't be used to complete homework for you. Instead, it's a useful AI tool when you're stuck on a problem or you don't understand why you got an answer wrong. Socratic can show you step-by-step explanations, helping prepare you for exams.

Another free tool that can offer in-depth explanations is Bing Chat, a GPT-4-based chatbot. This tool scours the entire internet, so Bing Chat can find answers to more complex questions on various subjects compared to Socratic.

Best AI tool for researching academic papers: Consensus


Consensus is an AI search engine for research that helps students find academic papers and studies. This AI tool is best for college and late-high school students who are starting to write research papers requiring academic sources.

You can type any research question or topic into the Consensus website to find relevant sources, and each pulled source will have pre-populated citations in multiple formats for you to copy and paste into your paper. 

Consensus also paired up with Copilot to bring ChatGPT-type functionality to the service. This means you can tack on a command to your search, like "Group together pro and con cases" or "Explain for an 8-year-old."

A free Consensus account gives you unlimited searches, unlimited AI-powered filters, and 20 AI credits every month for more powerful features, like GPT-4 Summaries, Consensus Meters, Study Snapshots, and Copilot. For unlimited use of those more powerful features, a premium subscription costs $8.99/month.


Excessive use of words like ‘commendable’ and ‘meticulous’ suggests ChatGPT has been used in thousands of scientific studies

A London librarian has analyzed millions of articles in search of uncommon terms overused by artificial intelligence programs.


Librarian Andrew Gray has made a “very surprising” discovery. He analyzed five million scientific studies published last year and detected a sudden rise in the use of certain words, such as meticulously (up 137%), intricate (117%), commendable (83%) and meticulous (59%). The librarian from University College London can find only one explanation for this rise: tens of thousands of researchers are using ChatGPT — or other similar large language model tools — to write their studies, or at least to “polish” them.

There are blatant examples. A team of Chinese scientists published a study on lithium batteries on February 17. The work — published in a specialized journal from the publisher Elsevier — begins like this: “Certainly, here is a possible introduction for your topic:Lithium-metal batteries are promising candidates for….” The authors apparently asked ChatGPT for an introduction and accidentally copied it in as is. A separate article in a different Elsevier journal, published by Israeli researchers on March 8, includes the text: “In summary, the management of bilateral iatrogenic I’m very sorry, but I don’t have access to real-time information or patient-specific data, as I am an AI language model.” And, a couple of months ago, three Chinese scientists published a nonsensical AI-generated drawing of a rat with a giant penis, made for a study on sperm precursor cells.

Andrew Gray estimates that at least 60,000 scientific studies (more than 1% of those analyzed in 2023) were written with the help of ChatGPT — a tool launched at the end of 2022 — or similar. “I think extreme cases of someone writing an entire study with ChatGPT are rare,” says Gray, a 41-year-old Scottish librarian. In his opinion, in most cases artificial intelligence is used appropriately to “polish” the text — identify typos or facilitate translation into English — but there is a large gray area, in which some scientists take the assistance of ChatGPT even further, without verifying the results. “Right now it is impossible to know how big this gray area is, because scientific journals do not require authors to declare the use of ChatGPT, there is very little transparency,” he laments.

Artificial intelligence language models use certain words disproportionately, as demonstrated by James Zou’s team at Stanford University. These tend to be terms with positive connotations, such as commendable, meticulous, intricate, innovative and versatile. Zou and his colleagues warned in March that the reviewers of scientific studies themselves are using these programs to write their evaluations, prior to the publication of the works. The Stanford group analyzed peer reviews of studies presented at two international artificial intelligence conferences and found that the probability of the word meticulous appearing had increased by 35-fold.


Zou’s team, on the other hand, did not detect significant traces of ChatGPT in the corrections made in the prestigious journals of the Nature group. The use of ChatGPT was associated with lower-quality peer reviews. “I find it really worrying,” explains Gray. “If we know that using these tools to write reviews produces lower-quality results, we must reflect on how they are being used to write studies and what that implies,” says the librarian at University College London. A year after the launch of ChatGPT, one in three scientists acknowledged that they used the tool to write their studies, according to a survey in the journal Nature.

Gray’s analysis shows that the word “intricate” appeared in 109,000 studies in 2023, more than double the average of 50,000 in previous years. The term “meticulously” went from appearing in about 12,300 studies in 2022 to more than 28,000 in 2023, while instances of “commendable” rose from 6,500 to almost 12,000. The researcher jokes that his colleagues have congratulated him on the meticulousness of his report, still a draft pending publication in a specialized journal.
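At their core, Gray's counts are document-frequency comparisons: what fraction of papers in each year contain a given term at least once. A minimal sketch of that calculation (the two tiny corpora here are invented for illustration, not real data):

```python
def doc_frequency(docs, term):
    """Fraction of documents containing the term at least once."""
    term = term.lower()
    return sum(1 for d in docs if term in d.lower().split()) / len(docs)

corpus_2022 = ["the results were solid",
               "an intricate design emerged"]
corpus_2023 = ["a meticulously intricate analysis",
               "intricate and commendable work",
               "plain findings"]

before = doc_frequency(corpus_2022, "intricate")  # 0.5
after = doc_frequency(corpus_2023, "intricate")   # 2/3
print(f"'intricate' doc frequency: {before:.0%} -> {after:.0%}")
```

At the scale of millions of papers, a jump like the ones Gray reports in this simple ratio is what flags a term as suspect.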

Very few studies report if they have used artificial intelligence. Gray warns of the danger of “a vicious circle,” in which subsequent versions of ChatGPT are trained with scientific articles written by the old versions, giving rise to increasingly commendable, intricate, meticulous and, above all, insubstantial studies.

Documentation professor Ángel María Delgado Vázquez highlights that the new analysis is focused on English-language studies. “Researchers who do not speak native English are using ChatGPT a lot, as an aid to writing and to improve the English language,” says Delgado Vázquez, a researcher from the Pablo de Olavide University, in Seville, Spain. “In my environment, people are using ChatGPT mainly for a first translation, or even to keep that translation directly,” he says. The Spanish professor says he would like to see an analysis on the origin of the authors who use the unusual terms.

Another one of AI’s favorite words is “delve.” Researcher Jeremy Nguyen, from the Swinburne University of Technology in Australia, has calculated that “delve” appears in more than 0.5% of medical studies, where before ChatGPT it appeared in less than 0.04%. Thousands of researchers are suddenly delving.

Librarian Andrew Gray warns there is a risk of broader society becoming infected with this meticulously artificial new language. Nguyen himself admitted on the social network X that it happens to him: “I actually find myself using ‘delve’ lately in my own language—probably because I spend so much time talking to GPT.” On April 8, the official ChatGPT account on X chimed in: “I just love delving what can I say?”


Turnitin's AI writing detection available now

Turnitin’s AI writing detection helps educators identify when AI writing tools such as ChatGPT may have been used in students’ submissions.


Academic integrity in the age of AI writing

Over the years, academic integrity has been both supported and tested by technology. Today, educators are facing a new frontier with AI writing and ChatGPT.

Here at Turnitin, we believe that AI can be a positive force that, when used responsibly, has the potential to support and enhance the learning process. We also believe that equitable access to AI tools is vital, which is why we’re working with students and educators to develop technology that can support and enhance the learning process. However, it is important to acknowledge new challenges alongside the opportunities.

We recognize that for educators, there is a pressing and immediate need to know when and where AI and AI writing tools have been used by students. This is why we are now offering AI detection capabilities for educators in our products.

Gain insights on how much of a student’s submission is authentic, human writing versus AI-generated from ChatGPT or other tools.

Reporting identifies likely AI-written text and provides information educators need to determine their next course of action. We’ve designed our solution with educators, for educators.

AI writing detection complements Turnitin’s similarity checking workflow and is integrated with your LMS, providing a seamless, familiar experience.

Turnitin’s AI writing detection capability, available with Originality, helps educators identify AI-generated content in student work while safeguarding the interests of students.

Turnitin AI Innovation Lab

Welcome to the Turnitin AI Innovation Lab, a hub for new and upcoming product developments in the area of AI writing. You can follow our progress on detection initiatives for AI writing, ChatGPT, and AI-paraphrasing.


Understanding the false positive rate for sentences of our AI writing detection capability

We’d like to share more insight on our sentence level false positive rate and tips on how to use our AI writing detection metrics.


Understanding false positives within our AI writing detection capabilities

We’d like to share some insight on how our AI detection model deals with false positives and what constitutes a false positive.

Have questions? Read these FAQs on Turnitin’s AI writing detection capabilities

Helping solve the AI writing puzzle one piece at a time

AI-generated writing has transformed every aspect of our lives, including the classroom. However, identifying AI writing in students’ submissions is just one piece in the broader, complex, ever-evolving AI writing puzzle.


Research corner

We regularly undertake internal research to ensure our AI writing detector stays accurate and up-to-date. If you are interested in what external testing has revealed about Turnitin's AI-writing detection capabilities, check out the links below. Notably, these studies position Turnitin among the foremost solutions in identifying AI-generated content within academia.

Research shows Turnitin's AI detector shows no statistically significant bias against English Language Learners

  • In response to feedback from customers and papers claiming that AI writing detection tools are biased against writers whose first language is not English, Turnitin expanded its false positive evaluation to include writing samples of English Language Learners (ELL) and tested another nearly 2,000 writing samples of ELL writers.
  • What Turnitin found was that in documents meeting the 300 word count requirement, ELL writers received a 0.014 false positive rate and native English writers received a 0.013.
  • This means that there is no statistically significant bias against non-native English speakers.
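Whether a gap like 0.014 versus 0.013 is statistically significant can be checked with a standard two-proportion z-test. The counts below are hypothetical round numbers consistent with the rates quoted above (roughly 2,000 documents per group), not Turnitin's actual data:

```python
from math import sqrt

def two_proportion_z(x1, n1, x2, n2):
    """z-statistic for comparing two proportions, using a pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# ~28 false positives out of 2,000 ELL docs vs. ~26 out of 2,000 native-English docs
z = two_proportion_z(28, 2000, 26, 2000)
print(f"z = {z:.2f}")  # |z| is far below the 1.96 cutoff for significance at the 5% level
```

With these illustrative counts, z comes out around 0.27, which is consistent with the "no statistically significant bias" conclusion.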

Turnitin’s AI writing detector identified as the most accurate out of 16 detectors tested

  • Two of the 16 detectors, Turnitin and Copyleaks, correctly identified the AI- or human-generated status of all 126 documents, with no incorrect or uncertain responses.
  • Three AI text detectors – Turnitin, Originality, and Copyleaks – have very high accuracy with all three sets of documents examined for this study: GPT-3.5 papers, GPT-4 papers, and human-generated papers.
  • Of the top three detectors identified in this investigation, Turnitin achieved very high accuracy in all five previous evaluations. Copyleaks, included in four earlier analyses, performed well in three of them.

Teaching in the age of AI writing

As AI text generators like ChatGPT quickly evolve, our educator resources will, too. Curated and created by our team of veteran educators, our resources help educators meet these new challenges. They are built for professional learning and outline steps educators can take immediately to guide students in maintaining academic integrity when faced with AI writing tools.


A guide to help educators determine which resource is more applicable to their instructional situation: the AI misuse checklist or the AI misuse rubric.


A guide sharing strategies educators can consider to help when confronted with a false positive.


A guide sharing strategies students can consider to help when confronted with a false positive.

The Turnitin Educator Network is a space to meet, discuss and share best practices on academic integrity in the age of AI.

Learn more about AI writing in our blog

Written by experts in the field, educators, and Turnitin professionals, our blog offers resources and thought leadership in support of students, instructors, and administrators. Dive into articles on a variety of important topics, including academic integrity, assessment, and instruction in a world with artificial intelligence.


In this blog post, we’re going to address frequently asked questions about AI writing tool misuse for students. Specifically, what does AI writing tool misuse look like? How can you self-check to make sure you’re using AI writing tools properly?


Turnitin AI tools in the news

Never miss an update or announcement. Visit our media center for recent news coverage and press releases.

Cheat GPT? Turnitin CEO Chris Caren weighs in on combating A.I. plagiarism | CNBC Squawk Box

Since the inception of AI-generated writing, educators and institutions are learning how to navigate it in the classroom. Turnitin’s CEO Chris Caren joins ‘Squawk Box’ to discuss how it is being used in the classroom and how educators can identify AI writing in student submissions.


Some U.S. schools banning AI technology while others embrace it | NBC Nightly News

ChatGPT, an artificial intelligence program, can write college-level essays in seconds. While some school districts are banning it due to cheating concerns, NBC News’ Jacob Ward has details on why some teachers are embracing the technology.

we used ai to write essays for harvard

BestColleges

Artificial intelligence, it seems, is taking over the world. At least that's what alarmists would have you believe . The line between fact and fiction continues to blur, and recognizing what is real versus what some bot concocted grows increasingly difficult with each passing week.

ThriveinEDU Podcast

On this episode of the ThriveinEDU podcast, host Rachelle Dené Poth speaks with Turnitin’s Chief Product Officer Annie Chechitelli about her role in the organization, her experience as a parent with school-age children learning to navigate AI writing, and the future of education and original thought.

District Administration

Following the one year anniversary of the public launch of ChatGPT, Chief Product Officer Annie Chechitelli sits down with the publication to discuss Turnitin’s AI writing detection feature and what the educational community has learned.

For press and media inquiries, contact us at [email protected]

Awards & recognition.

we used ai to write essays for harvard

Let’s innovate together

we used ai to write essays for harvard

