
Artificial Intelligence

Artificial Intelligence (AI) impacts all fields of study and is not subject specific. This guide is here to support research and learning involving Artificial Intelligence.

What is ChatGPT?

ChatGPT is a large language model tool that uses artificial intelligence and natural language processing to respond to user-generated prompts. It was developed by OpenAI and is currently freely available for public use.

There have been imitators of ChatGPT since it entered the news. The correct link is: https://chat.openai.com/auth/login

What is ChatGPT Good For - and Not Good For?

Remember, you'll always need to verify the information, because ChatGPT will sometimes make things up (known as "hallucination").

  • First drafts of writing projects
  • Brainstorming ideas
  • Coming up with topic ideas for a research paper and keywords for searching in library databases.
    See Generate Topics for Your Research Paper with ChatGPT.
  • Explaining information in ways that are easy to understand
  • Summarizing and outlining
  • Asking questions (be sure to fact-check the results); you can ask a million questions without fear of being judged
  • Translating text into different languages (though it is not completely fluent in every language)
  • Helping write or debug computing code
  • Humor and improvisation

What is it not so good for?

Fact Checking is Always Needed

AI "hallucination"
The official term in the field of AI is "hallucination." It refers to the fact that the model sometimes "makes stuff up." This happens because these systems are probabilistic, not deterministic.
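
To make the "probabilistic" point concrete, here is a toy sketch in Python (illustrative only: the words and probabilities are invented, and this is not how ChatGPT is actually implemented). The model picks its next word by sampling from a probability distribution rather than by looking a fact up, so a fluent-sounding but wrong answer is always possible.

    import random

    # Toy illustration only -- not ChatGPT's actual code; the numbers are made up.
    # A language model assigns probabilities to possible next words and samples
    # from them, so the same prompt can yield different (and sometimes wrong) answers.
    next_word_probs = {
        "1954": 0.55,   # likely continuation
        "1953": 0.30,   # plausible but incorrect
        "1960": 0.15,   # plausible but incorrect
    }

    words = list(next_word_probs)
    weights = list(next_word_probs.values())

    # Sampling from the distribution (instead of looking a fact up) is what makes
    # the output probabilistic rather than deterministic.
    print(random.choices(words, weights=weights, k=1)[0])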

Which models are less prone to this?
GPT-4 (the more capable model behind ChatGPT Plus and Bing Chat) has improved and is less prone to hallucination. According to OpenAI, it's "40% more likely to produce factual responses than GPT-3.5 on our internal evaluations." But it's still not perfect, so verification of the output is still needed.

ChatGPT makes up fictional sources
One area where ChatGPT usually gives fictional answers is when asked to create a list of sources. See the Twitter thread, "Why does chatGPT make up fake academic papers?" for a useful explanation of why this happens.

The University of Arizona Library offers this FAQ: "I can’t find the citations that ChatGPT gave me. What should I do?"
Or use the University of Maryland's "Ghost Citations" worksheet: Ghost Citations Worksheet


There is progress in making these models more truthful
There is progress, though, in making these systems more truthful by grounding them in external sources of knowledge. Examples include Bing Chat and Perplexity AI, which use internet search results to ground their answers. The internet sources used could themselves contain misinformation or disinformation, but at least Bing Chat and Perplexity link to the sources they used, so you can begin verification.
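
As a rough illustration of what "grounding" means, here is a minimal Python sketch (an outline under assumptions, not Bing Chat's or Perplexity's real code; the web_search and ask_language_model functions are hypothetical stubs). The system retrieves sources first, asks the model to answer only from them, and returns citations so you can check the claims.

    # A minimal sketch (not any real product's code) of how "grounding" works:
    # retrieve sources first, ask the model to answer only from them, and cite
    # each source so the reader can verify. The search and model calls below are
    # stub placeholders standing in for real services.

    def web_search(question, top_k=3):
        # Placeholder: a real system would call an internet search API here.
        return [{"title": f"Stub result {i + 1}", "url": "https://example.org", "snippet": "..."}
                for i in range(top_k)]

    def ask_language_model(prompt):
        # Placeholder: a real system would send the prompt to a language model here.
        return "Answer drawn from sources [1] and [2]."

    def grounded_answer(question):
        sources = web_search(question)
        numbered = "\n".join(f"[{i + 1}] {s['title']} ({s['url']}): {s['snippet']}"
                             for i, s in enumerate(sources))
        prompt = ("Answer using only the sources below, citing them by number.\n\n"
                  f"Sources:\n{numbered}\n\nQuestion: {question}")
        return ask_language_model(prompt)

    print(grounded_answer("When was the first public library in the United States founded?"))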

Scholarly sources as grounding
There are also systems that combine language models with scholarly sources. For example:

  • Elicit
    A research assistant using language models like GPT-3 to automate parts of researchers’ workflows. Currently, the main workflow in Elicit is Literature Review. If you ask a question, Elicit will show relevant papers and summaries of key information about those papers in an easy-to-use table. Learn more in Elicit's FAQ.
  • Consensus

    A search engine that uses AI to search for and surface claims made in peer-reviewed research papers. Ask a plain English research question, and get word-for-word quotes from research papers related to your question. The source material used in Consensus comes from the Semantic Scholar database, which includes over 200M papers across all domains of science.

ChatGPT Privacy and Terms of Use

Understanding the privacy policies and terms of use of the software and AI tools we commonly use is important: they explain what information is collected, why it is collected, how it will be used (both now and in the future), and how that private information will or will not be protected. In ChatGPT's case:

  • OpenAI's privacy policy allows them to share your data with third-party vendors, law enforcement, and other users.
    • What you put into this software is not private.
  • Their Terms of Use state "you must be 18 years or older and able to form a binding contract with OpenAI to use the Services".
  • These Terms state that users must not "represent that output from the Services was human-generated when it is not". 
    • Information gained from ChatGPT and OpenAI must be attributed to ChatGPT and OpenAI, meaning that if it is used in a paper, it must be identified as coming from their services and not as human-generated.
  • Users are responsible for the Content generated, "ensuring it does not violate any applicable laws or these Terms."
  • While you can delete your ChatGPT account and stop using the service, your prompts and questions belong to OpenAI and become part of the company’s collected data.