Artificial Intelligence

Artificial Intelligence (AI) affects every field of study; it is not subject-specific. This guide supports research and learning involving AI.

Helping Students Care About Being Transparent

Discuss the use of ChatGPT with students

You can help students care about being transparent in their use of ChatGPT. Discuss the tool with them, and create a policy for whether and how it may be used.


Go beyond traditional citations

Professor Ethan Mollick recommends going beyond traditional citations. He asks his students to include an appendix to their papers in which they list each prompt they used in ChatGPT and discuss how they revised those prompts to get better output.

See: Mollick, Ethan R. and Mollick, Lilach, Using AI to Implement Effective Teaching Strategies in Classrooms: Five Strategies, Including Prompts (March 17, 2023).
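For example, a single appendix entry in this style might look like the following (a hypothetical illustration, not an example from Mollick's paper):

    Prompt 1: "Summarize the main counterarguments to my thesis in three bullet points."
    Revision: The first output was too generic, so I pasted in my actual thesis statement
    and asked for counterarguments specific to it, which produced much more useful output.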


Citing

Guidelines for citing generative AI:

Academic publishers have begun issuing statements about the use of generative AI in submitted work. Two examples:

  • Science journals policy: "Text generated from AI, machine learning, or similar algorithmic tools cannot be used in papers published in Science journals"
  • Nature policy: "... researchers using LLM tools should document this use in the methods or acknowledgements sections."

As the use of generative AI increases, we expect more publishers to create and enforce policies similar to these.

Fact Checking Is Always Needed

AI "hallucination"
The term used in the field of AI is "hallucination": these systems sometimes make stuff up. This happens because they are probabilistic, not deterministic; a model generates the most statistically likely continuation of your text rather than retrieving verified facts, so a fluent, confident answer can still be false.
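
A minimal sketch of what "probabilistic" means here, using a toy vocabulary and made-up probabilities (an illustration of the idea, not a real language model):

    import random

    # Toy next-word distribution for the prompt "The capital of Australia is"
    # -- the probabilities are invented for illustration only.
    next_word_probs = {
        "Canberra": 0.55,   # correct
        "Sydney": 0.30,     # plausible but wrong
        "Melbourne": 0.15,  # plausible but wrong
    }

    words = list(next_word_probs)
    weights = list(next_word_probs.values())

    # Because the model samples from a distribution instead of looking up a
    # fact, it can confidently emit a wrong answer some fraction of the time.
    for _ in range(5):
        print(random.choices(words, weights=weights, k=1)[0])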

Which models are less prone to this?
GPT-4 (the more capable model behind ChatGPT Plus and Bing Chat) has improved and is less prone to hallucination. According to OpenAI, it is "40% more likely to produce factual responses than GPT-3.5 on our internal evaluations." But it is still not perfect, so verification of the output is still needed.

ChatGPT makes up fictional sources
One task where ChatGPT usually gives fictional answers is creating a list of sources: it generates plausible-looking citations to papers that do not exist. See the Twitter thread "Why does chatGPT make up fake academic papers?" for a useful explanation of why this happens.

The University of Arizona Library offers a relevant FAQ: "I can’t find the citations that ChatGPT gave me. What should I do?"
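
One practical first step when verifying citations is to check whether each DOI resolves to real metadata. Here is a minimal sketch using the public Crossref REST API (the endpoint is real at the time of writing, but double-check its current behavior before relying on it):

    import requests

    def doi_exists(doi: str) -> bool:
        """Return True if Crossref has a record for this DOI."""
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        return resp.status_code == 200

    # A genuine DOI should resolve; a fabricated one should not.
    print(doi_exists("10.1038/s41586-020-2649-2"))   # real paper (Harris et al., Nature 2020)
    print(doi_exists("10.9999/not.a.real.citation")) # expected: False

A citation with no DOI, or whose DOI points to a different title than the one ChatGPT gave, deserves extra scrutiny.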

There is progress in making these models more truthful
There is progress in making these systems more truthful by grounding them in external sources of knowledge. Bing Chat and Perplexity AI, for example, use internet search results to ground their answers. Those internet sources could themselves contain misinformation or disinformation, but at least Bing Chat and Perplexity link to the sources they used, so you can begin verification.
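
The grounding pattern behind these tools, often called retrieval-augmented generation, can be sketched roughly as follows. Both functions below (search_web and ask_llm) are hypothetical stand-ins for a real search API and a real model API, not actual library calls:

    # Hypothetical sketch of retrieval-augmented generation (RAG).
    def search_web(query: str) -> list[dict]:
        """Stand-in for a real search API; returns docs with 'url' and 'text'."""
        raise NotImplementedError("plug in a real search API here")

    def ask_llm(prompt: str) -> str:
        """Stand-in for a real language-model API."""
        raise NotImplementedError("plug in a real LLM API here")

    def grounded_answer(question: str) -> str:
        sources = search_web(question)[:3]
        # Put retrieved text into the prompt and ask the model to cite it by
        # number, so every claim can be traced back to a linked source.
        context = "\n\n".join(
            f"[{i + 1}] {s['url']}\n{s['text']}" for i, s in enumerate(sources)
        )
        prompt = (
            "Answer using ONLY the numbered sources below, citing them by number.\n\n"
            f"Sources:\n{context}\n\nQuestion: {question}"
        )
        return ask_llm(prompt)

Because the answer is tied to retrieved documents, a reader can follow the citations to check each claim, which is exactly what the links in Bing Chat and Perplexity enable.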

Scholarly sources as grounding
There are also systems that combine language models with scholarly sources. For example:

  • Elicit
    A research assistant using language models like GPT-3 to automate parts of researchers’ workflows. Currently, the main workflow in Elicit is Literature Review. If you ask a question, Elicit will show relevant papers and summaries of key information about those papers in an easy-to-use table. Learn more in Elicit's FAQ.
  • Consensus
    A search engine that uses AI to search for and surface claims made in peer-reviewed research papers. Ask a plain-English research question, and get word-for-word quotes from research papers related to your question. The source material used in Consensus comes from the Semantic Scholar database, which includes over 200 million papers across all domains of science (a sketch of searching Semantic Scholar directly follows this list).
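
If you want to query that same corpus yourself, Semantic Scholar offers a public search API. The sketch below uses its Graph API paper-search endpoint; the URL and parameters match the public documentation at the time of writing, but treat the details as something to verify:

    import requests

    def search_papers(query: str, limit: int = 5) -> list[dict]:
        """Search Semantic Scholar for papers matching a plain-English query."""
        resp = requests.get(
            "https://api.semanticscholar.org/graph/v1/paper/search",
            params={"query": query, "limit": limit, "fields": "title,year"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json().get("data", [])

    for paper in search_papers("effects of sleep on memory consolidation"):
        print(paper.get("year"), "-", paper["title"])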