Building Generative AI Literacy and Metacognition in Researchers

These resources provide valuable insights into understanding and effectively collaborating with AI systems, particularly large language models (LLMs). The first two videos offer foundational knowledge about how LLMs work, covering the transformer architecture they are built on and dispelling common misconceptions about their factual accuracy. Complementing these, two articles emphasise the importance of metacognitive skills in human–AI collaboration, outlining how to critically engage with AI outputs, recognise cognitive biases, and enhance decision-making. Additionally, a guide on teaching AI literacy introduces practical strategies for building foundational knowledge, while a thought-provoking piece explores why AI systems don’t truly ‘understand’ in the human sense.

  1. This YouTube video explains how large language models (LLMs) work, with a focus on transformers, the core architecture behind models like ChatGPT (a brief sketch of the attention formula that transformers rely on appears after this list).
  2. Building on item 1, this YouTube video explains how LLMs store facts and helps dispel common misconceptions about relying on them for accurate information.
  3. This article outlines a validated three-step process to help users reflect on and improve their thinking when interacting with AI systems.
    Reference: Sidra, S., & Mason, C. (2023, October 27). How to strengthen your metacognitive skills to collaborate effectively with AI. Times Higher Education.
  4. This article explains why metacognitive thinking is crucial when working with generative AI and highlights cognitive biases that are often amplified in human–AI interactions.
    Reference: Sidra, S., & Mason, C. (2024). Reconceptualizing AI literacy: The importance of metacognitive thinking in an artificial intelligence (AI)-enabled workforce. 2024 IEEE Conference on Artificial Intelligence (CAI), 1178–1183. doi: 10.1109/CAI59869.2024.00211
  5. This article reflects on how LLMs do not ‘understand’ in the conventional sense of the word, underscoring the onus on humans to critically assess AI outputs rather than be taken in by the illusion of AI comprehension.
    Reference: Sejnowski, T. J. (2023). Large language models and the reverse Turing test. Neural Computation, 35(3), 309–342. doi: 10.1162/neco_a_01563
  6. This article offers valuable insights into the core concepts of AI literacy and provides practical guidance on how AI literacy can be effectively taught.
    Reference: Van Brummelen, J., Heng, T., & Tabunshchyk, V. (2021). Teaching tech to talk: K-12 conversational artificial intelligence literacy curriculum and development tools. Proceedings of the AAAI Conference on Artificial Intelligence, 35(17), 15655–15663. doi: 10.1609/aaai.v35i17.17844
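
For readers who want a glimpse of the mathematics behind item 1, the transformer architecture is built around scaled dot-product attention, commonly written as

\[
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V
\]

where Q, K and V are the query, key and value matrices computed from the input tokens and d_k is the dimension of the keys. The softmax produces weights that determine how strongly each token ‘attends’ to every other token when the model predicts the next one. This is only a sketch of the mechanism; the video in item 1 gives a fuller, visual treatment.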

