AI in Science
Highlights from the event
- The Artificial Intelligence (AI) in Science conference, hosted by ANU in collaboration with CSIRO, brought together over 75 early- and mid-career researchers (EMCRs) from across Australia to critically examine AI’s role in advancing scientific research.
- The conference explored ethical, social and environmental dimensions of AI, calling on participants to consider AI’s broader societal impact and the responsibility researchers bear in shaping its development.
AI: strengths and risks
Artificial intelligence (AI) approaches are being adopted across STEM at an accelerating pace. With a new generation of capable and productive algorithms, the power, accessibility and public profile of AI have risen sharply.
AI offers enormous potential, both by economising labour and by pushing the human limits of logic and creativity. It has the capacity to facilitate transdisciplinary collaboration through the integration of complementary research. However, AI can also produce misinformation, which can be challenging to recognise and contain. Australia’s emerging STEM leaders therefore have a critical need: to access relevant knowledge about AI efficiently, to use its strengths while recognising and addressing its risks, and to build connections with experts and peers in AI.
The AI in Science project, delivered as a conference, aims to support Australia’s emerging early- and mid-career STEM leaders in embracing the opportunities and challenges of rapid developments in AI. Hosted by The Australian National University, the AI in Science project received grant funding from the Theo Murphy Initiative (Australia) administered by the Australian Academy of Science. The project also received additional sponsorship from Australia’s National Science Agency, CSIRO.
Early- and mid-career researcher (EMCR) rapid-fire presentations
As part of this project, EMCRs from around Australia were invited to submit Rapid-Fire Presentations showcasing their research using AI.
Watch these presentations.
Resources for researchers using AI
EMCRs were also invited to run activities that supported Australian researchers to develop practical skills and knowledge to work effectively with large language models (LLMs) and generative AI systems, and to broaden their AI literacy. Explore the curated resources developed through these activities:
These resources highlight key issues surrounding the ethical implications of AI. The report on AI adoption in science provides an overview of how AI is shaping research trends. The article on labour exploitation in the AI industry discusses the ethical concerns around the underpaid gig workers who power AI systems. There are also valuable insights into ‘AI for social good’, offering a critical view of its impact, along with practical advice on prompt engineering for effective LLM use. One article examines the environmental cost of training large AI models, while another explores how big data in criminal justice challenges established criminal procedures.
- This report gives a comprehensive overview of AI adoption in scientific research, with insights into future trends.
  Reference: Hajkowicz, S., Naughtin, C., Sanderson, C., Schleiger, E., Karimi, S., Bratanova, A., & Bednarz, T. (2022). Artificial intelligence for science – Adoption trends and future development pathways. CSIRO Data61, Brisbane, Australia.
- This resource highlights the often-overlooked exploitation of gig workers in the AI industry, arguing that addressing labour abuses, such as underpaid and highly surveilled data labellers, content moderators and delivery drivers, should be central to AI ethics efforts, rather than focusing solely on debiasing data and ensuring transparency.
  Reference: Williams, A., Miceli, M., & Gebru, T. (2022, October 13). The exploited labor behind artificial intelligence: Supporting transnational worker organizing should be at the center of the fight for “ethical AI”. Noema.
- This article explores the potential of ‘AI for social good’, offering a critical view of who it actually serves.
  Reference: Moorosi, N., Sefala, R., & Luccioni, S. (2023, December). AI for whom? Shedding critical light on AI for social good. In NeurIPS 2023 Computational Sustainability: Promises and Pitfalls from Theory to Deployment.
- This guide offers practical, hands-on advice for effectively using LLMs, and is regularly updated; a minimal worked example follows this list.
  Reference: Anthropic. (2024). Prompt engineering overview.
- This article examines the environmental impact of AI development, focusing on the energy consumption of training large models and the need for more sustainable AI practices.
  Reference: Strubell, E., Ganesh, A., & McCallum, A. (2020). Energy and policy considerations for modern deep learning research. Proceedings of the AAAI Conference on Artificial Intelligence, 34(09), 13693-13696.
- This article explores how big data, algorithmic analytics and machine learning are transforming criminal justice by reshaping how crime is understood and addressed, while simultaneously undermining regulatory safeguards, abolishing case-specific subjectivity and challenging established rules of criminal procedure.
  Reference: Završnik, A. (2021). Algorithmic justice: Algorithms and big data in criminal justice settings. European Journal of Criminology, 18(5), 623-642.
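To make the prompt-engineering advice concrete, the sketch below assembles a structured prompt, giving the model an explicit role, delimiting the input clearly and specifying the output format, and sends it with the Anthropic Python SDK. It is a minimal sketch only: the model name and the abstract-summarisation task are illustrative assumptions, not taken from the guide.

```python
# Minimal prompt-engineering sketch using the Anthropic Python SDK.
# The model name and the summarisation task are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Technique 1: give the model an explicit role via the system prompt.
system_prompt = "You are a careful research assistant summarising scientific abstracts."

# Technique 2: delimit the input clearly and specify the output format.
abstract = "..."  # paste the abstract to be summarised here
user_prompt = (
    "Summarise the abstract between the <abstract> tags in exactly three "
    "bullet points, then state one limitation the authors acknowledge.\n"
    f"<abstract>\n{abstract}\n</abstract>"
)

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # hypothetical choice of model
    max_tokens=500,
    system=system_prompt,
    messages=[{"role": "user", "content": user_prompt}],
)
print(message.content[0].text)
```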
These resources provide valuable insights into understanding and effectively collaborating with AI systems, particularly large language models (LLMs). The first two videos offer foundational knowledge about how LLMs function, including their reliance on transformers and common misconceptions about AI accuracy. Complementing these, several articles emphasise the importance of metacognitive skills in human–AI collaboration, outlining how to critically engage with AI outputs, recognise cognitive biases, and enhance decision-making. Additionally, a guide on teaching AI literacy introduces practical strategies for building foundational knowledge, while a thought-provoking piece explores why AI systems don’t truly ‘understand’ in the human sense.
- This YouTube video explains how large language models (LLMs) work, with a focus on transformers, the core concept behind models like ChatGPT; a short code sketch of the attention mechanism at the heart of transformers follows this list.
- Building on the previous video, this YouTube video explains how LLMs store facts and helps to dispel common assumptions about relying on LLMs for accurate information.
- This article outlines a validated three-step process to help users reflect on and improve their thinking processes when interacting with AI systems.
  Reference: Sidra, & Mason, C. (2023, October 27). How to strengthen your metacognitive skills to collaborate effectively with AI. Times Higher Education.
- This article explains why metacognitive thinking is crucial when working with generative AI, and highlights cognitive biases that are often amplified in human–AI interactions.
  Reference: Sidra, S., & Mason, C. (2024). Reconceptualizing AI literacy: The importance of metacognitive thinking in an artificial intelligence (AI)-enabled workforce. 2024 IEEE Conference on Artificial Intelligence (CAI), 1178-1183. doi: 10.1109/CAI59869.2024.00211
- This article reflects on how LLMs do not ‘understand’ in the conventional sense of the word, highlighting the onus on humans to critically assess AI outputs rather than be taken in by the illusion of AI comprehension.
  Reference: Sejnowski, T. J. (2023). Large language models and the reverse Turing test. Neural Computation, 35(3), 309-342. doi: 10.1162/neco_a_01563
- This article offers valuable insights into the core concepts of AI literacy and provides practical guidance on how AI literacy can be effectively taught.
  Reference: Van Brummelen, J., Heng, T., & Tabunshchyk, V. (2021). Teaching tech to talk: K-12 conversational artificial intelligence literacy curriculum and development tools. Proceedings of the AAAI Conference on Artificial Intelligence, 35(17), 15655-15663. doi: 10.1609/aaai.v35i17.17844
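For readers who want a concrete handle on the transformer mechanism covered in the first video, the toy sketch below implements scaled dot-product attention, the core operation inside models like ChatGPT, in plain NumPy. The dimensions and random inputs are illustrative assumptions; in a real transformer the queries, keys and values come from learned projections of token embeddings.

```python
# Scaled dot-product attention, the core transformer operation discussed in
# the videos above. A toy NumPy sketch; shapes and inputs are illustrative.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # Numerically stable softmax over the keys.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output is a weighted mixture of the values

# Toy example: 4 tokens, 8-dimensional embeddings.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
# In a real transformer, Q, K and V come from learned linear projections.
output = scaled_dot_product_attention(tokens, tokens, tokens)
print(output.shape)  # (4, 8): one context-aware vector per token
```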
AI in Science EMCR consortium
As part of the AI in Science Conference on 6 November 2024, an Australian EMCR Consortium for AI in Science was formed. The main aim of the Consortium is to develop a position statement on the implications of AI for scientific research in Australia. The position statement will be made available here upon completion.