Science and Science Communication in the Age of Large Language Models
This symposium will explore both the potential and the pitfalls of LLMs, and how we can responsibly harness their capabilities to align with the goals of scientists and science communicators.
Cost
Free, but registration is essential
Description
Artificial Intelligence (AI) is emerging as a transformative force in scientific research and communication, with seemingly limitless potential applications. Recent advances in large language models (LLMs) have particularly encouraged researchers to explore how these tools might be applied in their own work.
Fields such as medicine and chemistry have already embraced LLMs in various applications [1-3], including accelerating drug discovery, optimising vaccine design, and even making mind-reading a tangible possibility [1]. In biotechnology, researchers are actively using LLMs to generate protein sequences with predictable functions across diverse families, demonstrating considerable potential for protein design and engineering [4]. There are even ongoing studies exploring the automation of the entire scientific process with LLMs, from research design to the planning and execution of lab experiments [5]. In social science, researchers have been inspired to use LLMs to improve agent-based models, simulate human populations taking surveys, and even replace human participants in social experiments [6-8]. The impressive ability of LLMs to answer questions has also inspired researchers to use them to build expert-domain chatbots in areas as diverse as climate change, finance, physics, materials science, and clinical knowledge [9-13].
This rapid proliferation of AI tools in scientific research and communication, however, has raised significant concerns within scientific communities. Critics caution against various risks, including the environmental footprint of these models, the disruption of nuanced and contextual understanding, contributions to an AI-driven infodemic, and the undermining of human interaction [14]. There are also concerns that researchers may become vulnerable to illusions of understanding, believing they comprehend more about the world than they actually do, along with the epistemic risks that may arise if these tools come to dominate the production of scientific knowledge [15]. These and other concerns have spurred new efforts to develop ethical and policy guidance for the use of AI in scientific research and communication [16].
In this symposium, we invite contributions that engage critically and empirically with the question of how LLMs might transform the production of scientific knowledge and science communication. What are the risks and long-term challenges associated with integrating these technologies into the daily practices of researchers and science communicators? And how can we envision new ways to design, develop, and deploy LLMs in a responsible way that best aligns with the aspirations of scientists and science communicators?
This event is organised by the ANU Responsible Innovation Lab and supported by the ANU Institute for Climate, Energy & Disaster Solutions and the Australian National Centre for Public Awareness of Science.
If you would like to share your ongoing project on this topic, please contact Ehsan Nabavi (Ehsan.Nabavi@anu.edu.au).
Event Program
WELCOME & INTRODUCTION | 9:30-9:45am
Dr Ehsan Nabavi (Head of Responsible Innovation Lab, Australian National Centre for Public Awareness of Science, ANU)
INSIGHTS & CONVERSATION | 9:45-10:45am
Dr Stefan Harrer (Director, AI for Science – Science Digital, CSIRO)
Prof Karin Verspoor (Dean of the School of Computing Technologies, RMIT)
A/Prof Michelle Riedlinger (Digital Media Research Centre, Queensland University of Technology)
Morning Tea/Coffee | 10:45-11:00am
SHOWCASE | 11:00-12:30pm
Dr Ehsan Abbasnejad (Director of Responsible Machine Learning, Australian Institute for Machine Learning, University of Adelaide)
Dr Ashlin Lee (Research Scientist and Digital Sociologist, CSIRO)
Lucy Darragh (Australian National Centre for Public Awareness of Science, ANU)
Dr Sarah Bentley (Research Scientist, Data61, CSIRO)
Lunch & Networking | 12:30-1:00pm
WORKSHOP: RESPONSIBLE LLM IN SCI(COM) | 1:00-2:30pm
Dr Ehsan Nabavi & Dr Chris Browne (Responsible Innovation Lab, ANU)
Afternoon Tea/Coffee | 2:30-2:45pm
WRAPPING UP: MOVING FORWARD | 2:45-3:30pm
Prof Joan Leach (Director, Australian National Centre for Public Awareness of Science, ANU)
Dr Hayley Teasdale (Science Advice and Policy, Australian Academy of Science)
A/Prof Fabien Medvecky (Associate Director of Research, Australian National Centre for Public Awareness of Science, ANU)
--
REFERENCES
1 Tang, J., LeBel, A., Jain, S. & Huth, A. G. Semantic reconstruction of continuous language from non-invasive brain recordings. Nature Neuroscience, 1-9 (2023).
2 Pan, J. Large language model for molecular chemistry. Nature Computational Science 3, 5 (2023). https://doi.org/10.1038/s43588-023-00399-1
3 Li, S. et al. CodonBERT: Large Language Models for mRNA Design and Optimization. bioRxiv, 2023.09.09.556981 (2023).
4 Madani, A. et al. Large language models generate functional protein sequences across diverse families. Nature Biotechnology 41, 1099-1106 (2023). https://doi.org/10.1038/s41587-022-01618-2
5 Boiko, D. A., MacKnight, R. & Gomes, G. Emergent autonomous scientific research capabilities of large language models. arXiv preprint arXiv:2304.05332 (2023).
6 Aher, G., Arriaga, R. I. & Kalai, A. T. Using large language models to simulate multiple humans. arXiv preprint arXiv:2208.10264 (2022).
7 Ziems, C. et al. Can Large Language Models Transform Computational Social Science? arXiv preprint arXiv:2305.03514 (2023).
8 Bail, C. A. Can Generative AI Improve Social Science? (2023). https://doi.org/10.31235/osf.io/rwtzs
9 Caruccio, L. et al. Can ChatGPT provide intelligent diagnoses? A comparative study between predictive models and ChatGPT to define a new medical diagnostic bot. Expert Systems with Applications 235, 121186 (2024).
10 Vaghefi, S. A. et al. ChatClimate: Grounding conversational AI in climate science. (2023).
11 Wu, S. et al. BloombergGPT: A large language model for finance. arXiv preprint arXiv:2303.17564 (2023).
12 Singhal, K. et al. Large language models encode clinical knowledge. Nature 620, 172-180 (2023). https://doi.org/10.1038/s41586-023-06291-2
13 Xie, T. et al. DARWIN Series: Domain Specific Large Language Models for Natural Science. arXiv preprint arXiv:2308.13565 (2023).
14 Nabavi, E. et al. Potential Benefits and Dangers of Using Large Language Models for Advancing Sustainability Science and Communication. Authorea Preprints (2024).
15 Messeri, L. & Crockett, M. Artificial intelligence and illusions of understanding in scientific research. Nature 627, 49-58 (2024).
16 Resnik, D. B. & Hosseini, M. The ethics of using artificial intelligence in scientific research: new guidance needed for a new tool. AI and Ethics, 1-23 (2024).
Location
Innovation Space, ANU Birch Building (Building 35)