Royal Society of New South Wales and Learned Academies Forum: ‘AI: The Hope and the Hype’
Thursday, 6 November 2025
Government House
Her Excellency the Honourable Margaret Beazley AC KC
Bujari gamarruwa
Diyn Babana Gamarada Gadigal Ngura
Good morning, distinguished guests, Fellows of the Royal Society of New South Wales, representatives of Australia’s Learned Academies, leaders from government, industry and research, and all joining us in person and by livestream.
I pay my respects to the Traditional Owners of the lands on which we meet — the Gadigal of the Eora Nation — and to Elders past, present, and emerging. I extend that respect to all Aboriginal and Torres Strait Islander peoples with us and acknowledge the particular relevance of the topic of today’s discussions to First Nations peoples, their culture and intellectual property.
It is my great pleasure to welcome you to the 2025 Royal Society of New South Wales and the Learned Academies Forum and its topical subject: “AI: The Hope and the Hype.”
May I begin by acknowledging:
- Emeritus Professor Christina Slade, President of the Royal Society of New South Wales and Chair of this Forum;
- The Honourable Victor Dominello, Director of the UNSW–UTS Trustworthy Digital Society Hub, and former NSW Minister for Customer Service and Digital Government;
- Professor Pascal Van Hentenryck, A. Russell Chandler III Chair and Professor at the Georgia Institute of Technology; and
- Professor Hugh Durrant-Whyte, Chief Scientist and Engineer of New South Wales.
Each of these distinguished leaders – representing the fields of philosophy, civic leadership, science and engineering – brings a vital dimension to today’s discussion.
I also acknowledge the generous support of the many bodies that have contributed to this Forum.
How does one define AI?
As AI evolves at an unprecedented pace, so too does its definition. Indeed, in recent years, the Organisation for Economic Co-operation and Development (OECD) has struggled to reach consensus on a definition, acknowledging that “There is no clear red line but a continuum of features characterising what we think of artificial intelligence and where the ‘magic’ happens.”[1]
Its current (2023) definition, which the Australian Government’s Digital Transformation Agency also employs, is “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” It notes that “different AI systems vary in their levels of autonomy and adaptiveness after deployment.”[2]
AI’s applicability and transformative potential in a seemingly endless range of environments—from governance to science and engineering, medical research, law, education, health and the public and community sectors—is matched by extraordinary commercial investment. McKinsey forecasts an astonishing US$7 trillion in global AI investment over the next 5 years.[3] It has fuelled expectations of governments, with the Australian Government’s Productivity Commission forecasting $116 billion in economic gains over the next decade and a 4.3 percentage-point boost in labour productivity.[4] It has also fuelled an industry of new AI scholarship, with an incredible 403,897 books, journals and papers published worldwide in 2023.[5]
A quick old-school ‘google’ search of recent media articles about AI – noting that AI relies on patterns, and thus historical rather than current, data[6] – threw up some arresting headlines:
· “Australia’s culture of AI-powered innovation”
· “How AI falsely named an innocent journalist as a notorious child murderer”
· “The Sydney hospital using AI to improve diagnoses under pressure”
· “Big Tech deploys Orwellian doublespeak to mask its democratic corrosion”
· “AI can help the environment, even though it uses tremendous energy”
· “Tech billionaires seem to be doom prepping. Should we all be worried?”
And this last:
· “Over 50 Percent of the Internet Is Now AI Slop, New Data Finds” … with the intriguing subheading: “Humans aren’t finished … yet.”
All of which prompted another editorial to open with a lament:
· “Can we just have one day when no one mentions AI?”[7]
It then went on to quote a recent Gallup poll in the US which showed that 49% of people think AI is just another tech advance that will improve our lives and 49% think it will harm humans and society.[8]
Australians are less inclined to sit on the digital fence:
A snap Roy Morgan SMS survey of 18,000 Australians in early October revealed that 65% of us believe that “artificial intelligence (AI) creates more problems than it solves,” compared to 35% who believe it “solves more problems than it creates.”[9]
On any measure, Artificial Intelligence can be a polarising topic. It has entered our public discourse not only as a technological innovation, but as a social force—one that promises to reshape the way we learn, work, govern, create and even imagine what it means to be human. Indeed, we should remember Alan Turing who, way back in 1950, pressed the question: “Can machines think?”
On one hand, we hear of AI’s capacity to help solve some of our greatest challenges—climate change, diseases and medical conditions, resource scarcity. On the other, we are reminded that technology has often outpaced regulation, and that many promises may remain—at least in the near term—unfulfilled.
This duality—the hope and the hype—underscores the tension that societies around the world now grapple with: how to harness AI’s enormous potential while guarding against its risks. The question arises: who is leading whom in intelligence? Who imagines the future?
Even more importantly: who has the power, and where is the equity? It is worth noting that “AI algorithms need big datasets to learn from, but several [cultures and] groups of the human population are absent or misrepresented in existing datasets.”
As a paper published by the now insecurely-funded US National Library of Medicine has pointed out: “AI is thus prone to reinforcing bias which can lead to [poor or] fatal outcomes”[10] – or entrenching disadvantage.[11] All of this has led the European Union to legislate the EU AI Act—the first comprehensive regulation on AI by a major regulator anywhere.[12]
Over the course of today’s sessions—AI and the Law; in Communities; in Health; AI in Practice and AI Research & Future Directions—we will probe precisely these tensions. Let us hold in mind a few guiding principles:
- Use evidence over assertion.
- Prioritise ethics, justice, and human rights.
- Factor in governance and oversight.
- Reflect on interdisciplinarity and humility.
- Continuously assess and adapt our legal, regulatory and educational frameworks.
AI is a shared responsibility and Forums such as this are essential. They bring together the broad spectrum of expertise needed to ensure that our collective intelligence keeps pace with our artificial one. As more than one academic has pointed out: “We are still in the early stages of this history, and much of what will become possible is yet to come.”[13]
The Royal Society of New South Wales—the oldest learned society in the Southern Hemisphere—has always stood for rigorous, open, cross-disciplinary dialogue. That tradition is alive today in this gathering of scientists, policymakers, ethicists, and industry leaders.
It is, therefore, with great pleasure that I declare open the 2025 Forum of the Royal Society of New South Wales and the Learned Academies, “AI: The Hope and the Hype.”
[2] ibid
[4] ibid
[5] https://ourworldindata.org/grapher/annual-scholarly-publications-on-artificial-intelligence?tab=discrete-bar&time=latest
[6] “Artificial intelligence is often hailed as the technology of the future. Yet it primarily relies on historical data, reproducing old patterns instead of fostering progress”: Alexander von Humboldt Institut für Internet und Gesellschaft: https://www.hiig.de/en/why-ai-is-currently-mainly-predicting-the-past/
[10] Natalia Norori et al., ‘Addressing bias in big data and AI for health care: A call for open science’, National Library of Medicine (USA): https://pmc.ncbi.nlm.nih.gov/articles/PMC8515002/
[11] Alexander von Humboldt Institut für Internet und Gesellschaft: https://www.hiig.de/en/why-ai-is-currently-mainly-predicting-the-past/
[13] https://ourworldindata.org/brief-history-of-ai. Also, the University of Sydney’s Dr Rob Nicholls: “Dr Nicholls still believes the industry is 20 or 30 years off truly mimicking humans, and that AI as it currently stands is immense data systems that are very good at predicting trends and patterns.” https://www.abc.net.au/news/2025-09-22/is-ai-the-fourth-industrial-revolution/105790912