Dear Select Committee on Adopting Artificial Intelligence,
My name is Michael Kerrison. I am a management consultant, data scientist, and soon-to-be father. I think that almost all the evidence points towards AI becoming a profoundly transformative technology; while current models still have serious limitations, it would be a real mistake to confuse where we are now with where we're on track to be. Given the pace of recent progress, the apparent success of pure scaling, and the fact that there doesn't seem to be anything particularly special about organic brains, there's a very real possibility that I and my family will live to see society-level changes even more discontinuous than those my grandparents lived through.
I'm optimistic about that future, if and only if none of the many possible catastrophes play out: cybersecurity, biosecurity, national security, nuclear security, direct risks from superintelligences that simply do not care about humans... There's a real gauntlet ahead of us, and I would be disappointed if Australia just slouched towards that - or worse, sprinted head-on into it.
Government and civil society seem to be doing a good job of driving adoption of AI, which I acknowledge is important given the pace of change globally. For instance, I'm glad that the APS has been quick to start investigating how it might use AI, and the CSIRO’s National AI Centre seems to be tackling immediate concerns, like racial and other biases in AI chatbots.
However, while these are all admirable efforts, there are still gaps in our AI governance infrastructure, and I urge the Senate Committee to address them.
Firstly, AI-specific regulation is lacking. It's true that existing regulators can contribute to managing relevant uses of AI, but this isn't a comprehensive solution. We need a new regulator to monitor potentially dangerous AI capabilities and deployments. As with regulations on cars, planes, firearms, pharmaceuticals, and so many other things with the potential for large negative externalities, we need to regulate not just the use, but also the sale and possession of potentially harmful AI technology.
Secondly, Australia should have an AI safety institute, independent of efforts focused on accelerating AI capability or adoption. This separation is essential for integrity and trust, and mirrors the model used in other nations. For example, the UK has established its AI Safety Institute independently of the Alan Turing Institute. I note that establishing such an institute is necessary but not sufficient: the UK's institute still has some way to go in proving its efficacy.
The issue of liability for AI companies is another area where our current approach is dangerously inadequate. Our negligence laws, which date back to well before today's technologies became central to our lives, put the onus on the victim to prove developer negligence, which is near impossible given the complexity and "black-box" nature of current AI systems. Without functioning liability systems, AI companies are incentivised to release risky products, downplay those risks, mislead the public, and actively evade responsibility for any harm caused.
One potential mitigant would be something like a strict liability regime for AI harms, making developers liable for any harm caused without the need for the victim to prove fault. Another approach would be a fault-based liability system that defines care duties for AI developers and places the burden of proof on them if their systems cause harm.
AI systems are already having a profound impact on our economy and society. As their capabilities grow, so does the potential for harm if we don't have a fit-for-purpose liability regime. I implore the Senate inquiry to urgently prioritise the modernisation of Australia's AI liability laws.
I believe that the Australian Government's top priority should be to prevent dangerous or catastrophic outcomes from AI. Research by Ready Research and The University of Queensland has shown that Australians share this concern. When asked about their worries regarding AI, the top response was AI systems not being safe, trustworthy or aligned with human values. Accordingly, I urge this Committee to recommend that the Government prioritise AI safety issues alongside its current focus on AI adoption and addressing immediate problems posed by AI.
In conclusion, AI presents an opportunity for significant benefits for Australia and Australians. However, we must tread carefully to ensure we realise those benefits rather than a catastrophe. I strongly believe that a comprehensive review of our governance infrastructure, liability laws, and safety regulations is required as a starting point. It would buy us time to find better, more robust, longer-term solutions, and so help ensure a safe and prosperous AI future in Australia.
Regards,
Michael Kerrison