Australian government unveils new AI Plan


After the Australian government’s recent statement that protecting creative industries from AI risks would be a national priority, the long-anticipated government AI plan has been released. The plan promises to balance competing considerations of technological advancement and labour market health.

In practice, this will start with AI engagement in government, alongside skills training in schools and for public service employees. Investment in AI technology and data centres will also continue, following the Productivity Commission’s recommendation that mandatory guardrails would hinder innovation and investment. A $30 million safety institute will open next year to monitor developments and advise government on risk and emerging legal gaps. The focus is on “capturing the opportunity of AI, broadening our safe and responsible use of this technology while building public trust and confidence”.

Missing from this plan are the mandatory guardrails many creators were hoping for, specifically guardrails for high-risk AI technology. The government has decided against new legislation, and objections are already being raised. Given the many concerns creators, academics and legal professionals have raised across jurisdictions, will the government’s reliance on company transparency be to the benefit or detriment of the Australian public?

Will these tech companies act in their own self-interest, or will they act in the best interest of the public? If the government’s goal is to enable responsible use, how can we ensure that tech companies will be more concerned with ethics than with profit margins? And if the public sector is encouraged to “embrace” AI, what does this mean for creators’ intellectual property rights? Will those rights continue to be at risk, and will there be an effective means of asserting them?