
Australian digital minister steers clear of AI ethics legislation; framework to remain voluntary


Australia’s Minister for Superannuation, Financial Services and the Digital Economy, Jane Hume, has confirmed that the country’s AI ethics framework will remain voluntary for the foreseeable future.


Speaking at the virtual CEDA AI Innovation in Action event on Tuesday, Hume made it clear that she believes there are already ample regulatory frameworks in place and that another one would be unnecessary.

“We already have a very powerful regulatory framework; we already have privacy laws, we already have consumer laws, we already have a data commissioner, we already have a privacy commissioner, we have a misconduct regulator. We have all those barriers that already exist around the way we run our businesses,” she said.

“Artificial intelligence is simply a technology that is being imposed on an existing business. What is important is that the technology is used to solve problems. The problems themselves have not really changed, so our regulations have to be flexible enough to accommodate technological change… We want to make sure that there is nothing in regulation and legislation that prevents the advancement of technology.

“But at the same time, putting in place new regulations for a technology, unless we can see a use case for it, is something we would be reluctant to do, to legislate and over-regulate it.”

The federal government developed a national AI ethics framework in 2019, following the release of a discussion paper by Data61, the digital innovation arm of the Commonwealth Scientific and Industrial Research Organisation (CSIRO).

The discussion paper highlighted the need for AI developed in Australia to sit within an appropriate framework, to ensure nothing is imposed on citizens without proper ethical consideration.

The framework consists of eight ethical principles: human, social and environmental well-being; human-centred values with respect to human rights, diversity, and the autonomy of individuals; fairness; privacy protection and data security; reliability and safety in line with the intended purpose of AI systems; transparency and explainability; contestability; and accountability.

Hume said the principles were designed to be “rather universal”, and that industry would therefore be willing to adopt them voluntarily.

“There’s nothing in there for people to feel uncomfortable about, nothing too prescriptive… these are all things we would expect. I don’t think anything is particularly onerous,” she said.

While developing these principles is one thing, applying them can be quite another, Hume admitted.

“You have to have the right governance structures, for example. But you have to have the right governance structures in your organization for many things; workplace safety is a good example,” she said.

“I think we would like to see the broader industry, regardless of which industry is embracing AI technologies, sign up to those frameworks voluntarily, rather than have something imposed from the top down.”

Microsoft, for its part, has voluntarily adopted the AI framework, developing an internal governance structure to “enable progress and accountability, rules to standardize responsible AI requirements, training and practices to help our employees operate according to our principles, and to think deeply about the implications of artificial intelligence systems,” said Belinda Dennett, Director of Corporate Affairs at Microsoft Australia, during the event.

Microsoft was one of the first companies to put its hand up to test the AI ethics principles, to ensure they could be translated into real-world scenarios.

The others are National Australia Bank, Commonwealth Bank, Telstra and Flamingo AI.

Earlier this month, the CBA revealed that testing the ethics principles while building and designing Bill Sense, a feature in its CommBank app, gave the bank insight into how it could implement responsible AI at scale.

“It was great to see that the AI principles align perfectly with the control and governance frameworks the bank already has in place. Things like secure data management, customer privacy, and transparency were central to the way we work long before the advent of AI,” CBA Chief Decision Scientist Dan Jermain told ZDNet.

“But the pace, scale, and sophistication of AI solutions mean we need to ensure that we are always evolving to meet the demands of new technology, which is why collaboration with our partners across government and industry is so important.”

In an effort to ensure AI is applied responsibly, the bank has developed tools to make it easier for teams across the bank to safely deliver AI at scale, according to Jermain.

“For example, we have developed an ‘interpretable AI’ capability, which makes it easy for any of our teams to understand and explain the key drivers of even the most complex deep learning models,” he said.

He added that responsible application of AI will be essential as the CBA continues to use it as a tool to improve customer experience.

“We view AI as a key enabler in providing a great and personalized experience for all of our customers, and so we are committed to ensuring that we apply it in a consistently fair and ethical manner,” said Jermain. “As we continue to develop the ways in which AI helps us support the financial well-being of our customers and communities, it is imperative that we do so in a responsible and sustainable way.”


Source: www.zdnet.com
