Incorporating ethical, human rights-based approaches into the design, development and use of critical technologies can help ensure they benefit societies around the world. The world is at an inflection point, and cooperation is vital to ensure that non-binding ethical frameworks which align with international law, including human rights, are considered in the design, development and use of critical technologies.
The world is on the cusp of a new era of technological disruption that will transform our societies, our ways of living, and our safety and wellbeing.
With these opportunities come challenges. The proliferation of intelligent, autonomous systems may pose difficult questions about how decisions are made and why. The growth in the collection, analysis and use of personal and biometric data involves significant and complicated considerations of convenience, functionality, rights and freedoms that are not always readily apparent. Some advances in Artificial Intelligence (AI), bioengineering and neurotechnology even raise fundamental questions about what it means to be human, as technologies assume attributes or alter functions that were previously considered uniquely human. Some technologies may also be used in ways different to the original intent, increasing the complexity of how we respond.
Ethical frameworks that are consistent with international law including human rights
In response to these challenges, many stakeholders are looking to establish ethical frameworks to guide how technologies are designed, developed and used. These non-binding ethical frameworks can help ensure technologies are used to benefit people and societies, while mitigating potential risks from their use or deliberate misuse.
Australia believes existing international law, including international human rights law, must be the foundation for the ethical design, development and use of critical technologies. International human rights law has been developed from fundamental, universal principles on the value and dignity of human life. In addition to international human rights law, initiatives such as the United Nations (UN) Guiding Principles on Business and Human Rights provide a global standard and guidance to the private sector on how it can prevent and address the risk of adverse human rights impacts linked to business activity. Existing international law and principles therefore provide the most suitable basis on which to develop non-binding ethical principles and frameworks for critical technologies.
Australia opposes the development of new non-binding ethical principles or frameworks that are inconsistent with existing international law, including human rights. Such frameworks risk creating confusion and inconsistency regarding states' existing obligations under international law.
Engagement and capacity building on ethical frameworks
Australia will continue to engage in multilateral forums and processes, and with cross-regional groups, to shape the development of ethical frameworks that are consistent with international law and human rights, such as Australia's AI Ethics Framework, and to share best-practice approaches to their establishment. This includes working with countries in our region, through our Cyber and Critical Technology Cooperation Program, to promote the ethical design, development and use of critical technologies. Australia recognises the essential role of industry and civil society in shaping and developing ethical frameworks on critical technologies, and commits to strengthening our engagement and collaboration with them to achieve this goal.
GLOBAL PARTNERSHIP ON ARTIFICIAL INTELLIGENCE
Australia is a founding member of the Global Partnership on AI (GPAI). GPAI is an international and multi-stakeholder initiative to guide the responsible development and use of AI, grounded in human rights, inclusion, diversity, innovation and economic growth. The initiative is the first of its kind.
GPAI will support the responsible and human-centric development and use of AI in a manner consistent with human rights, fundamental freedoms, and shared democratic values, as elaborated in the Organisation for Economic Co-operation and Development (OECD) Council Recommendation on AI.
Founding members include Australia, Canada, the European Union, Germany, India, France, Italy, Japan, New Zealand, the Republic of Korea, Singapore, Slovenia, the United Kingdom and the United States.
AUSTRALIA'S AI ETHICS FRAMEWORK
The Australian Government's Department of Industry, Science, Energy and Resources AI Ethics Framework was released in November 2019 to help guide businesses and governments seeking to design, develop, deploy and operate AI in Australia.
The principles are aspirational and intended to provide organisations with a signpost as to how AI should be developed and used in Australia. The eight principles (see below) are voluntary and intended to complement existing AI-related regulations. These ethics principles build on the human rights framework.
- Human, social and environmental wellbeing
- Human-centred values
- Fairness
- Privacy protection and security
- Reliability and safety
- Transparency and explainability
- Contestability
- Accountability
We are working across government, industry and academia to ensure Australia's AI ethics principles are implemented effectively.