Ethics
This section brings together key ethical frameworks, guidelines, and rules on the use of artificial intelligence tools, developed by governmental and educational institutions, think tanks, and technology companies.
EU — Artificial Intelligence Act (Regulation (EU) 2024/1689)
Regulation (EU) 2024/1689, known as the AI Act, is the world’s first comprehensive law on artificial intelligence, adopted by the European Union in 2024. It establishes a unified legal framework for the development, deployment, and use of AI systems both within the EU and beyond its borders whenever such systems are placed on the European market. The Regulation introduces a risk-based approach, classifying AI systems by their potential for harm: prohibited practices, high-risk systems, limited-risk systems subject to transparency obligations, and minimal-risk applications. It also integrates AI requirements into existing European safety and human rights standards, emphasizing transparency, reliability, and the ethical use of technology.
G7 Hiroshima AI Principles
The document “The Hiroshima AI Process: Leading the Global Challenge to Shape Inclusive Governance for Generative AI” describes an initiative launched under Japan’s G7 presidency in May 2023. The process aims to promote the development of safe, secure, and trustworthy artificial intelligence by establishing international standards and guiding principles. In December 2023, the G7 adopted the Hiroshima AI Process Comprehensive Policy Framework, the first international policy framework to include the International Guiding Principles and the International Code of Conduct for organizations developing advanced AI systems. These documents address all stages of the AI lifecycle, from design to deployment, and emphasize transparency, accountability, and ethics in the application of AI technologies.
EU — Ethics Guidelines for Trustworthy AI (AI HLEG, 2019)
The ethical standard developed by the EU’s High-Level Expert Group on Artificial Intelligence, presented in April 2019, sets out a framework for creating and using reliable, safe, and ethical AI systems. It identifies three core components of trustworthy AI: it should be lawful, ethical, and robust from both a technical and a social perspective.
OECD — AI Principles (Recommendation on AI), Organisation for Economic Co-operation and Development
The first intergovernmental standard on artificial intelligence, adopted in 2019 and updated in 2024, establishes a framework for reliable, innovative, and human-centered AI. The document sets out five key principles: inclusive growth and well-being; respect for human rights and democratic values; transparency and explainability; robustness, security, and safety; and accountability. It also provides practical recommendations to help policymakers design effective, ethical AI policies and trustworthy systems.
Universal Guidelines for AI
The “Universal Guidelines for Artificial Intelligence”, adopted in 2018 and promoted by the Center for AI and Digital Policy (CAIDP), present a set of ethical principles aimed at ensuring transparency, accountability, and fairness in the development and use of AI. The document covers the right to transparency, the right to a human determination, the right to non-discrimination, the right to safety, and the right to accountability. The guidelines respond to global AI challenges and risks and promote the ethical development of technology. Since their adoption, they have served as a foundation for many international AI initiatives and policies.
Montréal Declaration for Responsible AI
The Montreal Declaration for a Responsible Development of Artificial Intelligence, created with the participation of academia, civil society, business, and government institutions, outlines ethical principles for the development and use of AI. Its goal is to guide technological change so that it serves human and social well-being, remaining fair, transparent, and democratic. The declaration is intended as a reference point for developers, organizations, and companies working with AI, helping them make responsible decisions and avoid risks associated with uncontrolled or harmful uses of technology.
UNESCO — Recommendation on the Ethics of Artificial Intelligence (2021)
The first global ethical standard on AI, developed for UNESCO member states, covers accountability, openness, human rights protection, and responsibility. States are urged to develop national policies and regulations consistent with these principles.
Toronto Declaration (Amnesty International & Access Now) — Human Rights and Non-Discrimination
The international document adopted in May 2018 at the initiative of Amnesty International and Access Now focuses on the protection of human rights in the age of artificial intelligence. The authors emphasize equality and non-discrimination, pointing to the risks of algorithmic bias in justice, healthcare, education, and employment. The declaration calls on governments and the private sector to integrate international human rights standards into the design and use of machine learning systems, ensuring transparency, accountability, and effective remedies for harm.
Use of Artificial Intelligence Systems in Line with Human Rights
The guide “Human-Rights-Compliant Use of Artificial Intelligence Systems: Toolkit for Civil Society”, published by the Digital Security Lab, provides detailed recommendations for civil society organizations, journalists, and human rights defenders on the ethical and safe use of AI. It addresses issues such as human rights impacts, personal data protection, prevention of algorithmic bias, freedom of expression, respect for intellectual property rights, and the importance of human oversight.
IBM — Principles for Trust & Transparency; governance resources
IBM (International Business Machines), one of the world’s largest hardware and software companies, offers AI governance materials that explain how the company ensures algorithmic transparency, reduces bias, and builds systems that remain accountable to humans.
This platform was created as part of the project “Strengthening Independent Media for a Strong Democratic Ukraine”, implemented by DW Akademie in cooperation with Lviv Media Forum and Ukraine’s public broadcaster Suspilne, and funded by the European Union. The project is also supported by IMS (International Media Support).