Ethics

This section brings together key ethical protocols, guidelines, and rules governing the use of artificial intelligence tools, developed by governmental and educational institutions, think tanks, and technology companies.

OECD.AI Policy Observatory

The OECD.AI Policy Observatory is a global database that systematizes AI-related policies, research, and best practices across OECD and non-OECD countries. The platform provides comparative analytics, implementation metrics, examples of regulatory approaches, and research projects. Because it is continuously updated, it is one of the most comprehensive and current resources for governments and researchers tracking how countries are integrating principles of responsible AI into practice.

International
read more

UNESCO AI and Education

The “Recommendation on the Ethics of Artificial Intelligence”, adopted by UNESCO in November 2021, is the first global standard-setting instrument proposing a globally aligned ethical framework for AI development and use. It identifies seven key principles: respect for human dignity, human rights, and the rule of law; transparency and accountability; non-discrimination and equality; human and planetary well-being; safety and security; privacy and data protection; and inclusiveness and accessibility. The document calls on member states to develop national strategies, integrate ethics into education, and ensure public participation in AI-related decision-making.

International
read more

NIST — Generative AI Profile (GenAI companion)

A NIST companion resource to the AI Risk Management Framework that details risks specific to generative AI and offers practical steps to address them, making it especially valuable for organizations working with generative artificial intelligence.

International
read more

NIST — AI Risk Management Framework (AI RMF 1.0)

The voluntary AI Risk Management Framework (AI RMF), published by the U.S. National Institute of Standards and Technology (NIST) in 2023, provides organizations of all sizes and sectors with practical tools for assessing, measuring, and mitigating AI-related risks throughout the entire lifecycle of AI systems. The document combines a description of the characteristics of trustworthy AI with a functional approach to risk management covering the governance, mapping, measurement, and management of risks. The AI RMF is positioned as a flexible, adaptive resource that helps ensure ethical, transparent, and safe AI deployment.

International
read more

Digital Development Strategy 2024–2030

The “Digital Development Strategy 2024–2030”, published by the UK’s Foreign, Commonwealth and Development Office, outlines how the United Kingdom plans to support digital development in partner countries through 2030. Its goal is to make digital transformation inclusive, responsible, and sustainable.

National
read more

OECD — Roadmap for Regulation of AI in Ukraine

The “Roadmap for the Regulation of Artificial Intelligence in Ukraine” is a national policy initiative, listed in the EC-OECD STIP Compass, that charts how artificial intelligence will be regulated in Ukraine. Its goal is to introduce legal frameworks that ensure the ethical and responsible development of AI while addressing governance, ethical, technological, and social challenges. The strategy includes an analysis of the current situation, identification of priorities (especially in the domains of human rights, safety, transparency, and accountability), stakeholder engagement, and a phased introduction of rules and standards.

National
read more

National Strategy for Development of AI in Ukraine (2021–2030)

Ukraine's official strategy defining key directions for the development of artificial intelligence in education, the economy, science, and governance.

National
read more

White Paper on AI Regulation in Ukraine — Ministry of Digital Transformation

An analytical paper that describes the current state of AI development in Ukraine and offers recommendations for its regulation.

National
read more

US federal AI governance overview (IAPP article)

An overview of U.S. AI policy at the federal level, summarizing legislative initiatives, White House strategies, and agency-specific approaches.

International
read more

U.S. Intelligence Community — AI Ethics Framework

A framework from the U.S. Intelligence Community outlining ethical principles and security rules for the use of AI in intelligence operations.

International
read more

Council of Europe CAHAI

The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, the first legally binding international treaty on AI, covers the entire AI lifecycle, from design to application. It aims to ensure that AI systems comply with international standards of human rights, democracy, and the rule of law. The convention requires mandatory risk and impact assessments against these principles and allows for moratoria or bans on AI applications that may pose threats. It also sets requirements for transparency, accountability, non-discrimination, privacy protection, and effective legal remedies for individuals whose rights may be violated by the use of AI. The convention does not apply to matters of national security or defense, except in cases where AI testing could affect human rights or democratic processes.

International
read more

Australia — AI Ethics Principles & National framework for assurance in government

The Artificial Intelligence Ethics Principles, developed by the Government of Australia, are designed to ensure the safe, reliable, and ethical use of AI in business, the public sector, and society. The principles are advisory and aim to help organizations assess the consequences of implementing AI systems, particularly when these may affect people, communities, or the environment. The companion National Framework for the Assurance of Artificial Intelligence in Government applies these principles to the assurance of AI use across Australian governments.

National
read more