Ethics
This section brings together key ethical protocols, guidelines, and rules on the use of artificial intelligence tools, developed by governmental and educational institutions, think tanks, and technology companies.
How Ukrainska Pravda Will Use Artificial Intelligence
This editorial document outlines how the Ukrainska Pravda newsroom is exploring the potential of generative AI tools, including text, image, and voice generators, to expand its editorial capabilities.
Thomson Reuters Foundation — Three steps to an AI-ready newsroom: A practical guide
This practical guide helps newsrooms develop their own AI-use policies, providing templates and recommendations for creating internal guidelines and emphasizing that such policies must be adapted to each organization's specific context. It highlights the need to balance innovation with journalistic standards and offers step-by-step instructions suitable for newsrooms of various sizes.
Council of Europe — Guidelines on the responsible implementation of artificial intelligence systems in journalism
These official guidelines, adopted by the Council of Europe's Steering Committee on Media and Information Society (CDMSI), establish standards for the responsible implementation of AI systems in journalism. They aim to ensure that AI use aligns with human rights, democratic principles, and freedom of expression as set out in Article 10 of the European Convention on Human Rights.
Paris Charter on AI and Journalism
The Paris Charter, initiated by Reporters Without Borders (RSF) and adopted by members of the media community in November 2023, sets out ten fundamental principles, from prioritizing journalistic ethics and human agency to ensuring transparency in the use of AI systems and a clear distinction between authentic and synthetic content. The authors emphasize that media organizations bear full responsibility for all published content, regardless of whether AI was used. The Charter also addresses economic safeguards, fair compensation for the use of journalistic content in training AI models, and transparency regarding the origin of data used in algorithms.
From Policy To Practice: Responsible media AI implementation
This article, published by Digital Content Next, describes the steps media organizations can take to introduce artificial intelligence ethically. It stresses the need for clear AI-use policies that address bias mitigation, risk management, legal compliance, and long-term governance. The piece draws on the experiences of The New York Times, Financial Times, BBC, USA Today, The Guardian, NPR, Radio-Canada, and Graham Media Group. Key elements include transparency in AI use, human supervision, staff training, and regular risk evaluations. These practices help minimize risks, protect privacy, and preserve editorial control, thereby fostering audience trust and reinforcing brand reputation.
Recommendations for the Responsible Use of Artificial Intelligence in the Media Sector
The document titled “How to Responsibly Use Artificial Intelligence: Guidelines Developed for Media”, prepared by the Ministry of Digital Transformation of Ukraine, provides media organizations with practical advice on how to apply AI ethically and safely at every stage of media production — from data collection to content creation and distribution. It highlights the importance of transparency, accountable editorial decision-making, risk assessment, data protection, and clear labeling of AI-generated content, as well as ensuring human oversight. The document also recommends the development of internal policies and staff training to uphold high ethical standards, protect human rights, and strengthen audience trust in the context of ongoing digital transformation.
This platform was created as part of the project “Strengthening Independent Media for a Strong Democratic Ukraine”, implemented by DW Akademie in cooperation with Lviv Media Forum and Ukraine’s public broadcaster Suspilne, and funded by the European Union. The project is also supported by IMS (International Media Support).