Ethics

This section brings together key ethical protocols, guidelines, and rules for the use of artificial intelligence tools, developed by governmental and educational institutions, think tanks, and technology companies.

Google — Model Cards / AI Principles (corporate practices)

Google Model Cards are a set of structured documents providing a clear overview of how a particular AI algorithm or model was developed and evaluated. They include information on the model’s purpose, version, architecture, training data, performance metrics, and limitations, as well as the conditions under which the model performs better or worse. The primary goal of these cards is to ensure transparency, accountability, and informed understanding among users, developers, and regulators regarding the strengths, weaknesses, possible risks, and appropriate contexts for the model’s use.

Corporate
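To make the structure of such a card concrete, the sketch below models its main fields as a small Python data class. Every name and value here (ModelCard, purpose, the toy classifier and its metrics) is a hypothetical illustration based on the description above, not Google's actual Model Card schema or tooling:

```python
from dataclasses import dataclass, field

# Hypothetical model-card structure; the field names loosely mirror the
# elements described above and are NOT Google's actual schema.
@dataclass
class ModelCard:
    name: str
    version: str
    purpose: str                 # intended use of the model
    architecture: str            # e.g. "logistic regression", "transformer"
    training_data: str           # provenance of the training set
    performance: dict = field(default_factory=dict)   # metric name -> score
    limitations: list = field(default_factory=list)   # known failure modes

# Example card for an imaginary toy model.
card = ModelCard(
    name="toy-sentiment-classifier",
    version="1.0",
    purpose="Classify short English product reviews as positive or negative.",
    architecture="Logistic regression over bag-of-words features.",
    training_data="50k anonymized product reviews (fictional example).",
    performance={"accuracy": 0.91, "f1": 0.89},
    limitations=["Weaker on reviews under five words", "English only"],
)
print(card.limitations)
```

The point of the format is exactly what a reader of the card sees here: the conditions of use, provenance, scores, and known weaknesses travel together with the model rather than being scattered across internal documents.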

Anthropic — Usage policy

The document from Anthropic — the U.S. artificial intelligence company that developed Claude — outlines the permitted and prohibited uses of the model. Anthropic forbids the use of its service for political campaigns, lobbying, election interference, disinformation, weapons development, malicious cyber operations, and the generation of violent, fraudulent, or otherwise harmful content.

Corporate

OpenAI — Usage & safety policies (product policies)

A set of rules defining how OpenAI’s products may and may not be used, aimed at preventing misuse.

Corporate

The University of Alabama — Guidelines for Appropriate Use of AI Generated Media

The University of Alabama’s policy addresses the ethical and safe use of AI in communications and media projects.

Corporate

"PRSA — The Ethical Use of AI For Public Relations Practitioners"

The document developed by the Public Relations Society of America (PRSA) serves as a practical adaptation of the organization’s professional Code of Ethics. It explains how traditional PR ethics principles apply to new AI tools such as ChatGPT and analytical platforms. PRSA offers practical guidance for various AI-use scenarios and emphasizes that PR professionals should act as the “ethical conscience” of their organizations during the adoption of AI technologies.

International

Ofcom’s strategic approach to AI 2024/25

The document by Ofcom — the UK regulator for telecommunications, online safety, broadcasting, and other communication services — discusses the challenges and opportunities associated with artificial intelligence. Its main goal is to leverage the benefits of AI while mitigating risks, in line with the UK government’s innovation-friendly approach and the five AI regulatory principles: safety, transparency, fairness, accountability, and the ability to challenge decisions.

International

Artificial Intelligence Usage Policy on the Vchysia.Media Website and Social Media Platforms

The policy of Vchysia.Media explains how the outlet uses AI for audio transcription, subtitle creation, and text translation. According to its rules, no editorial material may be generated by AI.

Corporate

Artificial Intelligence (AI) Usage Policy on the Online.ua Website and Related Services

The AI-use policy of Online.ua regulates the use of artificial intelligence for video transcription, partial text rewriting, and the creation of headlines, descriptions, and SEO blocks. The editorial team states that it adheres to the recommendations of the Ministry of Digital Transformation of Ukraine and to journalistic ethics principles.

Corporate

Recommendations of the Commission on Journalism Ethics Regarding the Use of Artificial Intelligence

The recommendations of the Commission on Journalism Ethics on the use of artificial intelligence in the production of journalistic materials, published on October 31, 2023, provide Ukrainian media with guidelines for the ethical application of generative AI models such as ChatGPT, Bard, and Bing. The Commission emphasizes the need to integrate AI-use policies into editorial practice and stresses that responsibility for published materials lies with the author and the editorial team, even when AI is used at the preparation stage. Special attention is given to the limitations of AI, including its lack of critical thinking, inability to verify facts, and potential for bias. The recommendations also call on media outlets to inform their audiences about AI involvement in content creation, for instance through appropriate disclaimers, and urge caution when using AI to generate images, headlines, or videos, in order to avoid manipulation and maintain journalistic standards.

National

CAI — HUDERIA methodology

The document from the Council of Europe’s Committee on Artificial Intelligence (CAI) describes the HUDERIA methodology (Human Rights, Democracy and Rule of Law Impact Assessment), which is used to assess the risks and impacts of AI systems in terms of human rights, democracy, and the rule of law.

International

BBC — Corporate Policy

The BBC’s corporate policy defines three key principles for working with artificial intelligence: acting in the public interest, being open and transparent, and recognizing the importance and uniqueness of the human factor. It also addresses the organization’s approach to risk management.

Corporate

AI Usage Policy on the Channel 24 Website

The AI-use policy of Channel 24 describes how AI is employed for news voiceovers, video transcription, headline generation, SEO blocks, and the creation of entertainment content.

Corporate