Knowledge Base
This section offers a curated selection of up-to-date materials and research on the use of artificial intelligence in the media and beyond.
New York Times goes all-in on internal AI tools
The New York Times article discusses the implementation of internal AI tools for editorial and product teams. The publication developed its own summarization tool called Echo and approved the use of GitHub Copilot, Google Vertex AI, NotebookLM, Amazon AI, and the OpenAI API. Journalists can use AI to generate SEO headlines, social media posts, interview questions, and promotional materials, as well as to analyze documents.
Reporting on Armed Conflicts in the Digital Age: How Journalists Use AI, OSINT and Innovative Tools
Ukrainian media professional Kateryna Teslenko describes digital tools for covering armed conflicts. These technologies have become crucial for maintaining the authenticity of materials and protecting journalists and their sources in the era of hybrid wars.
The EU’s new AI code of practice has its critics but will be valuable for global governance
The Chatham House material analyzes the EU General-Purpose AI Code of Practice, developed within the framework of the EU AI Act, the world's most comprehensive AI regulation. The Code provides non-binding recommendations for companies such as OpenAI and DeepMind on transparency, copyright, and safety. However, its drafting process sparked criticism: civil society accused the drafters of granting privileged access to American tech giants, while U.S. Vice President J. D. Vance called EU regulation "innovation-suffocating." The authors argue that despite these controversies and the global trend toward lighter regulation, EU rules will exert significant worldwide influence and offer valuable lessons for global AI governance.
10 AI dangers and risks and how to manage them
The IBM material analyzes ten major risks of artificial intelligence and strategies to manage them. The author explains that while AI provides immense value in fields such as healthcare and environmental protection, it also poses potential threats: algorithmic bias, cybersecurity risks, privacy violations, environmental harm, job losses, intellectual property issues, lack of transparency, lack of accountability, and the spread of disinformation. Each risk is paired with practical recommendations for mitigation, including ethical practices and employee training.
Challenging Systematic Prejudices
The research by the International Research Centre on Artificial Intelligence under the auspices of UNESCO focuses on the issue of stereotyping within large language models such as GPT-3.5 and GPT-2 by OpenAI, and Llama 2 by Meta. The authors provide evidence of bias against women in the content generated by each of these LLMs.
How Generative AI Is Changing Creative Work
The Harvard Business Review material examines how generative AI is transforming creative work in business. Part of the text was written by GPT-3 itself to demonstrate the technology’s capabilities. The authors explain that while large language models require millions of dollars to train, humans remain essential for crafting effective prompts and editing outputs. The article highlights real-world uses in marketing, programming, chatbots, and corporate knowledge management. However, challenges arise as deepfakes become accessible to everyone. It remains unclear who holds the copyright for AI-generated content, and the systems tend to reproduce the biases embedded in the data they were trained on.
The Politics of Using AI in Public Policy: Experimental Evidence
The scientific article presents an experiment exploring how people form opinions about using AI in government decision-making. More than 1,500 Americans completed paid tasks on an online platform, where they were randomly assigned to work under either an algorithmic or human supervisor. Researchers then tracked changes in participants’ views. An interesting pattern was discovered: while personal experience working under an AI boss affected people’s behavior at work, it did not change their opinions about whether AI should be used in public policy. Instead, participants shifted their views when they received new information about the benefits or risks of artificial intelligence — even those who were initially skeptical became more supportive after being exposed to positive information.
AI in Business Efficiency
The report by McKinsey, a global management consulting firm, examines the impact of AI on business process efficiency. It describes how companies use algorithms to automate routine tasks, analyze data, forecast sales, and optimize resources. The authors emphasize the strategic importance of AI for competitiveness and long-term business development.
What is AI ethics?
The SAP article discusses the concept of AI ethics. The key principles of ethical AI include transparency, fairness, privacy protection, accountability, and reliability. SAP recommends implementing ethical policies, ensuring diversity in development teams, conducting regular system audits, training staff, and maintaining transparent communication with users.
AI Incident Database (AIID) — a collection of AI-related incidents
An open database that collects cases of improper or dangerous AI applications worldwide. It is used for educational purposes to help prevent the repetition of past mistakes.
AI for Good
The UN initiative dedicated to using artificial intelligence for social good. The platform compiles examples of AI applications in healthcare, climate change mitigation, education, and humanitarian projects. It is an international knowledge base that demonstrates how technology can serve not only business interests but also the global goals of sustainable development.
This platform was created as part of the project "Strengthening Independent Media for a Strong Democratic Ukraine", implemented by DW Akademie in cooperation with Lviv Media Forum and Ukraine's public broadcaster Suspilne, and funded by the European Union. The project is also supported by IMS (International Media Support).