Artificial Intelligence: it’s the buzzword of the year, and with good reason. 

The smart technology is being incorporated into nearly every industry across the world. It’s commonly used to streamline operations, increase productivity and save time – whether in logistics, manufacturing, medical research, financial markets or journalism. There is no doubt that more industries are making space for AI in their operations. 

Matt Brittin, president of Google Africa, Middle East and Europe, was quoted as saying: “AI is already a key part of many of our lives − in fact, if you use Google tools regularly, you’re probably using AI without even realising: it’s what helps Maps give you the fastest or most fuel-efficient route, or Search to find what you’re looking for.” 

“We’re continuing to pursue AI boldly and responsibly − creating tools that improve the lives of as many people as possible,” he said. 

This is a good thing for innovation and progress, but as the technology advances, we must ensure that we are using it ethically and responsibly. 

Responsible AI is an area of AI governance which ensures that the technology is developed ethically and legally – and that its use is safe and trustworthy. According to TechTarget, using AI responsibly increases transparency and reduces bias and prejudice, while also ensuring the fairness and credibility of AI applications. 

frayintermedia is working with the Canadian government on a project called AI4D, which focuses on responsible AI in Africa. Through partnerships with Africa’s science and policy communities, it works on leveraging AI through high-quality research, responsible innovation and talent strengthening to improve the quality of life for Africans. AI4D works across sectors, from agriculture to climate change to gender equality and inclusion, among others. 

Watch: What is responsible AI? 

According to research by AI Media Africa, AI and related automation technologies are currently impacting more than 120 traditional industries globally, rapidly creating new opportunities and challenges. 

AI Media Africa was co-founded by Dr Nick Bradshaw, who also established the recently launched South African Artificial Intelligence Association (SAAIA) – an industry body focused on promoting responsible AI in South Africa. 

SAAIA was launched in June and is backed by a number of organisations, businesses, and government and academic institutions to drive responsible AI in South Africa. It seeks to encourage stakeholders across different sectors and fields to adopt responsible AI for the commercial and societal benefit of South African citizens. Its primary focus areas are economic growth, trade, investment, equality and inclusivity. 

Some of its tasks include providing analysis and research to inform strategy and decision-making; helping national, provincial and city governments develop policies; helping African smart-tech companies find markets abroad; growing the local economy; promoting debate on inclusion, ethics, regulation and standards; and sharing best practice and education for all. 

Read: How newsrooms are using AI

Read: An ethical checklist for robot journalism

Since the association is fairly new, a proper set of ethical guidelines has not yet been published. However, where the media is concerned, we can take an educated guess at what they might look like, thanks to Bavarian Broadcasting – Bavaria’s public broadcasting service – which has published its own guidelines on what responsible AI looks like in the media: 

  • User benefit: AI is used to make efficient use of entrusted resources, generate new content and enhance the user experience. 
  • Transparency and discourse: Engaging in open discussions about AI’s societal impact and emerging trends, explaining how the technologies work, and strengthening open debate on the future role of public service media in a data society. 
  • Diversity & regional focus: Promoting diversity in AI projects and collaborating with regional partners to support the media industry and academia.
  • Conscious data culture: Prioritising data integrity, security and user data sovereignty.
  • Responsible personalisation: Personalisation enhances the value of media services – but it should always preserve societal diversity and guard against unintended effects. Data-driven analytics can inform editorial decision-making and support collaboration with other media services. 
  • Editorial control: While embracing automation, editorial responsibility remains paramount, with thorough integrity checks on content and sources.  
  • Agile learning: Continuous improvement through pilot projects, user feedback, and experimentation with strict guidelines.
  • Partnerships: Collaborating with universities, research institutions, and experts to foster AI development.
  • Talent and skill acquisition: Proactive recruitment of diverse talent with cutting-edge AI skills for responsible journalism.
  • Interdisciplinary reflection: Early integration of ethical considerations and regular evaluations to avoid wasteful projects.

The media, as we know, has a responsibility to be balanced, unbiased, credible, factually correct, fair, inclusive and transparent. Incorporating responsible AI policies in newsrooms will not only strengthen the quality of journalism but also improve our outputs. 

Also see: AI Journalism Starter Pack by JournalismAI