By George Ogola, Reader in Journalism, University of Central Lancashire

“Shadowy groups” are using coordinated and elaborate tactics to spread disinformation on social media. Sergey Nivens/Shutterstock

Kenya has a particularly vocal political public on Twitter. Commonly known as KOT (Kenyans on Twitter), they have made the platform integral to political conversations in the country. These vocal digital actors have the power to shape debates offline as well as on other media platforms.

However, a recent report on Twitter use in Kenya revealed its darker side. Hidden from view are “shadowy groups” using elaborate tactics to spread disinformation and to discredit certain groups and individuals. Those focusing on political issues are particular targets.

The report was published by the Mozilla Foundation, a non-profit which works to ensure the internet remains a public resource that is open and accessible to all.

Prepared with data from May and June 2021, it revealed at least 11 disinformation campaigns consisting of more than 23,000 tweets and 3,700 participating accounts. These groups used bots – automated software that runs repetitive tasks online. They also employed “sock puppet” accounts: fake accounts created mainly to produce malicious content, generate fake engagement and ultimately hijack Twitter’s own trending algorithm.

To give their campaigns legitimacy, the groups were also found to be using well-known “influencers” to promote their messages, often as part of coordinated attacks on targeted individuals or campaigns.

The report also identified hashtags sponsored or paid for by Kenya’s key political players in an effort to control the national political narrative through various disinformation practices.

Twitter took action against more than 100 accounts operating in the country, which it had reportedly found engaging in what it called “violations of its platform manipulation and spam policy”.

Worryingly, these accounts were targeting not only campaigns but also individuals, many now fearful of getting involved in online debates. The report noted that members of the Linda Katiba movement (a pro-democracy activist group) and members of the Kenyan judiciary were the subjects of most of the attacks.

The hashtag #wakorajudges (wakora is a Kiswahili word loosely translated as “errant”), for example, was used to direct criticism and abuse towards judges perceived to have ruled against the government. One such ruling was the Court of Appeal decision upholding an earlier High Court judgement which declared illegal a government attempt to amend the constitution.


Read more: How authoritarian rulers manage their international image


Social media give ordinary people the freedom, however nominal, to express themselves and exercise their agency. These platforms are particularly popular in tightly controlled communication environments because of their capacity to subvert the state’s repressive legislation or its control of traditional media.

In addition, traditional newsroom cultures mean that editors and other actors – such as advertisers and the state – can both directly and indirectly determine who and what is heard. Social media generally undermine these forms of news “gatekeeping”.

Social media users in Africa have therefore created important pockets of “freedom” in these digital spaces. As a result, they encourage much-needed political “indiscipline” by legitimising and popularising alternative political discourses.

The invasion and weaponisation of the digital public sphere by well-resourced state operatives and various shadowy groups is therefore worrying. The interests, incentives and intentions of such groups may be varied. But it is clear they are increasingly undermining the potential of these platforms to facilitate open and reasoned debate.

Subtle suppression

Authoritarian governments have traditionally used internet shutdowns and throttling to suppress dissent. Though these methods are still practised, they now draw unwanted international attention to repressive states and governments. Countries such as Uganda, Zambia, Zimbabwe and Tanzania have in the past attracted widespread condemnation for such practices.


Read more: Shutting down the internet doesn’t work — but governments keep doing it


For this reason, many such governments are starting to adopt much more subtle and sophisticated ways of controlling or shaping political agendas and discourses.

For instance, in Tanzania, copyright laws are subtly being used to silence activists online. Activists have been subject to suspected state intimidation through the exploitation of the controversial US Digital Millennium Copyright Act.

The act sets out how online platforms should handle copyright complaints, including notices alleging infringement by content posted on their services.

Last October, Twitter suspended the account of a Tanzanian activist who tweets under the handle @Kigogo2014, after allegedly receiving “more than 300” complaints that the account had breached its copyright policy.

In an interview with the BBC, @Kigogo2014 said that more than 1,000 tweets from his account were copied and used to set up three websites. The complainants then used those websites to allege breach of copyright rules. To get the account reactivated, Twitter asked him to provide his personal details, which in effect would have unmasked his identity to the authorities.

Cases of misusing the Digital Millennium Copyright Act to harass activists and journalists have also been reported in Nigeria.

What can be done to protect this space?

While civil society activism increasingly relies on the political potential of social media, its use of these platforms is now under threat. Effective social media use is determined not only by access but also by the capacity to understand how these platforms operate. Resources matter, and so does digital literacy.

The public must possess the literacy to distinguish organic conversations from sponsored and manipulated content. Without it, they will remain vulnerable to misinformation. There is also a risk that these spaces will be dominated by the powerful and their interests, permanently deferring the platforms’ egalitarian promise as genuine deliberative spaces.

Activists and pro-democracy groups need to re-imagine the future of activism in a digital world in which they do not control the infrastructures of communication. They must continue to lobby for transparency and for public digital literacy campaigns, and to stay at the centre of debates on any legislation that seeks to vest even more power in either the big tech companies or the state, with whom they often quietly work.

George Ogola, Reader in Journalism, University of Central Lancashire

This article is republished from The Conversation under a Creative Commons license. Read the original article.
