
When AI becomes a weapon: what African governments must do to protect women online.


Prudence Chepngeno


Across sub-Saharan Africa, artificial intelligence tools are being used to silence, shame, and threaten women in public life. Governments in the region have laws on the books, but almost none of them were built for this moment. The question is no longer whether AI can harm women. It already is. The question is whether African digital governance is serious enough to stop it.

Imagine you are a woman running for office. You wake up to find a video circulating on WhatsApp showing your face on someone else’s body in a compromising situation. The video is fake, created by an AI tool anyone can access for free online. By the time it is debunked, thousands of people in your constituency have already seen it. When you go to report it, the police have no idea what law applies.

This is not a hypothetical. A 2025 report called Digital Shadows, by the Tanda Community Network, documented these attacks on women politicians and journalists in Ghana, Namibia, and Senegal. The researchers spoke to women who had been targeted with AI-generated fake images and videos designed to destroy their reputations.

In Ghana’s 2024 elections, Professor Naana Jane Opoku-Agyemang, who became the country’s first female Vice President, faced fabricated narratives designed to portray her as unfit for leadership. In Namibia, an AI-generated image falsely showing presidential candidate Netumbo Nandi-Ndaitwah collapsing at a rally was spread to make her appear physically weak. The question worth sitting with is this: how many women decided not to run at all after seeing what happened to their colleagues? Research by Sensity AI found that 96 percent of deepfakes online are non-consensual sexual content, and 99 percent of those target women.

The laws we have were not built for this

Sub-Saharan Africa is not without laws. Kenya has a Data Protection Act, a Computer Misuse and Cybercrimes Act, and a National Artificial Intelligence Strategy. South Africa has the Protection of Personal Information Act (POPIA). Nigeria passed its Data Protection Act in 2023. Rwanda has an AI policy. At the regional level, the Maputo Protocol, adopted by the African Union in 2003, commits governments to protect women from violence. The ECOWAS Cybercrime Directive, adopted in 2011, requires West African countries to criminalize computer-based offences.

The problem is that none of these frameworks were designed with AI in mind. The Maputo Protocol was written before most Africans had internet access. The ECOWAS Cybercrime Directive was built to address email fraud and hacking, not AI-generated fake images. Kenya’s Cybercrimes Act does not specifically cover deepfakes.

Kenya is the closest to a breakthrough. The Artificial Intelligence Bill 2026, introduced in the Senate in February 2026, would make Kenya the first country in the region with a standalone AI law. The bill specifically criminalizes harmful deepfakes and proposes the creation of an AI Commissioner to oversee how AI is used in the country.

The gap between what the law says and what women actually experience online is not a small one. It is structural. Governance was built for an analogue world, patched for the internet, and has not been rebuilt for artificial intelligence. Women are paying the price of that delay with their safety, their reputations, and their decision to stay in or leave public life.

This is a democracy problem, not just a women’s problem

When women are driven out of public life by AI-enabled harassment, it is not only a violation of their rights. It is a democratic crisis. Democracy requires that all citizens can participate equally in political life. When AI tools systematically target and silence women, the voices shaping laws, budgets, and national priorities become less diverse. Decisions get made without the perspectives of half the population.

This is what I call the legitimacy test. A government’s digital governance framework is only as legitimate as its ability to protect all of its people online. If AI-facilitated harm against women flourishes without consequence, the framework has failed that test. It does not matter how sophisticated the country’s technology strategy is, or how many AI hubs it has built, or how many innovation awards it has won. If women cannot move safely through digital spaces, the governance is not fit for purpose.

What needs to change?

The solutions are not complicated in concept, even if they require political will to implement. Governments across sub-Saharan Africa need to update their laws to explicitly cover AI-generated harm, including fake intimate images and AI-powered surveillance used by abusive partners.

Institutions also need to catch up. Police units need training in digital evidence. Data protection offices need the technical tools to investigate AI complaints. Social media companies operating in Africa need to be held accountable for how their platforms work in African languages, not just English. Research has shown that AI content moderation routinely fails to detect harassment in Swahili, Hausa, and Amharic, leaving women exposed to coordinated abuse that the platform cannot even see.

Most importantly, the women most affected by these harms need seats at the table where decisions are made. African feminist organizations, women journalists, and women in politics who have experienced AI-enabled abuse are not peripheral voices in this conversation. They are the essential ones. Any AI governance framework designed without them will have gaps that no technical expert can fill.

Strengthening legal protection systems for women and girls in the age of artificial intelligence is not a niche issue for gender specialists. It is the test of whether African digital governance is serious about the rights of all its people. That test is happening right now. The results are not yet written.

Prudence Chepngeno is a conflict, security and digital peacebuilding practitioner. She works at the intersection of artificial intelligence, conflict analysis and governance, with a focus on building frameworks that center gender in peacebuilding and security processes.

Disclaimer

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of Real Life Research Institute or its Board of Directors.
