
Kenya's AI Bill Says You Must Be Told When You Are Talking to a Chatbot. Here Is What It Actually Proposes.

If you have ever chatted with a customer service bot on a Kenyan bank's website without knowing it was a machine, the Artificial Intelligence Bill 2026 would make that a compliance failure rather than a design choice. If someone has used AI to put your face in a video you never appeared in, the same bill would make them liable to a fine of up to Ksh 5 million, two years in prison, or both.

The bill, sponsored by Nominated Senator Karen Nyamu and currently before the Senate, is Kenya's first comprehensive attempt to regulate the entire lifecycle of AI systems. It is a wide-ranging proposal with ambitious institutional architecture, real penalties, and some provisions that will directly affect how Kenyan developers build products. Here is what it actually says.

The Disclosure Requirement: You Must Know When You Are Talking to AI

The bill introduces a risk-based classification system that sorts AI systems into tiers based on their potential for harm. At the lower end of the risk spectrum (basic chatbots, recommendation systems, content filters), the compliance obligation is primarily transparency.

The specific requirement: any chatbot or automated interaction system must disclose to the user that they are interacting with an AI, not a human. This is labelled a "limited risk" obligation under the bill's framework, meaning it applies to the widest class of AI systems and carries lighter regulatory weight than the requirements for higher-risk systems.

In practical terms, this means every Kenyan bank, telco, e-commerce platform, and government service that deploys a chatbot would be required to make the AI nature of the interaction clear at the point of contact. Not buried in a terms of service document. Not a small disclaimer at the bottom of the page. Clear disclosure at the point of interaction.
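As a rough illustration of what "disclosure at the point of interaction" could look like in code, here is a minimal sketch of a chat handler that surfaces an AI notice in the very first reply rather than in a linked terms-of-service page. All names here (`ChatSession`, `DISCLOSURE`) are illustrative assumptions; the bill does not prescribe any particular implementation.

```python
# Minimal sketch of point-of-interaction AI disclosure.
# The class and constant names are hypothetical, not from the bill.

DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human agent."
)

class ChatSession:
    def __init__(self):
        self.disclosed = False

    def respond(self, user_message: str) -> str:
        reply = self._generate_reply(user_message)
        if not self.disclosed:
            # Surface the disclosure in the very first reply,
            # at the point of contact, before any other content.
            self.disclosed = True
            return f"{DISCLOSURE}\n\n{reply}"
        return reply

    def _generate_reply(self, user_message: str) -> str:
        # Placeholder for the actual model or intent-routing call.
        return f"Thanks for your message: {user_message!r}"
```

The design choice worth noting: the disclosure is attached to the interaction itself, so a user who lands mid-flow from any entry point still sees it before the conversation proceeds.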

This is consistent with the EU AI Act (which the bill explicitly references as a model) and reflects a principle that consent to interact with AI requires knowing you are interacting with AI.

The Deepfake Provisions: Ksh 5 Million and Two Years

This is the most immediately consequential section of the bill for the general public, and the most directly personal for Senator Nyamu herself, who has publicly cited experiencing AI-generated harassment as a driver of the bill.

The provisions are direct. Anyone who generates or distributes AI-created content using another person's image, voice, or likeness without their consent (where that content results in misinformation, harm, defamation, or reputational damage) faces a fine of up to Ksh 5 million, up to two years in prison, or both.

Political deepfakes are explicitly addressed and carry the same liability. With Kenya's 2027 General Elections approaching, this provision targets a well-documented threat. Fabricated videos of candidates saying things they never said, AI-generated audio clips designed to spread false information during campaigns, manipulated images designed to damage reputations: all of these would attract criminal charges under the bill.

The disclosure requirement extends to AI-generated content more broadly. Any content produced by AI that resembles existing persons, places, or events must be clearly labelled as AI-generated. This applies to media companies, political campaigns, social media users, and anyone else producing and distributing synthetic content.

Technology providers that build tools capable of manipulating voices, images, or likenesses would be required to obtain clear consent from affected individuals before using their likeness. This creates an obligation not just on the distributor but on the tool builder.

The Rights Created for Citizens

Beyond the penalties, the bill creates a set of affirmative rights for Kenyans affected by AI decision-making.

Right to Explanation: If an AI system makes a decision that significantly affects you (a loan rejection, a job application screening, a medical assessment, a government benefit determination), you have the right to a plain-language explanation of how that decision was reached. Not "our algorithm determined," but a meaningful account of the factors involved.

Right to Human Review: If you are affected by an automated decision, you can request that a qualified human review it. The machine cannot be the last word.

Right to Challenge: You can dispute AI-driven outcomes and have your views heard before a decision is finalised.

These rights collectively address a gap that has been growing quietly as Kenyan banks, government agencies, and employers increasingly use automated systems for consequential decisions without any obligation to explain or review them.

The Institutional Architecture — and the Cost Question

The bill proposes three new government bodies:

The Office of the Artificial Intelligence Commissioner — the primary regulator, with powers to investigate complaints, audit AI systems, require algorithmic modifications, and impose penalties. The Commissioner would be appointed by the President with Parliamentary approval.

The Artificial Intelligence Authority — responsible for setting technical and ethical standards, running regulatory sandboxes where startups can test AI products under supervised conditions with relaxed compliance requirements, and developing Kenya's national AI strategy implementation.

The Artificial Intelligence Advisory Council — a consultative body of experts advising on emerging trends, global standards, and regulatory direction.

The three new institutions are where the bill draws serious criticism from analysts. Kenya already has the Office of the Data Protection Commissioner, the Communications Authority, and the Kenya Information and Communications Technology Authority, all of which have existing mandates that overlap with parts of what the AI Commissioner would do. The concern is not that AI regulation is unnecessary, but that stacking another institution on top of existing regulators risks incoherence, duplication, and a budget burden that Kenya's constrained public finances cannot sustainably absorb.

Business Daily's editorial board has called the bill "counterproductive," specifically on this institutional design question. The more targeted critique: fines of Ksh 5 million and criminal liability extending to company directors mean that a single enforcement action could derail a startup's fundraising round, in a sector where Kenya accounts for a small fraction of the global AI skill pool and cannot afford to scare developers away.

The Practical Problem for Kenyan Developers

The most technically honest criticism of the bill concerns its requirements around training data provenance.

Most Kenyan AI developers do not build foundation models from scratch. They take existing open-source models (Llama, Mistral, Gemma) trained by companies based in the US and Europe, and adapt them for local use cases: agricultural diagnosis tools, Swahili-language chatbots, fraud detection for mobile money, medical triage systems.

The bill requires that these developers produce audit trails of how the underlying model was trained, what data was used, and how specific decisions were reached. For a Kenyan developer who downloaded a Hugging Face model and fine-tuned it for a local use case, that information does not exist in a form they can produce. The training data decisions were made by researchers at Meta or Google, not by the Kenyan developer deploying the model.
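To make the gap concrete, here is a sketch of the provenance a downstream developer can realistically record when fine-tuning an open model: the base model's identifier and source, and the fine-tuning data they actually control. The upstream pretraining data remains a blank the developer cannot fill. The schema and field names are hypothetical illustrations, not anything the bill prescribes.

```python
# Sketch of a provenance record for a fine-tuned open model.
# Field names and example values are illustrative assumptions.

from dataclasses import dataclass, asdict
import json

@dataclass
class FineTuneProvenance:
    base_model: str            # e.g. a Hugging Face model identifier
    base_model_source: str     # where the weights were obtained
    finetune_dataset: str      # data the developer controls and can audit
    finetune_date: str
    # The one field the downstream developer cannot populate:
    upstream_training_data: str = "unknown (documented only by base-model vendor)"

record = FineTuneProvenance(
    base_model="meta-llama/Llama-3.1-8B",  # hypothetical example
    base_model_source="https://huggingface.co",
    finetune_dataset="internal Swahili support-ticket corpus",
    finetune_date="2026-03-01",
)

print(json.dumps(asdict(record), indent=2))
```

Everything above the last field is auditable by the developer; the last field is precisely the audit trail the bill demands and the developer cannot produce.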

Imposing criminal liability for failure to comply with a requirement that is technically impossible to meet for the most common class of AI deployment in Kenya is, as one analyst described it, a drafting error. It can be fixed (by scoping the audit trail requirement to models built rather than models adapted), but it needs to be fixed before the bill passes.

Why Now?

Two immediate pressures explain the timing.

The first is the 2027 election. AI-generated disinformation is not a hypothetical threat for Kenyan elections; it is a documented and growing one. The bill's deepfake and political synthetic media provisions address a concrete, near-term danger with a clear deadline.

The second is the February 6, 2026 High Court order. An urgent petition arguing that the absence of AI safeguards threatens fundamental rights (particularly privacy and equality) resulted in a judicial directive that pushed the legislature to accelerate action. Senator Nyamu's bill is a direct legislative response to that order.

The underlying concerns are legitimate. The question the Senate now has to answer is whether this particular institutional design, this penalty structure, and these compliance requirements are the right instruments for addressing them, or whether a more targeted approach, focused on the genuine harms, would protect Kenyans without inadvertently punishing the developers who are building Kenya's AI future.

The Artificial Intelligence Bill 2026 is currently before the Senate. Public participation opportunities, when announced, will be the moment for the developer community and civil society to shape what the final legislation actually looks like.
