Nairobi Is Watching Your Bedroom. And Meta Knew.

The global tech press is framing this story as a privacy scandal for Meta Ray-Ban wearers in Europe and America. They are not wrong. But they are missing the other half of the story, the half that happens in Nairobi.

A joint investigation published February 27 by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten has confirmed what should alarm anyone building Kenya's AI economy: a Nairobi-based company called Sama is routing intimate footage from millions of Western homes through its offices, paying Kenyan workers approximately $2 per hour to watch and label it, under NDAs, under office surveillance cameras, with an explicit policy that if you raise concerns about what you are seeing, you lose your job.

This is not a technology problem. It is a labour problem, a data ethics problem, and, critically for Kenya, a regulatory problem that the Office of the Data Protection Commissioner has jurisdiction over right now.

What Is Actually Happening

Meta's Ray-Ban smart glasses, developed in partnership with EssilorLuxottica and priced from $299, have sold over seven million pairs. When a wearer says "Hey Meta" and asks the AI to analyse their surroundings, that footage does not stay on the device. It is uploaded to Meta's servers, where it enters a training pipeline that routes it to human data annotators at Sama's Nairobi offices.

Those workers (mostly college students and young graduates) are tasked with labelling objects in video clips so Meta's AI can learn to recognise the world more accurately. Draw a box around the chair. Identify the room type. Categorise the activity. Standard computer vision annotation work. Except the footage is not standard.
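
For readers unfamiliar with annotation work, here is a minimal sketch of the kind of record this labelling produces. The schema and field names are illustrative assumptions for this article, not Sama's or Meta's actual internal format:

```python
# A minimal, hypothetical annotation record for one video frame.
# Schema and field names are illustrative assumptions, not Sama's
# or Meta's actual format.
from dataclasses import dataclass, field

@dataclass
class BoundingBox:
    label: str    # object class, e.g. "chair"
    x: int        # top-left corner, in pixels
    y: int
    width: int
    height: int

@dataclass
class FrameAnnotation:
    clip_id: str          # which video clip
    frame_index: int      # which frame within the clip
    room_type: str        # e.g. "living_room"
    activity: str         # e.g. "reading"
    boxes: list[BoundingBox] = field(default_factory=list)

# One annotator judgement: draw a box around the chair, identify
# the room type, categorise the activity.
annotation = FrameAnnotation(
    clip_id="clip_0001",
    frame_index=42,
    room_type="living_room",
    activity="reading",
    boxes=[BoundingBox(label="chair", x=120, y=340, width=200, height=260)],
)
```

The point of the sketch is how ordinary the work is: the worker's output is a handful of labels and coordinates. What is not ordinary is what appears in the frame being labelled.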

Workers told the Swedish journalists they regularly encounter footage of people using bathrooms, changing clothes, watching pornography, and having sex. Not because wearers were deliberately recording these moments; in many cases the glasses were simply left running, or set down on a surface, still pointed at a room the wearer had forgotten was being captured.

"In some videos you can see someone going to the toilet, or getting undressed," one Sama worker told the newspapers. "I don't think they know, because if they knew they wouldn't be recording."

Another contractor described reviewing footage where the wearer set the glasses on a bedside table and left the room, only for their wife to walk in and undress, unaware she was being watched.

Other footage included bank card details captured accidentally in frame, users watching pornography with the glasses on, and recordings described simply as "sex scenes." Workers said they felt compelled to annotate regardless of the content: stopping or raising concerns meant risking termination.

Meta's face anonymisation system is supposed to protect identities in the footage. Workers say it does not always work: faces that should be obscured are sometimes fully visible, particularly in difficult lighting conditions.
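
The failure mode the workers describe is structural to how automated anonymisation typically works: a detector finds faces first, and only the regions it finds get blurred. Anything the detector misses passes through untouched. Here is a minimal sketch of that detect-then-blur pattern, using OpenCV's bundled Haar cascade as a stand-in detector (an assumption for illustration; Meta's actual system is not public):

```python
# Detect-then-blur anonymisation sketch. Illustrative only: Meta's
# actual anonymisation system is not public. The structural weakness
# is the same for any detect-then-blur pipeline, though: faces the
# detector misses are returned fully visible.
import cv2

def anonymise_frame(frame):
    """Blur every face the detector finds in a BGR frame."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Only regions the detector actually returned are obscured.
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(
            frame[y:y+h, x:x+w], (51, 51), 0
        )
    # A face missed in poor lighting never enters the loop above,
    # so it reaches the annotator unblurred.
    return frame
```

Detection recall drops sharply in low light and at oblique angles, which is exactly why the guarantee fails in bedrooms and bathrooms rather than in well-lit demo conditions.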

Sama's Track Record in Nairobi

Sama is not a new name in this conversation. It is the same company, in the same Nairobi offices, that TIME Magazine exposed in 2023 for paying Kenyan workers $1.32 to $2 per hour to label graphic content for OpenAI (child sexual abuse material, beheadings, suicide footage) while being billed to OpenAI at $12.50 per worker per hour. Workers described that experience as psychological torture. Several developed PTSD. Sama ended that contract after the TIME investigation and the resulting public pressure.

It was also facing scrutiny over its content moderation work for Meta itself: after further reports exposed worker trauma and alleged union-busting at its Nairobi office, Sama ended that contract in 2023 and pivoted to computer vision data annotation, the work it now does for Meta's Ray-Ban glasses. Same workforce. Same pay structure. Same NDAs. Different content.

The pattern is not coincidental. Sama's business model depends on a Nairobi labour market where $2 per hour is above minimum wage, where young graduates with limited formal sector options are willing to accept difficult conditions for stable income, and where NDA enforcement and office surveillance make collective action difficult.

This is not unique to Sama. As Agence France-Presse documented last year, the global AI training industry routes its most sensitive, psychologically taxing, and legally ambiguous work to workers in Kenya, Colombia, India, and the Philippines — markets where labour costs are low, regulatory oversight of outsourced data work is limited, and workers have limited recourse when the content damages them.

But Sama is headquartered in Nairobi. It employs Kenyan workers. The data processing happens on Kenyan soil. That matters legally.

The Regulatory Question Nobody Is Asking

European privacy regulators are already moving. Petra Wierup, a lawyer at the Swedish data protection authority IMY, made clear that if Meta is the data controller under GDPR, protections must extend to subcontractors in third countries and cannot be weakened. NOYB, the Vienna-based privacy organisation that has filed multiple lawsuits against Meta, has flagged clear transparency failures and is examining legal grounds for action.

The question that is not being asked loudly enough in Kenya: what is the Office of the Data Protection Commissioner going to do about this?

The Data Protection Act 2019 applies to data processing operations conducted in Kenya. Sama processing footage in Nairobi (including footage of identifiable individuals who never consented to having their data handled in Kenya) falls squarely within the ODPC's jurisdiction. The Act requires data processors to implement appropriate technical and organisational measures to protect personal data. Footage of people undressing, footage of people in bathrooms, footage of bank card details: all of it constitutes sensitive personal data under the Act's definitions.

The ODPC has the authority to investigate Sama's data handling practices, audit whether the processing meets the Act's standards, and impose penalties for non-compliance. It has the authority to require Meta to demonstrate the legal basis under which it transfers personal data to a Kenyan subcontractor. It has the authority to require Sama to disclose what data it holds, for how long, and what security measures protect it.

Whether it exercises that authority is a different question. The ODPC is a young institution with limited resources and a mandate that has historically been stretched across complaints from individual Kenyans rather than investigations into multinational data pipelines. An investigation into Sama's Meta contract would be the most consequential enforcement action the ODPC has ever undertaken.

It would also be entirely justified.

What This Means for the Workers

The international coverage has focused primarily on the privacy of the Ray-Ban wearers and the people accidentally captured in footage. That coverage is correct and important. But the workers in Sama's Nairobi offices are also victims of this pipeline, and they are getting less attention.

These workers did not sign up to watch intimate footage of strangers from Western homes. They signed up for data annotation work, a job category that Kenya's government and tech industry have positioned as a formal sector opportunity for young graduates in the digital economy. The Kenya Vision 2030 framework and subsequent digital economy strategies have explicitly cited AI data labelling as a growth sector for employment.

What Sama's model demonstrates is that "digital economy jobs" is not a self-evidently positive category. The digital economy can generate formal employment. It can also generate exploitative employment in professional-looking offices, with NDAs that prevent workers from speaking about what they experience, at pay rates that make the label "formal sector" technically accurate but economically misleading.

The psychological toll of sustained exposure to intimate and disturbing footage (documented in the OpenAI contract, documented again now) is a worker welfare issue that Kenya's Ministry of Labour has not engaged with publicly. Kenya's ICT Authority, which promotes the country as a BPO and data services destination, has been silent on the Sama revelations.

Someone in government needs to ask: what are the conditions under which this work is being conducted, and do they meet Kenya's own standards?

The Technology Defence That Does Not Hold

Meta's official position is that its terms of service permit human review of AI interactions, that face anonymisation protects individuals in the footage, and that privacy controls give users meaningful choices about what is shared.

Each of these claims deserves scrutiny.

The terms of service defence is technically accurate and practically meaningless. Buried in a document that Ray-Ban users click through during setup is one sentence reserving the right to conduct "manual human review" of AI interactions. No reasonable person buying a pair of glasses marketed with the phrase "designed with your privacy in mind" reads that sentence and understands it to mean that strangers in Nairobi will watch footage from their bedroom.

The face anonymisation claim has been directly contradicted by workers and by a former Meta employee. The system fails in difficult lighting — exactly the conditions common in homes, bedrooms, and bathrooms where the most sensitive footage originates.

The privacy controls argument, that users can choose not to use the AI features, is accurate but deflects from the core problem. The LED indicator light intended to signal recording is, by multiple accounts, too small and dim for bystanders to reliably notice. The people accidentally captured in footage (the wife who walked into the bedroom, the person in the bathroom) have no controls. They are not Ray-Ban users. They never agreed to any terms of service. Their data is in Meta's training pipeline anyway.

What Needs to Happen

For Meta, the minimum response is meaningful: stop routing sensitive footage to human annotators without explicit, informed consent from all people identifiable in that footage. Fix the anonymisation system so it works in real-world lighting conditions, not just controlled tests. Publish a clear, plain-language explanation of what footage is reviewed by humans, by whom, in which countries, and under what data protection standards.

For Sama, the questions are about labour standards: what psychological support is available to workers who encounter distressing content? What is the actual pay rate inclusive of all deductions? What does the NDA permit workers to disclose about their working conditions, and is that NDA itself compliant with Kenyan labour law?

For the ODPC, the question is whether it will act. The Swedish investigation has provided the factual basis for an inquiry. European regulators are already examining the cross-border data transfer implications. Kenya's own data protection commissioner has both the jurisdiction and the obligation to investigate data processing happening in Nairobi offices that does not appear to meet the standards the Data Protection Act 2019 requires.

For Kenya's broader tech sector, this story is a prompt for a harder conversation. AI data labelling is real employment. It is also work that can cause real harm, to workers who see things they cannot unsee, and to the people whose most intimate moments become training data without their knowledge. Positioning Kenya as Africa's AI data hub without asking hard questions about the conditions of that work is a strategy that trades short-term employment numbers for long-term reputational and human cost.

The LED light on a pair of glasses in a Stockholm apartment is not doing enough to protect privacy. And the data commissioner's office in Nairobi has not yet done enough to protect the workers, and the data subjects, whose interests it exists to serve.

If you work or have worked in AI data annotation in Kenya and want to share your experience confidentially, reach us at [email protected]. Your identity will be protected.
