Practice Area

AI Safety & Governance

Artificial intelligence is being deployed across African markets at pace, often ahead of the regulatory frameworks designed to govern it. The legal and reputational exposure is real, and it is building. We help organisations understand what responsible deployment requires and what the emerging regulatory landscape demands.


The Governance Gap

Most African jurisdictions do not yet have dedicated AI legislation. That does not mean AI deployment is unregulated. Existing data protection laws apply directly to automated decision-making, algorithmic profiling, and AI-driven processing. Sector regulators in financial services, healthcare, and telecoms are issuing guidance. The EU AI Act creates upstream obligations for technology vendors that African organisations procure from.

The governance gap is not an absence of risk. It is an absence of clear rules, which creates different risks: legal uncertainty, reputational exposure, and the near-certainty that frameworks arriving in the next two to five years will have retrospective relevance for systems deployed today.

Organisations that build governance infrastructure now (documentation, accountability structures, risk assessments) are materially better positioned when regulation crystallises.

Our Approach

We work at the intersection of legal analysis and technical reality. Understanding AI governance requires engaging with how AI systems actually work: their training data, their outputs, the decisions they are used to make, the populations they affect. We do not produce governance frameworks that treat AI as an abstraction.

Our analysis draws on the international frameworks that are shaping AI regulation (the EU AI Act, the OECD AI Principles, UNESCO's Recommendation on the Ethics of AI, and the African Union's emerging AI governance agenda) and translates them into what they mean for organisations operating in African markets today.

We also track how existing African data protection law applies to AI: the automated decision-making provisions in Nigeria's NDPA, the profiling restrictions under POPIA, the DPIA requirements triggered by high-risk processing in Kenya.

What You Receive

🧮

AI Risk Assessment

A structured assessment of the AI systems your organisation deploys or procures, mapped against applicable legal requirements, international standards, and sector-specific expectations. Outputs include a risk classification and a remediation plan.

📜

AI Governance Framework

A documented governance architecture covering accountability structures, model oversight procedures, human review requirements, incident escalation, and audit trails. Designed to meet current legal requirements and position the organisation for incoming regulation.

⚖️

Algorithmic Accountability Review

For organisations using AI in consequential decisions (credit scoring, hiring, benefits allocation, content moderation), we review the decision logic, the data inputs, the human oversight mechanisms, and the legal basis for automated processing under applicable data protection law.

🔍

AI Procurement Due Diligence

Before procuring AI systems from third-party vendors, organisations need to understand what they are taking on: data rights, liability exposure, audit access, and the upstream regulatory obligations of the vendor. We conduct the legal due diligence.

📡

Regulatory Horizon Scanning

Africa's AI regulatory landscape is forming in real time. We monitor legislative developments, regulatory consultations, and enforcement signals across key jurisdictions and deliver structured briefings on what is coming and what it means for your organisation.

๐Ÿ›๏ธ

Policy Engagement Support

For organisations that want to participate in shaping AI governance (through public consultations, industry bodies, or regulatory engagement), we provide the legal and policy analysis to support substantive participation.

Key Legal Frameworks We Work With

  • EU AI Act: upstream obligations for African procurers of EU-origin AI
  • NDPA 2023 (Nigeria): automated decision-making and profiling provisions
  • POPIA (South Africa): automated decision-making, operator obligations
  • Kenya Data Protection Act 2019: DPIA triggers for high-risk AI processing
  • OECD AI Principles: transparency, accountability, robustness standards
  • UNESCO Recommendation on AI Ethics: human rights due diligence framework
  • African Union AI Policy Framework: emerging continental governance agenda
  • Sector-specific AI guidance: financial services, healthcare, telecoms

Who We Work With

  • Financial services firms using AI in credit, fraud, and customer decisions
  • Technology companies developing or deploying AI products in African markets
  • Healthcare organisations using AI in diagnostics or patient triage
  • Government agencies deploying AI in public services
  • Multinationals navigating AI obligations across multiple jurisdictions
  • Procurement teams acquiring AI systems from third-party vendors

Start the Conversation

Tell us about your organisation, the AI systems you are deploying or considering, and what governance questions you are working through.

Request a Consultation →
Subscribe to the Digest