On February 24, 2026, the Department of Communications and Digital Technologies (DCDT) briefed Parliament on the progress of South Africa’s Draft National AI Policy—a framework that has been in development since 2020 and is now finally moving through Cabinet approval.
The timeline is clear:
- Early March 2026: Economic cluster ministerial council reviews the draft
- Late March 2026: Cabinet approval and gazetting for 60-day public consultation
- 2026/2027 financial year: Policy finalization (April 2026 – March 2027)
- 2027/2028: Sector-specific regulations and full implementation
For businesses operating in South Africa—whether deploying AI for employee monitoring, algorithmic decision-making, customer service automation, or financial risk assessment—the era of unregulated AI is ending.
And unlike the European Union’s comprehensive AI Act or India’s innovation-first approach, South Africa is charting what Deputy Director-General Alfred Mmoto called a “middle of the road” strategy: regulating enough to protect citizens and ensure fairness, but not so much that it stifles the innovation Africa desperately needs.
Whether that balance holds—or whether South Africa ends up with the worst of both worlds—will depend on how the public consultation unfolds and whether sector-specific regulators can coordinate effectively.
The Framework: Five Pillars, Multi-Regulator Model, No Single AI Authority
South Africa’s Draft AI Policy rests on five core pillars outlined during the Parliamentary briefing:
1. Skills and Capacity Building
The government will build national AI skills through:
- Education and training programs
- Collaboration with industry
- AI Hubs at universities (already launched at University of Johannesburg, Tshwane University of Technology, Central University of Technology, Stellenbosch Military Academy)
- Digital infrastructure improvements (compute capacity, GPUs, connectivity)
- Support for SMEs and local innovation
2. Responsible Governance
Practical safeguards to address:
- Data misuse
- Cybersecurity threats
- Misinformation and deepfakes
- Clear accountability for AI system failures
3. Ethical and Inclusive AI
Focus on:
- Fairness and bias mitigation
- Ensuring AI benefits are distributed evenly across society
- Addressing current concentration of AI capabilities
- Lasting benefits for both current and future generations
4. Cultural Preservation
(Details limited in public briefings, but likely focused on protecting indigenous knowledge and ensuring AI respects South African cultural contexts)
5. Human-Centered Deployment
AI as a tool to:
- Support inclusive economic growth
- Support job creation (while managing job displacement)
- Increase access to AI skills, infrastructure, and services
What’s notable is what South Africa is not doing: creating a single AI regulator.
Instead, the government has opted for a sector-specific, multi-regulator model. Oversight will be distributed among existing authorities:
- ICASA (Independent Communications Authority of South Africa): Digital infrastructure and communications aspects
- Existing regulatory bodies for finance, healthcare, employment, consumer protection, data protection
This means AI governance will be embedded within established supervisory frameworks rather than introduced through a standalone regime. If you deploy AI in financial services, you’ll answer to financial regulators using AI-specific guidelines. If you deploy AI for employee monitoring, you’ll answer to labor regulators.
Ashlin Perumall, partner at Baker McKenzie Johannesburg, noted that this approach represents “coordinated oversight rather than centralized regulation.” AI accountability will intersect with existing obligations around conduct, risk management, data protection (POPIA), and cybersecurity.
The Regulatory Philosophy: Rejecting Both EU Over-Regulation and Indian Laissez-Faire
During the Parliamentary briefing, Acting Committee Chair Shaik Imraan Subrathie raised the fundamental tension in AI regulation: Does regulation stifle innovation, or does it protect society?
He pointed to two extremes:
- India’s model: Minimal regulation, viewing heavy-handed rules as innovation killers
- EU’s AI Act: Comprehensive, risk-based regulation with strict compliance requirements
In 2025, the Free Market Foundation warned South Africa against “blindly following EU regulations on technology and AI,” arguing that Europe’s regulatory approach creates compliance burdens that African startups can’t afford.
Mmoto’s response was clear: South Africa agrees that EU regulations are concerning.
“Following a benchmarking exercise,” he said, “the department agrees that EU regulations are concerning. Ours is the middle of the road in South Africa. We have to have this policy in order to make sure that we have a policy lever upon which we can stimulate economic growth, ensure that the social wellbeing, but also ensure that we position our country deliberately as a leader in innovation.”
What does “middle of the road” mean in practice?
Based on the briefing and legal analysis:
- Not prescriptive like the EU: No comprehensive risk categorization system (minimal, limited, high, unacceptable risk) requiring pre-market conformity assessments
- Not hands-off like India: Clear expectations around fairness, accountability, transparency, and safety
- Sector-specific rather than horizontal: Different industries will have different AI governance requirements tailored to their risk profiles
- Principles-based initially, regulations later: The policy sets strategic direction; sector-specific regulations follow in 2027/2028
This could be pragmatic—or it could create regulatory uncertainty where businesses don’t know what compliance actually looks like until sector-specific rules emerge.
The Timeline: Why It’s Taken So Long (And Why It’s Still Not Done)
South Africa’s AI policy journey began in 2020 with the Presidential Commission Report on the Fourth Industrial Revolution. Six years later, the policy still isn’t final.
Here’s what happened:
2020: Presidential Commission Report laid groundwork
2021: South Africa led development of African Union AI blueprint
2023-2024: AI Hubs launched at four universities
August 2024: National AI Plan launched at University of Pretoria; AI Policy Framework published for public comment (deadline: November 29, 2024)
Late 2024: SEIAS (Socio-Economic Impact Assessment System) certification granted
February 2026: Cabinet approval process begins
March 2026: Expected gazetting for 60-day public consultation
2026/2027: Policy finalization
2027/2028: Implementation and sector-specific regulations
During the February 24 briefing, MPs questioned whether the timeline could be accelerated. Mlindi Mashologu, DDG for Digital Society and Economy, acknowledged the delays:
“We will try on our side to ensure the policy is brought forward. For us, if we can get the policy gazetted later in the coming month, we should be able to push through and ensure that by mid of the next financial year we will be finalising the final policy.”
Director-General Nonkqubela Jordan-Dyani echoed that sentiment: “A lot of work has already been done. I think it’s very important that we set forward a nationwide policy position with regards to where we want to take artificial intelligence.”
Communications Minister Solly Malatsi previously told ITWeb that South Africa’s multi-layered compliance and consultation approach explains the delays—not intentional foot-dragging.
But the subtext is clear: South Africa is late. Nigeria, Mauritius, and Rwanda are already developing their own AI strategies. The EU’s AI Act is operational. The US is debating federal frameworks. And South Africa is still in consultation.
What Businesses Should Do Now
Legal experts are advising organizations to treat 2026 as a preparation year. Here’s what Baker McKenzie, Fasken, and other firms recommend:
Immediate Actions (March-June 2026):
1. Monitor the gazetting process
- Watch for publication in the Government Gazette
- Note the exact start date of the 60-day consultation period
- Assign responsibility internally for tracking regulatory developments
2. Participate in public consultation
- Submit formal comments on areas where the policy impacts your operations
- Join industry associations coordinating collective responses
- Engage with DCDT through stakeholder forums (quarterly digital economy stakeholder forum runs through June 2026)
3. Assess current AI deployments
- Inventory all AI systems currently in use (employee monitoring, customer service bots, algorithmic decision-making, credit scoring, fraud detection)
- Classify systems by risk level and impact
- Identify high-impact systems likely to face increased scrutiny
4. Review governance structures
- Do you have AI governance policies?
- Who’s accountable for AI system failures?
- Are bias and fairness considerations built into development processes?
- Is there transparency in how algorithmic decisions are made?
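The inventory-and-classify step above can be as simple as a structured register. The sketch below is a minimal, hypothetical example: the risk tiers and triage rules are illustrative only, since the draft policy has not yet prescribed risk categories — the point is to record each system, its data footprint, and the sector regulator it would likely answer to.

```python
from dataclasses import dataclass

# Hypothetical risk tiers for an internal AI register; the draft policy
# does not define categories, so these labels are illustrative only.
RISK_TIERS = ("low", "medium", "high")

@dataclass
class AISystem:
    name: str
    use_case: str                  # e.g. "credit scoring", "employee monitoring"
    processes_personal_data: bool  # POPIA relevance
    affects_individuals: bool      # makes or shapes decisions about people
    sector: str                    # maps to the likely sector regulator

    def risk_tier(self) -> str:
        """Crude illustrative triage: systems making decisions about
        people using personal data warrant the closest scrutiny."""
        if self.affects_individuals and self.processes_personal_data:
            return "high"
        if self.affects_individuals or self.processes_personal_data:
            return "medium"
        return "low"

# Example register entries (all names and attributes are invented)
inventory = [
    AISystem("fraud-detect-v2", "fraud detection", True, True, "finance"),
    AISystem("support-bot", "customer service automation", True, False, "consumer"),
    AISystem("doc-classifier", "internal document routing", False, False, "internal"),
]

# List highest-risk systems first, so review effort follows exposure
for system in sorted(inventory,
                     key=lambda s: RISK_TIERS.index(s.risk_tier()),
                     reverse=True):
    print(f"{system.name}: {system.risk_tier()} ({system.sector})")
```

Even a lightweight register like this makes the later steps — engaging the right sector regulator, prioritizing governance reviews — much easier to execute.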
Medium-Term Actions (2026/2027):
5. Prepare for sector-specific regulations
- If you operate in finance, healthcare, or employment sectors, expect detailed AI-specific rules in 2027/2028
- Engage with sector regulators early
- Build compliance frameworks that can adapt to new requirements
6. Strengthen data governance
- AI regulation will intersect with POPIA compliance
- Ensure AI training data, processing, and outputs comply with data protection obligations
- Document data sources, consent mechanisms, and retention policies
7. Address bias and fairness proactively
- Test AI systems for discriminatory outcomes
- Implement bias detection and mitigation tools
- Document fairness assessments and remediation efforts
8. Build internal AI ethics capacity
- Train legal, compliance, and operational teams on AI governance
- Establish cross-functional AI ethics committees
- Develop internal guidelines before external regulations force them
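The bias-testing step above can start with simple screening statistics. The sketch below uses the "four-fifths" disparate-impact ratio, a common heuristic from employment-selection practice — it is not a metric the draft policy mandates (no fairness metrics have been specified yet), and the data is invented, but it illustrates the kind of documented check regulators are likely to expect.

```python
# Illustrative fairness screen using the "four-fifths" disparate-impact
# ratio. This is a screening heuristic, not a requirement of the draft
# policy; thresholds and groups here are hypothetical.

def selection_rate(outcomes):
    """Fraction of positive outcomes (e.g. loans approved) in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group's selection rate to the higher one's.
    Values below 0.8 are conventionally flagged for closer review."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    lower, higher = sorted((rate_a, rate_b))
    return lower / higher if higher else 1.0

# Hypothetical approval outcomes (1 = approved) for two demographic groups
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("flag: outcomes differ enough to warrant a fairness review")
```

A flagged ratio is a prompt for investigation, not proof of discrimination — which is exactly why documenting the assessment and any remediation, as recommended above, matters.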
The Risks: What Could Go Wrong
South Africa’s “middle road” approach has three major risks:
1. Regulatory Fragmentation
With multiple regulators overseeing AI across sectors, businesses could face:
- Conflicting requirements from different authorities
- Compliance costs multiplying as each regulator issues its own AI guidelines
- Uncertainty about which regulator has jurisdiction when AI systems span multiple sectors
2. Implementation Delays
The policy is supposed to be finalized in 2026/2027, with sector-specific regulations in 2027/2028. But South Africa’s regulatory track record suggests slippage is likely. If implementation drags into 2029 or beyond, businesses face years of uncertainty.
3. Innovation Chilling Effect
Even a “middle road” policy can stifle innovation if:
- Compliance costs are too high for startups
- Regulatory uncertainty makes investors cautious
- Local AI development shifts to jurisdictions with clearer rules
South Africa’s tech ecosystem is already struggling with funding constraints, brain drain, and infrastructure gaps. If AI regulation becomes another barrier rather than an enabler, the country risks falling further behind global AI leaders.
The African Context: South Africa vs. Nigeria, Rwanda, Mauritius
South Africa isn’t the only African country developing AI policy. Here’s how it compares:
Nigeria:
- Developing AI strategy
- Focus on leveraging AI for economic development
- No comprehensive regulatory framework yet
Rwanda:
- Anthropic-Rwanda AI partnership (covered by TechMoonshot) positioning Rwanda as AI hub
- Government-led AI initiatives
- Fast-moving regulatory environment
Mauritius:
- AI strategy in development
- Focus on AI as economic differentiator
- Leveraging financial services expertise
South Africa’s advantage: Most mature tech ecosystem, deepest regulatory capacity, strongest institutional infrastructure
South Africa’s disadvantage: Slower-moving government, more complex political economy, higher compliance costs
The Verdict: A Pragmatic Bet on Balanced Regulation
South Africa’s Draft AI Policy won’t be perfect. It’s too slow, too cautious, and too reliant on sector-specific regulators that may not coordinate effectively.
But it’s also pragmatic. By rejecting the EU’s comprehensive risk-based regime and India’s hands-off approach, South Africa is betting that a sector-specific, principles-based model can:
- Protect citizens from AI harms (bias, privacy violations, misinformation)
- Enable innovation (no pre-market conformity assessments, no heavy compliance burdens)
- Position South Africa as a responsible AI leader in Africa
Whether that bet pays off depends on three factors:
1. Public consultation quality: If the 60-day comment period produces substantive feedback that shapes the final policy, it could be genuinely useful. If it’s performative, the policy will be disconnected from reality.
2. Sectoral regulator coordination: ICASA, financial regulators, labor authorities, consumer protection agencies, and data protection authorities all need to align on AI governance. If they don’t, businesses face fragmented compliance.
3. Implementation speed: If sector-specific regulations take until 2029, businesses will operate in limbo for years. If they arrive by 2027/2028 as planned, South Africa could actually lead African AI governance.
For now, businesses should prepare strategically. Inventory AI systems. Strengthen governance. Engage in consultation. Build compliance capacity.
Because the era of unregulated AI in South Africa is ending. And whether the new era enables innovation or stifles it will depend on execution—not just policy documents.
South Africa’s Draft National AI Policy is expected to be gazetted in March 2026 for 60-day public consultation. Finalization is targeted for the 2026/2027 financial year, with sector-specific regulations following in 2027/2028.