South Africa’s first real shot at putting binding rules around AI didn’t just stumble: it fell apart over fake citations. And while the country goes back to the drawing board, the tech the policy was meant to regulate is still running without any guardrails.
On 27 April 2026, Communications and Digital Technologies Minister Solly Malatsi pulled the Draft National AI Policy, just over two weeks after it was published. News24 reported on 24 April that at least six of the 67 academic references in the draft were fabricated – the kind of plausible-sounding sources AI tools are known to invent. What makes it worse is that the Cabinet had already signed off on the draft twice, on 25 March and 1 April, and public comments were meant to run until 10 June.
At first glance, it looks like a straightforward, if ironic, mess-up. But it’s also a story about accountability within the coalition government of national unity. Who put this together and who approved it? Who pushed back and who kept quiet?
The day before the draft was withdrawn, ANC MP Khusela Diko, who chairs Parliament’s communications committee, told Malatsi, a DA minister, to scrap it. The next day, the ANC Study Group on Communications called the policy a “catastrophic failure of critical thinking and accountability at the highest levels of departmental leadership”. Both the ANC and the MK Party submitted formal letters demanding that Malatsi appear before the committee.
From the DA, though, it’s been quieter. Public Works Minister Dean Macpherson brushed off the criticism as “grandstanding”, but the party’s top leadership hasn’t really weighed in, despite the DA running the portfolio since 2024. Most of the coverage has just called it a “government” failure.
DA spokesperson Tsholofelo Bodlani told /explain/: “The DA has great respect for the principles of the separation of powers. Minister Malatsi is fully empowered to run the ministry and communicate on the same. There is nothing to defend. The minister acted within his powers and made a swift decision by withdrawing the draft policy.”
Basically, a big miss in one of the DA’s key portfolios gets treated as the minister’s individual call, the party leadership stays out of it on principle, and it all ends up being filed under “government” rather than pinned on either of the two parties that signed it off – the DA, whose minister drafted and gazetted it, and the ANC, whose Cabinet majority approved it twice.
What the draft actually proposed
The draft was arguably more ambitious than most AI policies. It proposed creating five new bodies: a National AI Commission, an AI Ethics Board, an AI Regulatory Authority, an AI Ombudsperson, and a National AI Safety Institute. On top of that, it introduced the idea of an AI Insurance Superfund, essentially a public pool to compensate people harmed by automated decisions when it’s unclear who’s liable. But the gazetted text omitted key details, such as how it would be funded, what counts as harm, and how claims would work.
Substantively, the policy leaned heavily on the European approach. Rules for automated decision-making are tied back to the Protection of Personal Information Act (Popia), the country’s existing data-privacy law. There were proposals to watermark AI-generated content and to track the sources of training data. Cross-border data flows would be tightly managed, with different rules for sectors such as banking, healthcare, and policing.
Sadia Rizvi, a senior associate at Cliffe Dekker Hofmeyr, told /explain/ the architecture was sound: “The fact that they’re establishing an AI office and the AI Superfund – all of that has stepped in the right direction. It embodies the European Union AI Act in the way that it speaks to a risk-based approach.”
AI is already making decisions
The need for rules isn’t abstract: AI is already making decisions in South Africa every day. The withdrawn draft was the first real attempt to construct some national guardrails.
In banking, the Financial Sector Conduct Authority (FSCA) and Prudential Authority reported in November 2025 that 52% of South African banks have adopted AI systems, mainly for credit scoring, fraud detection, and loan approvals. Insurers are behind, sitting at about 8%, but this is set to rapidly expand into underwriting and claims decisions.
Policing is further along than most people realise. The South African Police Service (SAPS) is using predictive analytics in parts of the Western Cape to track patterns of gang violence. Private systems like Vumacam’s number-plate recognition network generate tens of thousands of alerts daily, feeding into security operations and police investigations. There are also tools that match CCTV images to police databases.
The problem is that none of this is governed by a dedicated national AI framework. Existing laws like Popia cover data protection, but they don’t address issues like generative AI, training data, or systemic risk. Sector regulators like the FSCA, the Information Regulator, and SAPS operate in silos, and there’s no clear, AI-specific recourse for people affected, whether it’s being denied a loan or flagged by a surveillance system.
How fictitious sources slipped through
The fake citations weren’t hard to spot. On 24 April, News24 reported that at least six of the 67 academic references in the draft simply didn’t exist – classic AI “hallucinations”, where tools generate sources that sound real but aren’t.
What’s more concerning is how far those citations got. The Department of Communications and Digital Technologies (DCDT) drafted the policy. Cabinet approved it twice, on 25 March and again on 1 April. It was then gazetted on 10 April. At none of those stages did anyone pick up the problem. It took an external review, more than two weeks later, to flag it.
This isn’t just a South African issue. In 2023, lawyers in the US were fined for submitting court papers containing fabricated case citations generated by ChatGPT. In the UK, senior judges in the King’s Bench Division warned British lawyers that relying on AI-generated legal references could even lead to criminal consequences. Cases like these are piling up globally. Damien Charlotin’s incident database, which tracks AI hallucinations in court filings worldwide, has logged more than 1,350 cases as of April 2026.
The pattern is clear: AI is being used in high-stakes work, but the checks haven’t caught up. In his withdrawal statement, Malatsi admitted the issue went beyond a technical error and undermined the policy’s credibility. “This failure is not a mere technical issue” and it had “compromised the integrity and credibility of the draft policy”, he wrote. Malatsi has promised “consequence management” for those responsible for drafting and quality assurance, but no individuals have been publicly named or sanctioned.
As Rizvi put it, there’s a clear irony. “These are the people that just say you must use AI responsibly,” she said, “and they don’t do it themselves. It’s pretty clear that they’ve probably used some kind of AI – like ChatGPT or Claude – to put this draft together.”
What happens between now and a credible second draft
Malatsi says a revised draft is in the works, this time with “much more rigorous oversight”. There’s no timeline, but realistically, mid-year is the earliest we’ll see a new version, followed by another 60-to-90-day public comment period.
When it lands, there’ll be a few things to watch out for. Will the government explain how it’s checking sources this time? Will there be transparency around who actually drafted the policy and what tools they used? More substantively, does the plan to create five new AI bodies stay the same or get scaled back? Will the proposed AI Insurance Superfund finally get details on how it’s funded, what counts as harm, and how claims work?
For now, the gap between practice and policy remains open as we await V2.0.