Rethinking the CAC’s AI-Powered Name Reservation System, by Tolulope Idowu
Editor’s note: In this piece, lawyer Tolulope Idowu explains how the CAC’s AI-powered name system may be undermining fairness in business registration, and why its unchecked power demands urgent legal and ethical scrutiny.
Nigeria’s Corporate Affairs Commission (CAC) has recently ushered in a new era of digital governance, unveiling its AI-powered Company Registration Portal (CRP 3.0). This technological upgrade promises near-instant name reservation, biometric verification via the National Identification Number (NIN), facial authentication, and full incorporation of businesses within 30 minutes.

This shift seems revolutionary, given the perceived inefficiencies that have hitherto burdened the system. It holds the potential to accelerate Nigeria’s standing in the Ease of Doing Business Index and broaden access to formal enterprise creation. But beneath the surface of this bold innovation lies a troubling question:
What happens when administrative discretion is delegated to machines without explanation, without oversight, and without remedy?
The age of automated legal decisions
Across the world, public institutions are increasingly embracing AI to reduce delays, standardize decisions, and curb corruption. Nigeria’s CAC appears to be following suit, allowing artificial intelligence to make first-level decisions on matters such as name availability. Legal decisions, however, unlike numerical computations, are context-dependent. They require understanding, nuance, and the balancing of multiple interests. When a proposed business name is denied and the only explanation given is a percentage similarity score with nothing more, the process begins to lose its legal character.
This is not regulation; it is a digital oracle: opaque, absolute, and immune to challenge.
A recent example: AI overreach in practice
A recent example seen by the author highlights this problem. An applicant’s proposed business name, “OMEDENTALCARE LTD”, was rejected by the CAC’s system for being 74.79% similar to an existing company named “LID VENTURES LTD”. On any rational basis, there is no phonetic, semantic, or industrial similarity between the two names. The algorithm’s logic here is not merely unexplainable; it is indefensible.

In another case, “OME DENTAL CARE LTD” was rejected for being 76.99% similar to “ATMOSPHERE DENTAL CARE LTD.” Here, the only overlapping phrase is the generic industry term “Dental Care.” This should not constitute grounds for refusal under any fair or human reading of the law.
In both instances, the affected users received no formal rationale, only a numerical score devoid of legal context or recourse.
Legal standards vs algorithmic logic
According to Section 852 of the Companies and Allied Matters Act (CAMA) 2020, the CAC is empowered to reject company names that are:
“Identical with or so nearly resemble the name of an existing company as to be likely to deceive.”
This standard introduces the crucial element of “likelihood to deceive”—a legal test rooted in the perceptions of a reasonable person. An AI system that flags name similarity based on string matching or character sequence proximity is fundamentally disconnected from this test. Without factoring in context, industry, consumer perception, and intent, the CAC’s AI, as it stands, cannot meaningfully assess what the law demands.
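The CAC’s actual algorithm has not been published, so the following is purely an illustration under that caveat: it shows how a naive character-sequence metric (Python’s standard-library difflib) scores the names from the examples above, with no regard for context, industry, or consumer perception. Even names with no plausible risk of deception can register substantial raw character overlap, which is precisely why such a metric cannot stand in for the statutory test.

```python
# Illustration only: the CAC's real algorithm is not public.
# This sketch shows how a raw character-sequence metric can
# diverge from CAMA 2020's test of "likelihood to deceive".
from difflib import SequenceMatcher

def char_similarity(a: str, b: str) -> float:
    """Raw character-sequence similarity, blind to meaning, industry, or intent."""
    return SequenceMatcher(None, a.upper(), b.upper()).ratio()

pairs = [
    # Shares only the generic industry term "DENTAL CARE"
    ("OME DENTAL CARE LTD", "ATMOSPHERE DENTAL CARE LTD"),
    # No plausible risk of confusion at all
    ("OMEDENTALCARE LTD", "LID VENTURES LTD"),
]
for a, b in pairs:
    print(f"{a!r} vs {b!r}: {char_similarity(a, b):.2%}")
```

Both pairs score well above zero on this metric, yet neither would plausibly deceive a reasonable person; the high score for the dental-care pair is driven almost entirely by the shared generic term.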

The risks of unchecked automation
One of the primary risks with unchecked automation is the lack of transparency. At present, no formal documentation exists explaining how the similarity percentage is calculated, what factors are considered, or what threshold triggers rejection.
Compounding this issue is the absence of any right of appeal. There is currently no mechanism for a user to request human review or to challenge the AI’s decision: an affront to the principles of administrative fairness and natural justice.
A further concern lies in the erosion of legal discretion. Replacing legal discretion with algorithmic decision-making risks dehumanizing governance, where efficiency trumps equity, and speed replaces scrutiny.
The economic and social impact of this can be significant. For small business owners, many of whom are first-time entrepreneurs, these unexplained rejections introduce delays and added costs, often discouraging them from completing formal registration.
Toward a more accountable AI regime
Public service should not be cut off from technological advancement, but innovation in public service must be matched with governance, oversight, and legal alignment. To ensure the CAC’s AI system serves the public interest and meets acceptable governance and ethical standards, several reforms are necessary.
Transparency must be the starting point. The CAC should state clearly how name similarity is calculated (for example, the weight given to shared words versus total character overlap) and provide explainable outputs, e.g., shared keyword “Dental Care”: 40%; prefix character overlap: 20%; total score: 60%.
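A minimal sketch of what such an explainable report could look like follows. The components, weights, and generic-term list here are invented for illustration; the CAC has published no such breakdown. The key point is that an itemised report lets an applicant (and a reviewing officer) see at a glance when a high score rests entirely on generic industry terms.

```python
# Hypothetical explainable similarity report. The weights (0.6 / 0.4)
# and GENERIC_TERMS list are assumptions for illustration only; the
# CAC has published no such scheme.
import os

GENERIC_TERMS = {"DENTAL", "CARE", "LTD", "LIMITED", "VENTURES"}  # assumed list

def explain_similarity(proposed: str, existing: str) -> dict:
    p_words = set(proposed.upper().split())
    e_words = set(existing.upper().split())
    shared = p_words & e_words
    # Distinctive overlap: shared words that are NOT generic industry terms.
    distinctive = shared - GENERIC_TERMS
    # Component 1: fraction of the proposed name's words that are shared.
    keyword_score = len(shared) / max(len(p_words), 1)
    # Component 2: leading-character overlap between the two names.
    prefix = os.path.commonprefix([proposed.upper(), existing.upper()])
    prefix_score = len(prefix) / max(len(proposed), 1)
    total = 0.6 * keyword_score + 0.4 * prefix_score
    return {
        "shared_words": sorted(shared),
        "distinctive_shared_words": sorted(distinctive),
        "keyword_score": round(keyword_score, 2),
        "prefix_score": round(prefix_score, 2),
        "total": round(total, 2),
    }

report = explain_similarity("OME DENTAL CARE LTD", "ATMOSPHERE DENTAL CARE LTD")
print(report)
```

For the dental-care pair above, the report shows that every shared word is a generic term (an empty distinctive-overlap list), signalling that the overlap should not, on its own, ground a refusal.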
A hybrid oversight approach would also be beneficial: manual review should be available for AI-flagged rejections that fall within a gray zone (e.g., 60–85% similarity).
Additionally, there must be a formal and accessible appeal mechanism for applicants to challenge rejections and obtain human feedback.
Algorithms must also be legally literate. AI systems should be programmed to reflect not just textual resemblance but legal definitions under CAMA, especially the standard of “likelihood to deceive.”

Finally, governance and oversight must be institutionalised. Legal experts, technologists, and entrepreneurs should be invited to evaluate CAC’s AI deployment and to set standards for ethical governance in public digital systems.
Conclusion: Progress must not silence justice
The CAC’s drive to modernize is commendable. Digital public infrastructure, when thoughtfully implemented, has the power to transform governance. But progress must not come at the cost of justice.
When an algorithm becomes the gatekeeper of economic opportunity, the legal system must ask: Is this tool fair? Is it explainable? Is it appealable? And most importantly, is it lawful?
We must never forget that technology should serve law, not replace it, and in a democratic society governed by rules, no machine should ever become the final arbiter of rights, opportunity, or fairness.
Tolulope Idowu is a legal practitioner and digital governance researcher with a special interest in the intersection of law, technology, and administrative justice. He writes on tech policy, regulatory reform, and access to justice in emerging economies.
Disclaimer: The views and opinions expressed here are those of the author and do not necessarily reflect the official policy or position of Legit.ng.
Source: Legit.ng