Keeping AI on the right side of cybersecurity

September 16, 2025

AI now sits on both sides of the cybersecurity coin: criminals use it to automate phishing, while defenders rely on it to spot the tiny anomalies humans miss. South African organisations may be tempted to jump in as quickly as possible, but questions of accountability and governance must come first.

“AI isn’t going away, so the question is: who controls the context?” says Fikile Sibiya, CIO at e4, a leading partner in digital transformation. “Executives are asking, ‘What exactly is the model doing, where is it used, and how do we keep it accountable?’ If you can’t answer that, you are already on the back foot.”

AI’s value and the flipside of AI-enabled attacks

On the defender’s side, AI is quickly proving its worth, especially in vulnerability management. “Machine-learning models surface, within minutes, weak signals that would take a human team hours or days to connect. We get a clearer picture of risk while the window for attackers is still closing,” says Sibiya.

She adds that AI-driven analytics now provide continuous visibility across code repositories, cloud workloads and employee endpoints. “That breadth used to demand multiple point tools, but now a single model can correlate it all,” she says.
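As a concrete illustration of what that correlation can look like in code, here is a minimal sketch using an off-the-shelf anomaly detector; the synthetic telemetry and feature choices are assumptions for the example, not a description of e4’s tooling.

```python
# Minimal sketch: an unsupervised model surfacing "weak signals" in telemetry.
# The data here is synthetic; a real pipeline would engineer numeric features
# from repo, cloud-workload and endpoint logs first.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Rows are events; columns are illustrative features such as login frequency,
# bytes sent out, and failed-auth count.
normal_events = rng.normal(loc=0.0, scale=1.0, size=(1000, 3))
odd_events = rng.normal(loc=4.0, scale=0.5, size=(5, 3))  # planted outliers
events = np.vstack([normal_events, odd_events])

# contamination is the analyst's prior on how rare true anomalies are
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(events)

# Lower scores are more anomalous; send the worst few to a human for triage.
scores = detector.score_samples(events)
to_triage = np.argsort(scores)[:10]
print("Events to triage first:", to_triage)
```

The point of the sketch is the division of labour: the model ranks, a person decides, which is exactly the judgement gap raised below.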

Of course, the same capabilities can be turned to nefarious purposes, including convincing deep-fake voicemails used in spear-phishing. “No one should be naïve to the fact that threat actors can also iterate faster than ever, because generative AI reduces the cost of experimentation,” she explains.

Uninvited intelligence, or AI that slips into the environment without oversight, is a notable risk for businesses. “If you treat every model as a black box, you’re inserting uninvited intelligence into the business. You must know what data it trains on and how decisions are reached,” adds Sibiya.

Despite AI’s immense processing power, context remains a human skill. “A model can flag anomalous traffic, but only an analyst can decide whether it’s malicious or a new business process. AI amplifies judgement; it doesn’t replace it,” says Kevin Halkerd, Risk and Compliance Manager at e4.

Governance alongside the technology

Overall, governance should be a defining factor in any AI deployment. Applying DevSecOps guard-rails, secure-code scanning and tiered access controls to every deployed model is non-negotiable. “Auditability should be embedded at the build stage, not after a breach has already happened,” adds Halkerd.

Sibiya says organisational AI checklists need to start early and address data quality, bias testing, explainability and explicit ownership. “You cannot secure what you cannot see, and you cannot justify what you cannot explain,” she says.
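A checklist only bites if something enforces it, so here is one hedged sketch of a build-stage gate that refuses to deploy a model whose manifest lacks the items Sibiya lists; the manifest format and field names are hypothetical, not a standard or e4’s actual process.

```python
# Minimal sketch of a build-stage governance gate. The manifest schema below
# is a hypothetical example for illustration only.
REQUIRED_FIELDS = {
    "owner",           # explicit ownership
    "training_data",   # data quality and lineage
    "bias_report",     # evidence of bias testing
    "explainability",  # how the model's decisions are reached
}

def missing_fields(manifest: dict) -> list:
    """Return the governance fields that are absent or left empty."""
    present = {key for key, value in manifest.items() if value}
    return sorted(REQUIRED_FIELDS - present)

manifest = {
    "owner": "fraud-analytics-team",                     # hypothetical team
    "training_data": "s3://datasets/transactions-2024",  # hypothetical path
    "bias_report": "",               # empty: bias testing not yet evidenced
    "explainability": "SHAP summary attached",
}

gaps = missing_fields(manifest)
if gaps:
    raise SystemExit(f"Deployment blocked, manifest missing: {gaps}")
print("Governance checks passed")
```

Running the gate in the build pipeline, rather than as a post-deployment review, is what puts auditability at the build stage as Halkerd recommends.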

Ultimately, governance needs to come before AI experimentation and deployment. “It becomes difficult to decide what’s right when you don’t have guard-rails, so one approach is to create them internally first,” she explains.

Halkerd says that how AI is viewed is a fundamental starting point. “AI is just one piece of the puzzle. The real power comes in how you’ve adapted it to your environment safely and effectively to enhance the security of your business.” Automation can also hard-wire assurance without overwhelming human teams. “Think of it as AI policing AI,” he says: automated policy checks run alongside every model, scanning for vulnerable code or unexpected outputs.
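To show what “AI policing AI” might reduce to in practice, here is a small sketch of a policy layer that screens model output before release; the rules and the generate() stub are illustrative assumptions rather than a specific product’s API.

```python
# Minimal sketch of "AI policing AI": rule-based checks run over every model
# output, quarantining anything unexpected for human review.
import re

# Illustrative policies; a real deployment would maintain many more.
POLICIES = [
    ("possible credential", re.compile(r"(?i)(password|api[_-]?key)\s*[:=]")),
    ("internal hostname", re.compile(r"\b[\w-]+\.corp\.internal\b")),
]

def generate(prompt: str) -> str:
    """Stand-in for any model call."""
    return f"Summary for: {prompt}"

def guarded_generate(prompt: str) -> str:
    output = generate(prompt)
    violations = [name for name, pattern in POLICIES if pattern.search(output)]
    if violations:
        # Block the response and surface it to an analyst instead.
        raise ValueError(f"Output blocked by policy: {violations}")
    return output

print(guarded_generate("today's alert queue"))
```

Because the checks are deterministic and logged, they also produce the audit trail that the governance checklist above asks for.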

“We must start navigating an AI world we don’t fully understand yet and build protection into every layer while we learn. Organisations should see AI as a tool that magnifies intent. Govern it well, and it multiplies your defences. Leave it unchecked, and it multiplies your risk,” says Sibiya.
