Ethical Hackers Are Still Our Best Cyber Defense

We’re joined today by Dominic Jainy, an IT professional whose work at the intersection of artificial intelligence and cybersecurity offers a critical perspective in a world increasingly reliant on automation. As organizations race to adopt AI-driven security, he explores the irreplaceable role of human intellect and ethics in digital defense. Our conversation will delve into the concept of the “human algorithm,” where machine speed meets human reason. We’ll discuss what “algorithmic warfare” truly looks like in practice, how elite ethical hackers are trained to compete against adaptive AI adversaries, and why personal accountability remains the ultimate safeguard in an age of automated decision-making.

The article introduces the concept of a “human algorithm,” where automated detection supports human decision-making. Can you walk me through a real-world scenario where an AI flagged a threat, and a certified professional’s ethical reasoning was crucial in determining the appropriate response?

Absolutely. Think of a large enterprise network. An advanced AI security tool suddenly flags an unusual data flow pattern from a critical server late at night. The algorithm sees a deviation from the baseline; to the machine, it’s a potential data breach in progress. It might even recommend shutting down the server immediately. But this is where the human algorithm kicks in. A CEH-certified professional doesn’t just blindly follow that recommendation. Their first step is to interpret the flag, not just accept it. They’ll investigate the context: Is there a scheduled system-wide backup? Is a new software patch being deployed that would explain the traffic? They bring a level of reasoning the machine can’t. They understand that shutting down that server could halt business operations, costing the company millions. Their ethical reasoning forces them to weigh the risk of a potential breach against the certain damage of a system shutdown, a nuanced judgment that pure automation simply isn’t capable of making.
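The baseline-deviation flagging described above can be sketched in a few lines. This is a minimal illustration, not any specific vendor's detection logic: the function, thresholds, and traffic numbers are all hypothetical, and real tools use far richer models. The point is that the machine only answers "is this statistically unusual?", leaving the contextual judgment (backup window? patch deployment?) to the human.

```python
import statistics

def flag_anomaly(baseline_samples, observed, threshold=3.0):
    """Flag an observation as anomalous when it deviates from the
    baseline by more than `threshold` standard deviations (z-score)."""
    mean = statistics.mean(baseline_samples)
    stdev = statistics.stdev(baseline_samples)
    if stdev == 0:
        return observed != mean
    z = abs(observed - mean) / stdev
    return z > threshold

# Hypothetical hourly outbound traffic (MB) from the server in a normal week
baseline = [120, 115, 130, 125, 118, 122, 128]

# A late-night 900 MB transfer trips the detector; a 124 MB hour does not.
print(flag_anomaly(baseline, 900))  # True  -> escalate to a human analyst
print(flag_anomaly(baseline, 124))  # False -> within normal variation
```

Note that a `True` here is only a prompt for investigation, not a verdict; the analyst still decides whether shutting down the server is proportionate.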

You mention that CEH AI teaches defenders about “algorithmic warfare.” What does this look like in a practical training lab? Please describe the steps a student might take to identify and counter an attack from an AI designed to generate deceptive data.

It’s a fascinating and intense experience. In our CEH AI labs, a student isn’t just up against a static piece of malware; they’re facing an intelligent adversary. A typical exercise might begin with the student monitoring a simulated corporate network. The attacking AI won’t launch a noisy, obvious assault. Instead, it might start by generating subtle, deceptive data—for instance, creating thousands of log entries that look like normal user activity to camouflage its real reconnaissance scans. The student’s first task is to use their own AI-driven analytics tools to spot this statistical noise, to see the faint signature of the attack beneath the surface. Once identified, the challenge escalates. The adversary AI might then try to weaponize the defender’s own tools, perhaps by feeding them manipulated data to trigger false alarms and create chaos. The student has to learn to think like the attacking algorithm, anticipate its next move, and deploy countermeasures that are just as adaptive, effectively fighting fire with fire while maintaining control of the network.
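The "faint signature beneath the noise" idea can be illustrated with a toy detector. This is a deliberately simplified sketch, not the CEH lab tooling: the log format, addresses, and threshold are invented. It shows one classic fingerprint a student might hunt for, a single source quietly touching an unusually wide spread of destination ports while high-volume camouflage traffic looks normal.

```python
from collections import defaultdict

def find_scanners(log_entries, port_threshold=20):
    """Group (source, destination_port) connection logs by source and
    flag sources that touch an unusually wide spread of ports -- a
    common fingerprint of reconnaissance hidden in routine traffic."""
    ports_by_source = defaultdict(set)
    for src, dst_port in log_entries:
        ports_by_source[src].add(dst_port)
    return [src for src, ports in ports_by_source.items()
            if len(ports) > port_threshold]

# Simulated logs: heavy but ordinary web traffic from one host,
# plus a quiet port sweep from another, buried in the volume.
logs = [("10.0.0.5", 443)] * 500                    # camouflage noise
logs += [("10.0.0.99", p) for p in range(1, 101)]   # low-and-slow sweep
print(find_scanners(logs))  # ['10.0.0.99']
```

Raw volume alone would point at the wrong host; counting *distinct* targets per source is what surfaces the reconnaissance, which is exactly the kind of shift in perspective the labs train.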

The text highlights a four-part learning cycle ending with “Engage” and “Compete.” How do these hands-on phases, like global Capture the Flag events, prepare ethical hackers for the pressure and unpredictability of defending against a live, automated adversary that adapts its behavior?

This cycle is what transforms a student from someone who knows security concepts into a true practitioner. The initial “Learn” and “Certify” stages build the foundation. But it’s in the “Engage” and “Compete” phases where that knowledge is forged into skill. The “Engage” phase puts them in hyper-realistic simulated networks where they have to neutralize breaches under pressure. It’s like a pilot in a flight simulator—the stakes feel real. But the “Compete” phase, with global Capture the Flag events, is the ultimate test. Here, you’re facing adversaries that are not only automated but are also learning and adapting to your defensive tactics in real time. The pressure is immense. The AI doesn’t get tired; it doesn’t follow a predictable script. This forces the human defender to become incredibly creative, to improvise solutions, and to make critical ethical decisions in seconds. It’s this crucible of competition that builds the mental resilience and sharp instincts needed for modern cyber defense.

According to the CEH Hall of Fame report, 80% of finalists now work in organizations using AI security tools. From your perspective, what are the primary responsibilities these professionals have in ensuring those powerful AI tools are used transparently and responsibly?

That 80% figure is incredibly telling. The primary responsibility for these professionals is to serve as the human conscience for the machine. First, they are the ultimate arbiters of truth. They cannot blindly trust the AI’s output; their job is to constantly question and validate its findings, ensuring data is interpreted accurately. Second, they are guardians of transparency. When an AI makes a security decision—like blocking a user’s access or flagging an employee’s activity—the certified professional must be able to explain the “why” behind that action in clear, justifiable terms. This is crucial for both legal and ethical reasons. Finally, they are responsible for governance, ensuring the AI is deployed in a way that respects privacy boundaries and avoids bias. They are the essential human bridge between the raw analytical power of the machine and the responsible, accountable oversight that organizations demand.

The content argues that CEH-certified professionals serve as a safeguard because their accountability remains personal, even with automation. How does the program instill this ethical framework, and what guidance does it provide for situations where an AI’s recommendation might conflict with privacy or legal standards?

The ethical framework is the absolute core of the CEH program; it’s not an afterthought. From the very beginning, the curriculum ingrains the principle that technology is a tool, but the human professional is always personally accountable for how it’s used. This is instilled through a relentless focus on a strict code of conduct, the importance of documenting every action, and respecting legal and privacy boundaries. The training labs are filled with scenarios presenting ethical dilemmas. For example, an AI might recommend monitoring an employee’s private communications after flagging a minor anomaly. The program teaches the professional to stop and ask critical questions: Does this recommendation align with our corporate policy? Does it violate privacy laws? Is there a less invasive way to verify the threat? The guidance is clear: The machine provides a data point, but the human makes the ethical judgment. This ensures that even as defense becomes more automated, the accountability for every decision rests squarely on the shoulders of a trained, ethically minded professional.

What is your forecast for the evolution of the “human algorithm” over the next five years, as both offensive and defensive AI become even more sophisticated?

Over the next five years, I believe the “human algorithm” will become even more critical, though the role will evolve significantly. The AI component will become vastly more autonomous, handling the vast majority of real-time threat detection and response at a scale we can barely imagine. This will free up the human expert to operate on a more strategic level. Their focus will shift from day-to-day incident response to three key areas: first, proactive threat hunting for novel, sophisticated attacks that current AI models haven’t been trained on; second, designing and supervising the AI defense systems themselves, essentially teaching the machines how to be better defenders; and third, and most importantly, serving as the ultimate ethical and governance backstop. As AI gets woven into the fabric of our critical infrastructure—from power grids to healthcare—the human ethical hacker will be the final decision-maker, the conscience in the machine, ensuring that these powerful systems operate securely, fairly, and in the best interest of society.
