Culture, Not Code, Is Your Best Cyber Defense

With cyber threats becoming more sophisticated and pervasive, organizations are facing a critical choice: double down on technology or invest in their people. Dominic Jainy, an IT professional with deep expertise in artificial intelligence and machine learning, argues that the most significant vulnerability often lies not in code, but in culture. He joins us to discuss why a “cutthroat” approach to security is failing and how a shift toward psychological safety and continuous learning is the only sustainable path forward.

Our conversation explores the dangerous ripple effects of a blame-driven security environment, especially as AI-powered attacks make it harder than ever to distinguish friend from foe. We delve into the surprising blind spots created by leadership overconfidence, the obsolescence of traditional “box-ticking” security training, and the tangible actions executives must take to model the behavior they expect. Finally, we look ahead to how organizational culture will become the primary defense in an increasingly complex digital world.

With cyber breaches surging, some workplaces adopt a “cutthroat” culture. How does this “name-and-blame” approach undermine resilience, and what are the first practical steps a leader can take to shift from punishment to understanding what actually went wrong? Please provide a step-by-step example.

The “name-and-blame” approach is actively dangerous because it builds a culture of fear, not resilience. In a region like Australia and New Zealand, where we’ve seen breaches skyrocket from 56% to 78% in just a year, the stakes are incredibly high. When an incident occurs, the pressure from regulators, customers, and financial fallout creates an urgency that can push leaders toward finding a scapegoat. The problem is that if an employee worries they’ll be publicly shamed or fired for a mistake, they simply won’t report it. That hesitation, that silence, is what allows a small misstep, like an accidental click, to escalate into a catastrophic, company-wide breach. Cybersecurity thrives in an open, supportive culture where people feel safe to raise their hand the moment something feels off.

The first step for a leader is to reframe an error as an intelligence-gathering opportunity. For example, say an employee in finance reports clicking on a link in a convincing but fraudulent invoice email. Step one is to immediately ensure the leader’s response is one of support, not accusation. Step two is to have a security team member engage in a blameless post-mortem with the employee, asking questions like, “What about this message felt so real? What was the call to action?” Step three is to use that information to fine-tune technical defenses and, critically, to anonymize the details of the attempt. Finally, step four is to share that anonymized scenario with the entire company as a real-world lesson. This transforms one person’s mistake into a shield for everyone else and reinforces that reporting is a valued, protective act.

Attackers now use AI to create incredibly polished and personalized fraudulent messages. How does this shift the conversation from employee carelessness to shared vulnerability, and what specific changes does this require in security awareness training? Please share some examples.

The rise of AI-powered phishing completely changes the game. It’s no longer accurate or fair to label employees as “careless.” Attackers are now leveraging AI to craft messages that are grammatically perfect, contextually relevant, and emotionally resonant. They can mimic the tone of a CEO or a trusted vendor with terrifying accuracy. This reality shifts the focus from individual fault to a state of shared vulnerability. The truth is, it’s becoming incredibly difficult for anyone—even seasoned IT professionals—to distinguish what’s real from what’s fake. Attackers know that manipulating a person’s trust or sense of urgency is often far more effective than trying to brute-force their way through a firewall.

This demands a fundamental evolution in our training. Instead of just showing examples of emails with bad grammar, we need to create more sophisticated simulations. For instance, training should include scenarios where an AI-generated voice message, seemingly from a senior executive, leaves an urgent voicemail requesting a fund transfer. Another example is a hyper-personalized email that references a recent project or a public LinkedIn post to build credibility. The training then needs to focus on teaching a new core reflex: verification. Rather than just spotting red flags, employees must be trained to pause and verify unusual requests through a separate, trusted channel, like calling the person on a known phone number or messaging them on an internal platform.

A worrying gap can exist where leaders believe their teams are “too savvy” to be fooled, yet many fall for phishing attempts themselves. How does this overconfidence create new risks, and how can organizations address this blind spot without shaming their staff?

This is a classic case of perception versus reality, and it’s one of the most significant risks a company can face. When leadership operates under the assumption that their people are “too savvy” to be tricked, they let their guard down. We see data showing that while many IT leaders are confident in their teams’ abilities, a notable number of those same leaders admit to having clicked on a suspicious link themselves. This overconfidence leads to underinvestment in continuous training and creates an environment where emerging threats are not taken seriously enough. It’s precisely in those moments of comfort and complacency that attackers find their perfect opening.

Addressing this requires a delicate touch that normalizes vulnerability for everyone, starting at the top. The key is to make security awareness a universal and equitable program. Run phishing simulations that target everyone, including the C-suite, and then share the anonymized, aggregate results. When the data shows that people at all levels are susceptible, it removes the shame and reframes the issue as a collective challenge. Leaders can also champion the conversation by openly sharing their own experiences, saying things like, “I received a very convincing message this week and had to double-check with my team before acting on it.” This models humility and reinforces that vigilance is a shared responsibility, not a test of individual intelligence.

Many companies still rely on annual compliance videos for security training. Why is this model now ineffective against modern threats, and what does a modern, continuous program look like day-to-day? Please describe a few key metrics for measuring its true impact on behavior.

The annual compliance video is a relic from a bygone era. Cyber threats evolve in weeks, not years, and what protected an organization last year is likely obsolete today. A single, one-off training session is often forgotten the moment the video player is closed. It’s a box-ticking exercise that provides a false sense of security while having almost no impact on day-to-day employee behavior. With attackers using AI to launch new, more sophisticated campaigns constantly, security awareness can’t be a once-a-year event. It simply doesn’t match the pace of the threat.

A modern program is continuous, relevant, and integrated into the daily workflow. This means instead of a one-hour video, employees receive bite-sized content—a two-minute micro-learning module one week, a simulated phishing test the next. The content reflects the actual threats the company is facing, making it immediately relevant. The goal is to build secure habits, not just impart information. To measure its impact, you have to go beyond completion rates. The most important metrics are behavioral. First, track the click-rate on phishing simulations to see if it decreases over time. More importantly, measure the report-rate: how many employees are actively reporting suspicious messages? A high report-rate is the sign of a healthy, engaged culture. Finally, track the time-to-report, as the speed with which a potential threat is flagged can make all the difference.
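The behavioral metrics described above can be computed directly from phishing-simulation logs. The following is a minimal illustrative sketch; the record shape and field names are assumptions for the example, not the schema of any particular security-awareness platform.

```python
# Illustrative sketch: computing click-rate, report-rate, and
# time-to-report from phishing-simulation results.
from dataclasses import dataclass
from statistics import median
from typing import Optional


@dataclass
class SimulationResult:
    employee_id: str
    clicked: bool                        # did the employee click the lure?
    reported: bool                       # did they flag the message?
    minutes_to_report: Optional[float]   # None if never reported


def campaign_metrics(results: list[SimulationResult]) -> dict:
    """Aggregate one campaign's results into the three behavioral metrics."""
    total = len(results)
    clicks = sum(r.clicked for r in results)
    reports = sum(r.reported for r in results)
    report_times = [
        r.minutes_to_report
        for r in results
        if r.reported and r.minutes_to_report is not None
    ]
    return {
        "click_rate": clicks / total,      # should trend down over time
        "report_rate": reports / total,    # higher = healthier culture
        "median_minutes_to_report": median(report_times) if report_times else None,
    }


# Hypothetical campaign data for four employees.
results = [
    SimulationResult("a", clicked=True,  reported=False, minutes_to_report=None),
    SimulationResult("b", clicked=False, reported=True,  minutes_to_report=12.0),
    SimulationResult("c", clicked=False, reported=True,  minutes_to_report=4.0),
    SimulationResult("d", clicked=True,  reported=True,  minutes_to_report=30.0),
]
print(campaign_metrics(results))
# click_rate 0.5, report_rate 0.75, median_minutes_to_report 12.0
```

Tracked campaign over campaign, a falling click-rate paired with a rising report-rate and a shrinking time-to-report is the signature of the engaged culture the answer describes.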

Cybersecurity is often described as a shared responsibility. Since attackers increasingly target senior leaders, what specific, visible actions must executives take to model good behavior, and how does this leadership commitment directly impact an employee’s willingness to report a potential mistake?

Shared responsibility begins at the top, and it must be demonstrated, not just declared. With attackers specifically targeting senior leaders because of their high-level access and authority, executive behavior becomes the cornerstone of the entire security culture. The most powerful action a leader can take is to visibly participate in the exact same security protocols and training as everyone else. This means being seen completing the bite-sized training modules and having their responses to phishing simulations included in the company-wide, anonymized data. When an executive receives a suspicious email, they should use it as a teaching moment, perhaps mentioning in a team meeting, “I flagged a strange request with our security team today—it’s a great reminder to stay vigilant.”

This level of visible commitment has a profound impact on an employee’s willingness to report a mistake. If an employee sees their CEO treating security with gravity and humility, it creates a powerful sense of psychological safety. It signals that the organization treats security as a collective effort, not a top-down mandate. When an employee eventually makes a mistake—which is inevitable—they are far more likely to report it immediately, without fear of reprisal, because they’ve seen their leaders model the very same cautious and responsible behavior. That speed is often the deciding factor between a minor issue and a major crisis.

What is your forecast for cybersecurity culture over the next five years, especially as technologies like AI continue to blur the lines between what’s real and what’s fake?

My forecast is that a strong, resilient cybersecurity culture will transition from being a “nice-to-have” to becoming the single most critical defense an organization possesses. Technology will always be essential, but as AI makes it nearly impossible to distinguish a legitimate request from a sophisticated fake, the final line of defense will be a well-trained, psychologically safe, and empowered human. The focus will shift dramatically from preventing every single click to fostering an environment where suspicious events are reported instantly and without fear. The organizations that thrive will be those that measure the strength of their culture as rigorously as they measure their firewall’s performance. They will see investment in continuous training and building psychological safety not as a cost center, but as a strategic imperative for survival. In five years, the best-defended companies won’t just have the strongest systems; they will have the strongest, most resilient cultures.
