With a tsunami of new software vulnerabilities on the horizon, the old ways of managing cybersecurity are becoming obsolete. We sat down with Dominic Jainy, an IT professional with deep expertise in leveraging technology for security, to unpack a recent forecast that has sent shockwaves through the industry. We explored the critical shift from reactive patching to strategic, forward-looking defense, discussing how security leaders can prepare their teams and budgets for a future where vulnerability numbers could triple. The conversation centered on moving beyond simple metrics like CVSS scores to a more nuanced, risk-based approach and highlighted the indispensable role of collaborative networks in surviving the coming storm.
With projections showing a potential 70,000 to 100,000 new CVEs by 2026, what specific operational changes must security teams make to their patching capacity and response processes? Please describe a few key steps they should take to avoid being overwhelmed.
The first step is a frank and honest assessment of your current state. You have to look at your people and processes right now and ask a tough question: can we handle the 50,000 CVEs expected this year, let alone a future with double that number? The feeling of drowning in alerts is already common; this forecast confirms it’s about to get much worse. Operationally, this means teams must stop the futile exercise of chasing every single vulnerability. They need to build a ruthless prioritization engine that focuses exclusively on the threats that pose a genuine, tangible risk to their specific environment and most critical data. It’s about shifting from a “patch everything” mentality to a “patch what matters most, first” strategy.
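To make that shift concrete, here is a minimal sketch of what such a prioritization filter might look like. The feed fields, inventory structure, and known-exploited list are illustrative assumptions, not any particular product's schema.

```python
# Minimal triage sketch: keep only CVEs that affect software we actually run
# and that carry a signal of real-world risk. All field names are illustrative.

def triage(cves, asset_inventory, known_exploited_ids):
    """Return the CVEs worth an analyst's time, most urgent first."""
    deployed = {asset["product"] for asset in asset_inventory}
    relevant = [
        cve for cve in cves
        if cve["product"] in deployed                    # we actually run it
        and (cve["id"] in known_exploited_ids            # exploitation observed
             or cve.get("internet_exposed", False))      # or reachable from outside
    ]
    # Issues already exploited in the wild sort ahead of merely exposed ones.
    return sorted(relevant, key=lambda cve: cve["id"] not in known_exploited_ids)

if __name__ == "__main__":
    feed = [
        {"id": "CVE-2026-0001", "product": "acme-db",  "internet_exposed": False},
        {"id": "CVE-2026-0002", "product": "acme-web", "internet_exposed": True},
        {"id": "CVE-2026-0003", "product": "old-tool", "internet_exposed": True},
    ]
    inventory = [{"product": "acme-db"}, {"product": "acme-web"}]
    exploited = {"CVE-2026-0001"}
    for cve in triage(feed, inventory, exploited):
        print(cve["id"])   # CVE-2026-0001, then CVE-2026-0002; old-tool is dropped
```

The point of the sketch is the order of operations: the bulk of the feed is discarded before a human ever looks at it, and what remains is ranked by evidence of real-world risk rather than raw severity.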
Many teams prioritize vulnerabilities based on CVSS scores. Given the expected volume, why is this reactive approach no longer sufficient, and what framework should they use to prioritize the vulnerabilities that pose the greatest actual risk to their specific data and systems?
Relying solely on CVSS scores is like trying to navigate a blizzard by looking only at the temperature. It gives you one piece of information, but it completely misses the context of your situation. A high-CVSS vulnerability in a non-critical, internal-facing system might be far less dangerous than a medium-rated one in your primary, customer-facing database. With the volume set to explode towards 100,000 vulnerabilities, this reactive approach guarantees you will misallocate your most precious resource: your team’s time. The framework must be risk-based. It requires an intimate understanding of your own asset inventory, data flows, and business processes. The central question must always be, “How does this specific vulnerability, on this specific asset, affect my organization’s ability to operate and protect its data?”
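As an illustration of the kind of scoring such a framework might use, the sketch below blends CVSS with asset criticality and exposure. The weights and parameters are assumptions chosen only to show how business context can outrank raw severity; they are not a published standard.

```python
# Illustrative risk scoring: CVSS is one input, weighted alongside business context.
# The weights and parameters below are assumptions, not a published standard.

def risk_score(cvss, asset_criticality, internet_exposed, handles_sensitive_data):
    """Blend technical severity (CVSS 0-10) with business context into a 0-100 score."""
    score = cvss * 4                    # severity contributes at most 40 points
    score += asset_criticality * 10     # criticality (1-5) contributes at most 50
    if internet_exposed:
        score += 5
    if handles_sensitive_data:
        score += 5
    return min(score, 100)

# A CVSS 9.8 flaw on a low-value internal test box...
print(risk_score(9.8, asset_criticality=1, internet_exposed=False,
                 handles_sensitive_data=False))   # 49.2
# ...ranks below a CVSS 6.5 flaw on the customer-facing database.
print(risk_score(6.5, asset_criticality=5, internet_exposed=True,
                 handles_sensitive_data=True))    # 86.0
```

Under these assumed weights, the medium-rated flaw on the customer-facing database outscores the critical-rated flaw on the internal test system, which is exactly the inversion a purely CVSS-driven queue would miss.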
The difference between preparing for 30,000 versus 100,000 vulnerabilities is described as strategic, not just operational. In practical terms, how does a CISO use these forecasts to plan their budget, team structure, and resource allocation over the next three years?
This is precisely where forecasting becomes a CISO’s best friend. It’s like a city planner seeing population growth projections and realizing they don’t just need more buses; they need a whole new subway system. You can’t just hire a few more analysts to handle a 200% increase in workload. Instead, a CISO can take these numbers—projecting a median of over 51,000 CVEs in 2027—to the board and make a data-driven case for fundamental change. This means justifying budgets for automation platforms, threat intelligence services, and advanced analytics. It means restructuring teams not around ticket queues, but around specific technology stacks or business units. It allows you to shift the conversation from “How do we patch faster?” to “How do we build a security program that is resilient by design for the threat landscape of 2028?”
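A rough, back-of-the-envelope calculation shows why “just hire a few more analysts” breaks down and why the budget case shifts toward automation. The triage-time and analyst-hour figures below are hypothetical placeholders, not benchmarks.

```python
# Back-of-the-envelope capacity planning with hypothetical effort figures:
# how many analysts does manual triage demand at each forecast volume?

TRIAGE_MINUTES_PER_CVE = 20       # assumed average manual triage effort
ANALYST_HOURS_PER_YEAR = 1_600    # assumed productive hours per analyst

def analysts_needed(cve_count, automation_rate=0.0):
    """Analysts required if automation screens out a share of CVEs up front."""
    manual_cves = cve_count * (1 - automation_rate)
    hours = manual_cves * TRIAGE_MINUTES_PER_CVE / 60
    return hours / ANALYST_HOURS_PER_YEAR

for volume in (30_000, 51_000, 100_000):
    print(f"{volume:>7,} CVEs: {analysts_needed(volume):4.1f} analysts fully manual, "
          f"{analysts_needed(volume, automation_rate=0.8):4.1f} with 80% automated triage")
```

Even with generous assumptions, the fully manual headcount roughly triples between the low and high forecasts, while a heavily automated pipeline keeps the human requirement within a realistic hiring plan; that is the argument a CISO can put in front of a board.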
Projections show a potential upper bound of nearly 193,000 vulnerabilities by 2028. How can organizations use these forecasts alongside their asset inventories to develop vendor- and product-specific contingency plans? Please walk me through what that preparation process looks like.
That 193,000 figure is terrifying, but it’s also a powerful catalyst for action. The process begins with your asset inventory—if you don’t know what you have, you can’t protect it. Once you have a clear picture of your hardware and software, you map that against these forecasts. You identify your most critical vendors and products and ask, “What happens if a core software vendor suddenly has a 300% increase in critical vulnerabilities?” This is where you develop contingency plans. It could mean pre-allocating emergency patching resources, identifying alternative vendors, or even architecting systems to reduce dependency on a single product. It’s a proactive “what if” exercise that moves you from being a victim of a vulnerability announcement to being prepared for its inevitability.
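One way to run that “what if” exercise is sketched below: group the asset inventory by vendor, apply a stress multiplier to last year’s vulnerability counts, and flag any vendor whose projected volume would outrun a given monthly patching capacity. The data, multiplier, and threshold are all hypothetical.

```python
# Hypothetical vendor stress test: which suppliers would outrun our patching
# capacity if their vulnerability volume tripled? Data and thresholds are illustrative.

from collections import Counter

def stress_test(asset_inventory, last_year_cves_by_vendor,
                multiplier=3.0, monthly_patch_capacity=50):
    """Flag vendors whose projected CVE volume exceeds what we can realistically patch."""
    assets_per_vendor = Counter(asset["vendor"] for asset in asset_inventory)
    at_risk = []
    for vendor, asset_count in assets_per_vendor.items():
        projected = last_year_cves_by_vendor.get(vendor, 0) * multiplier
        if projected / 12 > monthly_patch_capacity:
            at_risk.append((vendor, asset_count, round(projected)))
    return sorted(at_risk, key=lambda row: row[2], reverse=True)

inventory = [{"vendor": "VendorA"}] * 120 + [{"vendor": "VendorB"}] * 15
history = {"VendorA": 400, "VendorB": 90}
for vendor, assets, projected in stress_test(inventory, history):
    print(f"{vendor}: {assets} assets, ~{projected} projected CVEs -> needs a contingency plan")
```

Each flagged vendor then gets its own plan: pre-allocated emergency patching resources, an identified alternative supplier, or an architectural change that reduces the dependency.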
As the volume of threats grows, the need for collaboration increases. What does an effective, trusted network for sharing threat intelligence look like, and what are the first steps a company can take to build or join one before a major crisis hits?
An effective network is built on trust, not transactions. It’s a group of peers from different organizations who can share raw, real-time intelligence about what they are seeing without fear of judgment or exposure. It’s the difference between reading a public report about a breach and getting a direct message from a trusted contact saying, “We’re seeing this specific attack vector; check your systems for this indicator.” The first step to building this is to start small and be a giver. Join industry-specific groups, like those facilitated by organizations such as FIRST, and contribute what you can. Share anonymized insights, ask thoughtful questions, and build personal relationships. You have to build the well before you’re thirsty, because when a major crisis hits, it’s those pre-existing, trusted relationships that enable the rapid response needed to survive.
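For the “share anonymized insights” step, a minimal sketch of scrubbing an internal sighting before it leaves the organization might look like the following. The field names are hypothetical, and mature programs typically exchange structured formats such as STIX, but the principle is the same: share the signal, not your internals.

```python
# Minimal sketch of scrubbing an internal sighting before sharing it with a trusted
# peer group. Field names are hypothetical; real programs typically exchange structured
# formats such as STIX, but the principle -- share the signal, not your internals -- holds.

import hashlib

INTERNAL_FIELDS = {"hostname", "internal_ip", "business_unit", "owner_email"}

def anonymize(sighting, org_name, org_salt):
    """Drop internal context and replace the org identity with a salted hash."""
    shared = {k: v for k, v in sighting.items() if k not in INTERNAL_FIELDS}
    shared["reporter"] = hashlib.sha256((org_salt + org_name).encode()).hexdigest()[:12]
    return shared

sighting = {
    "indicator": "203.0.113.10",                     # documentation-range IP
    "technique": "credential stuffing against the VPN portal",
    "first_seen": "2026-03-14",
    "hostname": "vpn01.corp.internal",
    "internal_ip": "10.4.2.17",
    "business_unit": "finance",
    "owner_email": "analyst@example.com",
}
print(anonymize(sighting, org_name="acme-corp", org_salt="rotate-this-salt"))
```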
What is your forecast for vulnerability management over the next five years?
My forecast is one of radical change, driven by necessity. We will see the final death of manual, ticket-based vulnerability management. The sheer volume will force universal adoption of AI-driven prioritization and automation. Security teams will become more integrated with business operations, as risk will be defined by business impact, not a technical score. Finally, I believe we’ll see a surge in collaborative defense, where organizations understand that their individual security is intrinsically linked to the collective strength of their industry network. The isolationist approach to cybersecurity is a relic; the future is interconnected, automated, and relentlessly focused on risk.
