NIST Deprioritizes Pre-2018 CVEs Amid Backlog and New Threats


The US National Institute of Standards and Technology (NIST) recently made a significant decision affecting the cybersecurity landscape: all Common Vulnerabilities and Exposures (CVEs) published before January 1, 2018, are now marked “Deferred” in the National Vulnerability Database (NVD). The move affects over 20,000 entries, and potentially up to 100,000, signaling that these CVEs will no longer be prioritized for enrichment data updates unless they appear in the Cybersecurity and Infrastructure Security Agency’s (CISA) Known Exploited Vulnerabilities (KEV) catalog. NIST’s decision comes in response to a growing backlog in processing vulnerability data, exacerbated by a 32% surge in submissions over the past year.

An Overwhelming Backlog and Strategic Reprioritization

NIST’s challenges in processing and enriching the vast amount of incoming data have delayed its goal of clearing the backlog by the end of fiscal year 2024. In response, NIST is developing new systems to handle these issues more efficiently. Industry experts consider this move practical given the complexities of managing vulnerabilities at scale. Ken Dunham of Qualys describes it as an evolution in the face of changing cyber threats, while Jason Soroko of Sectigo interprets it as a strategic reprioritization, with resources redirected toward emerging threats on the assumption that legacy issues have already been mitigated through routine patch management.

The responsibility for managing deferred CVEs now shifts more heavily onto organizations. For security teams, this means identifying and monitoring legacy systems, prioritizing the patching of deferred vulnerabilities, and hardening or segmenting outdated infrastructure. Using real-time threat intelligence to detect attempts at exploiting these vulnerabilities becomes crucial. This shift highlights a broader trend: organizations must adopt proactive risk management strategies in the face of an increasing volume of CVEs and limited resources to handle them.
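The triage logic described above can be sketched in code. The following is a minimal, hypothetical example of how a team might bucket its CVE inventory: KEV-listed entries first, then pre-2018 deferred entries, then everything else. The `triage` function, its bucket names, and the inventory records are illustrative assumptions, not a real tool or API; in practice the inventory would come from a scanner export and the KEV ID set from CISA’s published catalog.

```python
from datetime import date

# Cutoff NIST uses for the "Deferred" status: CVEs published before 2018-01-01.
DEFERRED_CUTOFF = date(2018, 1, 1)

def triage(cve_records, kev_ids):
    """Sort CVE records into priority buckets.

    cve_records: iterable of (cve_id, published_date) tuples.
    kev_ids: set of CVE IDs that appear in CISA's KEV catalog.
    Returns a dict with keys 'exploited', 'deferred', and 'current'.
    """
    buckets = {"exploited": [], "deferred": [], "current": []}
    for cve_id, published in cve_records:
        if cve_id in kev_ids:
            # KEV-listed CVEs keep NVD enrichment and demand immediate patching.
            buckets["exploited"].append(cve_id)
        elif published < DEFERRED_CUTOFF:
            # Pre-2018 and not in KEV: no further NVD enrichment is expected,
            # so the organization must track and remediate these itself.
            buckets["deferred"].append(cve_id)
        else:
            buckets["current"].append(cve_id)
    return buckets

# Hypothetical inventory; CVE-2016-9999 is a made-up placeholder ID.
inventory = [
    ("CVE-2017-0144", date(2017, 3, 16)),   # EternalBlue: pre-2018 but in KEV
    ("CVE-2016-9999", date(2016, 6, 1)),    # placeholder pre-2018 CVE, not in KEV
    ("CVE-2021-44228", date(2021, 12, 10)), # Log4Shell: post-2018
]
kev = {"CVE-2017-0144", "CVE-2021-44228"}

result = triage(inventory, kev)
```

The key point the sketch illustrates is that the KEV check comes before the date check: a pre-2018 CVE listed in the KEV catalog is still prioritized, exactly as NIST’s policy specifies.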

Embracing Advanced Technology for Improved Efficiency

In addressing its backlog, NIST is also exploring the use of artificial intelligence (AI) and machine learning to streamline the processing of vulnerability data, reflecting a broader industry trend toward leveraging advanced technologies for vulnerability management. By incorporating AI and machine learning, NIST aims to ensure that both older and newer vulnerabilities receive appropriate attention within the constraints of available resources. This approach underscores the balance required between addressing legacy vulnerabilities and staying ahead of emerging threats. Organizations are encouraged to adopt similar strategies, using technology to extend the reach of their own security efforts. The shift not only addresses the immediate backlog but also sets the stage for more sustainable and scalable vulnerability management practices.

New Paradigm for Cybersecurity Management

NIST’s designation of pre-2018 CVEs as “Deferred” marks a shift in how vulnerability data is maintained at the national level. By limiting enrichment of older entries to those listed in CISA’s KEV catalog, NIST aims to work through its backlog more effectively and allocate resources where they matter most, ensuring that newer and more critical vulnerabilities receive the attention required to maintain robust cybersecurity measures. For organizations, the message is clear: the burden of tracking and remediating legacy vulnerabilities now rests squarely with them.
