Evaluating ChatGPT for Software Vulnerability Tasks: A Comparative Analysis

With a reported 1.7 trillion parameters, ChatGPT has emerged as a powerful language model. However, its applicability to code-oriented tasks, such as software vulnerability analysis and repair, remains relatively unexplored. In this article, we evaluate ChatGPT against code-specific models, examining its performance on four vulnerability tasks using the Big-Vul and CVEFixes datasets. This analysis sheds light on the limitations of using ChatGPT for software vulnerability tasks and underscores the need for domain-specific fine-tuning.

Evaluation of ChatGPT Against Code-Specific Models

To evaluate ChatGPT's performance, researchers conducted experiments using the Big-Vul and CVEFixes datasets. These datasets cover a range of vulnerability tasks, enabling a thorough comparison of ChatGPT against baseline methods. The evaluation focused on the F1-measure and top-10 accuracy metrics.

The evaluation revealed that ChatGPT achieved an F1-measure of 10% and 29% on the Big-Vul and CVEFixes datasets, respectively. These scores were substantially lower than those of the baseline methods. Similarly, ChatGPT's top-10 accuracy was 25% and 65%, again the lowest performance among the examined models.
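The article does not show the study's exact evaluation pipeline, but the two metrics it reports are standard. A minimal sketch of how they are commonly computed for line-level vulnerability localization (function names and the set-based framing here are illustrative assumptions, not the paper's code):

```python
def f1_score(predicted: set, actual: set) -> float:
    """Binary F1 over predicted vs. ground-truth vulnerable lines."""
    if not predicted or not actual:
        return 0.0
    tp = len(predicted & actual)          # true positives
    precision = tp / len(predicted)
    recall = tp / len(actual)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


def top_k_accuracy(ranked_lines: list, actual: set, k: int = 10) -> bool:
    """A prediction counts as correct if any truly vulnerable line
    appears among the model's top-k ranked candidate lines."""
    return any(line in actual for line in ranked_lines[:k])
```

For example, if a model flags lines {1, 2, 3} and the true vulnerable lines are {2, 3, 4}, precision and recall are both 2/3, giving an F1 of about 0.67; top-10 accuracy is then the fraction of functions for which `top_k_accuracy` returns True.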

Analysis of Multiclass Accuracy

In addition to F1-measure and top-10 accuracy, multiclass accuracy was considered as a crucial performance indicator. The analysis revealed that ChatGPT achieved the lowest multiclass accuracy, 13%, trailing the best baseline model by 45-52 percentage points. These outcomes underscore the challenges ChatGPT faces in classifying vulnerabilities across multiple classes.

Evaluation of Severity Estimation

Severity estimation is critical in vulnerability analysis for prioritizing remediation efforts. However, ChatGPT's performance in this regard proved unsatisfactory. The evaluation indicated that ChatGPT exhibited the highest mean squared error (MSE), 5.4 and 5.85 on the two datasets, implying less accurate severity estimation than the other baselines. This finding raises concerns about relying on ChatGPT for precise severity estimation in vulnerability assessment.
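MSE here measures the average squared gap between predicted and true severity scores (severity is typically expressed as a CVSS score on a 0-10 scale, though the article does not confirm the exact scale used). A minimal sketch, assuming paired lists of predictions and ground truth:

```python
def mean_squared_error(predicted: list, actual: list) -> float:
    """Average squared difference between predicted and true
    severity scores (e.g., CVSS on a 0-10 scale)."""
    assert len(predicted) == len(actual) and actual
    return sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)
```

For instance, predicting severities [7.0, 5.0] against true scores [9.0, 4.0] gives an MSE of (4 + 1) / 2 = 2.5; an MSE above 5, as reported for ChatGPT, corresponds to being off by more than two severity points on average.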

Assessment of Repair Patch Generation

One vital aspect of vulnerability repair is the generation of correct repair patches. ChatGPT failed to generate accurate repair patches in this evaluation. In contrast, the baseline models successfully repaired 7% to 30% of vulnerable functions. This stark contrast highlights the limitations of ChatGPT in generating effective repair solutions.
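Repair benchmarks of this kind usually score a model on "perfect predictions": a generated patch counts as correct only if it matches the ground-truth fix exactly, often after normalizing whitespace. The article does not detail the study's matching procedure, so the sketch below is an assumption about how such a percentage is typically computed:

```python
def normalize(code: str) -> str:
    # Collapse runs of whitespace so pure formatting differences
    # do not mask an otherwise identical patch.
    return " ".join(code.split())


def percent_correct(generated: list, references: list) -> float:
    """Share of generated patches (in %) that exactly match the
    ground-truth fix after whitespace normalization."""
    matches = sum(normalize(g) == normalize(r)
                  for g, r in zip(generated, references))
    return 100 * matches / len(references)
```

Under this metric, the baselines' 7%-30% figures mean that at best roughly one in three generated patches reproduced the developer's fix verbatim; semantically equivalent but differently written patches would not be counted.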

Limitations of Fine-Tuning

Fine-tuning is a commonly employed technique to optimize language models for specific tasks. In the case of ChatGPT, however, fine-tuning for vulnerability tasks is not viable because its parameters are proprietary and not publicly accessible. This constraint further underlines the challenges of adapting ChatGPT directly for software vulnerability tasks.

The Importance of Domain-specific Fine-tuning

The analysis of ChatGPT's performance on vulnerability tasks underscores the significance of domain-specific fine-tuning. The complexity and specificity of software vulnerability tasks demand models tailored to the domain, which suggests the need for further research on fine-tuning or otherwise adapting ChatGPT specifically for software vulnerability analysis and repair.

Comparison with Previous Studies

While previous studies have examined the effectiveness of large language models in automated program repair, they have not accounted for the latest versions of ChatGPT. This article bridges that gap by shedding light on the specific performance of ChatGPT in software vulnerability tasks. Additionally, the notable disparities in results indicate the necessity for dedicated exploration of ChatGPT’s potential in this domain.

In conclusion, the evaluation of ChatGPT for software vulnerability tasks reveals its limitations compared to code-specific models. Its lower F1-measure, top-10 accuracy, and multiclass accuracy, its inaccurate severity estimation, and its inability to generate correct repair patches all highlight the challenges ChatGPT faces in this context. The proprietary nature of its parameters further rules out fine-tuning for vulnerability tasks. This study therefore emphasizes the need for additional research into fine-tuning or tailoring ChatGPT specifically for software vulnerability analysis and repair. By addressing these challenges, ChatGPT could potentially be leveraged more effectively in securing software systems in the future.
