Evaluating ChatGPT for Software Vulnerability Tasks: A Comparative Analysis

With a reported 1.7 trillion parameters, ChatGPT has emerged as a powerful language model. However, its applicability to code-oriented tasks, such as software vulnerability analysis and repair, remains relatively unexplored. In this article, we evaluate ChatGPT against code-specific models, examining its performance on four vulnerability tasks using the Big-Vul and CVEFixes datasets. The analysis sheds light on the limitations of using ChatGPT for software vulnerability tasks and underscores the need for domain-specific fine-tuning.

Evaluation of ChatGPT Against Code-Specific Models

To evaluate ChatGPT's performance, security analysts conducted experiments on the Big-Vul and CVEFixes datasets. Together, these datasets cover a broad set of vulnerability tasks, enabling a thorough comparison of ChatGPT against baseline methods. The evaluation focused on the F1-measure and top-10 accuracy metrics.

The results revealed that ChatGPT achieved an F1-measure of 10% on Big-Vul and 29% on CVEFixes, significantly lower than the baseline methods. Similarly, ChatGPT's top-10 accuracy of 25% and 65% on the two datasets was the lowest among the examined models.
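As a rough illustration (not the study's actual evaluation code), the two metrics reported above can be computed as follows. The label values and candidate lists are hypothetical; the definitions themselves are the standard ones.

```python
# Hedged sketch of the two metrics: binary F1 for vulnerability detection,
# and top-k accuracy for ranked predictions. All sample data is illustrative.

def f1_score(y_true, y_pred, positive=1):
    """Binary F1: harmonic mean of precision and recall on the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def top_k_accuracy(ranked_candidates, targets, k=10):
    """Fraction of samples whose true answer appears in the top-k ranked outputs."""
    hits = sum(1 for cands, t in zip(ranked_candidates, targets) if t in cands[:k])
    return hits / len(targets)

# Illustrative usage: 1 = vulnerable, 0 = non-vulnerable.
print(f1_score([1, 1, 0, 0], [1, 0, 0, 1]))                              # 0.5
print(top_k_accuracy([["a", "b", "c"], ["x", "y"]], ["b", "z"], k=2))    # 0.5
```

Top-k accuracy is the more forgiving metric, since a model is credited whenever the correct answer appears anywhere among its k highest-ranked outputs; even under this relaxed criterion, ChatGPT trailed the code-specific baselines.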

Analysis of Multiclass Accuracy

In addition to F1-measure and top-10 accuracy, multiclass accuracy was considered as a further performance indicator. The analysis revealed that ChatGPT achieved the lowest multiclass accuracy of 13%, a striking 45%-52% gap from the best baseline model. These outcomes underscore the challenges ChatGPT faces in classifying vulnerabilities across multiple classes.
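Multiclass accuracy itself is the simplest of the reported metrics: the fraction of samples assigned exactly the right class. A minimal sketch, assuming (hypothetically) that the classes are CWE identifiers as commonly used with Big-Vul:

```python
# Hedged sketch: plain multiclass accuracy over illustrative CWE labels.
def multiclass_accuracy(y_true, y_pred):
    """Fraction of samples where the predicted class matches the true class exactly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Illustrative usage with hypothetical CWE-ID labels.
print(multiclass_accuracy(["CWE-79", "CWE-89", "CWE-20"],
                          ["CWE-79", "CWE-20", "CWE-20"]))  # 0.666...
```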

Evaluation of Severity Estimation

Severity estimation is critical in vulnerability analysis for prioritizing remediation efforts. However, ChatGPT's performance here proved unsatisfactory: it exhibited the highest mean squared error (MSE) of the models evaluated, at 5.4 and 5.85 on the two datasets, implying less accurate severity estimates than the baselines. This finding raises concerns about relying on ChatGPT for severity estimation in vulnerability assessment.
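To make the MSE figures concrete: severity is typically a numeric score (e.g. CVSS, on a 0-10 scale), and MSE averages the squared gap between predicted and actual scores. The scores below are illustrative, not from the study.

```python
# Hedged sketch of mean squared error over illustrative CVSS-style scores (0-10).
def mean_squared_error(actual, predicted):
    """Average squared difference between actual and predicted severity scores."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

# Illustrative usage: two vulnerabilities, off by 1.0 and 2.0 points.
print(mean_squared_error([7.5, 5.0], [6.5, 7.0]))  # 2.5
```

Because the error is squared, an MSE of 5.4-5.85 corresponds to predictions that are, on average, well over two severity points off, which is a substantial miss on a 0-10 scale.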

Assessment of Repair Patch Generation

One vital aspect of vulnerability repair is the generation of correct repair patches. In this evaluation, ChatGPT failed to generate accurate repair patches, whereas the baseline models successfully repaired 7% to 30% of the vulnerable functions. This stark contrast highlights the limitations of ChatGPT in generating effective repair solutions.
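Repair benchmarks of this kind commonly score a generated patch by exact match against the developer's ground-truth fix, usually after normalizing whitespace. The sketch below shows that style of check; it is an assumption about the evaluation protocol, not the study's actual scoring code.

```python
# Hedged sketch: whitespace-insensitive exact-match check of a generated patch
# against the ground-truth fix, a common (and strict) repair-correctness criterion.

def normalize(code):
    """Collapse all runs of whitespace so formatting differences don't count."""
    return " ".join(code.split())

def is_correct_repair(generated, ground_truth):
    """True if the generated patch matches the reference fix, ignoring formatting."""
    return normalize(generated) == normalize(ground_truth)

# Illustrative usage: same fix, different formatting.
print(is_correct_repair("if (x > 0) {\n    return;\n}",
                        "if (x > 0) { return; }"))  # True
```

Exact match is a strict criterion (a semantically correct but differently worded patch scores zero), which is worth keeping in mind when interpreting the 0% result.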

Limitations of Fine-Tuning

Fine-tuning is a common technique for adapting language models to specific tasks. In the case of ChatGPT, however, fine-tuning for vulnerability tasks is not viable because its parameters are proprietary and not publicly accessible. This constraint further underscores the challenges of adapting ChatGPT directly for software vulnerability tasks.

The Importance of Domain-Specific Fine-Tuning

The analysis of ChatGPT’s performance on vulnerability tasks underscores the importance of domain-specific fine-tuning. The complexity and specificity of software vulnerability work demand that general-purpose language models like ChatGPT be adapted to the domain, pointing to further research on fine-tuning or otherwise tailoring such models for vulnerability tasks.

Comparison with Previous Studies

While previous studies have examined the effectiveness of large language models in automated program repair, they did not account for the latest versions of ChatGPT. This article bridges that gap by reporting ChatGPT’s performance on software vulnerability tasks specifically. The notable disparities in results indicate the need for dedicated exploration of ChatGPT’s potential in this domain.

In conclusion, the evaluation of ChatGPT for software vulnerability tasks reveals its limitations relative to code-specific models. Its lower F1-measure, top-10 accuracy, and multiclass accuracy, its inaccurate severity estimates, and its inability to generate correct repair patches all highlight the challenges ChatGPT faces in this context. The proprietary nature of its parameters further restricts fine-tuning for vulnerability tasks. This study therefore emphasizes the need for further research to fine-tune or tailor ChatGPT specifically for software vulnerability analysis and repair. By addressing these challenges, ChatGPT could be leveraged more effectively to secure software systems in the future.
