With the increasing adoption of artificial intelligence (AI) in various fields, a recent survey of 800 DevOps and application security operations (SecOps) leaders sheds light on how generative AI is being used in these domains. The survey reveals that nearly all respondents (97%) are using generative AI, with security operations teams reporting greater time savings than their DevOps counterparts. This article explores the key findings of the survey, highlighting the benefits, concerns, and implications of the widespread adoption of generative AI in DevOps and SecOps.
Integration of Generative AI in DevOps and SecOps
The survey underlines the significant integration of generative AI in both DevOps and SecOps, with an overwhelming majority (97%) of leaders leveraging this technology. Notably, security operations teams reported higher time-saving benefits, with 57% of respondents indicating that generative AI saves them at least six hours per week. In contrast, only 31% of DevOps professionals reported the same level of benefit.
Top Benefits of Generative AI
The survey reveals distinct variations in the perceived benefits of generative AI between security operations teams and DevOps professionals. Security operations teams primarily cited increased productivity (21%) and faster issue identification and resolution (16%) as the top advantages, while DevOps professionals pointed to faster software development (16%) and more secure software (15%). These findings reflect the differing priorities of the two groups when it comes to applying generative AI.
Concerns about Errors and Vulnerabilities in Generative AI Code
Despite the evident benefits, concerns have been raised about the quality of code produced by generative AI platforms. Many widely used platforms are trained on large datasets that include code of varying quality, so the code they generate is likely to reproduce the same errors, including vulnerabilities, found in that training data. It is therefore crucial for IT organizations to invest time in tuning generative AI platforms to specific tasks to mitigate these risks.
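The kinds of flaws that can slip through are often mundane. The snippet below is a hypothetical illustration (the table, data, and function names are invented, not drawn from the survey) contrasting an injection-prone query of the sort that frequently appears in generated code with a parameterized alternative.

```python
import sqlite3

# Hypothetical example: the table and functions are illustrative only.

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Pattern often seen in generated code: interpolating user input into SQL,
    # which allows SQL injection (e.g. username = "x' OR '1'='1").
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver handles the value, closing the injection hole.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.execute("INSERT INTO users (username) VALUES ('alice')")
    # The unsafe version returns every row for a crafted input; the safe one returns none.
    print(find_user_unsafe(conn, "x' OR '1'='1"))
    print(find_user_safe(conn, "x' OR '1'='1"))
```

Nothing about this flaw is unique to generative AI, which is precisely the point: generated code needs the same scrutiny as code written by a junior developer.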
Pressure to Use Generative AI Despite Concerns
Interestingly, despite concerns over code quality, the survey found that a significant majority (75%) of respondents feel pressure to use generative AI. This pressure can be attributed to the perceived benefits of increased productivity and faster software development associated with generative AI. However, it is essential for organizations to navigate these pressures while ensuring robust security practices.
Implications for Code Quality and Discipline
The responsibility falls on engineers to enforce stringent code quality practices and to ensure that whatever runs in production meets those standards, regardless of where the code came from. As generative AI becomes more prevalent, maintaining that discipline becomes even more important in the face of potential vulnerabilities. Proper code reviews, regular testing, and adherence to secure coding practices are paramount for mitigating risk.
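One lightweight way to operationalize that discipline is an automated gate that flags obviously risky constructs before a human review. The sketch below is a hypothetical example (not something described in the survey) that uses Python's standard ast module to scan files for calls such as eval, exec, or os.system; the list of flagged calls is an assumption and would be tuned per team.

```python
import ast
import sys

# Hypothetical review gate: flag calls that usually deserve extra scrutiny
# in generated (or any) code before it reaches production.
FLAGGED_CALLS = {"eval", "exec", "os.system", "subprocess.call", "pickle.loads"}

def call_name(node: ast.Call) -> str:
    # Reconstruct a dotted name like "os.system" from the call node, if possible.
    func = node.func
    if isinstance(func, ast.Name):
        return func.id
    if isinstance(func, ast.Attribute) and isinstance(func.value, ast.Name):
        return f"{func.value.id}.{func.attr}"
    return ""

def scan(path: str) -> list[str]:
    with open(path, encoding="utf-8") as handle:
        tree = ast.parse(handle.read(), filename=path)
    findings = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and call_name(node) in FLAGGED_CALLS:
            findings.append(f"{path}:{node.lineno}: flagged call {call_name(node)}")
    return findings

if __name__ == "__main__":
    issues = [finding for arg in sys.argv[1:] for finding in scan(arg)]
    print("\n".join(issues) or "no flagged calls")
    sys.exit(1 if issues else 0)  # non-zero exit fails the CI step
```

A check like this does not replace review or testing, but wiring it into the pipeline makes the baseline expectations for generated code explicit and enforceable.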
Ownership of AI-Generated Code
Another area of concern identified in the survey is the ownership of AI-generated code. Forty percent of respondents said that developers or their organizations should own the copyright to AI-generated output. This complex issue will require legal frameworks and industry standards to address the unique challenges that generative AI raises.
The Irreversible Nature of Generative AI
Generative AI has gained momentum and is here to stay. It is only a matter of time before IT organizations shift their focus towards ensuring that generative AI platforms are finely tuned for specific tasks. While this evolution takes place, application security may face temporary setbacks, as over three-quarters of DevOps respondents expect an increase in vulnerabilities in open-source code due to generative AI. Strengthening code quality practices becomes crucial during this transitional phase.
The survey findings underscore the growing reliance on generative AI in both DevOps and SecOps. The benefits of increased productivity and faster software development are attractive but come with challenges pertaining to code quality and vulnerability management. It is imperative for organizations to adapt to this new reality by prioritizing code quality, embracing best practices, and investing in strategies to mitigate potential risks. Further research and collaboration between industry stakeholders will be vital in harnessing the full potential of generative AI while safeguarding application security.