It’s a bad day for bugs. Earlier today, Sentry announced its AI auto-remediation feature for debugging production code, and now, just hours later, GitHub is launching the first beta of its code-scanning auto-remediation feature to find and fix security vulnerabilities during the coding process. This new feature combines the real-time capabilities of GitHub’s Copilot with CodeQL, the company’s semantic code analysis engine. The company previewed this feature last November.
GitHub promises that this new system can remediate more than two-thirds of the vulnerabilities it finds — often without developers having to edit any code themselves. The company also promises that code scanning autofix will cover more than 90% of alert types in the languages it supports, which are currently JavaScript, TypeScript, Java, and Python.
This new feature is now available for all GitHub Advanced Security (GHAS) customers.
“Just like GitHub Copilot frees developers from tedious and repetitive tasks, code scanning autofix will help development teams recover the time they previously spent on remediation,” GitHub writes in today’s announcement. “Security teams will also benefit from a reduced volume of daily vulnerabilities so they can focus on strategies to protect the business while keeping up with an accelerating pace of growth.”
In the background, this new feature uses the CodeQL engine, GitHub’s semantic analysis engine for finding vulnerabilities in code, even before it’s executed. The company made the first generation of CodeQL available to the public in late 2019 after acquiring code analysis startup Semmle, where CodeQL was incubated. Over the years, GitHub made several improvements to CodeQL, but one thing that never changed was that CodeQL remained free only for open source researchers and developers.
Now, CodeQL is at the center of this new tool, though GitHub also notes that it uses “a combination of heuristics and GitHub Copilot APIs” to suggest its fixes. To produce those fixes and explanations, GitHub uses OpenAI’s GPT-4 model. And while GitHub is clearly confident enough to suggest that the vast majority of autofix suggestions will be correct, the company does caution that “a small percentage of suggested fixes will reflect a significant misunderstanding of the codebase or vulnerability.”
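To make the workflow concrete, here is a hypothetical sketch of the kind of flaw CodeQL’s analysis flags and the fix an autofix suggestion would typically propose. This is an illustrative example in Python (one of the supported languages), not GitHub’s actual code or query output; the function names and the in-memory SQLite table are invented for the demonstration.

```python
import sqlite3

# A tiny in-memory database standing in for real application state.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "user")])

def find_user_unsafe(name):
    # Vulnerable pattern: user input concatenated into the SQL string.
    # Static analyzers like CodeQL flag this as SQL injection.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # The typical suggested fix: a bound parameter keeps the input
    # out of the SQL grammar entirely.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # the injected predicate leaks every row
print(find_user_safe(payload))    # the same input matches nothing
```

The unsafe version lets the crafted input rewrite the query’s logic and dump both users’ roles, while the parameterized version treats it as a literal (nonexistent) name. Rewrites of exactly this shape, where the patched code is behaviorally equivalent for legitimate inputs, are what an automated fix can propose without a developer editing the code by hand.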