
Take a look at how Alpaca was trained on 52k instructions generated from GPT-3. https://replicate.com/blog/replicate-alpaca shows how to do this yourself fairly cheaply (~$50 of rented compute, if my math is correct).

You could do something similar by feeding combinations of CWE examples and real-world code to GPT, then using that output to train LLaMA to detect security issues. E.g., a prompt like:

CWE insecure temporary file:

Creating and using insecure temporary files can leave application and system data vulnerable to attack.

Example insecure code from CWE:

Example code in <$language>:

(GPT generates the completion here)
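The completion might look something like this, taking CWE-377 (insecure temporary file) and Python as the example (a hand-written sketch of the kind of output you'd want, not actual GPT output):

```python
import os
import tempfile

# INSECURE: tempfile.mktemp() only returns a name; it does not create the
# file. Between the name being chosen and open() running, an attacker can
# create or symlink that path and hijack whatever the application writes.
path = tempfile.mktemp()
with open(path, "w") as f:
    f.write("secret data")

# SAFER: mkstemp() atomically creates the file with owner-only permissions
# and returns an open file descriptor, closing the race window.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("secret data")
```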

Then use the generated data to train your model, which can then easily answer “what problems does this code have?”-type questions.
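The training step could be a cheap LoRA fine-tune in the style of the Alpaca recipe. A minimal sketch, assuming a Hugging Face LLaMA checkpoint you have access to and a `cwe_train.jsonl` file of prompt/completion pairs (both names are placeholders):

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "huggyllama/llama-7b"  # placeholder; any HF causal-LM checkpoint works
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA ships without a pad token
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# LoRA trains small adapter matrices instead of all the base weights,
# which is what keeps the compute bill in Alpaca territory.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

def tokenize(example):
    # Each record is one generated prompt plus GPT's completion.
    return tokenizer(example["prompt"] + example["completion"],
                     truncation=True, max_length=1024)

data = (load_dataset("json", data_files="cwe_train.jsonl")["train"]
        .map(tokenize, remove_columns=["prompt", "completion"]))

Trainer(
    model=model,
    train_dataset=data,
    args=TrainingArguments(output_dir="llama-cwe", num_train_epochs=3,
                           per_device_train_batch_size=4, learning_rate=2e-4),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```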

The hardest part of this is going from some database of examples to clean, working examples that are realistic. But I’d expect that using AI to help with the cleaning part might speed that process up too (“given the following CWE format, generate a prompt that explains the CWE and creates more examples based on the existing example”). It’s turtles all the way down.
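A sketch of that generation loop, calling the OpenAI chat completions endpoint directly; the prompt template and the `cwe_db.jsonl` input file are made up for illustration, and the output feeds the fine-tune above:

```python
import json
import os

import requests

PROMPT = """CWE insecure temporary file:

{description}

Example insecure code from CWE:

{cwe_example}

Rewrite this as a realistic, self-contained example in {language}:"""

def generate(prompt):
    # Standard chat completions call; swap in whatever model you prefer.
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": "gpt-3.5-turbo",
              "messages": [{"role": "user", "content": prompt}]},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# cwe_db.jsonl is a hypothetical dump of CWE entries, one JSON record per line.
with open("cwe_db.jsonl") as src, open("cwe_train.jsonl", "w") as out:
    for line in src:
        entry = json.loads(line)
        prompt = PROMPT.format(description=entry["description"],
                               cwe_example=entry["example"],
                               language="Python")
        out.write(json.dumps({"prompt": prompt,
                              "completion": generate(prompt)}) + "\n")
```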

I’d be surprised if you couldn’t do this for <$1k in compute plus dev time.

Next, couple it with examples of CVEs and their fixes in the wild, and you’ve got training data for models that can detect AND fix the problems.
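Those pairs could be rendered into the same prompt/completion shape, pairing pre-patch code from a CVE's fix commit with the patched version. A hypothetical record (the CVE ID, field names, and snippets are placeholders):

```python
import json

# Made-up record format pairing vulnerable code with its published fix.
record = {
    "cve": "CVE-XXXX-YYYY",  # placeholder ID
    "cwe": "CWE-377",
    "prompt": "What problems does this code have, and how would you fix it?\n\n"
              "path = tempfile.mktemp()\nopen(path, 'w').write(data)",
    "completion": "Insecure temporary file (CWE-377): mktemp() leaves a race "
                  "window between naming and creation. Fix:\n\n"
                  "fd, path = tempfile.mkstemp()\n"
                  "with os.fdopen(fd, 'w') as f:\n    f.write(data)",
}
print(json.dumps(record, indent=2))
```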

Next, “from the following CWE description, write a detection rule for $codeScannerSoftware that detects the problem in $language/$framework” to move the AI out of the detection phase.
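Instantiated for Semgrep and the insecure-temp-file CWE, the prompt and the kind of rule you'd hope the model emits might look like this (the rule is hand-written here to show the target shape, not verified model output):

```python
RULE_PROMPT = (
    "From the following CWE description, write a detection rule for "
    "Semgrep that detects the problem in Python:\n\n{cwe_description}"
)

# Example of the hoped-for output: a Semgrep rule, held as a string.
EXAMPLE_RULE = """
rules:
  - id: insecure-temp-file
    pattern: tempfile.mktemp(...)
    message: tempfile.mktemp() is racy; use tempfile.mkstemp() instead (CWE-377)
    languages: [python]
    severity: WARNING
"""
```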

Next, a $xxM Series A at a $xB valuation.


If you prompt ChatGPT to look for vulnerabilities, it finds these issues except the gorilla one (presumably because ChatGPT doesn't have web access, so it's relying on its training data). I think the takeaway is less "don't use GPT in your security product" and more "make sure your security product is using a self-trained model or has the right prompts/plugins available for the use case".
