When people think about cybersecurity these days, two things often come to mind: news of someone being hacked and debates about whether generated code is good or bad for our security posture. Here are just a few examples:
Indeed, generated code may contain vulnerabilities that can slip through the review process:
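To make that concrete, here is a purely illustrative, hypothetical snippet (not taken from any of the reports above): code like this reads naturally and can easily pass a quick review, yet it is injectable.

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Looks harmless and often survives review, but the f-string makes it injectable:
    # username = "x' OR '1'='1" returns every row in the table.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The fix a reviewer (or a tool) should insist on: parameterized queries.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, email TEXT, username TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'a@example.com', 'alice')")
    print(find_user(conn, "x' OR '1'='1"))       # leaks every row
    print(find_user_safe(conn, "x' OR '1'='1"))  # returns nothing
```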
Even software engineers have fallen victim to malicious VSCode extensions. As the research work states:
Could new tools help improve the situation? If so, what kind of tools?
Let's break down the problems to establish the context in which such tools would operate. Some of them have existed for a long time, while others have cropped up more recently:
At the same time, I think AI both accelerates and aggravates the problems mentioned above, due to:
I think new tools would be very helpful, if not necessary, for addressing these problems and driving improvement.
Considering all the above, let's establish a set of essential characteristics that new tools must have:
Just a few examples of such tools that have already started to emerge:
Frankly, I have no idea what these tools will look like. But I really hope we end up with a wide range of tools offering this kind of functionality.
My idea is simple: code generated by programs or in collaboration with programs should be tested, hacked, and fixed by other programs.
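As a minimal sketch of what that loop could look like, here is a hypothetical generate → attack → patch cycle. Every function name here (generate_candidate, find_weaknesses, propose_patch) is an assumption standing in for whatever model, scanner, fuzzer, or agent actually fills that role; only the orchestration structure is the point.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    description: str
    severity: str

def generate_candidate(spec: str) -> str:
    """Hypothetical: ask a code-generation model for an implementation of `spec`."""
    return "def handler(user_input):\n    return eval(user_input)\n"  # deliberately unsafe stub

def find_weaknesses(code: str) -> list[Finding]:
    """Hypothetical adversarial step: static analysis, fuzzing, or an attacking agent."""
    findings = []
    if "eval(" in code and "ast.literal_eval(" not in code:
        findings.append(Finding("eval() on untrusted input allows code execution", "high"))
    return findings

def propose_patch(code: str, findings: list[Finding]) -> str:
    """Hypothetical fixer: another program (or model) rewrites the code to address findings."""
    return code.replace("eval(user_input)", "ast.literal_eval(user_input)")

def generate_hack_fix(spec: str, max_rounds: int = 3) -> str:
    """Orchestrate the loop: generate, let another program attack, then patch, repeat."""
    code = generate_candidate(spec)
    for _ in range(max_rounds):
        findings = find_weaknesses(code)
        if not findings:
            break
        code = propose_patch(code, findings)
    return code

if __name__ == "__main__":
    print(generate_hack_fix("parse a user-supplied expression"))
```

In a real pipeline, the attacking and fixing steps would of course be separate tools or agents rather than string checks; the sketch only shows where they would plug in.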
Why? Well, because the problem is escalating quickly (see the data for 2022, 2023, and up to May 2024).
It seems like we're reaching a point where the situation should start to change, and I am excited to see a world where code generated by programs gets hacked and patched by other programs.
Thanks for your attention!
👋
P.S. If you enjoyed this post, please consider connecting with me on X or LinkedIn.