
Do Users Write More Insecure Code with AI Assistants?

Neil Perry, Megha Srivastava, Deepak Kumar, Dan Boneh — Stanford, CCS 2023 — yes, and those users believe the opposite. The illusion of security is the danger.

Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh, at Stanford, ran the first large-scale controlled study of security outcomes when programmers use AI coding assistants. Forty-seven participants completed five security-relevant programming tasks across Python, C, and JavaScript, randomly assigned either access to an AI assistant (OpenAI codex-davinci-002) or to a control group without one. The result: participants with AI access wrote significantly less secure code on the majority of tasks. Worse, they were also more likely than the control group to believe their code was secure. Participants who engaged more critically with their prompts — rephrasing, adjusting parameters, questioning the suggestions — produced more secure code than those who accepted suggestions verbatim. Published at CCS '23, the flagship academic security conference, this is the first peer-reviewed evidence that AI coding tools degrade the global property humans most need to preserve under change: security.
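The security gap the study measured is concrete, not abstract: its tasks targeted well-known vulnerability classes such as injection and misuse of cryptography. As an illustration (not code from the paper, and using Python's stdlib `sqlite3` rather than the study's JavaScript task), here is the kind of one-line difference that separates a graded-secure from a graded-insecure solution:

```python
import sqlite3

def add_user_unsafe(conn, name):
    # Insecure: string interpolation lets input become part of the SQL itself.
    conn.execute(f"INSERT INTO users (name) VALUES ('{name}')")

def add_user_safe(conn, name):
    # Secure: a parameterized query makes the driver treat name strictly as data.
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
add_user_safe(conn, "O'Brien")  # stored verbatim, quote and all
```

Both functions pass a casual "does it insert a row" test, which is exactly the paper's point: the AI-assisted group's code usually *worked*, and that functional success fed the misplaced confidence the authors document.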

Perry, N., Srivastava, M., Kumar, D., & Boneh, D. (2023). Do Users Write More Insecure Code with AI Assistants? In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security (CCS '23), 2785–2799.