The US Intelligence Advanced Research Projects Activity (IARPA) has issued a request for information (RFI) to identify potential threats and vulnerabilities that large language models (LLMs) may pose.
“IARPA is seeking information on established characterizations of vulnerabilities and threats that could impact the safe use of large language models (LLMs) by intelligence analysts,” the RFI states.
While not an official research program yet, IARPA’s “Characterizing Large Language Model Biases, Threats and Vulnerabilities” RFI aims “to elicit frameworks to categorize and characterize vulnerabilities and threats associated with LLM technologies, specifically in the context of their potential use in intelligence analysis.”
Many vulnerabilities and potential threats are already known.
For example, you can ask ChatGPT to summarize or make inferences about nearly any topic, and it will draw on its training data to produce an explanation that sounds convincing.
However, those explanations can also be completely false.
As OpenAI describes it, “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”
But the risks posed by LLMs go well beyond nonsensical explanations, and the research funding arm for US spy agencies is looking to identify threats and vulnerabilities that might not have been fully covered in the OWASP Foundation’s recently published “Top 10 for LLM.”
“Has your organization identified specific LLM threats and vulnerabilities that are not well characterized by prior taxonomies (c.f., “OWASP Top 10 for LLM”)? If so, please provide specific descriptions of each such threat and/or vulnerability and its impacts”
Last week, UC Berkeley professor Dr. Stuart Russell warned the Senate Judiciary Committee about a few of the risks in the OWASP top 10 list, including Sensitive Information Disclosure, Overreliance, and Model Theft.
For example, Russell noted that you could be giving up sensitive information just through the types of questions you ask, and that the chatbot could then spit back sensitive or proprietary information belonging to a competitor.
“If you’re in a company […] and you want the system to help you with some internal operation, you’re going to be divulging company proprietary information to the chatbot to get it to give you the answers you want,” Russell testified.
“If that information is then available to your competitors by simply asking ChatGPT what’s going on over in that company, this would be terrible,” he added.
If we apply what Russell said about divulging company information to the divulging of US intelligence information, we can better understand why IARPA is putting out its current RFI.
But there could also be potential threats and vulnerabilities that are as yet unknown.
As former US Secretary of Defense Donald Rumsfeld famously quipped, “There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.”
So, for the current RFI, IARPA is asking organizations to answer the following questions:
Has your organization identified specific LLM threats and vulnerabilities that are not well characterized by prior taxonomies (c.f., “OWASP Top 10 for LLM”)? If so, please provide specific descriptions of each such threat and/or vulnerability and its impacts.
Does your organization have a framework for classifying and understanding the range of LLM threats and/or vulnerabilities? If so, please describe this framework and briefly articulate the risks associated with each threat and/or vulnerability.
Does your organization have any novel methods to detect or mitigate threats to users posed by LLM vulnerabilities?
Does your organization have novel methods to quantify confidence in LLM outputs?
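The RFI does not prescribe any method for that last question, but one common heuristic in the research community is self-consistency: sample the model several times on the same prompt and treat the level of agreement among the answers as a rough confidence score. A minimal sketch of that idea, using hypothetical sampled outputs in place of real LLM calls:

```python
from collections import Counter

def self_consistency_confidence(answers):
    """Estimate confidence as the fraction of sampled answers that
    agree with the most common (majority) answer."""
    if not answers:
        raise ValueError("need at least one sampled answer")
    counts = Counter(answers)
    top_answer, top_count = counts.most_common(1)[0]
    return top_answer, top_count / len(answers)

# Hypothetical samples from repeated queries to an LLM at nonzero temperature.
samples = ["Paris", "Paris", "Lyon", "Paris", "Paris"]
answer, confidence = self_consistency_confidence(samples)
print(answer, confidence)  # Paris 0.8
```

This is only a sketch: real confidence estimation would also need to normalize semantically equivalent answers (e.g., "Paris" vs. "Paris, France") before counting agreement.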
The primary point of contact for the RFI is Dr. Timothy McKinnon, who also manages two other IARPA research programs: HIATUS and BETTER.
HIATUS [Human Interpretable Attribution of Text Using Underlying Structure]: seeks to develop novel human-useable AI systems for attributing authorship and protecting author privacy through identification and leveraging of explainable linguistic fingerprints.
BETTER [Better Extraction from Text Towards Enhanced Retrieval]: aims to develop a capability to provide personalized information extraction from text to an individual analyst across multiple languages and topics.
Last year, IARPA announced it was putting together its Rapid Explanation, Analysis and Sourcing ONline (REASON) program “to develop novel systems that automatically generate comments enabling intelligence analysts to substantially improve the evidence and reasoning in their analytic reports.”
Additionally, “REASON is not designed to replace analysts, write complete reports, or to increase their workload. The technology will work within the analyst’s current workflow.
“It will function in the same manner as an automated grammar checker but with a focus on evidence and reasoning.”
So, in December, IARPA wanted to leverage generative AI to help analysts write intelligence reports, and now in August, the US spy agencies’ research funding arm is looking to see what risks large language models may pose.
This article was originally published by Tim Hinchliffe on The Sociable.