How Bots Can Help DARPA's Confidence Problem
The Pentagon R&D unit wants to automate the process of assessing scientific studies.
The Pentagon wants to make sure its human employees don’t get derailed by faulty scientific studies, so it’s looking to bots for help.
Automated systems, or even partially automated ones, could help assign “confidence levels” to certain studies so nonexperts know how much stock to put in their findings, according to a new request for information from the Defense Department’s R&D unit.
The system might digest “claims, hypotheses, conclusions, models, and/or theories,” specifically within behavioral and social science—potentially adding to the Pentagon’s understanding of topics like “deterrence, stability, trust and influence, and extremism.”
The Defense Advanced Research Projects Agency is increasingly interested in the social and behavioral sciences and their implications for national security, according to the RFI. The request follows another from last year that sought a “social supercollider,” a virtual system that could demonstrate whether scientific research methods actually simulate the real world or whether a study’s parameters are baseless. That project’s goal is to help scientists understand whether their research models accurately reflect human behavior, “with precision and certainty almost never available in the ‘real world.’”
The Confidence Level assessment system might assign a low value to an early-stage, exploratory study, the RFI said. A study with “questionable research practices” might also get a low score.
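The RFI does not spell out how such a system would compute its scores, but the idea can be illustrated with a minimal, hypothetical sketch: a rule-based scorer that starts from full confidence and deducts points for signals like exploratory status or flagged research practices. The fields and weights below are assumptions for illustration only, not anything DARPA has specified.

```python
from dataclasses import dataclass

@dataclass
class Study:
    """Hypothetical metadata a screening tool might extract from a paper."""
    title: str
    is_exploratory: bool          # early-stage, hypothesis-generating work
    questionable_practices: bool  # e.g., flagged selective reporting
    preregistered: bool
    sample_size: int

def confidence_level(study: Study) -> float:
    """Return a rough 0-1 confidence score from simple, assumed heuristics."""
    score = 1.0
    if study.is_exploratory:
        score -= 0.4   # exploratory findings carry less weight
    if study.questionable_practices:
        score -= 0.4   # methodological red flags
    if not study.preregistered:
        score -= 0.1
    if study.sample_size < 100:
        score -= 0.1
    return max(score, 0.0)

# Example: an early-stage study with flagged practices scores near zero.
pilot = Study("Pilot study on framing effects", True, True, False, 40)
print(confidence_level(pilot))  # 0.0
```

A real system would presumably rely on natural-language analysis of the papers themselves rather than hand-entered flags, but the output would serve the same purpose: a single number a nonexpert can read at a glance.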
Today, humans establish general confidence levels by manually examining scientific studies. But the peer-review process can be slow, the RFI said, and publications may take too long to issue corrections or publish new findings, leaving readers without a full understanding of a study’s limitations.
An automated system might help researchers understand how much weight to give certain studies about the influence of cognitive bias on decision-making, or how effective certain interventions are at improving public health, the RFI said.