NIST Teams Up with IBM’s Watson to Rate How Dangerous Computer Bugs Are
The artificial intelligence program will replace tedious work done by human analysts.
The government’s cyber standards agency wants to start using artificial intelligence to gauge just how dangerous publicly reported computer bugs are, a top official said Friday.
The AI system, which will replace the work of numerous human analysts, should be assigning risk scores to most publicly reported computer bugs by October 2019, Matthew Scholl, chief of the National Institute of Standards and Technology’s computer security division, said.
Right now, human analysts at NIST work laboriously through thousands of computer vulnerabilities each week and assign each one a severity score.
Vulnerabilities that hackers can exploit remotely, for example, will be scored higher than ones that require the hacker to have physical access to a laptop, phone or other internet-connected device.
Companies use those scores, known as Common Vulnerability Scoring System, or CVSS, scores, to determine which bugs they should patch immediately and which ones can wait awhile.
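For context, the CVSS v3.1 base-score formula is published by FIRST. The minimal Python sketch below implements the scope-unchanged case and shows how the attack-vector weight alone pushes a remotely exploitable bug well above an otherwise identical one that needs physical access. (The roundup helper is a simplification of the spec's more careful rounding rule.)

```python
import math

# CVSS v3.1 base-metric weights for the scope-unchanged case, from the public FIRST spec.
ATTACK_VECTOR = {"network": 0.85, "adjacent": 0.62, "local": 0.55, "physical": 0.20}
ATTACK_COMPLEXITY = {"low": 0.77, "high": 0.44}
PRIVILEGES_REQUIRED = {"none": 0.85, "low": 0.62, "high": 0.27}
USER_INTERACTION = {"none": 0.85, "required": 0.62}
IMPACT = {"high": 0.56, "low": 0.22, "none": 0.0}

def roundup(x: float) -> float:
    """CVSS 'round up to one decimal place' rule (simplified)."""
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, conf, integ, avail):
    """Compute a CVSS v3.1 base score (scope-unchanged case only)."""
    iss = 1 - (1 - IMPACT[conf]) * (1 - IMPACT[integ]) * (1 - IMPACT[avail])
    impact = 6.42 * iss
    exploitability = 8.22 * (ATTACK_VECTOR[av] * ATTACK_COMPLEXITY[ac]
                             * PRIVILEGES_REQUIRED[pr] * USER_INTERACTION[ui])
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# Identical impact metrics, but remote exploitation scores far higher than physical access:
print(base_score("network", "low", "none", "none", "high", "high", "high"))   # 9.8
print(base_score("physical", "low", "none", "none", "high", "high", "high"))  # 6.8
```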
That scoring process worked well when companies and ethical hackers were only reporting a couple hundred vulnerabilities each week. The number of vulnerabilities reported to the Common Vulnerabilities and Exposures, or CVE, database has ballooned in recent years, however, to several thousand each week.
That’s putting an extra burden on NIST analysts who spend 5 to 10 minutes scoring simple vulnerabilities and far longer on complex or novel ones, Scholl told reporters after a NIST advisory board meeting.
The number of weekly vulnerabilities is likely to grow even larger in coming years as more devices, such as cars, radios, thermostats and even vacuums, connect to the internet.
Earlier this year, NIST launched a pilot program using IBM’s Watson artificial intelligence system to pore through hundreds of thousands of historical CVSS scores from the institute’s human analysts, Scholl said.
Watson then used that data to assign scores to new vulnerabilities.
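NIST has not described Watson's internals, so the following is only a rough sketch of the general technique the pilot describes: fitting a model to a corpus of historical, human-scored vulnerability descriptions and using it to predict scores for new reports. The scikit-learn pipeline, the sample descriptions and the scores below are all invented for illustration.

```python
# Illustrative only: NIST has not published how Watson models CVSS scoring.
# This sketches the general approach described above -- learning to predict
# severity scores from historical, human-scored vulnerability descriptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical training corpus (real training data would be hundreds of
# thousands of historical CVE descriptions and their human-assigned scores).
historical_descriptions = [
    "Remote code execution via crafted HTTP request, no authentication required",
    "Local privilege escalation requiring physical access to the device",
    "SQL injection in login form allows remote attackers to read the database",
    "Denial of service when parsing a malformed configuration file locally",
]
historical_scores = [9.8, 6.8, 9.1, 5.5]  # human-assigned CVSS base scores

# Turn free-text descriptions into features, then regress onto the scores.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge())
model.fit(historical_descriptions, historical_scores)

# Score a newly reported vulnerability from its description alone.
new_cve = ["Unauthenticated remote attacker can execute arbitrary code"]
print(model.predict(new_cve))
```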
“We started it just to get familiar with AI, so we could get our hands on it, learn about it, kind of put it in a lab and experiment,” Scholl said. “As we were doing it with this dataset we said: ‘Hey, this seems to be putting out results the same as our analysts are putting out.’”
That success comes with one caveat, Scholl said.
The Watson system is great at assigning scores for vulnerabilities where there’s a long paper trail of human-assigned scores for highly similar vulnerabilities. In those cases, the Watson score will be within the small range of variance between what two different human analysts would assign, say 7.2 versus 7.3 on a 10-point scale, Scholl said.
When a vulnerability is complex or highly novel, like the Spectre vulnerability discovered in 2017, Watson fares far worse, Scholl said. In those cases, a human analyst will take over.
The Watson system reports a confidence percentage for each CVSS score, and if that percentage is below the high 90s, a human analyst will review and edit the results, Scholl said.
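As a sketch of that human-in-the-loop gate, the snippet below routes any model output whose confidence falls under a high-90s cutoff to an analyst queue. The 0.97 threshold, the ScoredVuln structure and the CVE identifiers are assumptions for illustration, not NIST's actual pipeline.

```python
# Hypothetical triage loop for the human-in-the-loop threshold described above.
from dataclasses import dataclass

CONFIDENCE_CUTOFF = 0.97  # "high 90s"; the exact NIST threshold is not published

@dataclass
class ScoredVuln:
    cve_id: str
    predicted_score: float  # model's CVSS base score, 0.0-10.0
    confidence: float       # model's self-reported confidence, 0.0-1.0

def triage(vulns: list[ScoredVuln]) -> tuple[list[ScoredVuln], list[ScoredVuln]]:
    """Split model output into auto-publishable scores and a human-review queue."""
    auto, review = [], []
    for v in vulns:
        (auto if v.confidence >= CONFIDENCE_CUTOFF else review).append(v)
    return auto, review

auto, review = triage([
    ScoredVuln("CVE-2018-0001", 7.2, 0.99),  # routine bug: publish as-is
    ScoredVuln("CVE-2018-0002", 8.1, 0.71),  # novel bug: send to an analyst
])
print([v.cve_id for v in auto], [v.cve_id for v in review])
```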
Right now, the Watson system is only being used as an in-house experiment. NIST's goal is to have it producing most public CVSS scores by October 2019.
Before the Watson scoring system goes live, the NIST chief information officer needs to ensure the program is securely integrated with other NIST systems and is able to consistently handle the workload, Scholl said.
Scholl’s division is also looking for other areas of NIST that might be interested in using Watson technology so the institute can save money on licenses, he said.
The U.S. government has funded the CVE database since its inception in 1999 and manages it through a master contract with the federally funded research organization MITRE. Numerous organizations, however, now have independent authority to list new vulnerabilities in the database.
House Energy and Commerce Committee leaders complained in a recent letter to Homeland Security Department officials that the CVE program is unwieldy, inadequately funded and in need of more oversight.
The letter came after reports that security researchers were waiting weeks or even months for vulnerabilities they found to be entered in the database, giving nefarious hackers more time to exploit those vulnerabilities to compromise computers and steal data.