When Words Matter
Presented by Accenture Federal Services
How ‘transfer learning’ can supercharge natural language processing in government
Words are at the heart of the government enterprise – spoken words, computer-based communications, rules and regulations, intelligence reports – and natural language processing (NLP) helps turn those words into actionable intelligence.
With deep learning architectures, NLP is making it easier to interact with computers, making applications more intuitive, and delivering critical intelligence faster. Today’s NLP can handle more text, and more diverse verbal expressions, more accurately than ever before.
But there’s a hitch: Language is incredibly complex, with nuances like slang and context-dependent meaning that make it hard to pin down. In some cases, industry leaders overcome these challenges through a brute-force combination of massive datasets and incredibly powerful cloud computing.
This means that training AI models for NLP can be both expensive and time-consuming, putting it out of reach for many federal applications. However, an AI training technique known as transfer learning could help government agencies train NLP models on the nuances of their mission-specific language faster and at lower cost.
What Is Transfer Learning?
In our everyday life, we bring our cumulative knowledge to any new task: if we already know how to play woodwinds such as the oboe, bassoon, and clarinet, then learning the saxophone should be easy. Transfer learning does something similar for data science. Large datasets are used to train a generic model that can then be refined and applied to specific tasks, avoiding the need to replicate deep learning from scratch for every new model (a minimal code sketch follows the examples below). The field of computer vision has already made widespread use of transfer learning to speed up and simplify visual annotation and training:
- Researchers from Brookhaven National Laboratory have proposed using transfer learning as a method to automatically identify parking lots in aerial imagery.
- The National Institutes of Health have reported on uses of transfer learning as a way to better recognize human ears for biometric identification.
- Accenture is using transfer learning to develop computer vision applications for compliance monitoring for the use of protective equipment in the workplace.
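The pattern in all three examples is the same: start from a model pre-trained on a large generic dataset, keep its learned features, and retrain only a small task-specific piece. Below is a minimal sketch of that idea, assuming PyTorch and torchvision; the two-class parking-lot task is a placeholder, not any agency’s actual code.

```python
# Minimal transfer-learning sketch (PyTorch / torchvision).
# A ResNet pre-trained on ImageNet is reused as a feature extractor;
# only a new classification head is trained on task-specific data.
import torch
import torch.nn as nn
from torchvision import models

# Load a backbone with weights learned on a large generic dataset.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Freeze the pre-trained layers so their general visual features are kept.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a head for the new task, e.g.
# "parking lot" vs. "not a parking lot" (a placeholder label scheme).
num_classes = 2
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the new head's parameters are handed to the optimizer, so
# training is far cheaper than learning the whole network from scratch.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```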
Natural language processing is now building on experience gained from the computer vision field.
“There have been a lot of techniques developed in computer vision over the past decade that are now being deployed in NLP,” said Accenture’s NLP Science Director Dr. Paul Rodrigues. “Now the language engineers are taking on the mantle and building larger, more complicated base models.”
Natural Language Processing Training Requires Nuance, Resources
Unlike visual images, which are often straightforward – a house is not a highway – language is subtle and complex. The word “bank” would likely refer to a savings bank in documents belonging to the Department of the Treasury, but could mean a “river bank” to the Department of the Interior.
Language can vary widely, depending on who is communicating the information and who is receiving and interpreting it. It can also be deeply nuanced according to the subject matter. A big challenge is expanding the contextual understanding from the word (or even parts of a word) to the sentence to the paragraph to the page and so on.
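A small experiment makes the point concrete: a contextual language model assigns the same word different vectors depending on the sentence around it. The sketch below uses the open-source Hugging Face Transformers library with a public BERT checkpoint; the two sentences are invented for illustration.

```python
# Sketch: the same word gets different contextual embeddings.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def bank_vector(sentence: str) -> torch.Tensor:
    """Return the contextual embedding of the token 'bank'."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index("bank")]

v_money = bank_vector("She deposited the check at the bank.")
v_river = bank_vector("They fished from the bank of the river.")

# Similarity well below 1.0: same word, two different meanings.
print(torch.cosine_similarity(v_money, v_river, dim=0).item())
```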
As a result of this complexity, annotation and training in NLP demand massive computing resources and extensive human labor. “It is often difficult to prepare a sufficient number of labeled samples for solving real-world text-classification problems,” according to researchers with the technology association IEEE. “One method for handling this problem is transfer learning, which uses a network tuned for an arbitrary task as the initial network for a target task,” they write.
Researchers at Accenture have been leveraging transfer learning in support of several government functions related to natural language processing. For example, NLP can help federal officials tasked with making complex judgements in a text-heavy environment.
“Where you have a government employee who has to look at data and make decisions, you can use their subject matter expertise to build a model that helps them make quicker and more informed judgements,” Rodrigues said.
In one trial, Rodrigues set out to help agency reviewers decide the validity of claims. Previous NLP approaches would have required a subject matter expert to spend at least a week annotating in order to secure enough training data to build a reliable system. By starting with an existing model, “I was able to put together a prototype that would determine whether a claim was valid based on this agency’s standards,” he said. “It took just a couple of hours of annotation by the subject matter experts, marking which claims were valid, and which claims should be denied. This data was used to fine-tune the general model, turning it into a claim checker.”
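As a rough illustration of that workflow – not the team’s actual code – the sketch below fine-tunes a general pre-trained model on a small, expert-labeled set of claims using the open-source Hugging Face Transformers library; the base model, file name, and two-label scheme are assumptions.

```python
# Sketch: turn a general pre-trained model into a claim checker by
# fine-tuning it on a small set of expert-annotated examples.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "bert-base-uncased"  # an existing general-purpose model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

# Hypothetical CSV of SME annotations: a "text" column with the claim and
# a "label" column (0 = deny, 1 = valid) -- hours of labeling, not weeks.
data = load_dataset("csv", data_files={"train": "labeled_claims.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="claim_checker", num_train_epochs=3),
    train_dataset=data["train"],
)
trainer.train()  # fine-tunes the general model into a claim checker
```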
The same technique can be applied to medical records, where confidentiality regulations make it difficult to pull together big, robust training data sets. Transfer learning can leverage web pages, textbooks, and other publicly available sources to build a baseline. Then agency experts could refine the model with their own documents.
In fact, healthcare is a major focus area for NLP in government. The Centers for Disease Control and Prevention, for example, is using natural language processing to help manage massive amounts of data associated with cancer research. “Some parts of these data — like medical records, laboratory reports, and other clinical reports — are unstructured or narrative text,” the CDC reports. NLP can help to make that information more readily accessible.
NLP can support other areas of government as well. In one project, the Accenture team tried a number of data frameworks in support of a benefits adjudication model. By applying transfer learning to a review process that normally takes 10 months, “the system can spend 17 seconds and find all the evidence for and against the claim according to agency guidelines,” Rodrigues said.
Transfer Learning Enables Cheaper, Faster NLP
The rise of transfer learning comes at a time when federal agencies are dealing with increasingly large amounts of language-based data.
“The government has a ton of data: They write regulations, they write emails, they have communications in support of central government functions,” Rodrigues said. “Agencies are already using natural language processing to analyze public commentary on proposed regulations. Agencies are also using language technologies for decision support, helpdesk call monitoring, form processing, meeting transcription, and more.”
A faster, cheaper, and more effective NLP training method could support those efforts. Transfer learning offers just that.
With conventional training methods, “the base models require so much data, they can cost millions of dollars to create,” he said. With transfer learning, “many of the base models are available as open source, so we can leverage that existing investment to solve our customer’s problems.”
Because the base models are reusable and modular, “we can bring in several million dollars’ worth of potential savings into our customers’ problem areas, with reduced training time, reduced annotation data, and higher accuracy,” he said.
The CheckThat! Lab at the academic CLEF conference has sponsored a series of competitions for solutions that help journalists identify claims that should be fact-checked. This year’s program focused on social media, requiring solutions that could flag suspicious tweets, assess whether they would be of interest to the public, and identify, rate, and summarize evidence supporting questionable claims.
Rodrigues led a team of Accenture researchers who took pre-trained models from BERT, a neural NLP system designed by Google, and RoBERTa, a Facebook variant that builds on Google’s method, and fine-tuned them to identify claims an expert had labeled as requiring professional fact-checking. Using this approach, the team finished first in both the English and Arabic language competitions.
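With modern open-source tooling, comparing base models this way is nearly a one-line change; the sketch below uses the public Hugging Face checkpoint names as an illustration, not the team’s actual competition configuration.

```python
# Sketch: the same fine-tuning code can compare different base models;
# swapping BERT for RoBERTa is just a different checkpoint string.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

for checkpoint in ("bert-base-uncased", "roberta-base"):
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSequenceClassification.from_pretrained(
        checkpoint, num_labels=2)  # check-worthy vs. not check-worthy
    # ...fine-tune and evaluate exactly as before, then compare scores.
```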
Transfer Learning Is Poised for Takeoff
A new language model in early release from OpenAI – GPT-3 – has already shown the potential to deliver dramatic performance improvements. The model uses 175 billion parameters, roughly ten times more than the largest previous language models.
Given the scope and depth of this pre-training on general data, the subsequent need for fine-tuning on specific tasks is vastly decreased. Almost out of the box, GPT-3 can be used to generate compelling news stories, technical manuals, songs, or almost any other kind of text-based communication from the simplest prompt. For example, users have found that it can be used to create guitar tabs or even computer code.
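As an illustration of that prompt-driven use, a call through the OpenAI API of the GPT-3 beta era looked roughly like the sketch below; the prompt is invented, and the legacy openai SDK interface shown here has since changed.

```python
# Sketch: zero-shot text generation with GPT-3 via the legacy openai SDK.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

prompt = (
    "Summarize the following public comment on a proposed regulation "
    "in one sentence:\n\n"
    "Comment: The proposed reporting window is too short for small "
    "businesses to gather the required records.\n\nSummary:"
)

response = openai.Completion.create(
    engine="davinci",   # the original GPT-3 base engine
    prompt=prompt,
    max_tokens=60,
    temperature=0.3,    # keep the output focused rather than creative
)
print(response.choices[0].text.strip())
```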
“While focused specifically on language generation, GPT-3 shows how quickly the field is poised for takeoff,” notes Rodrigues.
Transfer Learning Offers Opportunity for Federal NLP, With Some Caution
Transfer learning can accelerate the journey to AI-driven language processes, but it isn’t a panacea. For example, the OpenAI system is still prone to producing sexist or racist rants when given the wrong prompt.
Government requires transparency in the use of AI, and transfer learning needs some finesse to fit that. “[Transfer learning] using neural network approaches is less transparent than statistical natural language processing,” Rodrigues said. “If you want to build accountability into the system, you need to be deliberate about understanding your model and explaining why your system acted the way it did.”
In addition, while transfer learning can lighten the load on human annotators, dedicated input from skilled subject-matter experts is still needed to fine-tune the system.
“They have to take what they do every day and turn that into labels,” Rodrigues said. “That means they need to be aware of the decisions they use to make a specific determination, and we need to break these down and put labels to them.”
Even with these caveats, many researchers point to transfer learning as a way to quickly and cost-effectively train natural language systems, giving government a powerful new way to navigate the tidal swell of written and verbal communications that underlie the federal enterprise.
This content is made possible by our sponsor Accenture; it is not written by and does not necessarily reflect the views of NextGov’s editorial staff.