How Artificial Intelligence Can Help Burn Victims
Machine learning allows computers to see patterns in medical images that are invisible to human doctors.
It takes years, decades even, for physicians to refine the expertise required to notice details that remain invisible to the untrained. Depending on a doctor’s specialty, this aptitude might help an oncologist distinguish a malignant tumor from a benign cyst. It can help a cardiologist determine the velocity of blood as it flows through a hole in the heart. Or it may tell a reconstructive plastic surgeon whether a severe burn is healing nicely or at risk of infection.
None of this is easy unless you know how to see in a certain way.
“The further along you get in your training, the better you are at picking up little subtleties,” says Jonathan Kanevsky, a plastic surgeon at the McGill University Health Center in Montreal. “But every physician is limited by the number of cases they’ve seen in their lifetime.”
Artificial intelligence could change all that. Medical specialties that rely heavily on imaging technologies are on the cusp of undergoing a major transformation in the era of machine learning, a type of AI in which computers exposed to massive data sets can automatically draw inferences from what they see.
Using enormous troves of medical imagery could revolutionize health care because, Kanevsky says, “things that have a visual component can be translated to an image, which can then be translated to a data point, which can be used for machine learning.”
In other words, today’s machines are sophisticated enough to glean hidden insights from complex imagery—perspective that would otherwise evade even the most experienced human. With the right training, machines are able to show human doctors things they cannot see.
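To make that image-to-data-point pipeline concrete, here is a minimal sketch in Python. The images and labels are synthetic stand-ins and the classifier is a generic one; a real clinical system would involve curated medical imagery and far more rigorous validation.

```python
# Minimal sketch of the image -> data point -> machine learning pipeline.
# The data here is synthetic; nothing clinical is implied.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each "image" is a 32x32 grayscale scan; labels are 0 or 1
# (say, benign vs. malignant in the oncology example above).
images = rng.random((200, 32, 32))
labels = rng.integers(0, 2, size=200)

# Translate each image into a data point: a flat vector of pixel intensities.
features = images.reshape(len(images), -1)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, random_state=0
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```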
“For example, what’s the pattern of a certain infection caused by a certain bacteria?” Kanevsky says. Such a pattern might be so subtle that it can’t be identified by a human, yet it may be discernible to a machine that’s drawing from a large enough data set.
Kanevsky gives another example, recalling a recent patient who was injured in a propane-tank explosion.
“Burns are tricky,” he says, for a number of reasons. One of the first things he and his colleagues had to do was estimate the extent of the patient’s injuries. One of the traditional ways to do that in plastic surgery is to assume that the surface area of the patient’s palm represents about 1 percent of their total body surface. It isn’t precise or even necessarily accurate (“We use a very crude estimation,” Kanevsky says), but doctors need some estimate to work from.
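In code, that palm-based rule of thumb reduces to simple arithmetic. A minimal sketch (the function name and interface are hypothetical):

```python
def estimate_tbsa_palm_rule(palm_count: float) -> float:
    """Estimate percent total body surface area (TBSA) burned.

    Uses the crude clinical rule of thumb Kanevsky describes:
    the patient's own palm covers roughly 1 percent of their body surface.
    """
    PALM_PERCENT_TBSA = 1.0  # one palm ~ 1% of body surface area
    return palm_count * PALM_PERCENT_TBSA

# e.g., a burn covering about eight palm-sized areas ~ 8% TBSA
print(estimate_tbsa_palm_rule(8))  # -> 8.0
```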
Knowing how much of a person’s body is burned is a predictor of mortality, and it helps doctors make critical decisions about the best course of treatment, including how much IV fluid is needed in the initial days after the injury and what kind of surgical response is appropriate. Algorithms can already determine how deep a burn is and accurately predict how long it will take to heal.
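The article doesn’t name the fluid calculation, but the Parkland formula is one widely taught method for turning a burn-size estimate into a fluid plan; here is a sketch assuming it:

```python
def parkland_fluid_ml(weight_kg: float, tbsa_percent: float) -> tuple[float, float]:
    """Estimate 24-hour IV fluid needs after a major burn.

    Parkland formula (a widely taught method; the article does not
    name a specific one): 4 mL x weight (kg) x %TBSA burned over
    24 hours, with half given in the first 8 hours.
    """
    total_ml = 4.0 * weight_kg * tbsa_percent
    first_8h_ml = total_ml / 2.0
    return total_ml, first_8h_ml

total, first_8h = parkland_fluid_ml(weight_kg=70, tbsa_percent=20)
print(f"24h total: {total:.0f} mL, first 8h: {first_8h:.0f} mL")  # 5600 mL, 2800 mL
```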
Already, “previously unimaginable” applications for machine learning are “within grasp” for individual patient care, according to a February essay about machine learning in the Journal of the American Medical Association. Dozens of startups now specialize in artificial intelligence with a focus on health care.
Yet, there is still much work to be done before clinicians can turn over their clinics to machines. (In other words, docs, don’t worry about being replaced by bots just yet.) It takes time to teach a machine.
Supervised learning requires giving a computer feedback, confirming when it’s right, tagging and cataloguing images, and “training the algorithm so it can say, ‘This is a wound that looks like it will heal,’” as Kanevsky puts it.
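A minimal sketch of that supervised-learning loop, with synthetic stand-ins for the tagged and catalogued images (the feature size and model choice are illustrative assumptions):

```python
# Sketch of supervised learning on tagged wound images (synthetic stand-ins).
# Labels encode the clinician's judgment: 1 = "looks like it will heal", 0 = not.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Each row stands in for a feature vector extracted from a tagged, catalogued image.
tagged_images = rng.random((500, 64))
clinician_labels = rng.integers(0, 2, size=500)

model = RandomForestClassifier(random_state=1)
model.fit(tagged_images, clinician_labels)  # "training the algorithm"

# The feedback step: compare predictions against the clinician's tags,
# and fold corrected cases back into the training set over time.
new_case = rng.random((1, 64))
print("will heal" if model.predict(new_case)[0] else "at risk")
```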
With enough attention and pristine data, a machine’s way of seeing can quickly surpass human ability. For example, the deep-learning startup Enlitic boasts an algorithm that’s 50 percent more accurate than human radiologists at detecting lung cancer, according to the company.
In other settings, machines have proved to be at least as good as human doctors at crucial tasks.
“In many applications, the performance of the machine learning-based systems is comparable to that of experienced radiologists,” wrote the authors of a 2012 paper published in the journal Medical Image Analysis. (And that was four years ago, practically an eternity by AI standards.)
Kanevsky believes machine learning will allow for a sort of black box in medicine, with sophisticated algorithms recording and decoding intricate aspects of a person’s health on levels never before possible. Combine the promise of machine learning with the troves of data that could be collected through individual wearable devices, and doctors could begin to rely on “algorithms that continually optimize for personal information in real time,” as the authors of the JAMA essay put it, to detect abnormalities and select treatment courses.
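As a toy illustration of that kind of continuous monitoring (not any system described in the essay), a rolling z-score over a simulated wearable stream can flag readings that drift far from a patient’s recent baseline; the signal and thresholds here are invented:

```python
# Sketch of real-time abnormality detection on a wearable data stream.
# A rolling z-score stands in for the "continually optimizing" algorithms
# the JAMA authors envision; values and thresholds are illustrative.
from collections import deque
import statistics

def detect_anomalies(stream, window=30, threshold=3.0):
    """Yield (index, value) for readings far outside the recent baseline."""
    recent = deque(maxlen=window)
    for i, value in enumerate(stream):
        if len(recent) == window:
            mean = statistics.fmean(recent)
            stdev = statistics.pstdev(recent)
            if stdev > 0 and abs(value - mean) / stdev > threshold:
                yield i, value
        recent.append(value)

# Simulated resting heart rate with one abnormal spike.
heart_rate = [62 + (i % 3) for i in range(100)]
heart_rate[70] = 140
print(list(detect_anomalies(heart_rate)))  # -> [(70, 140)]
```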
All the while, the technology used to capture patient data is improving dramatically on its own.
“Between artificial intelligence and the evolution of imaging technologies,” Kanevsky says, “the marriage of those two forces is going to be just out of this world.”