A radiologist “reads” CT scans, looking for signs of cancer. She carefully notes her findings on the unlucky images and is heartened when she writes “Normal” on the rest. The images are clear, and she makes no mistakes. Unfortunately, many of her conclusions are wrong because the images have been hacked and changed.
This hypothetical scenario is completely plausible. In 2019, a group of security researchers in Israel proved that they could use artificial intelligence to add or hide cancer in such images, with a greater than 95% success rate in getting radiologists to misdiagnose the disease. They also demonstrated that they could gain access to a hospital’s network and plant a small computer that performed the task. It is within the reach of hackers to change medical data so that people believe they have a disease they don’t, or go untreated for a disease they do have.
Who would do such a thing, and why? To understand, we first need to look at how the motivations of hackers have evolved. Protecting data used to mean preventing unauthorized disclosure, primarily to prevent identity theft. Hackers who know enough about you can file false insurance claims or fraudulent tax refunds. They can charge items to your credit or debit cards, open new lines of credit in your name, or take over existing bank and other accounts.
A more recent motivation is to extract ransom in exchange for restoring access to information. Ransomware and network denial-of-service attacks make computers and data inaccessible until a payment demand is met, and such attacks have skyrocketed in recent years.
That covers attacks against data confidentiality and availability. Why change data? Because we trust it to be accurate, so there’s value in exploiting that trust. Imagine a new form of ransom in which a hacker demands payment to tell a hospital which scans were changed. If the hospital doesn’t pay, patients would be misdiagnosed and mistreated.
In another scenario, imagine that a foreign state wants to influence an election: make a candidate believe they have cancer or another disease, and they might drop out; hide the fact that a leader is ill, and they won’t seek treatment. Attackers could even assassinate someone by planting “killer data” in their medical records. At a larger scale, manipulating the results of clinical trials could cause drugs to be released that shouldn’t be, or prevent a company from releasing a valuable drug.
The risks of changed data extend far beyond medicine. When hacktivists compromise web sites, they change the site to add messages that promote their cause. That’s a simple form of changing data. People with agendas post edited images as “fake news” on social media, which can lead to riots. In 2020, someone changed the voter registration information for the governor of Florida, temporarily preventing him from casting his ballot. Fake data can have real consequences.
If the data that are changed represent physical things, the impact can be devastating. Earlier this year, a hacker gained control of a city computer system in Florida and changed values to add toxic levels of chemicals to the water supply. Similar attacks have hit water treatment plants in Israel. Killer data can be a weapon of mass destruction.
Attacks against data integrity will be the next frontier in cyber warfare. Large-scale attacks are coming and we need to be ready.
How do we stop this? Defending against data integrity attacks will require doing things we’ve never had to do before. We can try to prevent attackers from getting into our systems, but we also need new ways to assess the trustworthiness of the data when they do get in. Attacks against trust can only be stopped by tools that reestablish trust, so that’s where we need to focus. We have some ideas, and need to develop more, before killer data becomes as commonplace as identity theft and ransomware.
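One basic building block for reestablishing trust in data is a cryptographic integrity check: record a keyed digest when a record is created, and verify it before the record is used, so that any silent change is detected. The sketch below is illustrative only, not a scheme from the article; the key handling and record contents are assumptions, and a real deployment would need proper key management.

```python
import hashlib
import hmac

# Illustrative only: in practice the key must be stored and managed
# separately from the systems an attacker might compromise.
SECRET_KEY = b"example-key-kept-outside-the-imaging-system"

def sign_record(data: bytes) -> str:
    """Return a keyed digest (HMAC-SHA256) to store alongside the record."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify_record(data: bytes, stored_digest: str) -> bool:
    """True only if the record still matches the digest made at creation."""
    expected = hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, stored_digest)

scan = b"...pixel data of a CT slice..."
digest = sign_record(scan)
print(verify_record(scan, digest))          # untouched data verifies: True
print(verify_record(scan + b"!", digest))   # any change is detected: False
```

A check like this doesn’t stop an attacker from altering data, but it lets the hospital know which records can still be trusted, which is exactly the question the ransom scenario above exploits.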
Lou Steinberg is founder and managing partner of CTM Insights, a cybersecurity research lab and incubator with eight operating cyber companies. Earlier, Steinberg served as CTO of TD Ameritrade.