Reducing the cost of IT security: 'The challenge of the decade,' expert says

Dan Kaminsky explains why security, to be effective and efficient, must be engineered into systems, not added later.

Dan Kaminsky is one of seven people in the world entrusted with restoring the root keys of the Internet's Domain Name System if the system breaks. DNS is the directory that maps names to machine-readable Internet addresses -- the infrastructure beneath the Web. He is known in hacker circles as the guy who discovered a critical flaw in DNS that would have allowed attackers to redirect network clients.

Since leaving his post as director of penetration testing at the computer services firm IOActive, he has been developing a line of security products as part of a soon-to-be-launched venture. After presenting a talk at the recent hacker conference BSides Berlin on his new application that corrects color-blindness, Kaminsky spoke with Nextgov about the security industry and the need to bridge the divide between security professionals and product developers.

Nextgov: Have you done security work for the feds before?

The feds are interesting. They have the problem of running the largest network in the world, and their networks are attacked all the time. They do what they can to prevent successful attacks, but deterrence -- the true agent of defense -- simply doesn't exist online. It's not for lack of trying, or even lack of resources. The nature of the beast is that law enforcement was designed for local crime. The geographical model of law enforcement is butting up against the ageographical model of the Internet -- think of some hacker kid in a cyber cafe in Nigeria. He's your next-door neighbor now. Law is based on jurisdiction, jurisdiction is based on geography, and geography is dead on the Internet. What does that say about the law?

You can throw money at a problem -- and the feds throw money at problems -- but that never makes securing systems easy or efficient. Instead, you get an ornate system to manage what should have been nicely engineered in the first place. Have you ever heard of the tragedy of the engineer who was too smart? You may have a system that is technically perfect, but it's just too complicated to operate in real life. No one will know how to make it work right except the original engineer. You need to build systems where the guys who run them don't have to be as smart as their creator and don't have to be experts in a thousand things.

Nextgov: You're saying that security needs to be seen as something that is engineered into products and systems from the get-go.

You can have the most secure fix of all time, but if the resulting system is unstable, slow or difficult to operate -- well, the fix won't be deployed. And then who's safer, after all that work? Eighteen months ago, I decided to go into product development. It's been fascinating being a security guy playing in the product space. One of the things I'm looking at [is] how to make security scale, because security doesn't scale now. The work that goes into detecting and stopping malware infestations is enormous for the defender. Yet it demands nearly no effort from the attacker.

Figuring out how to reduce the cost of delivering secure systems is the challenge of this decade. Security was low on the totem pole for the longest time -- until all hell broke loose in 2003, when we had our summer of worms and all these Windows machines stopped working.

Windows has gotten a lot better -- but a remarkable number of organizations are running custom code on their websites, and how's that going? Not too well, and why should it? Let's say it costs a business $50,000 to build a website. Making that site secure, and validating its security, might literally cost upwards of $150,000 on top of that. Do you know a customer who's going to pay four times the price for security? I don't. Security is competing with insecurity, and insecurity is winning. It's got the product that's better, faster and cheaper. Security has to be willing to compete on the same level.

Nextgov: At security trade shows, it strikes me that there are so many vendors, so many startups. Buyers will never know whether the products live up to their hype.

There is a lot of cash floating around. There is a terrifying amount of broken stuff being funded. It's the nature of the system right now. So much of the economy is based on moving money around, and in the process people have forgotten how to create real security solutions. Because "cyberwar" is actually happening in an incipient sense, people think the situation is serious now, and investors are warming up to the security sphere. Add to that recent deals such as Intel's acquisition of McAfee, which generated this sentiment among investors: "Oh, I want to be the guy driving that acquisition." My goal is simpler. I've seen more than my share of failed $100-million-plus [public key infrastructure] deployments, security projects, startups . . . I'd like to avoid that.

The game can't simply be to make security bigger. We have to make it cheaper per infection suppressed or remediated. If it costs $100,000 per major malware infection that's stopped, we have to get that down to $1,000 -- or $100. Sure, I'm pulling these [numbers] out of my hat. But why are there no real numbers for this? Why don't I know what works -- and at what price point? Because we have no clue as to what actually stops the bad guys.

We haven't had any serious epidemiological analysis of computer security. No one admits when a system has failed -- even when it has failed obviously, or has been failing for 10 years. We are terrible at data collection, and terrible at responding to data. We get together at security conferences and exchange notes a little, but it's always, "I can't give you the details, but just so you know, here's some." We're an industry that runs on rumor and anecdote.

Nextgov: Is there a divide between developers and security researchers?

There are builders, and there are breakers. Security is a new engineering discipline, and it's notoriously difficult to test for. But we're starting to see developers really engaging to find out what they need to do to build things safely, and -- more important -- we're starting to see security professionals learning enough about real-world development processes that they don't come back to find the same bugs [penetration] test after [penetration] test. Formal secure development practices are really gratifying to see.