
VeriSign's CEO hits back at critics

Stratton Sclavos explains why VeriSign undertook its domain-name redirection service, and tells of his fears for the security of the Internet's root servers
Written by Charles Cooper, Contributor

After a couple of weeks on the hot seat, VeriSign CEO Stratton Sclavos is turning up the heat on his company's severest critics.

His company operates the registry, the main database that keeps track of who owns which names in the .com and .net top-level domains.

VeriSign announced on Thursday that private equity firm Pivotal Private Equity agreed to acquire its Network Solutions domain registration business for roughly $100m. The deal will enable VeriSign to retain control over the database that directs people to .com and .net addresses. Sclavos spoke with CNET News.com before the deal was announced.

Earlier this month, VeriSign temporarily suspended a new service that redirected misspelled or unassigned .com domain names to a search page it managed.

Before the service was introduced, requests for nonexistent or inactive domain names had triggered error messages. Opponents contended that VeriSign's addition of the "wild card" feature interfered with spam filters and mail servers, prompting the temporary suspension.
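
To see why the change mattered, consider what a lookup of an unregistered name does with and without the wild card: without it, the lookup fails with an NXDOMAIN error, which spam filters and mail servers used as a signal that the domain does not exist; with it, the same lookup resolves to the address of the registry's search page. A minimal sketch in Python, using only the standard library and a hypothetical unregistered name:

    import socket

    # Hypothetical unregistered name, used purely for illustration.
    name = "no-such-domain-example-12345.com"

    try:
        addr = socket.gethostbyname(name)
        # With a registry-level wild card in place, even unregistered names
        # resolve (to the address of the registry's search page), so software
        # that treats "does not resolve" as "does not exist" is misled.
        print(name, "resolved to", addr)
    except socket.gaierror:
        # Without the wild card, the lookup fails with NXDOMAIN and lands
        # here -- the behaviour spam filters and mail servers depended on.
        print(name, "does not exist (NXDOMAIN)")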

But the controversy over the Site Finder service revealed a deeper split between the technologists who helped guide the Internet in its infancy and the businesspeople who later realised the platform's commercial possibilities. After spending much of the last couple of weeks explaining his company's position, Sclavos believes this cultural divide is a big reason why VeriSign has not received a terribly sympathetic hearing.

He casts the issue within the context of a larger struggle, saying the outcome could determine the pace of future innovation on the Internet. With security attacks becoming increasingly common, Sclavos also says it's time to transfer the responsibility for operating the root servers from volunteers to the commercial sector. He discussed these issues in more depth during a recent CNET News.com roundtable discussion with reporters and editors.

Q: Are security breaches of the Internet getting worse, or are they within the percentage range that should be expected, based upon the growth in traffic over the last year?
A: I'm not sure that it correlates to increases in traffic so much as the cleverness and evolution of hackers. What we've noticed on our networks is that the number of worms, viruses and distributed denial-of-service attacks is growing at a rate of 120 percent year over year.

The escalation in the number and impact of these attacks is forcing us to think about building early warning systems and preventive measures. The funny thing about digital security is that we've lived in a world where we only knew someone was attacking us when they hit our firewalls. It's time to evolve that world so that we get the information that an attack is coming before it hits our front door.

Should the US Department of Homeland Security take the lead on that? And how would you grade its performance to date?
I'd give us all a C+ -- the DHS included. You can't materialise an organisation of that size overnight. They're dealing with really hard issues of just pulling the agencies together. To have expected them in that same period of time to have been incredibly effective at getting the education, the data sharing and the public-private partnership together would be incredibly optimistic.

But do they get it?
I do believe that they understand the problem and realise that with 85 percent of the infrastructure in the private sector's hands, they better figure out a way to get the rest of us to wake up. You've now got an ecosystem -- from the consumer to the enterprise to the government -- linked at high-speed with always-on devices. We better figure out a way to build a better ecosystem of security that's got the same attributes as what we have in the physical world that's built around early warning and the sharing of intelligence.

On a recent conference call, one of your executives discussed the attacks on the domain name root servers last year. He said VeriSign's servers stayed operational because you had invested so much in security while others did not. People looked at that and said, "Well, the Internet is inherently insecure." Is that true? Can there be islands of security, or can there be some kind of bubble of security that's wrapped around the whole Internet?
If you go back to the mid-'90s, when we began talking about what impact the Internet would have, we always talked about the fact that it was a connection of networks and that no single path failure would bring the whole network down. You see the same resilience still there.

The DDoS (distributed denial-of-service) attacks last October on the root system -- hey, there are 13 global copies of that, and the system kept operating. But it should scare people that nine of the 13 went down. It's time for the Internet infrastructure to go commercial. On the core services of the infrastructure, it's time to pull the root servers away from volunteers who run them out of a university or lab or some other facility. That's going to be an unpopular decision.

More than unpopular. That's going to be received as a declaration of war.
It's not a declaration of war; it's a declaration of the obvious need for the network to mature, to become the infrastructure it needs to be if we're going to run the economy on it -- and we are. That's why you're seeing 10 billion hits a day on our network, and that's why you're going to see 20 billion two years from now. The global population deserves a commercially resilient and robust network and the supporting services underneath it; because of the way it grew up over the last 20 to 25 years, the Internet has pockets where that is not the case.

There's some thought that the severity of the attack was overblown. That there's a lot of caching and maybe the DNS records are elsewhere -- it's not like the whole Internet is running on these 13 servers, and if they go down -- boom! -- blackout.
That's what I'm saying. The resilience in the architecture is awesome. But if all those roots go down, every one of those cache systems has a TTL (Time To Live) in it. It's going to need data at some point. So the question is what is going to happen when the data's not available?

[Former cybersecurity czar] Richard Clarke came to us two days after taking the job following 9/11, and I told him, "There are 13 geographically dispersed datacentres. You really couldn't take it out." And he said, "What if I drove a truck up to each one and blew them up at the same time?" OK, then you'd take them out. So, there's this notion of what's resilient enough and what's your recovery time.

The reason the root server problem was a big one was because they were attacking the underbelly of the addressing system. Yes, we could have lived 24 to 48 hours. You could say that in that time, we can fix anything -- but maybe not. Microsoft was down for four days with a much simpler denial-of-service attack.
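
The TTL point is the crux of Sclavos's argument: a resolver can keep answering from its cache only until each record's time to live expires, after which it has to go back to the authoritative servers. A toy sketch of that expiry logic in Python (illustrative only; real resolvers are far more involved, and the name, address and 300-second TTL below are hypothetical):

    import time

    # Toy resolver cache: name -> (address, absolute expiry time).
    cache = {"example.com": ("192.0.2.1", time.time() + 300)}  # 300-second TTL

    def lookup(name, roots_reachable):
        entry = cache.get(name)
        if entry and entry[1] > time.time():
            return entry[0]  # TTL still valid: answered from cache, no root needed
        if not roots_reachable:
            raise RuntimeError("cache expired and authoritative servers unreachable")
        # A real resolver would now walk from the root servers down to the
        # authoritative name server; here we just refresh the toy entry.
        cache[name] = ("192.0.2.1", time.time() + 300)
        return cache[name][0]

    # While the cached record is fresh, a root outage goes unnoticed...
    print(lookup("example.com", roots_reachable=False))
    # ...but once the TTL expires, the same outage makes resolution fail.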

You're saying to go commercial with root servers. But there also are lots of different participants in the Internet, with commerce being just one of them.
I'm not suggesting that any one entity own them. Like we did with ISPs (Internet service providers) that went commercial with backbone build-outs, we need to do something similar on the Internet. The roots are one, and you've probably got a similar situation going on with application-level protocols. The point I'm making here is that there's no turning back -- there's no putting the genie back in the bottle.

The infrastructure and the people who specify its evolution need to really understand that it's much broader today than just a group of technical folks who build research products. It scares a lot of enterprises today if you say the network is going down or they see the attacks going on. You're starting to spend massive amounts of North American salaries on fixing things that should have been identified before they hit us.

But it's not only academic organisations out there. For example, the US Army has one of those root servers.
The question is: what are we going to protect? The overall value of having the infrastructure bulletproof far outweighs the philosophical or emotional debates around whether there's a commercial entity, a government agency or some cooperative volunteer organisation. We sure as hell don't need the digital equivalent of 9/11 to convince us we need to have a better digital infrastructure.

Unfortunately, what I see happening is either hand-waving that it's not as bad as you think it is, or the other side that says, "Well, there are privacy concerns, and we don't want this all in the hands of government." There's a balance point, and I'm tired of polarised arguments instead of some level of cooperation between the public, private and academic segments in which we ask: what is the right balance point here?

What do you see as the sequence of events leading up to the transfer of the root servers that you're envisioning?
I'm not suggesting that I even know how to start that process, because it's tied to too much political controversy.

Do you see an imminent risk to the root servers if the status quo doesn't change?
I don't think there's a root risk scenario that's very risky at the moment. But that's mostly because we built it out on our nickel to handle the load if everyone else failed. We decided to upgrade our infrastructure and spent about $150m over the last two and a half years -- in a shrinking economy and with our revenues going down. We wanted not only to handle all the Net traffic if we needed to but also to be the fallback if the rest of the operators went down.

Are you looking to monetise DNS lookups?
No. That base level of DNS (domain name system) response is an obligation we took on when we inherited that contract. But it would be commercially unreasonable for anyone to suggest that we shouldn't be allowed to build incremental services on top of that if they deliver value.

You temporarily suspended Site Finder in reaction to widespread criticism. What's the next step?
The reason Site Finder became such a lightning rod is that it goes to the question: are we going to be in a position to do innovation on this infrastructure, or are we going to be locked into obsolete thinking that the DNS was never intended to do anything other than what it was originally supposed to do?

Still, a lot of people in the Internet community were quite surprised by Site Finder -- and then you had complaints surfacing that it was not complying with approved standards.
Let's break the argument down: the claim that Site Finder was nonstandard and that we should have informed the community that we were doing something nonstandard -- excuse me: Site Finder is completely compliant with standards that have been out and published by the IETF (Internet Engineering Task Force) for years. That's just a misnomer. The IAB (Internet Architecture Board) in its review of Site Finder said the very same thing -- that VeriSign was adhering to standards.
 
The second claim, that we brought it out without testing -- Site Finder had been operational since March or April, and we had been testing it with individual companies and with the DNS traffic at large. Ninety-nine percent of the traffic is pure HTTP (Hypertext Transfer Protocol), and Site Finder handles it the way it should. Just so you know, our customer service lines went from 800 or 900 calls on the first day to almost zero right now. For every customer who had a Site Finder issue, the remediation took less than 12 hours.

Why, then, do you think there was such a strong reaction?
The noise you're hearing publicly does not match the real impact of the system. It's standards-compliant. We have asked for the data five times from anyone who has it -- ICANN (Internet Corporation for Assigned Names and Numbers), the IAB -- and no one can produce data. All they can produce is these fringe stories.

We absolutely should have done a much broader outreach on this. I am very concerned that we have a disconnect between those who think that they are developing standards for the betterment of the network and the community and the users of the network.

You're hinting at a cultural divide?
I think that there is. I don't think it's an intentional divide, but it's a drifting apart of day-to-day usage from the folks who did great stewardship in the early days and were asked to define all the standards to make it work.

And those are the people who still dominate the standards bodies?
They're speaking out of both sides of their mouth right now. It's not OK to say standards are important, unless we don't like someone who implemented them. And it's not OK to say these services at the core should not be built out, unless you're one of the small guys and nobody really cares. How do we build a commercial business with ground rules that seem to shift based on personal agenda and emotion rather than on any particular logical data set?

But you had to expect to get this kind of criticism, didn't you?
And we're out trying to defend ourselves. The one thing I'd question is that there doesn't seem to be a process to effectively combat the claims and accusations and the rest. That is what ICANN is supposed to be about: transparent processes that lead to consensus. What we're seeing are predetermined opinions masquerading as process. And that's what I resent.

Do you think ICANN needs to be reformed?
It needs to be reformed. It's nobody's fault, but ICANN was designed at a time that was very different from today. It was designed when domain registrations were soaring and Network Solutions was the monopoly for the whole thing. So creating competition on the front end and introducing new extensions on the back end seemed like a good idea, when there seemed to be no end to the growth in names.

Four years later, things have changed a great deal. Domain-name growth has been flat for the longest time. If I were in ICANN's shoes, I'd want to put forth a charter of promoting innovation, stability and competition. It was really designed to promote competition, and frankly, it did it haphazardly, because it was in such a rush.

This isn't the first time people have called for ICANN to evolve. What's the holdup?
It's very difficult to have the people who built the infrastructure originally also be the reformers of it. That is one of the challenges they will run into. It's mostly a collection of very technical people and a lot of lawyers. What you don't have are a lot of people who understand how to build products and promote markets. We'd prefer ICANN to become more of a trade association that promotes the growth of the network rather than a regulatory body that seems to have a very difficult time getting anything done.
