NGINX and the future of the web server

CEO Gus Robertson speaks about the company's massive growth and where it's headed next.
Written by Colin Barker, Contributor

NGINX CEO Robertson: "Today, websites aren't really just websites anymore, they're applications."

Image: Colin Barker

Web server company NGINX styles itself as "the secret heart of the modern web" and claims to run 60 percent of the busiest websites in the world.

CEO Gus Robertson is an Australian native with big ambitions for the company: while NGINX already has a significant presence in the US, it now plans to expand its public profile around the world. ZDNet recently spoke to Robertson to find out more.

ZDNet: Tell me about NGINX.

Robertson: There are a couple of different categories in the web server market. Apache is the original web server and that was built 20, 25 years ago, as an open source web server.

It was built for a different type of internet from what we have today. Then websites were really brochureware. Today, websites aren't really just websites anymore, they're applications. You log into them, you share, you download videos, and a host of other features.

NGINX started in 2004, as an open source project, written by one of our founders, Igor Sysoev, and he wrote the software himself, 100 percent of it.

Where was he from?

Moscow, and when he started NGINX he was really trying to scratch an itch that he had for some time. At the company where he worked, he was handling concurrent inbound connections to the application he was working on, and Apache really couldn't scale to 1,000 or maybe 2,000 concurrent connections.

He tried writing modules for Apache to scale it beyond those limits. There was actually quite a challenge on the internet at the time to see who could break the 10,000 concurrent-connection barrier.

Igor went home, wrote some code, tested it out, broke the 10,000 barrier, and open sourced the code. That was in 2004. He managed the project on his own until 2011. By then, it had just got too big because by that stage there were about 50 million websites using the software.

He was just getting too many requests for features and enhancements, so he got together with two of his friends, formed a company, and called it NGINX Inc. The idea was that they would be able to invest in more engineering and support staff around the project, and then be able to monetise it in some way.

I joined the company in 2012 when it was seven guys in Moscow and myself in the US. Since then we have been able to build the business, and we now have over 120 staff globally.

With this next stage of our expansion we have opened offices for EMEA in Cork, Ireland, and we plan to build up to over 100 people there over the next three years. The business has grown year-over-year and we now have over 317 million websites using our software, including 58 percent of the busiest sites in the world.

We are now the default and the most popular web server for any website handling a reasonable amount of traffic. Think about sites like Uber, Netflix, BuzzFeed, the BBC, and SoundCloud.

Has it been a straightforward growth path?

Straightforward in terms of the adoption and growth. It really took off around 2007, 2008. That was when the way that people interacted with websites changed.

That's when websites really transitioned from being brochure websites to sites offering real content and real applications.

That's when broadband became widely adopted and mobile phones started kicking in. There were so many connections and so many people coming into websites that the sites had to be able to scale.

NGINX became the de facto standard because of our architecture, which is very different from Apache's.

Apache has a process-driven architecture, rather than an event-driven architecture like ours. That means it handles traffic in a very different way to the way we do.

What is the difference between the ways you and Apache handle traffic?

Rather than allocating separate memory and CPU for every single connection and keeping it open, we only take memory and CPU when a request comes in on a connection, and we pass that down to the upstream server.

We don't hold resources for a connection that isn't in use, so we don't lock up CPU and memory, and we can handle traffic asynchronously.
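This event-driven model is visible in how NGINX is configured: a small, fixed pool of workers multiplexes many connections, instead of a process or thread per connection. A minimal sketch (the numbers are illustrative, not taken from the interview):

```nginx
# Hypothetical top-level nginx.conf fragment: one single-threaded worker
# per CPU core, each multiplexing thousands of connections on an event loop.
worker_processes auto;          # one worker per core

events {
    worker_connections 10240;   # connections each worker can multiplex
}
```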

Would you describe your way of working as fully flexible in that sense?

Exactly. A good analogy is a bank teller. You don't assign a bank teller to every person who walks in; if you're just standing there and don't need to deposit or withdraw money, there's no point having a teller waiting on you in case you might. You go to the counter only when you actually want to deposit or withdraw money.

So where does the speed come from?

That comes from the lightweight nature of our software. Although we do have an incredible amount of capability and features in the software, it is still less than 200,000 lines of code. If you install it, it is less than 3MB.

We are maniacal about not adding an extra line of code if it doesn't have to be there. It's very lightweight, high-performance software; we don't want it to become bloatware.

What do you put the success of the company down to? Is it just the quality of the software?

We are the world's number one web server for high-performing websites. But what we have also done is extend the open source product into our commercial offering, with more features that take it from being a web server to being an application delivery platform (ADP).

Now, an ADP does more than just application delivery. It does load balancing and caching, it has security capabilities and acts as an application firewall, and it does health checks, monitoring, and so on.

It's the natural bump in the wire for authenticating inbound traffic, or for terminating and encrypting it. It's the natural place to store commonly used content such as images, video, or HTML pages.

You can accelerate an application's performance dramatically by moving the heavy lifting of HTTP to the front of the application, so that the application server on the back-end only has to do application logic.
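In configuration terms, that offloading might look like the following sketch (paths, ports, and zone names are invented for illustration): NGINX serves static assets and cached responses itself, and only forwards requests that need application logic.

```nginx
# Illustrative fragment: terminate HTTP at the front, serve static and
# cached content directly, proxy only dynamic requests to the backend.
http {
    proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m;

    server {
        listen 80;

        location /static/ {
            root /srv/www;                     # images, CSS, JS served directly
        }

        location / {
            proxy_cache app_cache;             # cache common responses up front
            proxy_pass http://127.0.0.1:8080;  # app server does logic only
        }
    }
}
```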

If you think about the way applications are delivered today, say Amazon.com for example. Amazon.com is about 178 individual services, which means that each individual application is there to do a very specific thing.

If you type in Nike shoes, for example, you get many things. You get reviews, you get recommendations, you get sizes, you get all this information and every single one is a separate service, or microservice that is focused on delivering that one thing.

As you do that, all these services need to communicate, and they communicate over HTTP -- and how do they do that? They have NGINX.
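One common way that service-to-service HTTP routing is expressed in NGINX is by mapping path prefixes to separate backends. A sketch, with service names and addresses invented for illustration:

```nginx
# Hypothetical microservice routing: each path prefix maps to its own
# backend service, with NGINX relaying the HTTP traffic between them.
upstream reviews { server 10.0.0.11:8000; }
upstream sizes   { server 10.0.0.12:8000; }

server {
    listen 80;

    location /reviews/ { proxy_pass http://reviews; }
    location /sizes/   { proxy_pass http://sizes; }
}
```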

So how do you handle a smaller site or app?

The same issues are there for the small guys as for the Amazons. You look at how you handle inbound connections, how you handle encrypted connections. Whether I'm a bank or a small site, I still need to encrypt that traffic.

And if I am on an application, I still expect a response time of less than a second. The issues that affect a small website are exactly the same as those that affect a large one, just at a different order of magnitude.
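For a small site, the encrypted-connection handling mentioned above can be a short server block. A minimal sketch (certificate paths and the domain are placeholders):

```nginx
# Minimal TLS termination for a small site: the same encrypted-traffic
# handling a bank would need, at smaller scale.
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/certs/example.com.crt;
    ssl_certificate_key /etc/nginx/certs/example.com.key;

    location / {
        proxy_pass http://127.0.0.1:8080;  # backend application
    }
}
```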

How do you keep all that secure?

There are many ways. One would be SSL. Another is the web application firewall -- the ability to look at different traffic and monitor that traffic. We've got a lot of discrete functions set up on the back-end. For example, you can say, 'I know all my end users, so as users come in I can white-list the ones I know or black-list the ones I don't.'

I can rate-limit users so that I can cap the requests a certain user can make. That's really important, not just for mitigating inbound DDoS attacks -- you can also be DDoS'd internally by another API.
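The white-listing and rate-limiting ideas above can both be sketched in configuration (the network range, zone name, and limits here are arbitrary examples, not anything from the interview):

```nginx
# Hypothetical access control and rate limiting: known clients are
# white-listed, everyone else is denied, and each client IP is capped
# at 10 requests per second with a small burst allowance.
limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;

server {
    listen 80;

    location /api/ {
        allow 192.168.1.0/24;             # known internal users
        deny  all;                        # black-list everyone else

        limit_req zone=per_ip burst=20;   # absorb short spikes, cap the rest
        proxy_pass http://127.0.0.1:8080;
    }
}
```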

And this is all straightforward?

We have a configuration file within NGINX, and NGINX runs on top of Linux, so it's command-line driven. We don't have a configuration dashboard per se.

But we do have a dashboard that shows you all the monitoring and analytics of all of the traffic that is coming in.
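The commercial dashboard itself is out of scope here, but for context, even the open source build can expose basic traffic counters over HTTP. This is a common pattern, not something described in the interview:

```nginx
# Expose basic counters (active connections, accepted connections,
# requests served) on a local-only endpoint via the stub_status module.
server {
    listen 127.0.0.1:8081;      # reachable only from the host itself

    location /status {
        stub_status;            # plain-text counters for monitoring tools
    }
}
```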

What are the biggest issues facing your customers at the moment?

DDoS is a huge one: it is one thing that can bring a site down. But traffic load is the most common one.

If you look at the industry in the US, Thanksgiving is one of the biggest [days for website traffic], along with Black Friday and Cyber Monday. Every year big sites go down on those days because they didn't plan for or anticipate the amount of traffic they were going to get. And that's good traffic. It's not bad traffic. It's not a DDoS attack, but equally it can bring a site down.

People describe NGINX as a sort of shock absorber at the front of your website.

But surely there must be some occasions when traffic can overload a site?

There are limitations, but because NGINX doesn't block traffic, we can still handle very large amounts. We are not saying that we can handle everything. If you are inundated with a massive DDoS attack, then that is what it is. But NGINX is very good at absorbing the shock of a massive amount of internet traffic.

If there is a limitation, it is the bandwidth.

What else is new with NGINX?

We have extended NGINX Plus with load balancing, caching, SSL, monitoring, and analytics. What that does is put us up against another category of technology -- the application delivery controller, made by companies like F5 and Citrix. They created a hardware approach to solving application acceleration.
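As one concrete example of what software load balancing looks like in NGINX (backend addresses and thresholds are invented for illustration), requests can be spread across application servers, with unresponsive servers temporarily taken out of rotation:

```nginx
# Hypothetical software load balancing: route each request to the
# least-busy backend; a server that fails 3 times in 30 seconds is
# skipped for the next 30 seconds (passive health checking).
upstream app_servers {
    least_conn;
    server 10.0.0.21:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.22:8080 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_servers;
    }
}
```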

What we are seeing is a transition from hardware to software, and from looking at it from a network point of view to looking at it from a software point of view. We are seeing a lot of our customers migrating away from these expensive hardware appliances to our commercial product, NGINX Plus. That's because of the cost savings, because it's software, because it's application-centric, and because it goes to the cloud and is cloud-native.

What we see happening is that we are all moving from the monolithic, everything-in-one-package approach to a microservices, or distributed application, approach.
