With the world still in lockdown, the economy taking a hit as a consequence, and uncertainty reigning supreme, we don't see as much funding news as we used to these days. That's no surprise. What is surprising is seeing today's news.
Granulate, an Israel-based company that optimizes infrastructure performance in real-time and allows businesses to cut compute costs and increase revenue, announced today that it has raised $12 million in Series A funding. The round is led by Insight Partners, one of the largest global funds focused on investing in scale-up software companies, with participation from TLV Partners and Hetz Ventures.
ZDNet connected with Asaf Ezra, co-founder and CEO of Granulate, to talk about what makes Granulate special enough to pull this off.
Low-level optimization, high-level cost reduction
Ezra and his co-founder, Tal Saiag, have a story shared by many Israeli startup founders. They met over a decade ago and served together in one of Israel's top technological intelligence units. Ezra said that besides enabling them to hone their problem-solving and initiative-taking skills, this also had some technical influence on them.
Ezra and Saiag were exposed to the intricacies of Linux servers and the Linux kernel. They learned how to fine-tune Linux to achieve better performance, and by talking to business users and Linux experts, they realized there was a recurring theme, not just in the need for low-level optimization to get better performance under stress, but also in the ways it could be achieved.
Granulate was founded in 2018, and its founders have assembled a team that consists largely of graduates of the same intelligence unit they were part of. Initially, they aimed to develop a product in the area of cybersecurity, but eventually, they pivoted to hardware performance optimization.
Ezra acknowledged the fact that in today's economic climate, cost reduction is a key goal for organizations. In that sense, Granulate may be one of the lucky ones to actually profit from the new priorities: "We are in a unique position to help companies with the most pressing issue at the moment, cost reduction, so this is also an opportunity," said Ezra.
But how does Granulate help organizations optimize their infrastructure and reduce their costs? This is the $12 million question we discussed at length with Ezra. The promise of "optimized infrastructure" translates to solving some very specific low-level problems, such as thread scheduling, connection pooling, and inter-process communication.
Anyone who has ever worked with any server knows how hard it is to tackle those. Granulate chose to focus on Linux servers, for a number of reasons. First off, Linux dominates data centers everywhere. So this made sense from a market segment and data availability perspective. And then, Linux's open-source nature meant that Granulate was able to work with it without roadblocks.
The concept of agents is central to how Granulate works. Composed of kernel- and user-level components, Granulate agents can be installed on any Linux server (bare metal or virtual machine) and support any architecture, be it on-premises data centers, multi-cloud, or hybrid-cloud environments.
The agents infer from the resource usage patterns how to best adapt the machine to the load applied to it, creating a streamlined environment for the application. What's more, in the process agents collect performance metrics, which can be integrated into existing monitoring tools such as Prometheus, New Relic, or AppDynamics.
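To make the monitoring integration concrete, here is a minimal sketch of how an agent might expose collected metrics in Prometheus' text exposition format, which existing scrapers can consume. The metric and label names are invented for illustration and are not Granulate's actual metrics.

```python
def to_prometheus_text(metrics, labels):
    """Render a dict of metric samples in Prometheus' text exposition
    format (metric_name{label="value"} value).  Metric and label names
    here are hypothetical, chosen only to illustrate the format."""
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    lines = []
    for name, value in sorted(metrics.items()):
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines) + "\n"

# Hypothetical samples an agent might have gathered on one host.
samples = {"agent_cpu_utilization": 0.42, "agent_sched_switches_total": 1830}
print(to_prometheus_text(samples, {"host": "web-01"}))
```

A scrape endpoint serving text like this is the common denominator that lets the same data flow into Prometheus directly, or into other tools via their ingestion APIs.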
Ezra said that Granulate's monitoring tool integrations are done on a case-by-case basis. However, agents collect metadata that is not far from what monitoring tools already collect, so adding new integrations is not too hard. The same goes for deployment: Granulate agents are self-contained, so there is not much difference in how they are deployed across environments.
Granulate agents can be deployed using tools such as Chef or Ansible. Still, on bare metal, agents can intervene to a larger extent, partly because they have access to more data and more server parameters in bare-metal environments.
So, Granulate agents attach to servers, collect data, and then use that data to identify usage patterns that allow them to optimize those servers. We got the impression this looks like a typical case of machine learning. We were right in that it is machine learning, but wrong about the typical part.
The role of agents is not just to collect data and send it to Granulate's servers through a pipeline to train machine learning models. The agents are truly autonomous: not just data collection, but machine learning training and inference all take place in the agents.
This happens for a number of reasons, as Ezra explained. This approach enables Granulate agents to work with less data and lower latency. Perhaps more importantly, each agent experiences different loads, as it is deployed on a server with unique workloads. This autonomous approach enables each agent to adapt to its environment.
A point that Ezra stressed is that agents use localized data generated by repetitive processes, such as serving a web server request. Such a process is repeated many times within a short time frame, and the approach agents take is to focus on the recent history of the server they are attached to, rather than using historical data to train their models.
Agents may be given some initial configuration parameters, but after that point, they rely on real-time data and quickly build their own model specific to their environment. Ezra was emphatic about this approach: "You don't want what happened a week ago affecting the strategy of what you do now. The loads could be completely different".
The sliding window of data agents monitor is actually just a few seconds long. Data about the environment agents are deployed in is also utilized but in a different way. In a cluster running a type of server Granulate has worked with before, things such as the number of machines or CPU and memory parameters of the machines become features in the machine learning model for the agents.
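The "few seconds" sliding window described above can be sketched as a time-bounded buffer: samples older than the horizon are discarded, so any statistic derived from the buffer reflects only the current load. This is a minimal illustration of the concept; the window length and the derived statistic are assumptions, not Granulate's actual parameters.

```python
from collections import deque
import time

class SlidingWindow:
    """Keep only the last few seconds of samples, so decisions reflect
    current load rather than stale history.  The 5-second horizon is an
    illustrative choice."""

    def __init__(self, horizon_s=5.0):
        self.horizon_s = horizon_s
        self.samples = deque()  # (timestamp, value) pairs, oldest first

    def add(self, value, now=None):
        now = time.monotonic() if now is None else now
        self.samples.append((now, value))
        # Evict everything older than the horizon.
        while self.samples and now - self.samples[0][0] > self.horizon_s:
            self.samples.popleft()

    def mean(self):
        if not self.samples:
            return 0.0
        return sum(v for _, v in self.samples) / len(self.samples)

w = SlidingWindow(horizon_s=5.0)
w.add(10.0, now=0.0)
w.add(20.0, now=3.0)
w.add(30.0, now=6.0)   # the t=0 sample falls outside the 5-second window
print(w.mean())        # → 25.0
```

Static environment data, such as cluster size or per-machine CPU and memory, would then enter the model as fixed features alongside statistics computed from this rolling buffer.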
100% utilization is not recommended, but performance improvement is
Infrastructure optimization is good. But is there such a thing as excessive optimization? In one of Granulate's use cases, where the priority was to maximize cost reduction, the client began by reducing cluster size and removing machines. Granulate reports that the client kept going until reaching the pre-Granulate SLA, achieving an astounding 33% compute cost reduction.
"Most companies run at 35% IT infrastructure utilization or less due to strict quality of service and stability needs. Granulate solves the trade-off between quality of service and costs, providing customers improved results in both," said Ezra.
He went on to add, however, that achieving 100% utilization is something they do not recommend to anyone. Clusters offer failover capacity, so removing this "slack" entirely could jeopardize performance, or even lead to downtime if traffic peaks or hardware failure occurs.
But there is another way to benefit from performance optimization, said Ezra: "You don't necessarily have to reduce the size of the cluster. You can reduce the capacity of the machines in the cluster, while improving performance, and gain a lot of headroom. You can scale and make better decisions".
Plus, as Ezra went on to add, not everyone shares the same goal of minimizing infrastructure-related cost. Some choose to keep the same infrastructure and benefit from the increased performance they can get out of it. For eCommerce or AdTech clients, for example, increased performance means a competitive advantage.
In a way, Ezra said, what Granulate does is the equivalent of having a dedicated system administrator monitoring and fine-tuning servers at all times, but faster and more efficiently than any system administrator possibly could.
In terms of plans, the goal is to triple the size of Granulate's team and expand its clientele. Ezra said he thinks of Granulate's Series A not just as a cash injection, but as a partnership, as Insight Partners has a proven track record in scaling up SaaS companies.