Let me know if this sounds familiar. Your organization started with a data center, an actual building. Then you added a few satellite installations, either as server racks, co-lo, or additional physical data centers. Then management did a reorg and, all of a sudden, your IT team went from producing applications over the course of months or years to being expected to produce them in weeks.
So you turned to cloud services. Now you have hundreds of accounts and logins, much of your data is in SaaS applications run by competing companies, and even single sign-on seems aspirational. Billing, provisioning, backups, and security are scattered among dozens of unrelated stakeholders.
When something goes wrong (and it often does), finding out which system is at fault takes forever. Half the time, it's not just one system. Instead, it's the fact that your various infrastructure components don't play nice with each other. Now, you not only have bickering cloud services and bickering vendors, you also have mobile apps and confidential data that's not just in a couple of secure data centers, but on the phones and laptops of most of your managers.
Worse, have you noticed that the world is changing at warp speed? According to the Q1 2022 Quarterly Remote Work Report by careers platform Ladders, nearly a quarter of all professional jobs are now permanently remote. Data, employee equipment, and the problems that come with them are scattered pretty much everywhere.
Marketing, operations, and HR are all begging for new, custom applications to help them keep up. But it's hard enough just to keep the existing systems from melting down. How are you and your fellow IT team members supposed to find the time to implement new solutions, too?
Digital transformation and cloud operations
It's all very overwhelming. Fortunately, yours isn't the only organization that has been facing these challenges. The cloud revolution and the push for digital transformation have resulted in new ways of doing IT that make it possible to manage IT infrastructure as a coherent whole, while making it practical, approachable, and even smooth to spin up new infrastructure and new solutions to meet various business needs.
Much of this relates to the boom in cloud computing. But we're not just talking about software-as-a-service apps or even on-demand infrastructure that lives in some remote data center operated by one of the big tech companies. Sure, that's how it got started. But over time, something very important took place.
Companies started to see the benefits of cloud computing, and wanted to apply them to all levels of infrastructure – from inside the data center, to warehouses and shipping centers, to shared computing resources, to remote sensors. How cool would it be if you could do on-demand self-service scaling and provisioning right from a web browser, and scale and provision everything?
And this is where the time savings starts to really scale up. Using a single-pane-of-glass interface with automation and orchestration tools, it has become possible to set up systems that can build out services and infrastructure inside your on-premises data center and at the edge, as well as in the cloud.
For those companies that don't want to build all these systems themselves, there are platforms like HPE GreenLake – which is also the sponsor of this ZDNET editorial series – designed to help sort out the complexity. We'll get back to GreenLake in a bit, but first let's talk about where edge computing fits into this new paradigm.
The rise of edge computing
If the data center is that building with all your servers, and the cloud is that building that someone else owns with all the servers you rent, the edge is everything else – where it all happens. It's the sensors in smart cities. It's the medical instrumentation in hospitals. It's the fabrication and material transport systems in factories and warehouses. It's the individual retail store in your chain of thousands of stores.
The thing about computing at the edge is that it needs to run at the speed of life. A self-driving car can't take the time to send off a query and await a response when a truck swerves in front of it. It has to have all the necessary intelligence in the vehicle to decide what action to take. While this is an extreme example, the same is true of factory processes and even retail sales. Intelligence, data analysis, and decision making must be available without a propagation delay, and therefore must live at the edge.
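To make the latency argument concrete, here's a minimal back-of-the-envelope sketch. The speeds and latencies are illustrative assumptions, not measurements, but they show why a round trip to a distant cloud is a non-starter for real-time decisions.

```python
# Illustrative latency budget for an in-vehicle decision.
# All figures here are assumptions chosen for the example.

SPEED_M_PER_S = 100 * 1000 / 3600  # 100 km/h converted to metres per second


def distance_travelled(latency_ms: float) -> float:
    """Metres the vehicle covers while waiting on a decision."""
    return SPEED_M_PER_S * latency_ms / 1000


# Assume ~10 ms for on-device inference vs. ~150 ms for a cloud round trip.
local = distance_travelled(10)
cloud = distance_travelled(150)
print(f"local: {local:.2f} m, cloud: {cloud:.2f} m")
# prints: local: 0.28 m, cloud: 4.17 m
```

Four extra metres of travel before a braking decision is the difference intelligence at the edge makes; the same logic, at gentler timescales, applies to factory lines and point-of-sale systems.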
Of course, all of this adds to the management overhead. Now you have management consoles from a large number of vendors to contend with, plus those for your services on-premises, and then all the stuff up in the cloud.
This is where integration becomes absolutely essential: all your IT resources – from the edge all the way up to the cloud – need to be managed from a single, coherent interface.
It's not just about ease of use. It's about preventing mistakes and being able to keep track of and mitigate threats. If you have to open a separate management dashboard for every application and subsystem, you're likely to miss things. Some of those things could be systemic failures whose signs you just don't see. And some could be the indicators of an intrusion or malware attack.
The key to managing all this is a comprehensive edge-to-cloud platform that provides all the services necessary to maintain, grow, and defend your infrastructure over the long haul.
Understanding the benefits of an edge-to-cloud platform
So what characteristics make up a comprehensive edge-to-cloud platform? If you start looking to vendors for a solution, you'll want to explore four key features: self-service, rapid scaling, pay-as-you-go, and managed infrastructure.
These all interconnect, and you pretty much need all of them for the platform to deliver full value. Self-service is that dashboard we've been talking about. It's a cross-vendor, cross-installation provisioning and tracking interface that allows you to see the performance and issues of your currently installed edge-to-cloud infrastructure as well as order up new capabilities and services – and that includes public cloud apps as well as all your private operations.
Rapid scaling goes hand-in-hand with that, because you want to be able to request a new VM, a new container, or even a whole new bare metal environment and have it happen quickly, if not instantly. The key to this, from an operational standpoint, is to have extra capacity available to bring to bear when you need it.
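The self-service, rapid-scaling pattern can be sketched with a toy provisioning client. Everything here – the class name, the request fields, the instant "ready" status – is hypothetical and simulated in memory; it isn't any vendor's actual API. The point is the shape of the interaction: submit a request, get an ID back immediately, poll for status.

```python
# Hypothetical self-service provisioning client (simulated, no real API).
class ProvisioningClient:
    def __init__(self):
        self._requests = {}
        self._next_id = 0

    def request_vm(self, cpus: int, memory_gb: int) -> int:
        """Submit a VM request; returns a request ID immediately."""
        self._next_id += 1
        # A real platform draws on pre-provisioned spare capacity,
        # which is why the request can complete in minutes, not weeks.
        self._requests[self._next_id] = {
            "cpus": cpus,
            "memory_gb": memory_gb,
            "status": "ready",
        }
        return self._next_id

    def status(self, request_id: int) -> str:
        return self._requests[request_id]["status"]


client = ProvisioningClient()
req = client.request_vm(cpus=4, memory_gb=16)
print(client.status(req))  # prints: ready
```

In a real deployment, the dashboard described above is essentially a front end over calls like these, issued against on-premises, edge, and cloud capacity alike.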
I know. I know. Over-building was a big part of what went wrong in previous incarnations of IT strategy. But that's where the pay-as-you-go billing comes into play. If you're working with a partner provider, they absorb the cost of having the extra capacity at the ready, and all you do is pay for the actual capacity you use. That puts this into the OPEX category, which is also a boon compared to many long-depreciation CAPEX loads on your bottom line.
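The OPEX arithmetic is simple enough to write down. In this sketch the metered rate and the usage figures are invented for illustration; the key property is that the provider's standby capacity never appears on your bill.

```python
# Illustrative pay-as-you-go bill. The rate and usage are assumptions.
RATE_PER_VM_HOUR = 0.12  # assumed metered rate, in dollars


def monthly_bill(vm_hours_used: float, reserved_vm_hours: float) -> float:
    """Pay-as-you-go: reserved (idle) capacity costs the customer nothing."""
    # reserved_vm_hours is deliberately unused -- that's the whole point.
    return vm_hours_used * RATE_PER_VM_HOUR


# Three VMs running all month (3 * 720 h) plus a 400-hour burst:
print(round(monthly_bill(2560, reserved_vm_hours=5000), 2))  # prints: 307.2
```

Under the old over-building model, those 5,000 reserved VM-hours would have been capital you bought, depreciated, and mostly never used.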
And finally, the last major characteristic is a managed infrastructure. This is where your partner provider does most of the infrastructure management, and you focus on your operational needs. For example, on a much smaller scale than you're likely to need, I use a managed infrastructure provider to manage my firm's websites. Truly, there's nothing better than being able to open up a support ticket and know that someone there will fix whatever problem I'm having with the servers while I go back to writing my next article.
On the bigger scale you're likely to have, you're looking at all levels of provisioning, management, and support, including security and attack prevention. The reduction in headaches and that, "What the heck do I do now?" feeling can be so very worth it.
We've already explored how digital transformation requires a comprehensive edge-to-cloud strategy, so now let's look at some of the operational benefits.
The most important is greater agility. As we've seen over the last three years, the world can change with breathtaking rapidity. Your offerings and operations need to respond to (or even anticipate) those changes just as quickly. Systems that can spin up or down fast give you the responsiveness you need in today's world.
Once you have the agility, doors open. You can modernize applications to meet the needs of work-at-home employees and highly mobile customers. You can optimize a hybrid cloud solution that perfectly fits your working needs, but without all of the chaotic overhead that comes from trying to make multiple vendor configurations work together. You can put your line-of-business needs and your customers' desires first, scaling out to meet the needs of market forces and taking advantage of opportunities as they arise.
HPE GreenLake and other platforms
This is where HPE GreenLake and its competitors come in. They manage infrastructure up and down the line, so you can provision co-lo and cloud services off-site, and they'll also deliver gear to your facility within 14 days with no upfront cost. All of that is cost-controlled with careful metering and pay-as-you-go billing that tracks your usage – whether it increases or decreases.
HPE manages all of this through HPE GreenLake Lighthouse, which HPE says, "removes the entire process of having to order and wait for a new configuration by allowing customers to add new cloud services in just a few clicks in HPE GreenLake Central and run them simultaneously in just minutes."
HPE GreenLake Central is HPE's unifying dashboard. It's the interface where you control operations, view data-based insights, and manage your entire network. It's also the interface where you order new capabilities and get up-to-date billing information.
As managed services have become more capable and flexible, I've become a huge proponent of them. I used to be a guy who insisted on being able to physically touch all my hardware, but that totally hands-on approach could often be a huge time sink, when my time could better be spent on the unique offerings of my business. I'm guessing the same applies to most of you reading this.
But we live in a time where rapid change means "by next week" not "by next quarter." We need to utilize all the resources and capabilities of systems that enable us to better manage and better respond to predictable and unpredictable events on a network level, a local level, and on a global level. Services like HPE GreenLake can help remove the friction in the process, improve systems, keep users safe, and maybe even let you have a weekend off once in a while.