Sarbiewski: The legacy approach is not going to be the right path for delivering modern applications. We’ve been hard at work for a couple of years now, recasting and re-inventing our portfolio to match the modern approach to software, going through the products one by one.
You’ve got changes in how you are organized. You’ve got changes in the approach that people are taking. And, you’ve got brand-new technology in the mix and new ways of actually constructing applications. All of these hold great promise, but great challenges too. That's clashing with the legacy approach that people in the past took in building software.
We talk to our customers about this all of the time. It boils down to the same old changes that we see sort of every 10 years. A new technology comes into play with all its great opportunity and problems, and we revisit how we do this. In the last several years, it’s been about how do I get a global team going, focused on potentially a brand-new process and approach.
What are the new technologies that everybody is employing? We’ve got rich Internet technologies and Web 2.0 approaches, and our technology is there. For composite applications, we’ve built a variety of capabilities that help people get the performance right with those technologies and keep the security and the quality high, while keeping the speed up.
So everything from how we do performance testing in that environment, to testing things that don’t have interfaces, to understanding the impact of change on systems like that. We’ve built capabilities that help people move to Agile as a process approach: things like fundamentally changing how they can do exploratory testing, and how they can bring automation into performance, quality, and security much sooner in the process.
Lastly, we’ve been very focused on creating a single, unified system that scales to tens of thousands of users. And, it’s a web-based system, so that wherever the team members are located, even if they don’t work for you, they can become a harmonious part of the overall team, 24-hour cycles around the globe. It speeds everything up, but it also keeps everyone on the same page. It’s that kind of anytime, anywhere access that’s just required in this modern approach to software.
How is software really supported?
When I talk to customers, I ask them how they're supporting software. Software delivery is fundamentally a team sport. There isn't a single stakeholder who does it all. They all have to play and do their part.
When they tell me they’ve got requirements management in Microsoft Word, Excel, or maybe even a requirements tool, and they have a bug database for this, test management for that, and this tool here, on the surface it looks like they’ve fitted everybody with a tool, so it must be good. Right?
The problem is that the work is not isolated. You might be helping each individual stakeholder out a little bit, but you're not helping the team. The team’s work relates to each other. When requirements get created or changed, there's a ripple effect. What tests have to be modified or newly created? What code then has to be modified? When that code gets checked in, what tests have to be run? It’s that ripple effect of the work that we talk about as workflow automation. It's also the insight to know exactly where you are.
When the real question of how far am I on this project, or what quality level am I at -- am I ready to release -- needs to be answered in the context of everyone’s work, I have to understand how many requirements are tested, and whether my highest-priority features are working against the code.
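That ripple-and-coverage idea can be sketched as a tiny traceability graph. This is purely illustrative -- the requirement IDs, test names, and module names below are invented, not HP ALM's actual data model.

```python
# A minimal sketch of traceability: requirements link to tests, tests link
# to code modules, so a change in one place ripples outward automatically.
# All identifiers here are invented for illustration.

requirements = {
    "REQ-1": {"tests": ["T-1", "T-2"], "priority": "high"},
    "REQ-2": {"tests": ["T-3"], "priority": "low"},
    "REQ-3": {"tests": [], "priority": "high"},  # not yet covered by any test
}
tests = {
    "T-1": {"modules": ["checkout.py"]},
    "T-2": {"modules": ["checkout.py", "cart.py"]},
    "T-3": {"modules": ["search.py"]},
}

def impact_of_change(req_id):
    """Which tests and code modules are touched when a requirement changes?"""
    affected_tests = requirements[req_id]["tests"]
    affected_modules = sorted({m for t in affected_tests for m in tests[t]["modules"]})
    return affected_tests, affected_modules

def coverage():
    """How many requirements have at least one test? (the 'am I ready?' question)"""
    covered = [r for r, v in requirements.items() if v["tests"]]
    return len(covered), len(requirements)

print(impact_of_change("REQ-1"))  # (['T-1', 'T-2'], ['cart.py', 'checkout.py'])
print(coverage())                 # (2, 3) -- two of three requirements tested
```

With all the work in one system, queries like these fall out of the linkage itself instead of being assembled by hand from separate tools.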
So, you see the team aspects of it. There is so much latency in a traditional approach. Even if each player has their own tool, it's how we get that latency out, along with the finger-pointing and the miscommunication that also result. We take all that out of the process and, lo and behold, we see our customers cutting their delivery times in half, dropping their defect rates by 80 percent or more, and actually doing this more cheaply with fewer people.
In requirements management, one of the big new things that we’ve done is allow the import of business process models (BPMs) into the system. Now, we’ve got the whole business process flow pulled right into the system. It can be pulled right from systems like ARIS or anything that exports the standard business process modeling language (BPML).
Now, everyone who accesses ALM 11 can see the actual business process. We can start articulating that this is the highest priority flow. This step of the business process, maybe it's check credit or something like that, is an external thing but it's super-important. So, we’ve got to make sure we really test the heck out of that thing. [See more on HP's new ALM 11 offerings.]
Everyone is aligned around what we’re doing, and all the requirements can be articulated in that same priority. The beautiful thing now about having all this in one place is that work connects to everything else. It connects to the test I set up, the test I run, the defects I find, and I can link it even back to the code, because we work with the major development tools like Visual Studio, Eclipse, and CollabNet.
It's hugely important that we connect into the world of developers. They're already comfortable with their tools. We just want to integrate with that work, and that’s really what we’ve done. They become part of the workflow process. They become part of the traceability we have.
What we hear from our customers is that the coolest new technology they want to work with is also the most problematic from a performance standpoint.
We went back to the drawing board and reinvented how well we can understand these great new Web 2.0 technologies, in particular Ajax, which is really pervasive out there. We now can script from within the browser itself.
The big breakthrough there is if the browser can understand it, we can understand it. Before, we were sort of on the outside looking in, trying to figure out what a slider bar really did, and when a slider bar was moved what did that mean.
Now, we can generate a very readable script. I challenge anybody: even a businessperson can understand, when they're clicking through an application, the script that gets created for performance testing.
We parameterize it. We can script logic there. We can suggest alternate steps. The bottom line is that the coolest new Web 2.0 front ends can now be very easily performance tested. So we don't end up in that situation where it's great, you did a beautiful rich job, and it's such a compelling interface, but only works when 10 people are hitting the application. We've got to fix that problem.
It speeds everything up, because it's so readable and quick. And it just works seamlessly. We've tested against the top 40 websites, and they are out there using all this great new technology, and it's working flawlessly.
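The kind of readable, parameterized script described here can be sketched in a few lines. This is a conceptual illustration in Python, not the actual format HP's tools generate; the steps, URL, usernames, and data rows are all invented.

```python
# A hedged sketch of a readable, parameterized test script: plain-English
# steps with placeholders, filled in from a data table so the same flow
# can be replayed for many simulated users.

script = [
    "open {url}",
    "type '{username}' into the login box",
    "click the 'Sign in' button",
    "drag the price slider to {max_price}",
]

data_table = [  # one row per simulated user, as if imported from a spreadsheet
    {"url": "https://example.test", "username": "alice", "max_price": 50},
    {"url": "https://example.test", "username": "bob", "max_price": 120},
]

def render(script, row):
    """Substitute one data row into the script -- the 'parameterization' step."""
    return [step.format(**row) for step in script]

for row in data_table:
    for step in render(script, row):
        print(step)
```

Because the steps read like sentences, a businessperson can check the flow, while the data table carries the variation a load test needs.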
Lots of pieces
If you think about a composite application, it's really made up of lots of pieces. There are application services or components. The idea is that if I’ve got something that works really well, I can reuse it, combine it with a few other things or a couple of new pieces, and get new capability. I've saved money, I’ve moved faster, and I'm delivering innovation to the business in a much better, quicker way. And it should be rock-solid, because I can trust these components.
The challenge is, I'm now making software out of lots of bits and pieces. I need to test every individual aspect of it. I need to test how they communicate together, and I need to do end-to-end testing.
If I try to create composite apps and reuse all this technology, but it takes me ten times longer to test, I haven’t achieved my ultimate goal which was cheaper, faster and still high quality. So Unified Functional Testing is addressing that very challenge.
We've got Service Test, which is actually an incredible visual canvas for testing things that don't have an interface. One of the big challenges with something that doesn't have an interface is that I can't test it manually, because there are no buttons to push. It's all kind of under the covers. But we have a wonderful, easy, brand-new reinvented tool here called Service Test that takes care of all that.
That’s connected and integrated with our functional testing product, which lets you test everything end-to-end at the GUI level. The beautiful thing about our approach is that you get to do that end-to-end, GUI-level testing and the non-GUI testing all from one solution, and report out all the testing that you get done.
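The idea of testing a component with no interface -- no buttons to push -- comes down to calling its API directly and asserting on what comes back. The `check_credit` service below is hypothetical, invented here to echo the earlier check-credit example; it is not part of any HP product.

```python
# A sketch of testing a headless component: there is no GUI to drive,
# so the 'test steps' are just direct calls and assertions on responses.

def check_credit(customer_id, amount):
    """A stand-in for a headless 'check credit' service component."""
    limit = {"C-1": 1000, "C-2": 200}.get(customer_id, 0)
    return {"approved": amount <= limit, "limit": limit}

def test_check_credit():
    # Exercise the service's contract directly, under the covers.
    assert check_credit("C-1", 500)["approved"] is True
    assert check_credit("C-2", 500)["approved"] is False
    assert check_credit("UNKNOWN", 1)["approved"] is False
    return "3 checks passed"

print(test_check_credit())  # 3 checks passed
```

A visual tool wraps this same call-and-assert pattern in a canvas, so the person building the test never has to write the harness by hand.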
So again, bring in a lot of automation to speed it up, keep the quality high and the time down low and you get to see it all kind of come together in one place.
Sprinter is not even a reinvention. It's brand-new thinking about how we can do manual testing in an Agile world. Think of that Instant-On world. It's such a big change when people move to an Agile delivery approach. Everyone on the team now plays kind of a derivative role of what they used to do. Developers take a part of testing, and quality folks have to jump in super-early. It's just a huge change.
What Sprinter brings is a toolset for that tester, for that person who is jumping in, getting right after the code to give immediate feedback. It's a toolset that allows the tester to automatically figure out the screens the tests are supposed to go through and drop in data instead of typing it in. I don't have to type it anymore. I can just use an Excel spreadsheet and start ripping through screens and tests really fast, because I'm not testing whether the application can take the input; I'm testing whether it processes it right.
And when I come across an error, there's a tool that allows me to capture those screens, annotate them, and send that back to the developer. What’s our goal when we find a defect? The goal is to explain exactly what was done to create the defect and exactly where it is. There are a whole bunch of cool tools around that.
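The data-driven idea behind this -- feeding spreadsheet rows into screens instead of typing them -- can be sketched as follows. The in-memory CSV stands in for an Excel sheet, and `process_order_form` is an invented stand-in for the screen logic under test.

```python
# A sketch of data-driven testing: pull input rows from a table and rip
# through the same screen logic once per row, checking the processing
# rather than the typing.

import csv
import io

SHEET = io.StringIO(          # stands in for an Excel sheet of test data
    "quantity,expected_total\n"
    "1,10\n"
    "3,30\n"
    "0,0\n"
)

def process_order_form(quantity):
    """Hypothetical screen-processing logic under test ($10 per item)."""
    return quantity * 10

failures = []
for row in csv.DictReader(SHEET):
    total = process_order_form(int(row["quantity"]))
    if total != int(row["expected_total"]):
        failures.append(row)

print("failures:", failures)  # an empty list means every row processed correctly
```

Each new test case is just another row in the sheet; the tester's time goes into judging the results, not keying in the inputs.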
The last point I’d make about this is called Mirror Testing. It’s super-important. It’s imperative that things like websites actually work across a variety of browsers and operating systems, but testing all those combinations is very painful.
Mirror Testing allows the system to work in the background: while someone is testing, say, on XP and Internet Explorer, five other systems with different combinations are driven through the exact same test. I'm sitting in front of it, doing my testing, and in the background, Safari is being tested, or Firefox.
If there is an error on one of those systems, I see it, I mark it, and I send it right away, essentially turning one tester into six. It's really great breakthrough thinking on the part of R&D here and a huge productivity bump.
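The mirror-testing idea -- one recorded step sequence replayed across several environments at once -- can be sketched conceptually. Real tools drive actual browsers; the lambda "drivers" below are simulated stand-ins, with one environment scripted to fail for illustration.

```python
# A conceptual sketch of mirror testing: replay the same steps in every
# environment and flag any environment where a step breaks.

STEPS = ["open login page", "enter credentials", "submit"]

ENVIRONMENTS = {                 # fake drivers standing in for real browsers
    "XP / Internet Explorer": lambda step: True,              # tester's own session
    "Windows 7 / Firefox":    lambda step: True,
    "macOS / Safari":         lambda step: step != "submit",  # simulated failure
}

def mirror_run(steps, environments):
    """Replay the same steps everywhere; report which environments broke, and where."""
    broken = {}
    for env, driver in environments.items():
        for step in steps:
            if not driver(step):
                broken[env] = step
                break
    return broken

print(mirror_run(STEPS, ENVIRONMENTS))  # {'macOS / Safari': 'submit'}
```

One pass through the steps yields a result per environment, which is exactly the one-tester-into-six effect described above.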
What we hear from our customers is that they really do want their lives to be simplified, and the conclusion that they have come to in many cases is Post-It Notes, emails, and Word docs. It seems simpler at first and then it quickly falls apart at scale. Conversely, if you have tools that you can only work with in one particular environment, and most enterprises have a lot of those, you end up with a complex mess.
Companies have said, "I have a set of development tools. I probably have some SAP, maybe some Oracle. I build in .NET with Microsoft. I do some Eclipse stuff and I do Java. I’ve got those, but if you can work with those, and if you can help me get a common approach to requirements, to managing tests, functional performance, security, manage my overall project, and integrate with those tools, you’ve made my life easier."
When we talk about being environment agnostic, that’s what we mean. Our goal is to support better than anyone else in the market the variety of environments that enterprises have. The developers are happy where they are. We want them as part of the process, but we don’t want to yank them out of their environment to participate. So our goal again is to support those environments and connect into that world without disrupting the developer.
And, the other piece that you mentioned is just as important. Most customers aren’t taking one uniform approach to software. They know they’ve got different types of projects. I’ve got some big infrastructure software projects that I am not going to do all the time and I am not going to release every 30 days and a waterfall approach or a sequential approach is perfect for that.
I want to make sure it’s rock solid, that I can afford to take that type of an approach, and it's the right approach. For a whole host of other projects, I want to be much more agile. I want to do 60-day releases or 90-day releases or even more, and it makes sense for those projects. What they tell us they don’t want is every team inventing its own approach to Waterfall, Agile, or custom processes. I want to be able to help the teams follow a best-practice approach.
As far as the workflow, they can customize it. They can have an Agile best practice, a Waterfall best practice, and even another one if they want. The system helps the team do the right thing and get a common language, a common approach, all that stuff. That’s the kind of process-agnostic belief we have.
The great news is that today you can download all the solutions that we’ve talked about for trials. We have some online demos that you can check out as well. There are a lot of white papers and other things. You can literally pull the software 30 minutes from now and see what I'm talking about.
On the licensing side, we believe that the simplest approach is a concurrent license, which we have on most of the products that we’ve got here. For all the modules that we’ve been talking about, if you have a concurrent license to the system, you can get any of the modules. And, it’s a nice floating license. You don’t have to count up everybody in your shop and figure out exactly who is going to be using what module.
The concurrent license model is a very flexible, nice approach. It’s one we’ve had in the past, and we're carrying it forward. We’ll look to continue to simplify and make it easier for customers to understand all the great capabilities and how to license them simply, so that they can get their teams the modules and capabilities they need.
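The floating-license behavior described here maps naturally onto a counting semaphore: any user can take a seat, and only the number of simultaneous users is capped. The sketch below is a conceptual model of that idea, not HP's actual license implementation.

```python
# A sketch of a floating (concurrent) license pool: seats are not tied to
# named users; the pool only limits how many are in use at once.

import threading

class FloatingLicensePool:
    def __init__(self, seats):
        # BoundedSemaphore also guards against releasing more seats than exist.
        self._seats = threading.BoundedSemaphore(seats)

    def checkout(self):
        """Try to grab a seat without blocking; True if one was free."""
        return self._seats.acquire(blocking=False)

    def release(self):
        """Return a seat to the pool for the next user."""
        self._seats.release()

pool = FloatingLicensePool(seats=2)
print(pool.checkout())  # True  -- first user gets a seat
print(pool.checkout())  # True  -- second user gets a seat
print(pool.checkout())  # False -- a third concurrent user must wait
pool.release()
print(pool.checkout())  # True  -- the freed seat floats to the next user
```

This is why nobody has to count up exactly who will use which module: the cap is on concurrency, not on identities.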