Karczewski: Over the past few years, IT has been asked to deliver more quickly, to be more responsive to our business needs, and to help drive down costs in the way in which we develop, deploy, and deliver software and services to our end customers.
To accomplish that, we've been focusing on automating as many of the tasks in a traditional software development lifecycle as possible, so that steps which would otherwise be performed manually aren't skipped.
For example, automating the steps triggered by a source code check-in, automating the process of closing out the defects that the source code resolves, and automating the testing we do when we create a new service: the performance testing, the unit testing, the code coverage, and the security testing, to make sure that we're not introducing key flaws or vulnerabilities that might be exposed to our external customers.
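The check-in-triggered automation described above can be sketched as a simple gate chain. Everything here is illustrative: the function names, the coverage threshold, and the defect IDs are invented for the example and are not a real Paychex or HP API.

```python
# Hypothetical sketch of a post-commit automation chain. All names and
# thresholds are invented for illustration.

COVERAGE_THRESHOLD = 0.80  # assumed minimum acceptable line coverage


def run_pipeline(commit):
    """Run the automated gates that a manual process might skip."""
    results = {
        "unit_tests": run_unit_tests(commit),
        "coverage": measure_coverage(commit),
        "security": run_security_scan(commit),
    }
    passed = (results["unit_tests"]
              and results["coverage"] >= COVERAGE_THRESHOLD
              and results["security"])
    if passed:
        # Close out the defects this commit claims to resolve.
        close_linked_defects(commit["defect_ids"])
    return passed, results


# Stub implementations so the sketch runs end to end.
def run_unit_tests(commit):
    return True

def measure_coverage(commit):
    return 0.85

def run_security_scan(commit):
    return True

closed = []
def close_linked_defects(defect_ids):
    closed.extend(defect_ids)

ok, report = run_pipeline({"defect_ids": ["DEF-101", "DEF-102"]})
```

The point of the shape is that defect closure happens only when every automated gate passes, so no one has to remember to update the tracking system by hand.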
Applications are basically just a combination of integrated services, and we've been moving forward with a strategic service-based delivery model for approximately a year and a half now. We have hundreds of services that are reused and utilized by our applications.
Paychex is primarily an HR, benefits, and payroll provider, and our key customers are approximately 570,000 business owners and the employees that work for those business owners.
We've been focusing on the small-business owner because we believe that’s where our specialty is.
What we have been finding over time is that we're developing a hybrid behavioral approach. We have clients who want Paychex to do some of the business tasks for them, but they want to still do some of the tasks themselves.
In order to satisfy the one end of the spectrum or the other and everything in between, we've been moving toward a service-based strategy where we can package, bundle, price, roll out, and deliver the set of services that fit the needs of that client in a very highly personalized and customized fashion.
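The package-and-bundle idea above can be illustrated with a small sketch. The service names and prices below are invented for the example; they are not actual Paychex offerings.

```python
# Illustrative only: a toy catalog showing how services might be bundled
# and priced per client. Names and prices are invented.
CATALOG = {
    "payroll":        25.0,
    "tax_filing":     10.0,
    "hr_onboarding":  15.0,
    "benefits_admin": 20.0,
}


def build_bundle(wanted_services):
    """Select the subset of services a client wants and price the bundle."""
    unknown = set(wanted_services) - CATALOG.keys()
    if unknown:
        raise ValueError(f"unknown services: {sorted(unknown)}")
    bundle = {name: CATALOG[name] for name in wanted_services}
    return bundle, sum(bundle.values())


# A client who wants Paychex to run payroll and tax filing, nothing else.
bundle, price = build_bundle(["payroll", "tax_filing"])
```

Each client gets only the subset they asked for, which is the "highly personalized" packaging the model aims at.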
The more that we can automate, the more we're able to test those services in the various combinations and environments in which they need to perform, be highly available, and be consistent.
We have an awful lot of information that is very personal and highly confidential. For example, think about the employees that work for one of these 560,000-plus business owners. We know when they are planning to retire. We know when they move, because they are changing their addresses. We know when they get married. We know when they have a child. We know an awful lot of information about them, including where they bank, and it’s highly, highly confidential information.
We took a step back and took a look at our software delivery lifecycle. We looked at the areas that add less value, the areas that force an individual developer, a tester, or a project manager to manually take care of tasks with which they are not that familiar.
For example, a developer knows how to write software. A developer doesn't always know how to exercise our Quality Center or our defect tracking system, changing the ownership, changing statuses, and updating multiple repositories just to get his or her work done.
So, we took a look at tasks that cause latency in our software delivery lifecycle and we focused on automating those tasks.
We're using a host of HP products today. For example, in order to achieve automated functional testing, we're utilizing Quality Center (QC) in combination with QuickTest Professional (QTP). In order to do our performance testing, pre-production, we utilize LoadRunner. Post-production, we're beginning to look closely at Real User Monitor (RUM), and we're looking to interface RUM with ArcSight, so that when one of our users, anywhere, hits an availability or performance issue while utilizing our services, we're able to identify it quickly and identify the root cause.
Metrics of success

We're looking at the number of hours it takes a manual tester to work through a regression suite, and we compare that with the essentially zero tester time it takes to schedule an automated regression run. We're computing the number of hours that we're saving in the testing arena. We're also computing the number of lines of software that a developer creates today, in hopes that we'll be able to show the productivity gains that we're realizing from automation.
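The hours-saved metric described above is simple arithmetic. The figures below are invented placeholders, not numbers from the interview.

```python
# Back-of-the-envelope version of the testing-hours metric.
# All numbers are assumptions for illustration only.
manual_hours_per_run = 40      # assumed effort for one manual regression pass
automated_hours_per_run = 0    # scheduling an automated run takes no tester time
runs_per_month = 6             # assumed regression cadence

hours_saved_per_month = (
    (manual_hours_per_run - automated_hours_per_run) * runs_per_month
)
```

With those placeholder inputs, the saving works out to 240 tester-hours per month, which is the kind of figure the team is computing to demonstrate productivity gains.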
We're very interested in looking at the HP IT Performance Suite and an Executive Scorecard. We're also very interested in tying the scorecard of the builds that we're doing in the construction and the development arena. We're very interested in tying those KPIs, those metrics, and those indicators together with the Executive Scorecard. There's a lot of interest there.
We've also done something that is very new to us, but we hope to mainstream it in the future. For the very first time, we engaged an external cloud-based organization, which utilized LoadRunner to run a performance test directly against our production systems.
Why did we do that? Well, it's a huge challenge for us to build, support, and maintain many testing environments. In order to get an accurate read on performance and load against our production systems, we picked an off-peak period and got together with an external cloud testing firm, and they utilized LoadRunner to run the performance tests. We watched the capacity of our databases, our servers, our network, and our storage systems as they throttled the volume forward.
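The "throttle the volume forward" approach is, in essence, a stepped load ramp. LoadRunner drove the real test; the sketch below only illustrates the shape of such a run against a stub service, with all parameters invented.

```python
# Minimal stand-in for a stepped load ramp: each step doubles the number
# of simulated concurrent users against a stub endpoint. Not LoadRunner,
# just an illustration of the ramp shape.
import time
from concurrent.futures import ThreadPoolExecutor


def stub_service():
    """Stand-in for a production endpoint; always succeeds here."""
    time.sleep(0.001)
    return 200


def run_step(concurrent_users, requests_per_user=5):
    """Fire one load step and report how many requests succeeded."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(stub_service)
                   for _ in range(concurrent_users * requests_per_user)]
        return sum(1 for f in futures if f.result() == 200)


# Throttle the volume forward: 10 -> 20 -> 40 simulated users,
# watching capacity (here, just success counts) at each step.
results = {users: run_step(users) for users in (10, 20, 40)}
```

In a real run, each step would be held long enough to observe database, server, network, and storage capacity before advancing to the next.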
We plan to do more of that as a final checkout, when we deliver new services into our production environment.