Borland Software recently released its Lifecycle Quality Management (LQM) initiative, which applies quality assurance to the entire application design process, with special emphasis on the requirements phase.
Of course, quality is not just about the code; it's about the quality of process and methodology, and about getting the definitions right. It's also about people.
So we took Borland's approach to quality into the field to test the thinking at a busy application development shop. In this sponsored B2B informational podcast, listen to Chris Meystrik, the vice president of software engineering at Jewelry Television in Knoxville, Tenn., describe his fast-paced environment and what he looks for in tools, testing, and application lifecycle management.
Check out these excerpts from the discussion:
... What we’ve done is to move to a very agile, iterative development process, where quality has got to be a part. At the very beginning of this process, from requirements even in pre-discovery, we have QA engineers and QA managers onboard with the project to get an understanding of what the impacts are going to be. That way we can get the business thinking about quality from the very beginning, with our product managers and project managers getting a bird’s-eye view of what a real-life project schedule might look like. From there on, our QA is heavily involved in the agile process, all the way to the end, measuring the quality of the product. It has to be that way.
We need the vendors to supply us with products that are open, products that will communicate with one another at every phase in the lifecycle of our product development. We have requirements engineers, product managers, and project managers -- all working in the initial stages of the project on the project charter -- trying to allocate resources, and then putting initial requirements together.
When the engineers finally get that, they’re not dealing with the same set of tools. The requirements engineer’s world is one of documentation and traceability, and being able to make sure that every requirement they’re writing is unambiguous and can be QAed at the end of the day. That’s their job at the beginning.
When that gets pushed off into engineering, they’re using their source code management (SCM) system, and their bug and issue tracking systems, and they’re using their IDEs. We don’t need them to get into other tools. All these tools need to coexist in one ALM framework that allows all these tools to communicate.
So, for example, within Eclipse, which is very, very popular here, you’re getting a glimpse of what those requirements look like right down to the engineers’ desktop, without having to open up some other tool, which we know nobody ever does. Without that, you have a barrier to entry that you just want to avoid, and the communication overhead gets heavier.
When it comes to traceability, you want traceability all the way down to the source-code level, from those requirements into Subversion, which is the tool we’re using. The same goes for generating test plans out of requirements: our QA engineers are not using the requirements tool; they are using automated regression testing tools and automated performance testing tools. They want to write their test plans, and have bidirectional input in and out of the requirements tools, so they can maintain their traceability. So, all across, it has to be communicating and open.
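The requirements-to-source traceability Meystrik describes can be approximated even without a full ALM suite. Below is a minimal sketch that maps requirement IDs to the Subversion revisions whose commit messages mention them. The `REQ-###` message convention and the `trace_commits` helper are hypothetical illustrations, not part of Borland's tooling or Jewelry Television's setup; commercial ALM frameworks maintain these links with richer metadata and bidirectional sync.

```python
import re
from collections import defaultdict

# Hypothetical convention: commit messages reference requirements as REQ-<number>
REQ_ID = re.compile(r"\bREQ-\d+\b")

def trace_commits(log_entries):
    """Build a requirement -> revisions map from (revision, message) pairs,
    e.g. as parsed from `svn log --xml` output."""
    trace = defaultdict(list)
    for revision, message in log_entries:
        for req in REQ_ID.findall(message):
            trace[req].append(revision)
    return dict(trace)

# Sample commit log, shaped like what `svn log` would yield
log = [
    (101, "REQ-12: add checkout validation"),
    (102, "refactor cart module"),
    (103, "fix rounding bug for REQ-12; starts REQ-30"),
]
print(trace_commits(log))
# {'REQ-12': [101, 103], 'REQ-30': [103]}
```

A report like this also surfaces the inverse gap: requirements with no associated commits, or commits (like revision 102 above) with no requirement link, which is where traceability audits usually start.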
The stakes are only increasing to get quality right the first time. Quality with Web services and SOA can make or break the performance and reliability of the component services, and may even color perceptions of IT in general. Therefore, quality needs to happen right from the start, not as a late-stage activity, lest architects and business analysts conclude that services cannot be trusted on par with monolithic applications.