Assessment engine drives ‘benchmarkable’ function point analysis

Written by Adrian Bridgwater, Contributor

So there I was talking about highly advanced interactive analytic applications last Friday (like you do) and so Monday must logically start with a nod to extended software analysis and measurement technologies.

Function point counting as the standard unit of output used to analyse and measure the quality, productivity and cost of building and operating applications is a challenging part of software analysis. Automating this process is arguably even more challenging.

The function point is said to have become the de facto industry standard for measuring software development output, and function points are typically counted by specialists based on documentation and/or interviews.

Function points are the units of measure used by the IFPUG Functional Size Measurement Method, an ISO-recognised software metric that sizes an information system based on the functionality perceived by its users, independent of the technology used to implement it.
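For anyone who wants to see the arithmetic, here is a minimal sketch of an unadjusted IFPUG 4.x count in Python. The complexity weights are the standard published IFPUG tables; the component counts and names are invented example figures, not data from CAST or any real project.

```python
# Illustrative sketch of an unadjusted IFPUG 4.x function point count.
# The weights follow the published IFPUG tables; the counts themselves
# are made-up example figures.

# Standard IFPUG weights per component type and complexity rating.
WEIGHTS = {
    "EI":  {"low": 3, "average": 4, "high": 6},    # External Inputs
    "EO":  {"low": 4, "average": 5, "high": 7},    # External Outputs
    "EQ":  {"low": 3, "average": 4, "high": 6},    # External Inquiries
    "ILF": {"low": 7, "average": 10, "high": 15},  # Internal Logical Files
    "EIF": {"low": 5, "average": 7, "high": 10},   # External Interface Files
}

def unadjusted_fp(counts):
    """Sum weighted component counts into an unadjusted FP total.

    `counts` maps (component_type, complexity) -> number of components,
    e.g. {("EI", "average"): 12, ("ILF", "low"): 5}.
    """
    return sum(WEIGHTS[ctype][cplx] * n for (ctype, cplx), n in counts.items())

# Hypothetical example: a small billing module.
example = {
    ("EI", "average"): 12,
    ("EO", "high"): 4,
    ("EQ", "low"): 6,
    ("ILF", "average"): 5,
    ("EIF", "low"): 2,
}
print(unadjusted_fp(example))  # 12*4 + 4*7 + 6*3 + 5*10 + 2*5 = 154
```

The adjusted count then scales this total by a value adjustment factor derived from fourteen general system characteristics, but the weighted sum above is the core of the method.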

So that’s all lovely background, thanks – but why the hell am I talking about this? Well, I did think this topic needed some context and explanation, so forgive me the preamble.

Assessment engine company CAST operates in this space and has apparently spent most of the last half-decade in intensive R&D and field work refining sophisticated algorithms, trying to ensure that CAST-computed function point counts closely replicate counts made manually under the IFPUG 4.2 standard.

What development project managers should be looking for here is delivery of this technology in an automated package. Our adjectives here are refreshingly different, so forget performance, interoperability and ease of use. A good automated function point analysis system will be objective, repeatable, tamper-proof and benchmarkable.

Benchmarkable? Is that a word? Let’s just go with it for this niche please.

“Automating function point counts, especially across the variety of technologies employed by typical IT departments, is not a matter of simply analyzing code,” said Olivier Bonsignour, VP of product development at CAST. “An automated solution must analyze calls from the UI to recreate the user experience all the way to the data layer, all from the source code. Doing so depends on the ability to take the entire application into account – from the user interface all the way down to the structure of the database.”
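To make that idea concrete, here is a toy sketch of what “analysing calls from the UI all the way to the data layer” can look like. This is my illustration, not CAST’s actual engine: the call graph, entry points and function names are all hypothetical, and a real analyser would derive them from parsed source code rather than a hand-written dictionary.

```python
# Toy illustration of automated transaction detection: walk the call
# graph from each UI entry point and check whether any path reaches
# the data layer. Each end-to-end path is a candidate "transactional
# function" for counting purposes.

# Hypothetical call graph extracted from source code: caller -> callees.
CALL_GRAPH = {
    "OrderForm.submit":       ["OrderService.create"],
    "OrderService.create":    ["OrderRepository.insert", "AuditLog.write"],
    "OrderRepository.insert": [],  # touches the database
    "AuditLog.write":         [],  # touches the database
    "HelpPage.render":        [],  # UI only, never reaches data
}
UI_ENTRY_POINTS = {"OrderForm.submit", "HelpPage.render"}
DATA_LAYER = {"OrderRepository.insert", "AuditLog.write"}

def reaches_data_layer(node, seen=None):
    """Depth-first search: does any path from `node` hit the data layer?"""
    seen = seen if seen is not None else set()
    if node in DATA_LAYER:
        return True
    seen.add(node)
    return any(reaches_data_layer(callee, seen)
               for callee in CALL_GRAPH.get(node, []) if callee not in seen)

transactions = [ep for ep in UI_ENTRY_POINTS if reaches_data_layer(ep)]
print(transactions)  # ['OrderForm.submit'], i.e. one candidate transaction
```

The HelpPage entry point never touches data, so it would not register as a transactional function; the order-submission path would.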

With the kind of enhanced function point analysis CAST is talking about here, we could be looking at function points that are modified during the course of a project. The company claims that it can now count micro-function points – function point changes that are too small to be counted manually but which represent a big proportion of IT effort.
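Again purely as an illustration of the concept, counting micro-function points amounts to diffing two automated snapshots of the same application and keeping the changes too small for a manual counter to bother with. The snapshots, the per-component figures and the “micro” threshold below are all invented; the article gives no detail of CAST’s actual model.

```python
# Naive sketch of the "micro-function point" idea: diff two automated
# counts of the same application and split the changes into whole-FP
# and sub-FP ("micro") buckets. All figures here are invented.

def fp_delta(before, after, micro_threshold=1.0):
    """Split per-component FP changes into whole and 'micro' buckets."""
    whole, micro = 0.0, 0.0
    for component in set(before) | set(after):
        change = abs(after.get(component, 0.0) - before.get(component, 0.0))
        if change >= micro_threshold:
            whole += change
        elif change > 0:
            micro += change
    return whole, micro

# Two hypothetical weekly snapshots of per-component counts.
week_1 = {"OrderForm": 4.0, "Invoice": 6.0, "Search": 3.0}
week_2 = {"OrderForm": 4.2, "Invoice": 8.0, "Search": 3.1}

whole, micro = fp_delta(week_1, week_2)
print(whole, micro)  # 2.0 whole FPs and roughly 0.3 micro-FPs
```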

This is all superb news – well, it’s superb if you want to count incremental function points in a project that is on track and running a smooth path. But what if project skew and client requirements are flying out the window like TV sets on a Rolling Stones reunion tour? Won’t these “big picture” problems necessitate a rather more holistic view of the project?

Oh, I know, CAST will write to me and tell me that with incremental function point analysis they can monitor a project’s path more accurately and help redirect it if it goes off kilter. But isn’t there the concept of triage in here somewhere? You don’t apply a Band-Aid to a broken leg, do you?

Bring it on people, share your thoughts please – but it’s only Monday so go easy OK?
