Thursday, March 18, 2010

Measuring Your Progress: Application Portfolio Management

posted by Peter Mollins
Application Portfolio Management helps decision-makers match corporate priorities with IT resources – across operations, architecture, and development. In the first post in this series I looked at how an organization can define those priorities. In this post I’ll look at how you can measure the portfolio to find where goals aren’t being met.

Defining Questions

To achieve the goals you have defined, you have to ask questions. If you are a CIO who wants to reduce application management costs by 20%, you could start by asking questions like:
  • What is the total hardware and infrastructure cost per application?
  • What is the total development team cost per application?
  • Where in the application portfolio do developers spend most of their time?
  • How much effort is required to complete a change by application?
  • How much does each outsourcer cost per application?
  • How business critical is this application?
  • Which platforms are the most expensive to maintain?
The answers to these questions can help determine how well you are reaching your goals – and where you need to do more work. Each of these questions should be drilled into as executives identify areas of weakness and pass them on to managers for resolution. This means that more specific questions should be asked at more focused levels of the organization.
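This goal-question-metric chain can be sketched as a simple data structure. The example below is a hypothetical illustration of the idea, not the API of any particular portfolio management tool:

```python
# A minimal sketch of a goal-question-metric hierarchy.
# All names and questions here are illustrative examples.

goal = {
    "name": "Reduce application management costs by 20%",
    "questions": [
        {
            "text": "What is the total development team cost per application?",
            "metrics": ["team_cost_per_app"],
        },
        {
            "text": "Which platforms are the most expensive to maintain?",
            "metrics": ["maintenance_cost_by_platform"],
        },
    ],
}

def metrics_for_goal(g):
    """Collect every metric needed to answer a goal's questions."""
    return sorted({m for q in g["questions"] for m in q["metrics"]})

print(metrics_for_goal(goal))
# -> ['maintenance_cost_by_platform', 'team_cost_per_app']
```

Structuring the hierarchy this way makes the drill-down explicit: executives own the goal, managers own the questions, and each question names the metrics that must be collected to answer it.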

Defining Metrics

You have your questions, but what are the answers? To spot issues, answers should be quantifiable and trendable. For instance, in reply to the question “how business critical is this application?” your metric may be a weighted scale from 1 to 10. Metrics must meaningfully answer the question at hand.

Metrics may take the form of a snapshot, where one-off measurements are used to find outliers. Or, more usefully, they can be trended over time to spot creeping issues that can be corrected before they become business critical.
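A minimal sketch of trending: fitting a least-squares line to monthly snapshots and flagging metrics that drift upward. The numbers and threshold are invented for illustration.

```python
# Detect a creeping upward trend in monthly metric snapshots
# using a simple least-squares slope (illustrative data).

def slope(values):
    """Least-squares slope of values taken at evenly spaced intervals."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Hours spent per change request, by month (hypothetical)
effort = [40, 42, 45, 44, 48, 51]
if slope(effort) > 1.0:  # the alert threshold is a judgment call
    print("effort per change is creeping upward")
```

A snapshot of any single month here looks unremarkable; only the fitted trend reveals that effort per change is steadily rising.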

Collecting Data

Once you have your questions and measurements in place, the next step is to determine what data to collect. This is the information that will be gathered to answer your questions and locate where goals aren’t being met. Ideally this data should be trended over time to spot service levels that are eroding, so they can be corrected before they become significant issues.

Data gathering should err on the side of ease of collection. You do not want a metrics collection regime that costs more time and effort than you can expect to save through improved management. In fact, you may stagger data collection, gathering less granular metrics initially and more granular ones as savings accrue. This approach will be discussed in a later post on “maturity” levels of portfolio management.

For Application Portfolio Management, data typically comes from three sources:

Stakeholder Surveys
To weight business priorities effectively, you need opinions from key members of your organization. For instance, you may want to re-architect applications that reach a certain threshold for complexity. But if two applications have equal complexity, which should come first? This is where measurements like “value to the business” and “perceived riskiness” become important.

Typically, these kinds of value metrics are collected by surveying stakeholders in the organization. Be careful to choose an efficient and repeatable approach. Browser-based surveys that can be distributed and collected in an automated fashion are a preferred method.

Related Technologies and Sources
Within IT are numerous data sources that can help answer the questions you’ve posed. For instance, you may try to answer the question of which applications drain the most resources. In that case, frequency of change, bug counts, and time to complete a work item may all be metrics that matter. This data may be instantly accessible via integration with your lifecycle management tools.

Other data sources may be equally important, depending on the question. An HR system can help determine costs and time spent on a given activity. A PPM technology may have insight into project costs. Regardless of the source, it is important that data collected from these sources can be drawn automatically without significant manual effort. This helps ensure that real-time measurements can be presented to end-users.

Application-Specific Measures
The application portfolio itself is a rich source of data points for decision-making. Details like application size and complexity (or more granular measures) are important. The challenge, as always, is how to collect these data points. Today’s application analysis tools provide these measurements quite handily out of the box, but often they focus on only one language. Look for coverage across a range of languages to avoid a patchwork of tools.

There are hundreds of industry-standard metrics that can be collected. Cyclomatic complexity, dependency levels, and program volume are just a few. Naturally, you will want to determine which make the most sense for your team – you don’t need hundreds of metrics, just those that answer your questions. Also, be aware that many measures are language-specific and don’t make sense cross-portfolio.
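To make one of these metrics concrete, cyclomatic complexity is roughly one plus the number of decision points in a piece of code. Below is a simplified sketch for Python source using the standard `ast` module; real analysis tools handle far more node types and languages.

```python
# Rough cyclomatic-complexity estimate for Python code:
# 1 + number of decision points. A simplified sketch, not a
# substitute for a real application analysis tool.
import ast

DECISIONS = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source):
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISIONS) for node in ast.walk(tree))

code = """
def triage(bug):
    if bug.severity > 3 and bug.open_days > 30:
        return "urgent"
    for tag in bug.tags:
        if tag == "security":
            return "urgent"
    return "routine"
"""
print(cyclomatic_complexity(code))  # -> 5
```

Note how tightly this measure is bound to one language’s syntax tree – exactly why a single-language tool leaves gaps in a mixed portfolio.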

Mixing Metrics
You are collecting metrics to answer the questions you are asking. In some cases you will need metrics only from survey information; in others, only from code analysis. Sometimes it is important to combine metrics: for instance, dividing “bug counts” by “code complexity” gives a clearer picture of which applications need refactoring.
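A hedged sketch of that kind of combination: ranking applications by defects per unit of complexity to suggest refactoring candidates. All figures are invented.

```python
# Combine two metrics - bug count and code complexity - to rank
# refactoring candidates. All application names and figures are
# illustrative.

apps = {
    "billing":   {"bugs": 120, "complexity": 800},
    "reporting": {"bugs": 45,  "complexity": 900},
    "portal":    {"bugs": 60,  "complexity": 300},
}

def defect_density(m):
    """Bugs per unit of complexity: high values suggest refactoring."""
    return m["bugs"] / m["complexity"]

ranked = sorted(apps, key=lambda name: defect_density(apps[name]), reverse=True)
print(ranked)  # -> ['portal', 'billing', 'reporting']
```

Raw bug counts alone would put billing first; normalizing by complexity surfaces the smaller portal application as the denser source of defects.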

Metrics should also be adapted to your company’s specific needs. If your goal is to align the application portfolio with corporate security standards, then specific measurements may be needed to track your unique security standards. Again, collect only the metrics that fit the overarching goal-question-metric paradigm you have defined.


In the next post I’ll look at how goals, questions, and metrics differ by level in the organization.
