Wednesday, March 24, 2010

Answering Questions: Getting the Right Data to the Right User

posted by Peter Mollins
Application Portfolio Management helps IT professionals make better decisions about the systems that run their business. But who are these decision-makers and what kinds of choices are they actually making? This posting will take a look at that topic.

Decision-making is not the exclusive purview of senior leadership. Every day architects, analysts, development managers, and developers are making fundamental choices that affect the service levels provided by applications:
  • Architects want to determine how well architectural models have been implemented within applications. Is there a high degree of dependency between architectural entities? Where is complexity eroding the flexibility of their systems?
  • Analysts need to understand how well their business processes are maintained by development. Are changes executed quickly with limited rework? How costly are these changes, and how costly overall are the applications that run their lines of business?
  • Development and outsourcing managers want to understand which teams are most effective and which aren’t pulling their weight. Where is complexity rising in the portfolio, and where should refactoring be launched?
  • Operations managers need to make the linkage between systems that fail or are non-performant and the applications and data stores that run on them. Where can improvements be made? Where are duplicate and redundant systems that can be turned off?
  • CIOs want to know overall costs associated with applications, their development, and their underlying infrastructures. In fact, they likely want to see all of the above information, but at an appropriately high level of abstraction.
As we go through the goal / question / metric paradigm, it is clear that different user roles and levels in the organization will have different goals. So it is only natural that the type, focus, and summary level of this data will vary depending on the user and their goals and questions.

Now the question becomes: how do we get the data that each of these consumers needs? In the previous posting, I looked at which data sources are useful. User surveys, external tools (like a PPM or ALM toolset), and analysis of source code are the key data sources. As previously mentioned, they should be included and weighted in varying degrees based on the questions being asked.

Filter Answers at the Right Level
But how do you filter your results for each role and level? The key is the concept of abstractions. Essentially this practice involves defining models that align with how users think about their organization. For business analysts that could be by overarching business process and then by sub-process. For development managers it could be by development team and scrum team, or by outsourcer.

This process doesn’t have to be exhaustive or complex. It just has to define groupings of IT assets that make sense to users. In some cases this is already done through activities like ITIL. In other cases, simple discussion with stakeholders is sufficient to provide the right level of detail, as you can see in the model below.

Why do we need these abstractions? Because they provide the “buckets” into which data is sorted as it arrives. These buckets will frequently intersect, so a “claims processing” system that is interesting to an analyst could be managed by “Outsourcer A and B”, which is interesting to a development manager. A simple matrix like this allows data to be quickly sorted to be relevant to different users.
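
These intersecting buckets can be pictured as a simple tagging structure: each IT asset carries one tag per dimension (business process, outsourcer, and so on), and a view for a given user is just an intersection of tags. A minimal sketch in Python, with asset and group names invented for illustration:

```python
# Each asset is tagged along independent dimensions; names are illustrative.
assets = {
    "claims-processing": {"process": "Claims", "outsourcer": "Outsourcer A"},
    "claims-reporting":  {"process": "Claims", "outsourcer": "Outsourcer B"},
    "billing-engine":    {"process": "Billing", "outsourcer": "Outsourcer A"},
}

def select(assets, **criteria):
    """Return asset names whose tags match every given dimension=value pair."""
    return sorted(
        name for name, tags in assets.items()
        if all(tags.get(dim) == value for dim, value in criteria.items())
    )

# The analyst's view and the development manager's view intersect:
print(select(assets, process="Claims"))
print(select(assets, process="Claims", outsourcer="Outsourcer A"))
```

Real APM tools maintain this matrix for you; the point is only that the model can stay this simple and still answer both the analyst’s and the development manager’s questions.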

Then, as data arrives from one of your three data sources, it can be marked as being relevant for different types of users. So, complexity data about an application can be marked, preferably in an automated fashion, as being relevant for a development manager and for an architect, say.

Presenting Data Back to Users
As you collect, group, and store data, users will want to access it to support decision-making. Your reporting mechanism, whether that is a purpose-built reporting tool or not, should use the groupings that you have defined. That means reports will filter based on the groupings that are relevant to the end-user and geared to answer their specific questions.

For instance, consider a development manager who wants to determine which teams are performing. He might combine application complexity measures with bug count data, filtered by scrum teams X and Y. His boss might pull the same data, but instead across the broader data set for Outsourcers A and B. In each case, the right level of data is presented to the right user to help answer their questions and support their defined goals.
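
A sketch of that kind of filtered pull, assuming hypothetical per-application records whose field names are invented for illustration:

```python
# Illustrative metric records; the schema is an assumption, not a real tool's.
records = [
    {"app": "orders",  "team": "X", "outsourcer": "A", "complexity": 48, "bugs": 12},
    {"app": "claims",  "team": "Y", "outsourcer": "A", "complexity": 30, "bugs": 3},
    {"app": "billing", "team": "Z", "outsourcer": "B", "complexity": 55, "bugs": 22},
]

def report(records, **filters):
    """Average bugs-per-complexity-point for records matching the filters.

    Each filter value is a set of acceptable tags, e.g. team={"X", "Y"}.
    """
    rows = [r for r in records
            if all(r.get(k) in v for k, v in filters.items())]
    if not rows:
        return None
    return sum(r["bugs"] / r["complexity"] for r in rows) / len(rows)

# Development manager: scrum teams X and Y only.
print(report(records, team={"X", "Y"}))
# The boss: everything run by outsourcers A and B.
print(report(records, outsourcer={"A", "B"}))
```

The same stored data serves both users; only the grouping applied at report time changes.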


Thursday, March 18, 2010

Measuring Your Progress: Application Portfolio Management

posted by Peter Mollins
Application Portfolio Management helps decision-makers match corporate priorities with IT resources – across operations, architecture, and development. In the first post in the series I looked at how an organization can define what those priorities are. In this post I’ll look at how you can measure the portfolio to find where goals aren’t being met.

Defining Questions

To achieve the goals that you have defined you have to ask questions. If you are a CIO and you want to reduce application management costs by 20%, you could start by asking questions like:
  • What is the total hardware and infrastructure cost per application?
  • What is the total development team cost per application?
  • Where in the application portfolio do developers spend most of their time?
  • How much effort is required to complete a change by application?
  • How much does each outsourcer cost per application?
  • How business critical is this application?
  • Which platforms are the most expensive to maintain?
The answers to these questions can help determine how well you are reaching your goals – and where you need to do more work. Each of these questions should be drilled into as executives identify areas of weakness and pass the need on to managers for resolution. This means that more specific questions should be asked at more focused levels in the organization.

Defining Metrics

You have your questions, but what are the answers? In order to spot issues, answers should be quantifiable and trendable. For instance, in reply to the question “how business critical is this application?” your metric may be a weighted scale from 1 to 10. Metrics must meaningfully answer the question at hand.

Metrics may be in the form of a snapshot where one-off measurements are used to find outliers. Or, more usefully, they can be trended over time to spot creeping issues that can be corrected before they become business critical.
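
A trend is only useful if something is watching it. As a minimal sketch, here is one way to flag a metric that has been creeping upward; the window and tolerance values are invented for illustration:

```python
def creeping(series, window=3, tolerance=0.05):
    """Flag a metric that has risen every period over the last `window`
    samples, and by more than `tolerance` (here 5%) in total."""
    recent = series[-window:]
    rising = all(a < b for a, b in zip(recent, recent[1:]))
    grew = recent[-1] >= recent[0] * (1 + tolerance)
    return rising and grew

complexity_by_month = [40, 41, 40, 42, 44, 47]  # illustrative data
print(creeping(complexity_by_month))  # True: a steady recent rise
```

A one-off snapshot of the same data would show nothing alarming; it is the trend that reveals the creep.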

Collecting Data

Once you have your questions and your measurements in place, the next step is to determine what data should be collected. This is the information that will be gathered to answer your questions and locate where goals aren’t being met. Ideally this data should be trended over time to spot service levels that are eroding, so they can be corrected before they become significant issues.

Data gathering should err on the side of ease of collection. You do not want to establish a metrics collection regime that costs more time and effort than you can expect to save from improved management. In fact, you may stagger the level of data collection with less granular metrics collection conducted initially and more granular as savings accrue. This approach will be discussed in a later post on “maturity” levels of portfolio management.

For Application Portfolio Management, data typically comes from three sources:

Stakeholder Surveys
To effectively weight business priorities you need opinions from key members of your organization. For instance, you may want to re-architect applications that reach a certain threshold for complexity. But if two have equal levels, which should come first? This is where measurements like “value to the business” and “perceived riskiness” become important.

Typically, these kinds of value metrics are collected by surveying stakeholders in the organization. Be careful to choose an efficient and repeatable approach. Browser-based surveys that can be distributed and collected in an automated fashion are a preferred method.

Related Technologies and Sources
Within IT are numerous data sources that can help answer the questions you’ve posed. For instance, we may try to answer the question of which applications drain the most resources. In that case, frequency of change, bug counts, and time to complete a work item may all be metrics that matter. This data may be instantly accessible via integration with your lifecycle management tools.

Other data sources may be equally important, depending on the question. An HR system can help determine costs and time spent on a given activity. A PPM technology may have insight into project costs. Regardless of the source, it is important that data collected from these sources can be drawn automatically without significant manual effort. This helps ensure that real-time measurements can be presented to end-users.

Application-Specific Measures
The application portfolio itself is a rich source of data points that are useful for decision-making. Details like size and complexity, at the application level or more granular, are important. The challenge, as always, is how to collect these data points. Today’s application analysis tools provide these measurements quite handily out of the box. But often they will focus only on one specific language. Look for coverage across a range of languages to avoid a patchwork of tools.

There are hundreds of industry-standard metrics that can be collected. Cyclomatic complexity, dependency levels, and program volume are just a few. Naturally, you will want to determine which make the most sense for your team -- you don't need hundreds of metrics, just those that answer your questions. Also, be aware that many measures are language-specific and don’t make sense cross-portfolio.
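
As an illustration of how mechanically a metric like cyclomatic complexity can be collected, here is a rough count for Python source using the standard `ast` module. The set of branch nodes is a simplification; real analysis tools are far more careful and cover many languages:

```python
import ast

# Simplified: count only the most common branch-introducing constructs.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source):
    """Approximate McCabe complexity: 1 plus the number of branch points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(tree))

sample = """
def triage(bug):
    if bug.severity == "high" and bug.age > 30:
        return "escalate"
    for tag in bug.tags:
        if tag == "security":
            return "escalate"
    return "queue"
"""
print(cyclomatic_complexity(sample))  # 5: two ifs, one `and`, one loop
```

The mechanics are straightforward; the judgment lies in choosing which of the hundreds of available metrics actually answer your questions.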

Mixing Metrics
You are collecting metrics to answer the questions that you are asking. So, in some cases you will need metrics only from survey information – in other cases, only from code analysis. Sometimes it is important to combine metrics: for instance, dividing “bug counts” by “code complexity” provides a clearer picture of which applications need re-factoring.
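
For example, ranking re-factoring candidates by that combined measure rather than by raw bug counts (all numbers invented):

```python
# Illustrative figures only; bugs come from an ALM tool, complexity from code analysis.
apps = {
    "orders":  {"bugs": 120, "complexity": 900},
    "claims":  {"bugs": 45,  "complexity": 150},
    "billing": {"bugs": 60,  "complexity": 600},
}

# Raw bug counts point at "orders"; bugs per unit of complexity point at "claims".
by_density = sorted(apps,
                    key=lambda a: apps[a]["bugs"] / apps[a]["complexity"],
                    reverse=True)
print(by_density)  # densest first
```

The raw count and the combined metric nominate different applications, which is exactly why the combination is worth computing.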

Metrics should also be adapted to suit your company’s specific needs. If your goal is to align the application portfolio with corporate security standards, then there may be specific measurements that would track your unique security standards. Again, the metrics you collect should be only those that match the overarching goal-question-metric paradigm that you have defined.


In the next posting I’ll take a look at how goals, questions, and metrics differ by level in the organization.


Friday, March 12, 2010

Getting to the Goal: Application Portfolio Management

posted by Peter Mollins
It’s almost impossible to differentiate between business processes and the applications that automate them; your business and its application portfolio are that closely intertwined. So it is imperative that IT managers maintain firm control over their applications. But rising complexity in the application portfolio threatens to undermine these systems and your control of them, making them expensive, inflexible, and unstable.

The Problem

Application portfolios contain complex relationships between hardware, software, people, and processes that have been adapted over many years. For instance, a Java-based order management system may relay data to a call center’s COBOL application, which relies in turn on a PL/I order fulfillment system.

Over time these systems only grow more complex. New requirements arrive, hardware is modified, new programming languages emerge, and architectural standards erode. So, portfolios become overloaded with duplicate, redundant, undocumented, and fragile systems. This rising complexity has significantly negative impacts on the IT organization:
  • Development costs rise as previously simple changes require senior development effort
  • Development risks increase as changes can disrupt the portfolio in unforeseen ways
  • Infrastructure costs rise as operations cannot shut off the hardware that supports inefficient or redundant applications
  • Business users can’t get new capabilities because development is trying to keep the lights on
Surely IT managers want to focus resources on producing new business services, and not on maintaining existing ones. Of course they do, but the sheer complexity of their portfolio means resources can’t be spared. And even if they could, it’s hard to know where to start. So, where do we go from here?

Getting Ahead of the Problem

Application Portfolio Management (APM) offers a path. It is a best practice that helps users intelligently prioritize development and modernization initiatives. APM works by measuring and trending key performance indicators about your portfolio. This data points IT management to where they should focus effort to get the most return for the business.

But which metrics should we collect? First, you should take a step back and start with the goals that you want to achieve for IT and the business. The metrics you’ll need will come naturally when you take a goal / question / metric approach to understanding the portfolio. These goals come from the interplay between IT and business stakeholders in the organization and will likely tie to the most pressing pains you feel now.



Common goals that I’ve seen at financial services and public sector organizations include:
  • Shift development focus from maintenance to innovation: This goal aims to reduce the cost of supporting existing applications. For instance, through the removal of redundant systems.
  • Cut the risk of business process failure or performance loss: This goal looks at cutting the complexity of a given set of applications. This is often achieved through architectural improvements and refactoring.
  • Lower the cost of completing a business requirement: This goal aims to reduce the effort needed to move a work item through development. This is often addressed by enabling lower cost resources to work on a change.
  • Lower the cost of IT infrastructure: This goal looks at ways to cut ongoing hardware and software support costs. In addition to removing redundancy, this goal may aim to move applications to better supported and more economical architectures.
  • Choose IT projects that are supported by business requirements: This goal looks at a strategic planning approach and how to ensure that new activities are weighted by their relevance to the business.
Goals will vary depending on not only the nature of your company, but also by other aspects of the organization. For instance:
  • Level in the organization: Senior managers will likely have cross-organizational objectives versus more focused goals for middle-level teams.
  • Role in the organization: Development, architecture, and operations teams will each have different requirements. For instance, lowering the cost of IT infrastructure may be a priority for operations, while development cares more about the effort to complete a change.
  • Maturity level: Whether your organization is looking to put out fires or align strategically with business goals will depend on the maturity of the organization.
Naturally, your own goals will depend on the specifics of your organization. Collaboration across these various facets of your organization will help to ensure that goal determination is synchronized and sufficiently complete.

Once determined, you can understand the questions you want to ask and the measurements you want to track that answer these questions. In the following posts in this series I’ll take a look at how we can determine the right questions to ask and the right metrics to collect to help get toward our defined goals.

Wednesday, March 3, 2010

APM Best Practices

posted by Peter Mollins
An interesting discussion has kicked off on LinkedIn’s “Application Portfolio Management” Group discussion board. The discussion looks at best practices for running an APM initiative. Because APM directly affects strategic decisions around where IT resources should be applied, the topic is an important one.

There are a few interesting branches to pursue that look at APM best practices. Over the next few weeks, I’ll explore each in some detail:
  • Goal: Constant fire-fighting is no way to run a development organization. Especially in today's era of tight budgets and fast change. In this post I'll summarize the goal of APM and set the stage for a discussion of best practices. Read the post.

  • Questions and Metrics: APM data should answer questions that address a specific goal. Say, ‘why is this business process inflexible?’, ‘where can I cut costs?’, or ‘where is my software architecture flawed?’. To answer these different questions requires different combinations and weightings of data (user surveys, application code, or external sources). Sometimes more of one source, sometimes another. Read the post.
  • Decision-Makers: APM data needs vary based on where you are in the organization. Higher level managers require higher level abstractions, particularly of technical metrics. Also, different types of users will have different data needs. An architect may want technical complexity data, but it may only be meaningful to him if it is filtered by architectural models. Read the post.
  • Maturity: There are different levels of maturity for decision-making. This maturity directly affects which metrics are accessible in the first place, and indirectly shapes the kind of business goals that an organization is prepared to address. Read the post.
  • What’s next: As a particular initiative moves from “decision” to “action”, different data may be needed. More “bottom-up” data may be necessary to implement the decisions at this stage. Further, different metrics can be monitored to ensure the success of a given development or modernization project as it is executed. Read the post.
