Friday, April 9, 2010

What's Next? Executing on Your Decisions

posted by Peter Mollins
We’ve explored how application portfolio management helps IT leaders determine development priorities. But deciding what to do is only half of the equation. There must be follow-on actions that turn decisions into results. Let’s take a look.

Identifying the Problem

At this stage in the application portfolio management lifecycle, we’ve identified goals, questions, and metrics. We have also collected the data to generate the metrics, and perhaps we’ve trended our data. With these in place, users across the organization can spot service level agreement violations, and priorities can then be passed to team members to isolate and correct the underlying issues.

Let’s take an example. An insurance company’s CIO looks at her dashboard and notes that business user satisfaction has degraded for the Claims Processing system. She passes this issue to her direct reports to determine if there is an underlying IT problem. Here the violated service level agreement could relate to percentage of satisfied users, or level of stakeholder dissatisfaction, for example.
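As a rough sketch, a dashboard check like the CIO's might boil down to a simple threshold test. All application names, scores, and the SLA threshold below are hypothetical:

```python
# Flag applications whose stakeholder-satisfaction score violates an SLA.
# Scores are illustrative 1-10 survey averages; the threshold is invented.
SATISFACTION_SLA = 7.0

survey_scores = {
    "Claims Processing": 5.8,
    "Account Management": 7.4,
    "Policy Admin": 8.1,
}

# Collect every application whose score falls below the agreed level.
violations = {app: score for app, score in survey_scores.items()
              if score < SATISFACTION_SLA}

for app, score in sorted(violations.items()):
    print(f"SLA violation: {app} (satisfaction {score} < {SATISFACTION_SLA})")
```

In practice the scores would arrive from an automated survey tool rather than a hard-coded dictionary, but the pass/fail logic stays this simple.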

Focus Attention

The manager then receives the issue from the CIO. He decides to investigate the Claims Processing system to determine if issues do exist. He looks at his own dashboards for the system. His goals concentrate less on cost and satisfaction issues and more on technical topics like throughput of change requests and application performance.

As he investigates, he discovers that there has been an increased backlog of change requests. Further, the application has been frequently down and non-performant at critical times. He drills down still further within his dashboards and discovers that the system is inefficiently architected. Tight coupling between programs means that any change to a given sub-system affects dozens of others, slowing the implementation of changes requested by the Claims Processing line of business owners.

He sees metrics that show that experienced developers are being redirected away from implementing change requests. Instead they are focused on maintaining the aging, non-strategic Account Management system. As a result, the development team is unable to work on what matters most to the business. Now the manager understands the scope of his problem.

Prioritize Actions

Once the manager has determined the root causes of the issue, he may compare the various issues with other key metrics like resource availability and importance to the business. Using project management tooling in conjunction with application portfolio management metrics supports this sequencing.
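A sketch of that sequencing step, assuming a simple weighted-score model; the issues, weights, and scores below are illustrative, not a prescribed formula:

```python
# Rank candidate remediation projects by a weighted score of business
# importance, issue severity, and resource availability (all 1-10 scales).
issues = [
    {"name": "Re-architect Claims Processing", "importance": 9, "severity": 8, "resources": 5},
    {"name": "Stabilize Account Management",   "importance": 4, "severity": 6, "resources": 8},
    {"name": "Tune batch window",              "importance": 6, "severity": 3, "resources": 9},
]

# Hypothetical weights; a real organization would set these deliberately.
WEIGHTS = {"importance": 0.5, "severity": 0.3, "resources": 0.2}

def priority(issue):
    """Weighted sum of the scored dimensions."""
    return sum(issue[k] * w for k, w in WEIGHTS.items())

for issue in sorted(issues, key=priority, reverse=True):
    print(f"{priority(issue):.1f}  {issue['name']}")
```

A project management tool would normally supply the resource figures; the point is only that APM metrics and project data combine into one comparable score.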

Now, our manager has decided to take a two-pronged approach to resolving the issue. First, he plans to launch a renovation project to re-architect the Claims Processing code, lowering its complexity and enabling faster responses to business needs. Second, he plans to improve the effectiveness of the team working on the Account Management system so that senior developers can concentrate on the higher-value Claims Processing system.

Execute Tasks

While this discussion won’t cover how to execute modernization tasks like those described above, one point is worth noting: regardless of the kind of modernization project undertaken, each requires detailed insight into the reality of the application portfolio.

This returns to a key best practice: each member of the IT organization requires insight into their applications at a level suited to their job. A CIO needs highly abstracted views and measures of the application portfolio; a developer needs highly detailed insight into its technical reality.

So, to enable the renovation activity, we need “bottom-up” insight into where complexities and inefficiencies lie within the application code. To enable the reallocation of senior resources, we need the same “bottom-up” insight to help junior team members become “instant experts” on their applications and hence more productive. Keeping this bottom-up information in the same repository as the top-down data improves collaboration.

Monitor Activities

Now we have identified, prioritized, and executed modernization activities that correct our service level issues. We should treat these modernization activities as monitorable events themselves. For instance, the manager could track the change in architectural complexity and the throughput of change requests as the renovation project proceeds. The CIO could similarly re-survey stakeholders to determine changes in the perceived value of the system.
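The complexity-tracking idea can be sketched as a simple trend calculation over periodic snapshots; the figures here are hypothetical monthly averages:

```python
# Track architectural complexity across monthly snapshots during a
# renovation project and report the overall direction of change.
complexity_snapshots = [42.0, 40.5, 37.2, 33.8, 31.1]  # oldest to newest

def trend(values):
    """Average change per period; negative means complexity is falling."""
    deltas = [b - a for a, b in zip(values, values[1:])]
    return sum(deltas) / len(deltas)

change = trend(complexity_snapshots)
print(f"Average change per month: {change:+.2f}")
```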

Lastly, we can maintain a “continuous improvement” program where we look to find the next issue that threatens our service level agreements.

Conclusion

As we’ve seen over the course of this series, application portfolio management is a powerful way to regain control over the systems that run your business. It helps to identify, prioritize, and correct issues in your applications before they become business issues.

A critical aspect to recall is that application portfolio management applies at different levels of abstraction. So, it applies equally to CIOs as to developers. It is beneficial for organizations with sophisticated planning and those with limited structures. It helps business analysts and technical architects. It does that because it provides the kind of information users need to make smarter decisions for IT and for the business. As such, it is a keystone for effective IT management.


Friday, April 2, 2010

How Maturity Affects Portfolio Management

posted by Peter Mollins
Application Portfolio Management helps you to identify where systems aren’t achieving technical and business goals. In previous posts I took a look at how to define those goals. We then investigated how to collect metrics that spot where goals aren’t met. We also looked at how goals and metrics are influenced by your role and level in an organization.

In this section, we’ll take a look at how portfolio management best practices are influenced by timing and the maturity of your organization’s decision-making processes.

How maturity affects goal definition

IT’s goals change based on the maturity of its planning. An architect may decide to boost the flexibility of several applications. This goal may come from his personal knowledge of a developer’s struggles with modifying a system. Goals at this level of maturity may be worthy and can generate results, but they are not necessarily aligned with corporate strategies.

In contrast, organizations with more mature planning will take a “top-down” approach to goal generation. They will start with general corporate principles and ensure that divisional and team goals support the overarching needs. For instance, a corporate goal may be to ensure that new product launches can be supported by IT within two weeks of submission.

Such a top-level goal would inspire subsidiary goals for different teams and roles. The architect may define his goal as reducing dependency levels and improving layering of applications. Development teams may set as their goals a specified turnaround time for change requests. Both goals are set in order to conform to the overarching strategy of boosting business flexibility.

The result of the more mature approach is that goals are aligned with higher priorities, lowering the likelihood of sub-optimal priorities. But also, it permits a more granular approach to managing goals. A CIO can spot where strategic goals are not being met, divisional heads can determine where these issues lie, and managers can locate root causes.

How maturity affects metrics collection

New goals are added, and existing ones reorganized, as your decision-making matures. This means that new and different metrics will need to be collected. In general, this follows the same goal / question / metric approach discussed previously, though now you may be applying it to more goals, or at least to more coordinated ones.

There is another aspect to how maturity affects metrics collection. This relates to the quantity, timing, and kind of metrics that are collected. Let’s take a look:

  • Quantity: An organization’s goal may be to reduce infrastructure costs for an application portfolio. In that case, a mature organization that wants exact results – or has already plucked the low-hanging fruit – will ask many questions to achieve the goal. “Which applications are duplicates?”, “Where is dead code?”, “Which applications cost the most to operate?”, “Which applications are most valuable?”. But for an organization that is just beginning the portfolio management process, a simpler set of questions can get you fast returns without the need for refinement. In that case, simply asking “which applications are duplicates?” may be sufficient.
  • Timing: An organization with mature decision-making will likely investigate trends over time, for instance by monitoring the turnaround time on change requests for multiple development teams. A less mature organization will likely (often out of necessity) make decisions based on snapshot measures. Trending supports better decision-making, but again some decisions can be made on less rich data sets. It will depend on the degree of accuracy required and your risk tolerance.
  • Kinds of Metrics: Let’s look back at the three kinds of metrics sources for application portfolio management. They are surveys of stakeholders, data from related tooling, like ALM tools, and lastly data from the applications themselves, like complexity measures. To support a snap decision about which applications you should retire, you may rely on surveyed opinions only. Or, to determine where to re-factor an application you may harvest only code complexity data. To support a complex decision about which outsourcer to standardize on, you may need richer datasets that come from all three data types.
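Mixing the three data sources described above might, as a sketch, look like a weighted blend per application. The application names, scores, and weights are all hypothetical:

```python
# Blend the three APM data sources (stakeholder surveys, lifecycle tooling,
# code analysis) into one comparable score per application.
applications = {
    "Claims Processing":  {"survey": 5.8, "alm": 6.2, "code": 4.1},
    "Account Management": {"survey": 7.4, "alm": 5.0, "code": 6.8},
}

# A retirement decision might lean on surveys; a refactoring decision on
# code analysis. This particular mix is purely illustrative.
weights = {"survey": 0.5, "alm": 0.2, "code": 0.3}

def blended_score(metrics):
    """Weighted combination of the three source scores."""
    return sum(metrics[src] * w for src, w in weights.items())

for app, metrics in applications.items():
    print(f"{app}: {blended_score(metrics):.2f}")
```

Changing the weights to match the question being asked is the whole point: the data pipeline stays the same while the decision changes.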
As your application portfolio management process develops you will see increased returns. Better decisions lead to lower waste and bigger business results. But critically, this doesn’t mean that value is low when you are starting your journey. In fact, it is the opposite. Returns generated by early decisions can be turned into reinvestment in improved business intelligence.

As your process matures you are always going to balance the projected returns versus the cost to collect the data. Taking an incremental approach and focusing on high-value goals early will help this process.


Wednesday, March 24, 2010

Answering Questions: Getting the Right Data to the Right User

posted by Peter Mollins
Application Portfolio Management helps IT professionals make better decisions about the systems that run their business. But who are these decision-makers and what kinds of choices are they actually making? This posting will take a look at that topic.

Decision-making is not the exclusive purview of senior leadership. Every day architects, analysts, development managers, and developers are making fundamental choices that affect the service-levels provided by applications:
  • Architects want to determine how well architectural models have been implemented within applications. Is there a high degree of dependency between architectural entities? Where is complexity eroding the flexibility of their systems?
  • Analysts need to understand how well their business processes are maintained by development. Are changes executed quickly with limited rework? How costly are these changes, and how costly overall are the applications that run their lines of business?
  • Development and outsourcing managers want to understand which teams are most effective and which aren’t pulling their weight. Where is complexity rising in the portfolio, and where should refactoring be launched?
  • Operations managers need to make the linkage between systems that fail or are non-performant and the applications and data stores that run on them. Where can improvements be made? Where are duplicate and redundant systems that can be turned off?
  • CIOs want to know overall costs associated with applications, their development, and their underlying infrastructures. In fact, they likely want to see all of the above information, but at an appropriately high level of abstraction.
It is clear that, as we go through the goal / question / metric paradigm, there will be different goals for different user roles and levels in the organization. So, it is only natural that the type, focus, and summary level of this data will vary depending on the user and their goals and questions.

Now the question becomes ‘how do we get the data that each of the consumers needs?’. In the previous posting, I looked at what data sources are useful. User surveys, external tools (like a PPM or ALM toolset), and analysis of source code are the key data sources. Also as previously mentioned, they should be included and weighted in varying degrees based on the questions being asked.

Filter Answers at the Right Level
But how do you filter your results for each role and level? The key is the concept of abstractions. Essentially this practice involves defining models that align with how users think about their organization. For business analysts that could be by overarching business process and then by sub-process. For development managers it could be by development team and scrum team, or by outsourcer.

This process doesn’t have to be exhaustive or complex. It just has to define groupings of IT assets that make sense to users. In some cases this is already done through activities like ITIL; in other cases, simple discussion with stakeholders is sufficient to provide the right level of detail. Why do we need these abstractions? Because they provide the “buckets” into which data is sorted as it arrives. These buckets will frequently intersect, so a “claims processing” system that is interesting to an analyst could be managed by “Outsourcer A and B”, which is interesting to a development manager. A simple matrix of these intersections allows data to be quickly sorted so that it is relevant to different users.

Then, as data arrives from one of your three data sources, it can be marked as being relevant for different types of users. So, complexity data about an application can be marked, preferably in an automated fashion, as being relevant for a development manager and for an architect, say.
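A minimal sketch of that routing step, assuming a hand-built matrix of groupings; the applications, roles, and bucket names are invented for illustration:

```python
# Route an incoming metric into the abstraction "buckets" relevant to each
# role, using a matrix of intersecting groupings.
BUCKET_MATRIX = {
    "Claims Processing": {"analyst": "Claims business process",
                          "dev_manager": "Outsourcer A"},
    "Billing":           {"analyst": "Billing business process",
                          "dev_manager": "Outsourcer B"},
}

def route_metric(application, metric_name, value):
    """Return (role, bucket, metric, value) tuples for the reporting layer."""
    groupings = BUCKET_MATRIX.get(application, {})
    return [(role, bucket, metric_name, value)
            for role, bucket in groupings.items()]

for row in route_metric("Claims Processing", "cyclomatic_complexity", 27):
    print(row)
```

In a real deployment this tagging would be automated at ingestion time, as the post suggests, rather than performed per query.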

Presenting Data Back to Users
As you collect, group, and store data, users will want to access it to support decision-making. Your reporting mechanism, whether that is a purpose-built reporting tool or not, should use the groupings that you have defined. That means reports will filter based on the groupings that are relevant to the end-user and geared to answer their specific questions.

For instance, consider a development manager who wants to determine which teams are performing. He might combine application complexity measures with bug count data, filtered by scrum teams X and Y. His boss might pull the same data, but across the broader data set for Outsourcers A and B. This presents the right level of data to the right user, helping answer their specific questions and support their defined goals.
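As a sketch of that same filtering, assuming one shared set of records tagged by team and outsourcer; all figures are hypothetical:

```python
# One dataset, two abstraction levels: a manager rolls up by scrum team,
# while his boss rolls up the same records by outsourcer.
records = [
    {"team": "X", "outsourcer": "A", "bugs": 14, "complexity": 35},
    {"team": "Y", "outsourcer": "A", "bugs": 8,  "complexity": 22},
    {"team": "Z", "outsourcer": "B", "bugs": 20, "complexity": 18},
]

def totals(rows, key):
    """Sum bugs and complexity, grouped by the chosen abstraction."""
    out = {}
    for r in rows:
        group = out.setdefault(r[key], {"bugs": 0, "complexity": 0})
        group["bugs"] += r["bugs"]
        group["complexity"] += r["complexity"]
    return out

# Manager's view: only scrum teams X and Y.
print(totals([r for r in records if r["team"] in ("X", "Y")], "team"))
# Boss's view: the full data set, rolled up by outsourcer.
print(totals(records, "outsourcer"))
```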


Thursday, March 18, 2010

Measuring Your Progress: Application Portfolio Management

posted by Peter Mollins
Application Portfolio Management helps decision-makers to match corporate priorities with IT resources across operations, architecture, and development. In the first post in the series I looked at how an organization can define what those priorities are. In this post I’ll look at how you can measure the portfolio to find where goals aren’t being met.

Defining Questions

To achieve the goals that you have defined you have to ask questions. If you are a CIO and you want to reduce application management costs by 20%, you could start by asking questions like:
  • What is the total hardware and infrastructure cost per application?
  • What is the total development team cost per application?
  • Where in the application portfolio do developers spend most of their time?
  • How much effort is required to complete a change by application?
  • How much does each outsourcer cost per application?
  • How business critical is this application?
  • Which platforms are the most expensive to maintain?
The answers to these questions can help determine how well you are reaching your goals, and where you need to do more work. Each of these questions should be drilled into as executives determine areas of weakness and pass the need on to managers for resolution. This means that more specific questions should be asked at more focused levels in the organization.
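One way to picture the goal / question / metric hierarchy is as a simple data structure. The entries below are illustrative, drawn from the cost-reduction example above:

```python
# A goal/question/metric hierarchy: executives see the goal, managers
# drill into the questions, and data collection targets the metrics.
gqm = {
    "goal": "Reduce application management costs by 20%",
    "questions": [
        {"question": "What is the infrastructure cost per application?",
         "metrics": ["hardware_cost", "hosting_cost"]},
        {"question": "How much effort does a change require, by application?",
         "metrics": ["avg_hours_per_change", "change_backlog"]},
    ],
}

def all_metrics(node):
    """Flatten the metrics an organization must collect for this goal."""
    return [m for q in node["questions"] for m in q["metrics"]]

print(all_metrics(gqm))
```

The flattened list is exactly the data-collection shopping list discussed in the next section.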

Defining Metrics

You have your questions, but what are the answers? In order to spot issues, answers should be quantifiable and trendable. For instance, in reply to the question “how business critical is this application?” your metric may be a weighted scale from 1 to 10. Metrics must meaningfully answer the question at hand.

Metrics may be in the form of a snapshot where one-off measurements are used to find outliers. Or, more usefully, they can be trended over time to spot creeping issues that can be corrected before they become business critical.
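The difference can be sketched in a few lines: a snapshot only flags values already past a threshold, while a trend can project when an eroding metric will cross it. The monthly scores and threshold here are hypothetical:

```python
# Snapshot vs. trend on a hypothetical monthly risk score (1-10 scale).
history = [3.0, 3.4, 3.9, 4.5, 5.2]   # oldest to newest
THRESHOLD = 8.0

# Snapshot view: the latest value alone raises no alarm.
snapshot_ok = history[-1] < THRESHOLD

# Trend view: the average monthly rise projects a future breach.
avg_rise = (history[-1] - history[0]) / (len(history) - 1)
months_to_breach = (THRESHOLD - history[-1]) / avg_rise

print(f"Snapshot OK: {snapshot_ok}; "
      f"projected breach in {months_to_breach:.1f} months")
```

A straight-line projection is crude, but it illustrates why trended data lets you correct an issue before it becomes business critical.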

Collecting Data

Once you have your questions and your measurements in place, the next step is to determine what data should be collected. This is the information that will be gathered to answer your questions and locate where goals aren’t being met. Ideally this data should be trended over time to spot service-levels that are eroding and should be corrected before they become significant issues.

Data gathering should err on the side of ease of collection. You do not want to establish a metrics collection regime that costs more time and effort than you can expect to save from improved management. In fact, you may stagger the level of data collection with less granular metrics collection conducted initially and more granular as savings accrue. This approach will be discussed in a later post on “maturity” levels of portfolio management.

For Application Portfolio Management, data typically comes from three sources:

Stakeholder Surveys
To effectively weight business priorities you need opinions from key members of your organization. For instance, you may want to re-architect applications that reach a certain threshold for complexity. But if two have equal levels, which should come first? This is where measurements like “value to the business” and “perceived riskiness” become important.

Typically, these kinds of value metrics are collected by surveying stakeholders in the organization. Be careful to choose an efficient and repeatable approach. Browser-based surveys that can be distributed and collected in an automated fashion are a preferred method.

Related Technologies and Sources
Within IT are numerous data sources that can help answer the questions you’ve posed. For instance, we may try to answer the question of which applications drain the most resources. In that case, frequency of change, bug counts, and time to complete a work item may all be metrics that matter. This data may be instantly accessible via integration with your lifecycle management tools.

Other data sources may be equally important, depending on the question. An HR system can help determine costs and time spent on a given activity. A PPM technology may have insight into project costs. Regardless of the source, it is important that data collected from these sources can be drawn automatically without significant manual effort. This helps ensure that real-time measurements can be presented to end-users.

Application-Specific Measures
The application portfolio itself is a rich source of data points that are useful for decision-making. Details like size and complexity, at the application level or more granular, are important. The challenge, as always, is how to collect these data points. Today’s application analysis tools provide these measurements quite handily out of the box, but often they focus on only one specific language. Look for coverage across a range of languages to avoid a patchwork of tools.

There are hundreds of industry-standard metrics that can be collected. Cyclomatic complexity, dependency levels, and program volume are just a few. Naturally, you will want to determine which make the most sense for your team: you don't need hundreds of metrics, just those that answer your questions. Also, be aware that many measures are language-specific and don’t make sense cross-portfolio.

Mixing Metrics
You are collecting metrics to answer the questions that you are asking. So, in some cases you will need metrics only from survey information; in other cases, only from code analysis. In some cases it is important to combine metrics: for instance, dividing “bug counts” by “code complexity” provides a clearer picture of which applications need re-factoring.
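A sketch of that combination, with hypothetical figures; the ratio normalizes defect counts by how inherently complex each application is:

```python
# Combine two metric sources: bug counts (from ALM tooling) divided by
# code complexity (from code analysis) to rank refactoring candidates.
apps = {
    "Claims Processing":  {"bugs": 48, "complexity": 30},
    "Account Management": {"bugs": 12, "complexity": 24},
}

def bugs_per_complexity(metrics):
    """Defect rate normalized by complexity; higher suggests more need."""
    return metrics["bugs"] / metrics["complexity"]

ranked = sorted(apps, key=lambda a: bugs_per_complexity(apps[a]),
                reverse=True)
for app in ranked:
    print(f"{app}: {bugs_per_complexity(apps[app]):.2f}")
```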

Metrics should also be adapted to suit your company’s specific needs. If your goal is to align the application portfolio with corporate security standards, then there may be specific measurements that would track your unique security standards. Again, the metrics you collect should be only those that match the overarching goal-question-metric paradigm that you have defined.


In the next posting I’ll take a look at how goals, questions, and metrics differ by level in the organization.


Friday, March 12, 2010

Getting to the Goal: Application Portfolio Management

posted by Peter Mollins
It’s almost impossible to differentiate between business processes and the applications that automate them; that is how closely intertwined your business and its application portfolio have become. So it is imperative that IT managers maintain firm control over their applications. But rising complexity in the application portfolio threatens to undermine these systems and your control of them, making them expensive, inflexible, and unstable.

The Problem

Application portfolios contain complex relationships between hardware, software, people, and processes that have been adapted over many years. For instance, a Java-based order management system may relay data to a call center’s COBOL application, which relies in turn on a PL/I order fulfillment system.

Over time these systems only grow more complex. New requirements arrive, hardware is modified, new programming languages emerge, and architectural standards erode. So, portfolios become overloaded with duplicate, redundant, undocumented, and fragile systems. This rising complexity has significantly negative impacts on the IT organization:
  • Development costs rise as previously simple changes require senior development effort
  • Development risks increase as changes can disrupt the portfolio in unforeseen ways
  • Infrastructure costs rise as operations cannot shut off the hardware that supports inefficient or redundant applications
  • Business users can’t get new capabilities because development is trying to keep the lights on
Surely IT managers want to focus resources on producing new business services, and not on maintaining existing ones. Of course they do, but the sheer complexity of their portfolio means resources can’t be spared. And even if they could, it’s hard to know where to start. So, where do we go from here?

Getting Ahead of the Problem

Application Portfolio Management (APM) offers a path. It is a best practice that helps users intelligently prioritize development and modernization initiatives. APM works by measuring and trending key performance indicators about your portfolio. This data points IT management to where they should focus effort to get the most return for the business.

But what metrics should we collect? First, you should take a step back and start with the goals that you want to achieve for IT and the business. The metrics you’ll need will come naturally when you take a goal / question / metric approach to understanding the portfolio. These goals come from the interplay between IT and business stakeholders in the organization and will likely tie to the most pressing pains you feel now.



Common goals that I’ve seen at financial services and public sector organizations include:
  • Shift development focus from maintenance to innovation: This goal aims to reduce the cost of supporting existing applications. For instance, through the removal of redundant systems.
  • Cut the risk of business process failure or performance loss: This goal looks at cutting the complexity of a given set of applications. This is often achieved through architectural improvements and refactoring.
  • Lower the cost of completing a business requirement: This goal aims to reduce the effort needed to move a work item through development. This is often addressed by enabling lower cost resources to work on a change.
  • Lower the cost of IT infrastructure: This goal looks at ways to cut ongoing costs for hardware and software support costs. In addition to removing redundancy, this goal may aim to move applications to better supported and more economical architectures.
  • Choose IT projects that are supported by business requirements: This goal looks at a strategic planning approach and how to ensure that new activities are weighted by their relevance to the business.
Goals will vary depending on not only the nature of your company, but also by other aspects of the organization. For instance:
  • Level in the organization: Senior managers will likely have cross-organizational objectives versus more focused goals for middle-level teams.
  • Role in the organization: Development, architecture, and operations teams will each have different requirements. For instance, lowering the cost of IT infrastructure may matter most to operations, while development teams concentrate on change throughput.
  • Maturity level: Whether your organization is looking to put out fires or align strategically with business goals will depend on the maturity of the organization.
Naturally, your own goals will depend on the specifics of your organization. Collaboration across these various facets of your organization will help to ensure that goal determination is synchronized and sufficiently complete.

Once determined, you can understand the questions you want to ask and the measurements you want to track that answer these questions. In the following posts in this series I’ll take a look at how we can determine the right questions to ask and the right metrics to collect to help get toward our defined goals.

Wednesday, March 3, 2010

APM Best Practices

posted by Peter Mollins
An interesting discussion has kicked off on LinkedIn’s “Application Portfolio Management” Group discussion board. The discussion looks at best practices for running an APM initiative. Because APM directly affects strategic decisions around where IT resources should be applied, the topic is an important one.

There are a few interesting branches to pursue that look at APM best practices. Over the next few weeks, I’ll explore each in some detail:
  • Goal: Constant fire-fighting is no way to run a development organization. Especially in today's era of tight budgets and fast change. In this post I'll summarize the goal of APM and set the stage for a discussion of best practices. Read the post.

  • Questions and Metrics: APM data should answer questions that address a specific goal. Say, ‘why is this business process inflexible?’, ‘where can I cut costs?’, or ‘where is my software architecture flawed?’. To answer these different questions requires different combinations and weightings of data (user surveys, application code, or external sources). Sometimes more of one source, sometimes another. Read the post.
  • Decision-Makers: APM data needs vary based on where you are in the organization. Higher level managers require higher level abstractions, particularly of technical metrics. Also, different types of users will have different data needs. An architect may want technical complexity data, but it may only be meaningful to him if it is filtered by architectural models. Read the post.
  • Maturity: There are different levels of maturity for decision-making. This maturity directly affects which metrics are accessible in the first place and also indirectly because it determines the kind of business goals that an organization is prepared to address. Read the post.
  • What’s next: As a particular initiative moves from “decision” to “action”, different data may be needed. More “bottom-up” data may be necessary to implement the decisions at this stage. Further, different metrics can be monitored to ensure the success of a given development or modernization project as it is executed. Read the post.


Tuesday, February 16, 2010

New Release of Modernization Workbench

posted by Peter Mollins
A new release of the Modernization Workbench has just been launched. The new version 3.1 is "3 in 1". It combines best-in-class functionality from Micro Focus' analysis and application portfolio management capabilities into a single platform.
  • Single Platform for Managers: Modernization Workbench provides the only business-centric solution for measuring and managing the application portfolio. Its integrated Enterprise View module delivers browser-based dashboards that help prioritize development projects via metrics like application cost, complexity, value, and risk.
  • Single Platform for Assessments: Modernization Workbench provides the richest technical information about your applications. Powerful queries, visualizations, and specialized assessment tools, including for platform migrations, are available.
  • Expanded Language Coverage: Modernization Workbench provides deep and now even broader language coverage. This new release expands its coverage of Java, JEE, additional job schedulers, and more.
  • Mass Change Activities: Modernization Workbench dramatically accelerates the execution of projects that require numerous changes to application code, for instance to adhere to regulatory requirements like ICD-10.
  • Major Relational Database Support: Modernization Workbench 3.1 adds support for Microsoft SQL Server, ensuring that all three major relational databases are supported (adding to existing support for Oracle RDBMS and IBM DB2).
For more information, please see the release announcement.
