Calculating the Maintainability Index and looking at how you can use it for your projects.

Author

Date

Mar 07, 2022

So far, we’ve mostly been looking at code quality metrics that approach quality from the perspective of readability and understandability. There’s another facet we should consider when we think about the quality of code: how easy is it to maintain and update, and how much risk do we run of building up long-term technical debt? The Maintainability Index is a key metric that attempts to quantify these questions and give us insight into the maintainability of code. But how actionable and accurate is it?

The Maintainability Index first appeared in 1992 when it was proposed by Paul Oman and Jack Hagemeister at the International Conference on Software Maintenance with the goal of establishing automated software development metrics to guide “software related decision making”. The Maintainability Index tries to give a holistic view of the relative maintenance burden for different sections of a project by blending together a series of different metrics. The components are:

- Halstead’s Volume - HV
- Cyclomatic Complexity - CC
- Lines of Code - LOC
- % of Comments - perCOM

These are blended together into the original formula:

Maintainability = 171 - 5.2 * ln(HV) - 0.23 * CC - 16.2 * ln(LOC) + 50 * sqrt(2.46 * perCOM)

Microsoft adopted the Maintainability Index into Visual Studio and in 2011 updated the formula to shift it to a bounded scale with a range of 0 - 100:

Maintainability = Max(0, (171 - 5.2 * ln(HV) - 0.23 * CC - 16.2 * ln(LOC)) * 100 / 171)
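Taking the two formulas above at face value, a minimal Python sketch might look like the following (the function names are my own, and `per_com` is assumed to be the comment fraction as a value between 0 and 1):

```python
import math

def maintainability_original(hv, cc, loc, per_com):
    # Original (unbounded) formula as written above; per_com is the
    # fraction of comment lines (0.0 - 1.0).
    return (171 - 5.2 * math.log(hv) - 0.23 * cc
            - 16.2 * math.log(loc) + 50 * math.sqrt(2.46 * per_com))

def maintainability_vs(hv, cc, loc):
    # Visual Studio variant: rescaled to 0 - 100 and floored at 0.
    # Note that it drops the comment term entirely.
    raw = 171 - 5.2 * math.log(hv) - 0.23 * cc - 16.2 * math.log(loc)
    return max(0, raw * 100 / 171)
```

For a trivial "program" with HV = 1, CC = 0, LOC = 1, and no comments, both versions hit their maximum (171 and 100 respectively), since ln(1) = 0.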

Before we dive into where the formulas come from and look at what they mean, let’s take a look at the individual components.

Halstead’s Volume is one of the Halstead Complexity Metrics, which aim to measure different properties of software and their relationships to each other. All of the Halstead metrics are based on counts of the total and unique operators and operands within the code in question. At a high level, an operator is something carrying out an operation, while an operand is something that participates in that operation.

For example:

`y = x * x`

Here `x` and `y` are operands and `=` and `*` are operators. `x` appears twice, so we have 3 operands (2 unique) and 2 operators (both unique).

The two basic Halstead metrics are the Program Vocabulary & the Program Length:

*Program Vocabulary = Unique Operators + Unique Operands*

*Program Length = Total Operators + Total Operands*

Combining them together we get the Halstead Volume:

*Halstead Volume = Program Length × log₂(Program Vocabulary)*

The Halstead Volume is proportional to the overall size of the program and represents the amount of space necessary for storing the program. From a human understanding perspective, it is also strongly related to the amount of information a reader of the code needs to grasp to understand the code's meaning.

In the example above:

- Program Vocabulary = 2 + 2 = 4
- Program Length = 2 + 3 = 5
- Halstead Volume = 5 × log₂(4) = 10
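The arithmetic above can be checked with a few lines of Python (the token lists are written out by hand here; a real tool would parse them out of the source):

```python
import math

# Hand-counted tokens for the expression `y = x * x`
operators = ["=", "*"]        # 2 total, both unique
operands = ["y", "x", "x"]    # 3 total, 2 unique

vocabulary = len(set(operators)) + len(set(operands))  # 2 + 2 = 4
length = len(operators) + len(operands)                # 2 + 3 = 5
volume = length * math.log2(vocabulary)                # 5 * 2 = 10.0
```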

I’ve written in more detail here about what Cyclomatic Complexity is (and why it’s not great). But as a quick refresher, Cyclomatic Complexity is a code quality metric that measures the understandability and maintainability/testability of code by counting the number of independent paths through that code.

Importantly, Cyclomatic Complexity is heavily influenced by the number of lines of code (more than by almost any other feature), which is one of the reasons Cognitive Complexity is often argued to be a more effective complexity measurement.

Lines of Code is the most straightforward component to the Maintainability Index - it’s just the number of lines of code in a program. It is also indirectly measured by several of the other components in the Maintainability Index.

Just like its name implies, the Percentage of Comments metric is the % of lines of a given program that are comments.
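As a rough sketch, the metric can be computed by counting comment lines (this toy version only recognizes `#`-style line comments; real tools also handle block comments, docstrings, and trailing comments):

```python
def comment_percentage(lines):
    # Fraction of lines that are comments, for '#'-style line comments.
    if not lines:
        return 0.0
    commented = sum(1 for line in lines if line.strip().startswith("#"))
    return commented / len(lines)
```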

Now that we’ve quickly covered the components of the Maintainability Index, let’s look at where the actual equation came from. The Maintainability Index originated as an internal project at HP to help drive decision making around their software processes, and the coefficients within the metric equation directly reflect this.

Within HP, engineers were asked to rate the maintainability of 16 projects on a 0 to 100 scale (100 being excellently maintainable). They then constructed more than 50 regression models to “identify simple models that could be calculated from existing tools and still be generic enough to apply to a wide range of software systems” before identifying the initial version of the Maintainability Index. The final outcome of this regression was the original formula:

Maintainability = 171 - 5.2 * ln(HV) - 0.23 * CC - 16.2 * ln(LOC) + 50 * sqrt(2.46 * perCOM)

Over the years, this formula has changed slightly, with Microsoft shifting it fairly significantly to bound it to a 0-100 scale. But, at their core, the same relationships and coefficients derived from the HP data remain.

The original Maintainability Index had an upper bound of 171 and no lower bound. In the original paper, the authors recommended that the Maintainability Index primarily be used to calculate relative maintainability between sections of a project, or between projects for the same team, rather than be used as an absolute metric. But, as a rough guide, they suggested general score ranges:

- >= 85 - Highly Maintainable
- 65 - 85 - Moderately Maintainable
- <= 65 - Difficult to Maintain

Under the new Visual Studio definition the ranges changed slightly:

- >= 20 - Highly Maintainable
- >= 10 & < 20 - Moderately Maintainable
- < 10 - Difficult to Maintain
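The Visual Studio bands above translate directly into a small classifier (the function name is illustrative, not from any particular tool):

```python
def classify_vs_score(score):
    # Bands from the Visual Studio definition of the Maintainability
    # Index: >= 20 high, 10-19 moderate, < 10 difficult.
    if score >= 20:
        return "Highly Maintainable"
    if score >= 10:
        return "Moderately Maintainable"
    return "Difficult to Maintain"
```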

Both of these formulas and threshold ranges are used by a variety of code analysis tools.

The goal behind the Maintainability Index is great - having an easy metric to measure the costs to maintain and maintainability of a project is important - but the technical details behind the metric have some potential issues.

Not only is Lines of Code a direct component of the Maintainability Index calculation, but it also has a direct relationship with Halstead Volume and is heavily correlated with Cyclomatic Complexity. This leads to the Maintainability Index being overly reliant on the length of a file (or the average length of files in a project). Extending the length can significantly decrease the Maintainability Index, even if all of the changes make the code clearer and more understandable.

Whether it’s looking at a whole project or looking at an individual file, the Maintainability Index is calculated by looking at the average of Halstead Volume and Cyclomatic Complexity. But, there’s evidence that both complexity and maintainability follow a power law. By calculating the Maintainability Index with an average we miss out on the true costs of extremely complex or costly functions, classes, and files in a codebase.

The formula to calculate the Maintainability Index was determined by HP engineers on HP projects. While this may drive directional insight for other projects, all of the coefficients and relationships from the initial development of the Index have been maintained over the last 30 years and are based on those initial estimates of maintainability.

The original formula for the Index was built on calculations for projects written in C. Historically, though, it has been applied to other languages. Just as the coefficients and formula probably don’t fully hold across different companies and projects, it’s unlikely that the formula would perfectly hold for additional languages.

Overall the Maintainability Index gives us an interesting tool to examine the potential costs of maintaining different projects or different parts of our project. But, the metric overweights Lines of Code as a metric and was built for a single company. Ideally we’d look to use some of the components of Maintainability Index rather than the full index metric itself to identify potential issues. But, if we’re going to use the Maintainability Index we should use it to measure relative maintainability within our project rather than use it as an absolute metric.

*This is the third part of a seven-part series on code quality, technical debt, and development velocity (Here’s part 1 & part 2). **Check out how you can use Sourcery** to instantly review code and provide your team with feedback to improve code quality, speed up code reviews, and increase velocity.*