I recently read an article that proposed a pretty dodgy way to measure Cost Per Transaction (CPT). It offered a simple alternative to activity-based costing, built on some rough estimates, and sought to break down the CPT between different transaction types in a service environment using the following method:
- Team managers estimated the % of time their people spend on each transaction type, and the team’s salary costs plus fixed costs (project, property, IT) were split between the transaction types in those proportions
- The volume of each transaction type was multiplied by a “complexity score” – for example, if it was a really difficult transaction type, multiply the volume by 3, or 5, or some other number
- The CPT was then calculated from these “guesses”
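As far as the method can be reconstructed from the article, it boils down to a few lines of arithmetic. Here is a minimal sketch in Python – the transaction types, volumes and cost figures below are invented purely for illustration, and the exact allocation rule is my reading of the article’s description, not a quote from it:

```python
# Illustrative reconstruction of the article's CPT method.
# All names and numbers are assumptions, not taken from the article.

# Managers' subjective estimates: % of team time per transaction type
time_split = {"simple_query": 0.50, "amendment": 0.30, "complaint": 0.20}

# Monthly volumes and the article's subjective "complexity scores"
volumes = {"simple_query": 4000, "amendment": 1200, "complaint": 300}
complexity = {"simple_query": 1, "amendment": 3, "complaint": 5}

# Team salary costs plus the fixed costs the article folds in
team_cost = 80_000 + 20_000  # salaries + project/property/IT

def cost_per_transaction(time_split, volumes, complexity, team_cost):
    """Allocate team cost by the estimated time split, then divide by the
    complexity-weighted volume to get a CPT per transaction type."""
    cpt = {}
    for t in volumes:
        allocated_cost = team_cost * time_split[t]
        weighted_volume = volumes[t] * complexity[t]
        cpt[t] = allocated_cost / weighted_volume
    return cpt
```

Notice that every input on the left-hand side of that calculation except the raw volume is a guess: the time split, the complexity score, and the decision about which fixed costs to load in.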
There are a number of issues here. It uses a “complexity score” rather than actual time measurements, and it loads in fixed costs (like project costs) that have nothing to do with the transactions at all.
This approach is over-simplistic and will produce erroneous results because so much of the data capture is subjective guesswork from the managers – and the more frequently the measurement is taken, the worse things get. The managers would need to re-assess the % of time for every individual in the team each period to account for annual/sick leave, loans to other areas, team movements and so on. Given that salary cost is the big lever on cost per transaction, the inaccuracy of these estimates will introduce significant variability in the results over time.
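To make the variability concrete, here is a rough simulation of how manager estimation error alone moves the reported CPT around from month to month. The figures are invented and the ±5-point estimate drift is an assumption for illustration:

```python
# Illustrative only: how much does CPT swing if the manager's time-split
# estimate wobbles by a few percentage points each period?
import random

random.seed(1)

volumes = 4000          # monthly volume of one transaction type
team_cost = 100_000     # salaries plus the fixed costs folded in
true_share = 0.50       # "true" % of team time spent on this type

results = []
for month in range(12):
    # The estimate drifts each period (leave, loans to other areas,
    # team movements, plain guesswork)
    estimate = true_share + random.uniform(-0.05, 0.05)
    results.append(team_cost * estimate / volumes)

# Spread of reported CPT relative to the "true" CPT
spread = (max(results) - min(results)) / (team_cost * true_share / volumes)
```

Even with only a five-point wobble in one input, the reported CPT can move by a double-digit percentage with no change whatsoever in the underlying work – exactly the kind of noise that gets mistaken for a real trend.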
The complexity score also ignores the fact that sub-types within each transaction group can differ significantly in complexity themselves. It’s a can of worms – this measurement system has many different sources of “noise”, each of which has a significant impact on the overall result, so the results will be inaccurate. Trendline performance is not really normalised, and period-to-period variation in the measure can produce erroneous conclusions and poor business decisions.
More broadly, the question of “why measure cost per transaction?” is an interesting one. What decisions is this metric going to drive? In this case, it’s unlikely to prove very useful in measuring overall business performance because the article is talking about getting things down to a lower level. (I admit that measuring cost per transaction overall is a reasonable “business health” metric.) In this article, it appears that Cost per Transaction is primarily being used as a proxy to measure efficiency and productivity. So… if they’re using the subjective data to calculate cost per transaction, why not use subjective data to calculate a productivity metric? Neither is going to be perfect because of the subjective data anyway. You can then use the productivity number to directly drive decisions and staff member efficiency discussions (incidentally, without having a $ symbol in the way, which is highly likely to put staff into a defensive position and raise accusations of penny-pinching).
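For what a dollar-free alternative might look like, here is a hedged sketch of a productivity measure built from the same subjective inputs – complexity-weighted output per staff hour. The figures and weighting scheme are my own illustration, not a recommendation of specific numbers:

```python
# Illustrative sketch: a productivity metric from the same inputs,
# with no $ figure attached. All numbers are invented.

volumes = {"simple_query": 4000, "amendment": 1200, "complaint": 300}
complexity = {"simple_query": 1, "amendment": 3, "complaint": 5}
staff_hours = 8 * 20 * 6   # e.g. 6 FTEs over a 20-day month

# Complexity-weighted output per staff hour
weighted_output = sum(volumes[t] * complexity[t] for t in volumes)
productivity = weighted_output / staff_hours
```

It inherits the same subjectivity as the CPT version, but it drives the efficiency conversation directly, without salary and fixed-cost allocations muddying the number.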
Of course, a metric MUST be perceived as fair and reasonably accurate if you’re going to use it to drive performance (whether of staff or of workflows – there’s almost always a manager accountable for a workflow) – to do otherwise is to risk disengagement from “the cause”, because people will feel hard done by.
Getting measures right is not just a question of how to measure accurately and precisely. We recommend businesses always take a Systems Thinking perspective when making the decision on WHAT to measure. Because ultimately if what you are measuring is not quite right, you’ll end up making bad decisions regardless of how accurate the measure is.