11 August 2010

Does a scoring methodology help or hinder CMS selection?

Many approaches to CMS selection employ some kind of scoring methodology, where platforms are scored against requirements to produce simple, transparent comparisons. In most cases, this only provides the illusion of a structured selection process.

The problem is that any platform selection inevitably involves trade-offs between different features, and these are very difficult to represent through a set of scores, no matter how sophisticated the mechanism.

For example, how would a scoring mechanism choose between a CMS that is closely aligned to an organisation’s back-office systems and another CMS that provides the most sophisticated content production services? This kind of trade-off cannot be usefully expressed as a simple numerical score.

Most scoring mechanisms introduce some notion of weighting to try to give greater emphasis to the more important selection criteria. However, there is a limit to how well weightings can deal with requirements of very different granularity. Some requirements concern small details of functionality, while others involve strategic integration. How do you arrive at weightings that sensibly balance the two?

Requirements can also be difficult to score in any meaningful or objective sense. A criterion such as usability will be important, but how do you actually assign it a score? No matter how well you write your requirements, the process of scoring them will inevitably be quite subjective, particularly once weightings are involved. After all, a weighting system can be tweaked to give any platform the highest overall score if you try hard enough.
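
To make that concrete, here is a minimal sketch (in Python, with entirely hypothetical platforms, scores and weights) showing how two equally defensible weighting schemes produce two different "winners" from the same underlying scores.

    # Hypothetical scores out of 5 for two platforms against the same criteria.
    scores = {
        "Platform A": {"usability": 5, "integration": 2, "workflow": 4},
        "Platform B": {"usability": 3, "integration": 5, "workflow": 3},
    }

    def weighted_total(platform_scores, weights):
        # Sum of score x weight across the shared criteria.
        return sum(platform_scores[c] * weights[c] for c in platform_scores)

    # Two defensible-looking weighting schemes: one emphasises usability,
    # the other emphasises back-office integration.
    schemes = {
        "usability-led": {"usability": 3, "integration": 1, "workflow": 2},
        "integration-led": {"usability": 1, "integration": 3, "workflow": 2},
    }

    for name, weights in schemes.items():
        totals = {p: weighted_total(s, weights) for p, s in scores.items()}
        winner = max(totals, key=totals.get)
        print(f"{name}: {totals} -> winner: {winner}")

Same scores, different "objective" winner: exactly the room for manoeuvre a determined stakeholder can exploit.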

A scoring mechanism also tends to encourage a “tick-box” approach to requirements gathering. Most platforms address generic functional requirements to some basic level, so you should be seeking to understand how a platform meets a requirement rather than whether it is met. Ideally, the more important requirements should be fleshed out with use case detail, which helps to foster a more informed selection. The more sophisticated your requirements gathering, the more difficult it becomes to assess a platform against those requirements through a single score. At worst, a scoring mechanism can serve to “dumb down” requirements gathering.

Any CMS selection is a trade-off, and this cannot be expressed as a simple weighted score. A good CMS selection should make decision makers aware of what the trade-offs are so they can make a sensible decision. After all, most CMS selection exercises take place within tightly defined constraints – such as cost or the existing technical infrastructure. You are rarely in the business of selecting the best CMS in the market – just the most appropriate given the current circumstances.

Perhaps the main value of scoring systems lies in appearances. Selection exercises are not just about selecting the best CMS; they are also about being seen to select the best CMS. After all, there is often a large investment involved, and many stakeholders will need to justify their selection later and provide some evidence of a rational selection exercise. This is where the scoring matrices and spreadsheets come into their own.

Filed under CMS, Strategy.