Economists who study purchasing behavior understand that people often deceive themselves when buying big-ticket items. There are certainly emotional components to purchasing behavior. I have purchased over 10 automobiles in my adult life without understanding the “total cost of ownership” (TCO) of any of them. I tried to make good decisions, but at the end of the day, I really wanted that vehicle. I once bought a totally tricked-out Jeep. It was awesome. The tires came up to my chest. It had the meanest-sounding engine I had ever heard. I tried to perform a few checks on it, but I really did not want to think about the downside risks. I purchased it, and about three months later the engine failed. Total loss. If that had been a major software purchase for my company, I would probably have been fired.
I have been selling enterprise software professionally for over 15 years, mostly to good-sized energy companies. The solutions I have sold are used by front, middle, and back offices. I have worked on well over 100 complex sales engagements. Not once did any of those purchasing companies have a comprehensive method for calculating the total cost of ownership (TCO), including metrics for downside risks. I want to share one “best practice” to add to your list when buying software.
Let’s assume for the moment that you know how to calculate a basic five-year TCO. You plug in your purchase price, annual maintenance, annual escalators, and any additional product and service costs over the five years. That’s it, right? Not necessarily. You might want to look at your total costs in a risk-adjusted format.
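As a quick sketch, the basic five-year TCO described above can be written as a simple sum: the purchase price, plus maintenance compounded by the escalator each year, plus any additional costs. All of the figures below are hypothetical placeholders; substitute your own quotes.

```python
def five_year_tco(purchase_price, annual_maintenance, escalator, extra_costs_per_year):
    """Basic (non-risk-adjusted) five-year total cost of ownership.

    purchase_price: one-time license cost
    annual_maintenance: year-one maintenance fee
    escalator: annual maintenance increase (e.g., 0.03 for 3%)
    extra_costs_per_year: additional product/service costs for years 1-5
    """
    total = purchase_price
    for year in range(5):
        # Maintenance grows by the escalator each year.
        total += annual_maintenance * (1 + escalator) ** year
        total += extra_costs_per_year[year]
    return total

# Hypothetical example: $500k license, $100k maintenance with a 3% escalator,
# plus integration and training costs.
tco = five_year_tco(
    purchase_price=500_000,
    annual_maintenance=100_000,
    escalator=0.03,
    extra_costs_per_year=[50_000, 20_000, 20_000, 20_000, 20_000],
)
print(round(tco))  # ≈ 1,160,914
```

This is the baseline number most buyers stop at; the rest of the article is about adjusting it for risk.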
If my software project fails, how might it fail? (Come up with your own list)
My company changed direction or strategy. This is the automobile equivalent of buying a Mazda Miata and then having twins. Time to dump that car and get another one. How likely is your company to switch directions?
My vendor “exaggerated” their capabilities. Only after the contract is signed are these facts discovered. The vendor admits that the required functionality is not built yet. Now the project is delayed (at best), and the risk of total failure just went up.
My software deployment project failed. This is where most projects fail. Both sides likely left important project requirements undefined. Now the project costs and timelines are blown and everyone is pointing fingers.
What is the likelihood the above events happen?
My company changed direction or strategy. This does happen from time to time but in most companies, it is not common. Risk-adjusted cost +5%.
My vendor “exaggerated” their capabilities, and now expected costs and timelines are blown. This happens quite often in complex software sales. Risk-adjusted cost +20%.
My software deployment project failed. Poorly defined project plans and vendor omissions just increased the risk of project failure. Risk-adjusted cost +30%.
You can see how this is working. First, make a list of all the ways the software purchase or project could fail. Then assign basic percentages to their likelihood.
Now comes the tricky part. You should assign different risk percentages for each risk factor to each vendor you are considering. Some vendors will have a better reputation or more experienced staff. Some software packages are built using better technologies. SaaS solutions are inherently less risky (and often less costly) than on-premise deployed software, and they allow the user to extensively test the same solution that will be used in production.
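The approach above can be sketched in a few lines: inflate each vendor's base TCO by the sum of the failure-risk percentages you assigned to that vendor. The vendors, base costs, and risk weights below are hypothetical, chosen only to show how a cheaper sticker price can lose to a lower-risk alternative.

```python
def risk_adjusted_cost(base_tco, risk_factors):
    """Inflate a base five-year TCO by the sum of estimated failure-risk percentages."""
    return base_tco * (1 + sum(risk_factors.values()))

# Hypothetical vendor A: lower sticker price, on-premise deployment, less proven.
vendor_a_risks = {
    "strategy_change": 0.05,
    "overstated_capabilities": 0.20,
    "deployment_failure": 0.30,
}

# Hypothetical vendor B: higher sticker price, SaaS, tested before purchase.
vendor_b_risks = {
    "strategy_change": 0.05,
    "overstated_capabilities": 0.10,
    "deployment_failure": 0.15,
}

cost_a = risk_adjusted_cost(1_000_000, vendor_a_risks)  # ≈ 1,550,000
cost_b = risk_adjusted_cost(1_200_000, vendor_b_risks)  # ≈ 1,560,000
```

In this made-up comparison, vendor B's $200k sticker-price premium nearly vanishes once risk is factored in, which is exactly the kind of insight a plain TCO spreadsheet hides.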
When comparing costs for software purchases, try running a risk-adjusted cost comparison and see if it helps your company make better software buying decisions.