Data Driven Business

Measuring the value of Intangibles

 
At our company, we have a lofty ambition. Actually, it’s what I call our BHAG, our Big Hairy Audacious Goal, a term coined by Jim Collins in his excellent books Good to Great and Built to Last. We want to earn our clients €1,000,000,000: a billion euros of revenue that our activities have directly contributed to, after taking away our costs. It’s big enough to be challenging, but it’s also possible. I wanted to create what Jim calls a mechanism that not only serves as a goal for our company but also focuses our staff on doing the right things for our clients. If one of our consultants is in a difficult place with a client case, I want them to ask themselves “how does this contribute to the billion?” In other words, how do I make my client money?

Counting

We have counted what we can from the last 7 months and we’ve already done €20 million in revenues (hey, it’s a start; you might laugh now, but if in another six months we’ve achieved €100 million, then what would you think?). These are the revenues which are easily calculated. For instance, if we’re working with a company that has e-commerce operations and our direct marketing work (like email or SEM campaigns) has contributed a portion of the sales the company made that month, we total that up for the month, take away our cost and add it to the billion figure.

Easily done for some but not for all

It’s easy for anything where a client has sales, lead generation or customer service, as it’s all easy to quantify and verify with our client. One of the other things the billion BHAG has done is focus our folks’ thinking: they’re asking themselves questions they didn’t ask before. We now have consultants asking “but what about the value of analysis and simply providing information?” and “how do we measure the value of intangible work?”

We do a lot of work where we simply help our clients with their operations, work that never leads to a directly measurable outcome with a euro value. At the moment only about a third of our client cases are counted in the billion, because we haven’t been able to quantify the value of an analytics tagging implementation plan, or of an analysis that isn’t directly contributing to a revenue outcome.

This is a great exercise as it’s also helping us focus on what’s important for a range of clients where it’s more difficult to put a return on their online investments. Some companies don’t do direct sales, we can’t save them much money and it’s difficult to quantify the value of a website visitor. So how do you justify doing online marketing? Why bother with analytics? What’s the point of measuring anything? Our clients are all smart enough to know that they need KPIs, that they need to optimise their activities, and they know what they do is contributing to their purpose as a company. But many don’t know with any level of confidence what the euro value of their efforts is.

Applied Information Economics

These questions led us to work done by a guy called Douglas Hubbard, who invented Applied Information Economics (AIE). For the past year we’ve been using his methods to model things like the value of a seen but un-clicked banner ad, or to do risk assessments for online investments using Monte Carlo simulations. The BHAG we set ourselves means that, in order to quantify the other two-thirds of our work, we’re going to start doing this with everyone.

The key is in what Hubbard calls a calibrated probability assessment. Bit of a mouthful, but it basically means that when you answer a question you have to be as close to 100% sure as possible that your answer is correct. You have to be so confident that you would bet your own money on it. So for instance, if I were to ask:

What’s the population of Finland?

A lot of Finns would estimate 5 million people, as that’s what’s commonly reported. But then the calibration starts.

So let’s say you have €100K in the bank. Would you bet €90,000 of your own money to get a 10% return? In other words, would you bet 90K to get 99K back and swell your bank account to 109K? You’d be betting that the number of residents in Finland was exactly 5 million. No-one would, because no-one could be sure, especially for a relatively low reward and high risk.

So when does the level of confidence improve? Would the same people bet 90K of their own money on the population being between 3 and 7 million? This is when most of the people who know it’s somewhere around 5 million would be extremely confident, confident enough to withdraw the 90K and bet it. In other words, they would be 90% confident.

The theory is that by calibrating the question in this manner, you can then run a mathematical simulation over the 3-to-7-million range at a 90% confidence level to measure what the real number is likely to be. Monte Carlo simulations allow you to do this. In this example you’d get an answer around 5 million, which for the purposes of decision making works.
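To make that concrete, here’s a rough Python sketch of the idea (my own illustration, not Hubbard’s code): it treats the calibrated 90% range of 3 to 7 million as a normal distribution and samples it, which is essentially what a simple Monte Carlo model does.

```python
import numpy as np

# A calibrated answer: "I'm 90% confident the population of Finland
# is between 3 and 7 million." (Hypothetical numbers from the example.)
lower, upper = 3_000_000, 7_000_000

# Model the calibrated range as a normal distribution: 90% of a normal's
# mass lies within +/- 1.645 standard deviations of the mean.
mean = (lower + upper) / 2
sigma = (upper - lower) / (2 * 1.645)

# Monte Carlo: draw a large number of scenarios from that distribution.
rng = np.random.default_rng(42)
samples = rng.normal(mean, sigma, size=100_000)

print(f"most likely value: {samples.mean():,.0f}")  # close to 5 million
print(f"90% of scenarios fall between {np.percentile(samples, 5):,.0f} "
      f"and {np.percentile(samples, 95):,.0f}")
```

In a real case you’d feed several calibrated ranges like this into one model (cost, uptake, order value and so on) and read the distribution of outcomes off the simulation.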

The key is in asking the right questions and getting solid calibrated answers.

Let’s take a hypothetical example.

Let’s say you have a lot of products that are sold via resellers. For instance, you could be in the consumables industry, selling food and drink wholesale to retailers but not selling anything directly on your website. There is value in what you do, but because you don’t sell anything online it’s difficult to put a value on your efforts.

So first you’d need to know how much revenue you generated in the previous year. Let’s say that was €1 billion.
Then you’d need to know your market share in comparison to your competition. Let’s say you know this and you have a 50% market share of the consumables bought in your categories, so a total market size of €2 billion.

Then you would need to know how many purchases made up that €2 billion, so you could work out the purchases per person in your category. You’d need to make this estimate with 90% certainty, as I described in the Finnish population example.
Let’s say that you know with 90% certainty that each order is around €10. So you can say there were 2,000,000,000 / 10 purchases, i.e. 200 million purchases.

Then you need to know how many people bought those products (again with 90% certainty). Let’s say for argument’s sake it’s 3 million people buying approximately 200 million products per year. So each person buys 67 products per year on average and spends about €670 on the products.

Now that you have that figure and you’re quite certain about it, you can start to look at your own website statistics to see how valuable the site is. You would need to know with 90% certainty how many people your website influenced. So let’s say you have 1 million people per year coming to your website, and you estimate (based on some proxy like goals completed or number of pages visited, combined with a survey response) that 20% of your visitors were influenced by your website.

If each person influenced is assumed to purchase products from you due to brand preference created by your website (again, survey responses here would need to be carefully considered, but I digress), then you could say 200,000 people per year × €670 = €134M.

So now you have the estimated value of the website to the company (and a proxy for optimisation).
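Pulled together, the chain of estimates looks like this. A minimal Python sketch using the hypothetical numbers above; every input is an assumption the client would need to calibrate to 90% confidence before relying on it.

```python
# Hypothetical inputs from the example above.
annual_revenue = 1_000_000_000   # € of revenue last year
market_share = 0.50              # share of the category
avg_order_value = 10             # € per purchase
category_buyers = 3_000_000      # people buying in the category per year
site_visitors = 1_000_000        # visitors to the website per year
influenced_share = 0.20          # proxy-based share of visitors influenced

market_size = annual_revenue / market_share               # €2bn category
purchases = market_size / avg_order_value                 # 200 million purchases
purchases_per_buyer = round(purchases / category_buyers)  # ~67 per person per year
spend_per_buyer = purchases_per_buyer * avg_order_value   # ~€670 per person

influenced_people = site_visitors * influenced_share      # 200,000 people
website_value = influenced_people * spend_per_buyer       # ~€134M per year

print(f"estimated annual value of the website: €{website_value:,.0f}")
```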

Getting to the value of information

So for the purposes of this example, we know that the website is worth €134 million per year to the company. We can now put a value on the information provided in our analysis (done roughly once per month). The questions we have to ask are: what would happen if they didn’t have the information provided in the reports? Would there be opportunity losses? And what is the potential value of those opportunity losses?

Assume you know that the market for the goods grew by 3% in 2012, so a good outcome would be to grow by 3% again in 2013. In 2012 we’d also assume the company had all the information they needed to run their campaigns, optimise their spending and develop the service for their visitors. What would be the potential downside of making mistakes and not picking up trends? Could it be that they would lose 3% and revert to the level of the previous year, when they didn’t have good information to hand?

If we say the potential good outcome is 3% growth (134 × 1.03 ≈ 138, so about €4M of added value) and the worst is a 3% decline (about €4M of lost value), then the total opportunity at stake is roughly €8 million. If doing nothing means you have a 50/50 chance of being successful, but having the reports means you have a 60/40 chance, then the value of the information is the 60% chance minus the 50% chance: 50% of €8 million is €4M and 60% is €4.8M, so the value of the information is about €800K.

As a side note, Hubbard then recommends that the actual cost of getting the information should be between 1 and 10% of its value, so somewhere between €8K and €80K depending on the time required to do the work.
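Here’s the same arithmetic as a small Python sketch. It’s just my restatement of the reasoning above; the 50/50 and 60/40 probabilities are the assumptions stated in the text, not figures Hubbard prescribes for this case.

```python
# Hypothetical sketch of the value-of-information arithmetic above.
website_value = 134_000_000    # € per year, from the previous example
market_growth = 0.03           # the category grew 3% in 2012

upside = website_value * market_growth     # ~€4M gained by growing 3% again
downside = website_value * market_growth   # ~€4M lost by sliding back 3%
opportunity_at_stake = upside + downside   # ~€8M in total

p_success_without_reports = 0.50   # assumed coin-flip without the analysis
p_success_with_reports = 0.60      # assumed odds with the monthly reports

# Value of the information = the improvement in odds times what's at stake.
value_of_information = (p_success_with_reports
                        - p_success_without_reports) * opportunity_at_stake
print(f"value of the monthly analysis: ~€{value_of_information:,.0f}")  # ~€800K

# Hubbard's rule of thumb quoted above: spend 1-10% of that value getting it.
print(f"suggested measurement budget: €{0.01 * value_of_information:,.0f} "
      f"to €{0.10 * value_of_information:,.0f}")
```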

Opinions?

We haven’t implemented this for our figures yet, and all of our more difficult cases would need to be carefully considered. The key is getting to a 90% confidence level for each input variable before running the simulations. However, what I like about this method is that the issue of “engagement” goes away. There have been many debates in the analytics arena about measuring the value of engagement. This gives the engagement point a euro value and allows optimising based on something everyone understands.

Done with the full cooperation of each company involved, I believe we could put the issue of engagement and the value of analysis to bed. Some have called this too fluffy. What’s your opinion? Which camp are you in?
