Back in the late 1990s the Internet was in its infancy and data communication via 'broadband' was, compared to today, rare and expensive. At that time I worked for a telecoms provider and was tasked with identifying businesses that might buy a low-cost internet access service. The cable broadband market was burgeoning and our revenue targets were aggressive!
 
Creating a prospect database with named companies, their location, type of business and an estimate of their likely annual spend on broadband was no easy task. But it could be done by applying a set of mathematical algorithms to some known customer data.
The method used is today referred to in the literature as machine learning. In basic terms you harvest a set of known customer data ('big data') from your CRM system, clean it and apply a set of standard algorithms to identify the characteristics which are most likely to predict a purchase of your company's product or service. The output is a model, which can be referred to as 'customer insight'. For example, the model might lead to the following insight: a commercial bank with its head office in London, employing 1,000 people on site, is likely to have a telecoms data spend of £1 million per annum. The model can then be applied to a fresh database to identify similar 'prospect' companies for the sales and marketing teams to target. This latter stage is called 'predictive analytics' because the model tries to predict which companies within the fresh data might become £1 million customers in the future.
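To make that concrete, here is a minimal sketch in Python of the 'learn from known customers' step, assuming the CRM extract sits in a pandas DataFrame. The column names, the tiny sample of companies and the £1 million spend threshold are all hypothetical, and scikit-learn's logistic regression simply stands in for whichever standard algorithm you prefer.

```python
# A minimal, illustrative sketch of learning a 'customer insight' model.
# Column names, sample rows and the £1m threshold are invented for illustration.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression

# Known customer data harvested from the CRM (tiny stand-in sample).
customers = pd.DataFrame({
    "sector":    ["banking", "banking", "retail", "logistics", "banking", "retail"],
    "city":      ["London", "London", "Leeds", "Manchester", "London", "London"],
    "employees": [1200, 950, 300, 150, 1100, 200],
    "annual_spend_gbp": [1_100_000, 980_000, 90_000, 40_000, 1_050_000, 60_000],
})

# The outcome we want to predict: does this customer spend roughly £1m a year?
customers["big_spender"] = customers["annual_spend_gbp"] >= 1_000_000

features = ["sector", "city", "employees"]

# One-hot encode the categorical characteristics, pass employee counts through,
# then fit a simple logistic regression - the 'model' described above.
model = make_pipeline(
    ColumnTransformer(
        [("categories", OneHotEncoder(handle_unknown="ignore"), ["sector", "city"])],
        remainder="passthrough",
    ),
    LogisticRegression(max_iter=1000),
)
model.fit(customers[features], customers["big_spender"])
```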
 
Okay, I agree with you. I shouldn't need an algorithm to tell me I have a customer generating £1 million in revenue, or that London has a concentration of similar 'prospect' banks the sales force should be calling on. But customer and prospect (target customer acquisition) data can scale quickly, and when dealing with millions of separate data points, spotting patterns manually becomes extremely difficult. Imagine a database listing one hundred thousand company names, each with ten characteristics such as location, number of employees and date of last purchase from which to build a predictive model. That is a database of one million data points, which a computer can analyse far more effectively and efficiently than any human. The model can then be applied to a prospect database of several million records, sifting and marking those companies that are likely to generate revenue for you.
The output is a list of businesses that sales and marketing can target with their customer acquisition campaigns. 
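Continuing the same hypothetical sketch, one way to produce that target list is to score each prospect record with the fitted model and rank the results; the prospect companies below are again invented purely for illustration.

```python
# Continuing the sketch above: score a fresh prospect database with the fitted
# model and hand sales a ranked target list. The records and the idea of
# sorting by predicted probability are illustrative assumptions only.
prospects = pd.DataFrame({
    "company":   ["Acme Bank", "Northern Retail", "Metro Logistics"],
    "sector":    ["banking", "retail", "logistics"],
    "city":      ["London", "Leeds", "Manchester"],
    "employees": [1050, 250, 180],
})

# Probability that each prospect resembles a £1m-a-year customer.
prospects["score"] = model.predict_proba(prospects[features])[:, 1]

# The sales and marketing target list: highest-scoring prospects first.
target_list = prospects.sort_values("score", ascending=False)
print(target_list[["company", "score"]])
```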
Will the model predict accurately every time? No! Be prepared for the sales force to tell you the prospect database is useless because one of their number has just called on three prospects and found no requirement to buy. I received a call like this, and that's okay. A good model should predict accurately 70-80% of the time. The salesperson I spoke to had merely experienced a statistical error within his prospect list, the impact of which can be mitigated by training the sales force in basic statistics.
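For what it's worth, a figure like 70-80% is the kind of number you get by holding back part of the known customer data and measuring the model's hit rate on it. A rough continuation of the sketch above, assuming a much larger customer table than the toy one shown earlier:

```python
# A rough sketch of estimating the model's hit rate on held-out data.
# In practice you would do this on a far larger 'customers' table.
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

train, test = train_test_split(
    customers, test_size=0.25, random_state=0, stratify=customers["big_spender"]
)
model.fit(train[features], train["big_spender"])
accuracy = accuracy_score(test["big_spender"], model.predict(test[features]))
print(f"Hold-out accuracy: {accuracy:.0%}")  # individual misses still happen
```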
 
Don't be put off by the hyperbole of 'big data'. At the heart of terms such as 'customer insight', 'machine learning', 'predictive analytics' and 'algorithms' there are some very sound and basic business principles. I'm happy to enter into a discussion with you about how these principles might help your business.
 
One final thought. Did we hit our revenue target? Well, a world without the internet and cheap broadband is now unthinkable, and the telecoms company I worked for became Virgin Media. I'll leave you to decide!