The ‘Big Challenge’ According To Eric Schmidt — And Other Predictions
Google backed away from managing radio and print advertising networks due to the lack of "closed-loop feedback." In other words, the company couldn't tell an advertiser whether the consumer actually saw the ad, or whether they acted on it afterward. Efforts to embed unique commercial identifiers into radio ads exist, but are still immature. And in print, it's still not possible to tell who (specifically) is seeing which ads — at least not until someone places sensors between every two pages of my morning newspaper.
Despite this limitation, Schmidt feels that Google will soon crack the code of massive multivariate modeling of both online and offline marketing mix influences by incorporating “management judgment” into the models where data is lacking. This will enable advertisers to parse out the relative contribution of every element of the marketing mix to optimize both the spend level and allocation – even taking into account countless competitive and macro-environmental variables.
Schmidt also asserted that "everything is measurable," and that Google has mathematicians who can solve even the thorniest marketing measurement challenges.
And he predicted that the winning marketers will be those who can rapidly iterate and learn quickly, reallocating resources and attention to what is working at a hyper-local level, taking both personalization and geographic location into account.
On all these fronts, I agree with him (I’ve actually said these very things in this column over the past few years).
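To make the "management judgment where data is lacking" idea concrete, here is a minimal sketch of how a marketing-mix model can blend observed spend data with managerial priors. This is my own illustration, not Google's method: it shrinks estimated channel lifts toward management's believed values using a ridge-style penalty, with all numbers invented for the example.

```python
import numpy as np

# Toy marketing-mix model: weekly sales explained by spend in three channels.
# Where data is thin, estimates are shrunk toward "managerial judgment"
# priors via a ridge-style penalty. All figures are illustrative.

rng = np.random.default_rng(0)
weeks = 52
spend = rng.uniform(0, 100, size=(weeks, 3))         # search, radio, print spend
true_lift = np.array([1.2, 0.4, 0.2])                # hidden ground-truth lift
sales = spend @ true_lift + rng.normal(0, 5, weeks)  # observed sales + noise

prior = np.array([1.0, 0.5, 0.25])  # management's believed lift per channel
lam = 10.0                          # prior strength: higher = trust judgment more

# Penalized least squares:
#   argmin_b ||sales - spend @ b||^2 + lam * ||b - prior||^2
A = spend.T @ spend + lam * np.eye(3)
b = spend.T @ sales + lam * prior
lift = np.linalg.solve(A, b)
print(dict(zip(["search", "radio", "print"], lift.round(2))))
```

With a year of clean data, the data dominates and the estimates land near the true lifts; with only a few noisy weeks, the same formula leans on the prior instead — which is the trade-off Schmidt is describing.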
So when I caught up with Schmidt in the hallway after his speech, I asked two questions:
1. How credible are these uber-models likely to be if they fail to account for “non-marketing” variables like operational changes affecting customer experience, and/or the impact of ex-category activities on customers within a category (e.g. how purchase activity in one category may affect purchase interest in another)?
2. At what point do these models become so complex that they exceed the ability of most humans to understand them, leading to skepticism and doubt fueled by a deep psychological need for self-preservation?
His answers, in order:
1. "If you can track it, we can incorporate it into the model and determine its relative importance under a variety of circumstances. If you can't, we can proxy for it with managerial judgment."
2. “That is the big challenge, isn’t it?”
So, my takeaway from this interaction is: Google will likely develop a “universal platform” for market mix modeling, which in many respects could be more robust than most of the other tools on the market, especially in terms of seamless integration of online and offline elements, and Web-enabled simulation tools. While it may lack some of the subtle flexibility of a custom-designed model, this platform will likely be “close enough” in overall accuracy, given that it could be a fraction of the cost of custom, if not free. The tool will likely evolve faster to incorporate emerging dynamics and variables, as the company’s scale will enable it to spot and include such things faster than any other analytics shop.
If Google has a vulnerability, it may be underestimating the human variables embedded both in the underlying questions (e.g. how much should we spend, and where and how should we spend it?) and in any potential solution.
Reflecting over a glass of cabernet several hours later, I realized that Google’s developments are generally good for the marketing discipline, as the company will once again push us all to accelerate our adoption of mathematical pattern recognition as inputs into managerial decisions. Besides, the new human dynamics this acceleration creates will also spur new business opportunities. So everyone wins.