Up Close Coverage From SMX Advanced 2013
What happens when you combine big data with some big math? Good things, bad things and things we have yet to truly comprehend. Big data is the most talked about, misunderstood and nebulously defined component of online marketing. It can mean anything from large-scale consumer behavioral analysis to a relatively simple study of baseline intent analytics.
Big data has effectively replaced predictive modeling as a buzz phrase in the digital marketing universe. As is often the case, search marketers feel they are the center of said universe—and rightly so—since all the best things begin and end with understanding defined intent.
As we learned this week at SMX Advanced, analyzing data can do great things for you. You can monitor changes from Penguin or other Google updates; you can isolate trends in an attempt to attract shoppers who may not have even considered buying something in your category. On the downside, analyzing big data also has all the trappings of analysis paralysis, quicksand or a little malady I like to call “Let’s listen to the guy who used to work at Google because he must be a genius” syndrome. Technical capabilities considered, it’s still a good idea to spend some time with good old human intuition.
A Technical Crystal Ball
Majestic SEO’s Dixon Jones noted that the first big data set in search was the artist formerly known as the query log file. Jones cited examples of using data to predict elections, to spot when a company was about to tank and, perhaps most practically, to isolate related interests in order to identify new audiences. For example, by analyzing data sets you may be able to sell shoppers looking for camping supplies on some hunting gear. This type of analytical application takes the behavioral advertising model to a more defined level.
It wasn’t too long ago that I remember swinging by the Googleplex and watching Matrix-style keyword strings flashing across the screen by the gajillion. On a large scale, we’ve been tracking people’s search data since before the words “search” and “marketing” first came together to form an industry.
Simple is Good
Simple helps we the masses understand really complicated things. IBM’s simply defined strategy is to use its data to determine exactly what people are doing online (in accordance with state, local, federal and, I’m sure, international guidelines as well).
While definitions of big data vary, IBM’s James Mathewson illustrated three distinct categories: volume, variety and speed. IBM’s focus is using data analysis to drive content strategy and implementation. Volume and variety are witnessed in IBM’s 18 million pages and incredible volume of digital assets in the form of comments, imagery and resources for business. Mathewson noted that learning to use the data quickly is paramount to successful execution.
It’s understood that people spend a lot more time consuming content than searching for it. The tactical challenge lies in identifying how people consume content. One example of IBM identifying, executing on and benefiting from this approach was its ability to achieve a number one ranking for the phrase “big data.” Not surprisingly, referrals and engagements increased fourfold when that ranking was achieved, but the real gem data provides is knowing how to turn a content strategy into a positioning strategy.
Robots and People
The most entertaining and useful big “D” definition I’ve heard to date came from Kenshoo’s Josh Dreller, who humorously defined big data as anything that doesn’t fit into an Excel spreadsheet. I like spreadsheets as much as the next guy, but the real value of large amounts of information in one place is how it amplifies a portfolio approach to media buying and management.
Core to understanding how big data affects the tried-and-true portfolio approach is the push/pull relationship within large-scale keyword groupings. When you push spending into one area—one that obviously performs well in a direct attribution environment—you might very well, and probably inadvertently, be pulling or sacrificing performance from another.
Dreller expanded upon the push/pull concept to reveal another tactic: developing a solid understanding of marginal return on advertising spend (ROAS). Data will help you predict your potential customers’ next moves, allowing you to forecast your potential spend. Consequently, increases in media buying (or, in this instance, search advertising) can not only be measured but effectively predicted. For example, retailers can build search campaigns around available inventory.
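To make the push/pull and marginal ROAS ideas concrete, here is a minimal Python sketch. The response curves and numbers are invented for illustration (nothing Dreller or Kenshoo presented); the point is simply that the return on the next dollar shrinks as you push more budget into a keyword group, so shifting spend between groups is a trade-off you can actually quantify.

```python
# Hypothetical illustration of marginal ROAS under diminishing returns.
# The response curves and figures below are invented for the example,
# not real campaign data.

def revenue_at(spend, scale, saturation):
    """Toy diminishing-returns curve: revenue climbs fast at first, then flattens."""
    return scale * (1 - 1 / (1 + spend / saturation))

def marginal_roas(spend, scale, saturation, step=1.0):
    """Revenue gained from the next `step` dollars of spend, divided by that step."""
    return (revenue_at(spend + step, scale, saturation)
            - revenue_at(spend, scale, saturation)) / step

# Two keyword groups with different (made-up) response curves.
camping = {"scale": 5000.0, "saturation": 800.0}
hunting = {"scale": 3000.0, "saturation": 400.0}

for budget in (100, 500, 1000, 2000):
    print(f"at ${budget}/day: camping marginal ROAS = {marginal_roas(budget, **camping):.2f}, "
          f"hunting marginal ROAS = {marginal_roas(budget, **hunting):.2f}")

# When one group's marginal ROAS drops below the other's, the next dollar is
# better spent elsewhere -- the push/pull trade-off in miniature.
```

The fake numbers aren’t the point; the shape of the decision is. Compare the return on the next dollar across groups rather than staring at the account’s average ROAS.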
Speaking of big robots, publishers like Microsoft use big data to isolate what they call “micro markets.” According to Microsoft’s Mike McMeekin, micro markets allow for greater efficiencies when comparing data points. Testing ad copy variables, for example, can be risky. Using historical data to predict future actions is a great way to reduce the potential for revenue loss in a testing environment. As an advertiser, this is significant because one of the biggest barriers to aggressive testing is the fear of potential dips in performance. Understanding the interactions among query volume, click volume and ad copy variables within these segments can teach you a lot about making campaigns more efficient.
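As a rough illustration of that idea (a hypothetical sketch, not Microsoft’s actual tooling), the snippet below uses historical click-through rates within a single micro-market segment to estimate what diverting test traffic to an unproven ad variant might cost before any live budget is risked. The segments, figures and the $1.20 value per click are all assumptions made up for the example.

```python
# Hypothetical sketch: estimate the downside of an ad copy test from historical
# micro-market data before running it live. All figures are invented.

# (segment, ad_variant) -> (impressions, clicks) from past campaigns
history = {
    (("US", "mobile"), "copy_A"): (120_000, 3_600),
    (("US", "mobile"), "copy_B"): (40_000, 1_400),
    (("US", "desktop"), "copy_A"): (200_000, 5_000),
}

def smoothed_ctr(impressions, clicks, prior_imps=200, prior_clicks=5):
    """CTR with a small prior so thin segments don't produce wild estimates."""
    return (clicks + prior_clicks) / (impressions + prior_imps)

segment = ("US", "mobile")
incumbent_ctr = smoothed_ctr(*history[(segment, "copy_A")])
challenger_ctr = smoothed_ctr(*history[(segment, "copy_B")])

test_impressions = 50_000
value_per_click = 1.20  # assumed revenue per click, for illustration only

# Negative means the test likely costs revenue -- the dip advertisers fear.
expected_change = (challenger_ctr - incumbent_ctr) * test_impressions * value_per_click
print(f"expected revenue change from diverting {test_impressions:,} impressions: "
      f"${expected_change:,.2f}")
```

A screen like this doesn’t replace the live test; it just tells you which variants are worth risking real traffic on.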
Mo Robots, Mo People, Mo Dinero
An advertiser should be using third-party software to help predict what the next dollar of spend will do. That’s the easy part. The hard part is trying to understand what happens to other areas if you take a dollar away. Simply put, the combination of big data and software allows a marketer to understand how one ad dollar spent affects the next, independent of the company you happen to be buying said ads from.
You should be able to facilitate omni-channel attribution for campaigns that transcend direct attribution metrics as well.
Velvet ropes will part. Champagne will flow from the heavens. Emma Stone will come to your birthday party. Kanye will tweet about it.
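Joking aside, here is a minimal sketch of what omni-channel attribution can mean in practice: spread conversion credit across the touch points in a path instead of handing it all to the last click. The sample path and the 40/20/40 weighting are illustrative assumptions, not any vendor’s actual model.

```python
# Hypothetical position-based ("U-shaped") attribution: credit the first and
# last touches most heavily and split the remainder across the middle.
# The weights and the sample path are assumptions for illustration.

def position_based_credit(path, conversion_value):
    credit = {channel: 0.0 for channel in path}
    if len(path) == 1:
        credit[path[0]] = conversion_value
        return credit
    middle = path[1:-1]
    end_share = 0.4 if middle else 0.5  # no middle touches: split 50/50
    credit[path[0]] += end_share * conversion_value
    credit[path[-1]] += end_share * conversion_value
    for channel in middle:
        credit[channel] += 0.2 * conversion_value / len(middle)
    return credit

path = ["display", "organic search", "email", "paid search"]
print(position_based_credit(path, 100.0))
# -> {'display': 40.0, 'organic search': 10.0, 'email': 10.0, 'paid search': 40.0}
```

Swap in whatever weighting your data supports; the point is that the credit model is explicit and adjustable rather than baked into one channel’s reporting.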
Most of all, the tools you select should allow you to add your own insight and intuition. In other words, the robots haven’t completely replaced the humans.
Finally, for some reason every time high-end math geeks start talking about common causation and causality mistakes at the algorithmic level, I start to go cross-eyed and instantly crave some couch time with a therapist. Basically, we can all agree that it’s bad to build a mathematical house of cards. So make sure your data helps you believe the right things for the right reasons.
Just because Pluto isn’t a planet doesn’t mean it’s not interested in buying a new Chevy. So avoid trying to place penguins on the South Pole. Or whichever pole they don’t live on. Or, don’t draw a line between dots that shouldn’t be in the same room together. Just because the fox moved into the hen house, that doesn’t mean we’re having pulled pork for dinner. All the pieces are there; do something with that last paragraph.
Go forth and be the big data.
Opinions expressed in the article are those of the guest author and not necessarily Search Engine Land.