Monday, April 19, 2010

Advertising Effectiveness
What it is, what it isn’t and is there one number

Some people are aware that I’ve created an “advertising effectiveness index,” and after much inquiry I’ve decided to share the method publicly. Frequent and readily available data on ad effectiveness is relatively new, which means we are in some uncharted waters. I’ve worked closely with many designers, agencies, sales reps, business owners, marketing directors, and so on over the last few years. The company I work for has collected a lot of valuable information, and I’ve learned a great deal from my experiences (good and bad) sharing it with these different points of contact. The one thing I can say is that advertising is much bigger than one ad effectiveness number, and one number will not be the answer, but I can understand the desire to find it. I want to touch on this desire for one number, factors to consider when evaluating advertising, factors we may want to leave out, and lastly, some potential “one number” methods.

New data and the inevitable search:

We now have a number of different research tools at our disposal to help measure ad recall, “ad engagement,” and response in the media business. We use Research & Analysis of Media (RAM) at The Virginian-Pilot. These sources typically measure fifteen or more factors deemed “important” to advertisers. Who came up with these? Who knows? The bigger question may be “are we asking the right questions?” Let’s leave that for another write-up. As we push closer to being able to show a potential return on investment (the “Holy Grail” of the media world), we may actually be losing sight of some basic tenets of advertising.

Jack Trout, marketing expert and guru, once said:

“Complexity is not to be admired, but to be avoided.”

After ten-plus years in the business, I’m starting to see why. I work in the research world, but I’m anything but agoraphobic (having an abnormal fear of open or public places). I make every effort to get out and meet with the people who know better than any survey results can reveal: the advertisers and agencies. I’m also a compulsive collector of information on marketing and advertising, and I feel that, when it comes down to it, no one has summarized the ad business better than Rosser Reeves. This mid-century advertising icon once said:


"You know, only advertising men hold seminars and judge advertising. The public doesn't hold seminars and judge advertising. The public either acts or it doesn't act.”

I can dump a heap of analysis and research on a business that tells it why its ad is the greatest thing since sliced bread (the iPad nowadays) or complete trash, but they, the advertisers, ultimately know what’s working and what isn’t. Where research and analysis can best serve the advertiser is in helping to answer why ads are more or less effective.

For too many years the creative world has been rewarding design and not necessarily results, while the scientific world has been ignoring design’s impact and importance. The reality is advertising effectiveness is both art and science. Blasphemy, right? We need to get the beret wearing “artEEsts” and pocket protector wearing “researchers” to sit at the same table to work on achieving what’s most important: RESULTS! That starts by defining the word “effective.”


There are too many factors to account for in an ad’s “true” effectiveness, and it may never truly be measurable. Weather, product quality, locations, and pricing offers are only a few items on an exhaustive list of factors that can impact results (See Exhibit 1). Even if we were able to factor these in, there’s one nearly unpredictable factor that cannot be measured: the consumer. How is it that a consumer will make one of the biggest purchases of their life (real estate) based on emotion, buy a car on a whim, and yet still be willing to drive 30 minutes out of their way and wait an extra ten minutes in line to save a dollar on a two-liter of Coke? Consumers’ decisions can be crazy and amazingly irrational. Even worse, when asked why or how they decided to buy, their responses appear impressively rational.

Let’s start with what “Ad Effectiveness” is not:

“Ad engagement.” My stomach churns at the mere mention of this idea. It pains me to discuss, but I must, because it’s an industry obsession. It has become more of a distraction over the last few years. Don’t get me wrong, I’m not saying it isn’t relevant, I’m just saying it shouldn’t be the focus. Ad engagement is not the end; it’s a means to an end. If we have advertisers that want to engage consumers, great, let’s engage them until their heads pop. Just don’t come running back to me three weeks from now saying “the ad isn’t getting the results we’re looking for.” We need to better understand which elements of engagement can help advertisers achieve their true advertising objectives. But like I said, we need to look at what’s important to the advertiser. We’ve come full circle to what I mentioned earlier: we need to help answer the questions of “why.”


Let’s get to the matter at hand:
The One Number to rule them all.
One Number to find them.
One Number to bring them all and in the darkness bind them.

And yes, the idea of having “one number” is as fanciful as J.R.R. Tolkien’s Lord of the Rings.


Many have been ambitious in their search for this “one number,” and we should all applaud those efforts. Unfortunately, the search can be likened to trying to find “Bigfoot.” I definitely feel their pain. It’s a start and a great way to get us moving in the right direction. The concern is that complex and inflexible methods breed opportunity for error. The more variables you include, the further from the truth you sometimes stray. Only the pertinent variables should be included. If ad engagement is a means to an end and you have end-metric results available, why would you also fold “ad engagement” metrics into the equation? This only gives additional weight to variables that may or may not have had any bearing on the outcome (See Exhibit 2 for an example). I opted not to include these variables when devising a formula.


We have to start with defining “Ad Effectiveness”:

After meeting with well over a hundred advertisers it occurred to me that complicated methods weren’t going to cut it and there had to be a simpler way. Let’s demystify and not mystify. Complexity doesn’t have a strong track record (Have you looked at your newspaper’s rate card recently?).

How do your advertisers define or measure ad effectiveness? It varies, doesn’t it? Our formula should vary as well. My goal was to find a simple and flexible formula. I also contend that the one number used for indexing or comparing should actually mean something. A number that doesn’t lend itself to meaning doesn’t sit well with advertisers (something like the 261% from Exhibit 2). When faced with presenting one number without meaning, the conversation can lead to a lot of skepticism and derail discussions on how to make improvements.

Ad effectiveness is only as good as your last ad or campaign, or as good as your closest competitors’. My experience has taught me that 90% to 95% of the time, advertisers are looking for one desired outcome: maximum response, whether to their store, their phones or their website…PERIOD. Ultimately, our one number will be indexed against or compared to the advertiser’s history and/or its competitive set. First, we have to understand what impacts maximum response. There are myriad things, but simply put: one, the percentage of people that saw the ad, and two, the percentage that intend to act. Assuming that your measurement tool is similar to the one we are using, this is what we have measured, and that’s where we need to start. Simply put, the “one number” starts by calculating the percentage of potential readers that plan to act. This can be done by multiplying the advertiser’s desired outcome percentage by the percentage of those that saw the ad (See Exhibits 3 and 4). This is the number you’ll use for comparing and indexing. This secret is pretty disappointing, isn’t it?
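The multiplication described above can be sketched in a few lines. This is a minimal illustration, not the author's implementation; the function name and the survey figures are my own, hypothetical choices.

```python
# Sketch of the Outcome Based Ad Effectiveness Percentage (OB-AEP):
# the share of potential readers who plan to act, computed as
# (% who saw the ad) x (% of them with the desired outcome/intent).

def ob_aep(saw_ad_pct, desired_outcome_pct):
    """Return the fraction of potential readers who plan to act."""
    return saw_ad_pct * desired_outcome_pct

# Hypothetical survey result: 60% recalled the ad, and 25% of readers
# reported the desired outcome (e.g., intent to visit the store).
print(ob_aep(0.60, 0.25))  # 0.15, i.e., 15% of potential readers plan to act
```

The desired-outcome percentage is interchangeable (store traffic, phone calls, website visits), which is what keeps the formula flexible.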

See Exhibit 5 for how the OB-AEP translates into a basis of comparison or an index and Exhibit 6a for an example. (If you are unfamiliar with an “index,” please see Exhibit 6b.)
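Since the exhibits themselves aren't reproduced here, a hedged sketch of what such an index might look like: I'm assuming the common base-100 convention (current OB-AEP divided by a baseline, times 100, so 100 means "on par"), and the history figures are invented for illustration.

```python
# Assumed base-100 indexing of an OB-AEP against the advertiser's own history
# (the same idea applies to a competitive-set baseline).

def ob_aep_index(current, baseline):
    """Index an OB-AEP against a baseline: 100 = on par with the baseline."""
    return round(current / baseline * 100)

history = [0.12, 0.15, 0.10, 0.13]        # hypothetical past OB-AEPs
baseline = sum(history) / len(history)    # average of past performance: 0.125
print(ob_aep_index(0.15, baseline))       # 120 -> 20% above the advertiser's norm
```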

I was intrigued with the idea of being able to show this Outcome Based Ad Effectiveness Percentage on a quadrant chart and discovered that it’s actually more interesting and complicated than you may think. As I started looking at the variations, something about the quadrant analysis didn’t make sense to me.

Here’s why looking at OB-AEP on a quadrant fails us:


“Quadrant” analysis came from the world of academia (See Exhibit 7). It was designed for strategic decision making and not analytics.

So I went down the path of trying to create my own OB-AEP “quadrant.” I placed ad recall percentage on the y axis and the advertiser’s desired outcome percentage along the x axis (See Exhibit 8). Again, the desired outcome percentage can be interchangeable based on the advertiser’s primary objective.

This quadrant should help us “define” what is or isn’t effective, as labeled in Exhibit 9. As you’ll notice, I have question marks in each quadrant. Your definition would be as good as mine. We know the upper right is good and the lower left is bad; everything else is somewhere in the middle, right?

Let’s plot some example ads (See Exhibit 10) that fall into the different quadrants.

As I said earlier, advertisers are looking for the one desired outcome, so we need to look at the OB-AEP (See Exhibit 3). Using this formula we are able to calculate OB-AEPs for the example ads plotted (See Exhibit 11).

You can now see why the quadrant fails us. In all four cases, ads that were equally effective fell into three separate quadrants. The OB-AEP doesn’t fit the quadrant mentality; in fact, its distribution, based on equal effectiveness, looks more like Exhibit 12. It goes against a lot of what I was taught, but using the quadrant method we are unable to visually demonstrate ad effectiveness. I’ll probably want to file a restraining order against my “Marketing Principles” professor from my college days after saying this, but for our purposes of demonstrating ad effectiveness, it’s time to ditch the archaic quadrant model.
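The quadrant failure is easy to reproduce with made-up numbers. Below, four hypothetical ads all have the same OB-AEP (12%), yet a recall-vs-outcome quadrant scatters them across three cells. The 30% quadrant boundaries are illustrative, not from the exhibits.

```python
# Four hypothetical ads with identical OB-AEP (recall x outcome = 0.12),
# classified into quadrants by illustrative 30% cutoffs on each axis.

ads = [("A", 0.60, 0.20), ("B", 0.20, 0.60), ("C", 0.40, 0.30), ("D", 0.30, 0.40)]
recall_cut = outcome_cut = 0.30

quadrants = {}
for name, recall, outcome in ads:
    quadrants[name] = (
        ("high" if recall >= recall_cut else "low") + " recall / "
        + ("high" if outcome >= outcome_cut else "low") + " outcome"
    )
    print(name, round(recall * outcome, 2), quadrants[name])

# Four equally effective ads land in three distinct quadrants:
print(len(set(quadrants.values())))  # 3
```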

We need to use the basic chart found in Exhibit 13. As you can start to see, the distribution pattern is actually very complicated for such a simple formula. Hang onto your berets and pocket protectors; it’s more complex than you think. The chart we are looking at isn’t actually two-dimensional; it’s three-dimensional (I’m thinking I just lost half the readers at this point).

Okay, for the few who haven’t given up on me, get this: when looking at the distribution of the OB-AEPs, it creates what is called a “hyperbolic paraboloid” (now I’m probably down to two readers). What is a “hyperbolic paraboloid,” and how can this be? (See Exhibit 14 for definitions and examples.)
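The saddle shape isn't a coincidence: z = x·y is the textbook hyperbolic paraboloid in disguise. Rotating the axes 45 degrees turns the product into a difference of squares, z = (u² − v²)/2, which is the standard saddle form. A quick numeric check (my own sketch, not from the exhibits):

```python
# Verify that z = x*y (recall times outcome) matches the hyperbolic
# paraboloid's standard form z = (u^2 - v^2)/2 under a 45-degree rotation,
# where u = (x + y)/sqrt(2) and v = (x - y)/sqrt(2).

import math
import random

random.seed(1)
for _ in range(5):
    x, y = random.random(), random.random()  # arbitrary recall/outcome values
    u = (x + y) / math.sqrt(2)               # rotated coordinate along x = y
    v = (x - y) / math.sqrt(2)               # rotated coordinate along x = -y
    assert math.isclose(x * y, (u**2 - v**2) / 2)

print("z = x*y matches the saddle form (u^2 - v^2)/2")
```

This is why the equal-effectiveness points trace hyperbolas on the flat chart: slices of a saddle at constant z are hyperbolas.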

The better question is: how else can we explain the odd progression found in Exhibit 13? We’ve been so used to looking at two-dimensional charts that we forgot we live in a three-dimensional world. The product of these two metrics (recall and the desired outcome) gives us the one number, OB-AEP, or our third axis: z!


Look what happens when we shift our thinking and look at this data three dimensionally (See Exhibits 15, 16 and 17 for progression).

What can we learn from this?

Increasing ad effectiveness isn’t necessarily a straight line, as you can tell by looking at the resulting hyperbolic paraboloid (See Exhibit 18). If done right, changes made to ads can yield exponential returns in effectiveness.

With this three dimensional model in mind, advertisers can do one of three things to improve effectiveness.


1. Increase ad recall percentage and maintain response percentages:
  • Straight-line improvement, which limits maximum ad effectiveness potential
  • Leans toward science
  • Usually requires additional monetary investment
2. Increase response percentages and maintain ad recall percentages:
  • Straight-line improvement, which limits maximum ad effectiveness potential
  • Leans toward art/design/message
  • Usually requires an increase in time and resources
3. Increase both recall and response percentages:
  • Exponential improvement, which removes the limits on maximum ad effectiveness potential
  • Incorporates art and science
  • Can get you the greatest return, but requires money, time and resources

See Exhibits 19 and 20 for a graphic example. Understanding what helps improve ad recall and desired outcomes/response (ad engagement, or the means) can help us get advertisers better results (the end).
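The three paths above can be sketched numerically. The percentages are illustrative, and `ob_aep` is my own shorthand for the formula; the point is that options 1 and 2 scale linearly while option 3 compounds, because OB-AEP is a product.

```python
# Compare the three improvement strategies on a hypothetical baseline ad
# with 40% recall and 25% response (OB-AEP = 0.10).

def ob_aep(recall, response):
    return recall * response

base = ob_aep(0.40, 0.25)               # baseline
opt1 = ob_aep(0.40 * 1.2, 0.25)         # option 1: recall up 20%
opt2 = ob_aep(0.40, 0.25 * 1.2)         # option 2: response up 20%
opt3 = ob_aep(0.40 * 1.2, 0.25 * 1.2)   # option 3: both up 20%

for label, value in [("base", base), ("recall only", opt1),
                     ("response only", opt2), ("both", opt3)]:
    print(f"{label}: {value:.3f} ({value / base - 1:+.0%} vs base)")
```

A 20% lift on either axis alone yields a 20% lift in effectiveness; the same 20% lift on both axes yields 44%, which is the "exponential return" the saddle surface implies.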

The beauty of this formula and model is not just its simplicity but its flexibility. We can now use the OB-AEP to compare ads against the advertiser’s competitive set using their desired outcome (whether it be potential store traffic, actual purchase intent, intent to look for more information, intent to visit the website, increased positive feelings about the company, etc.).

Of the two readers I have left, I’m guessing at least one is wondering: “What about the 5% to 10% that don’t have just the one desired outcome? Is a formula available for them?” Yes, but this added layer of complexity removes our ability to succinctly define the one number. I try to steer clear of this method and advise others against using it. But for those few instances where the advertiser insists on including multiple variables like brand perception, likeability and traffic-driving directives, we do have an index formula available. (Equal or custom weights can be applied; e.g., 70% of my ad was for traffic and 30% was for brand.)

See Exhibits 21 through 24 for formulas and examples.
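Since Exhibits 21 through 24 aren't reproduced here, the exact formulas are unknown to me; the sketch below is one plausible weighting scheme (blend the outcome percentages by the advertiser's stated weights before applying recall), using the 70/30 traffic-versus-brand split from the example above. All figures are hypothetical.

```python
# Assumed weighted variant of OB-AEP for advertisers with multiple desired
# outcomes: blend the outcome percentages by their stated weights, then
# multiply by recall. The weighting scheme is an assumption, not the
# author's published formula.

def weighted_ob_aep(recall, outcomes, weights):
    """Blend several desired-outcome percentages before applying recall."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    blended = sum(o * w for o, w in zip(outcomes, weights))
    return recall * blended

# Hypothetical ad: 50% recall; 30% intend to visit (traffic), 10% report
# improved brand perception; advertiser weights traffic 70%, brand 30%.
print(round(weighted_ob_aep(0.50, [0.30, 0.10], [0.70, 0.30]), 3))  # 0.12
```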

When and where is it best to use any of these “Ad Effectiveness Indices?”

1. For internal communications, as a warning system or a simple gauge. This can be a quick and dirty way to summarize how effective an ad was. Hint: don’t let this number be the focal point of sales presentations. Too much time spent on discussing math and methodologies takes the conversation further away from discussing how we can help improve results or answering the “why.”


2. With advertisers that understand your research tool for measuring ad effectiveness, understand the fundamentals of advertising, are ready for a more sophisticated way of gauging effectiveness, or are looking for the same utility we are on an internal basis.

Conclusions:

Is this the “one number”? No, but it’s one way to calculate it, and hopefully a sound and flexible one. Outside impacting factors and consumer irrationality are still cumbersome areas that can impact results to varying degrees, at different times, from business to business, and from market to market.

Variables that may or may not have impacted the end results are not included, but they do need to be understood to help direct our decisions for helping advertisers increase effectiveness (size in print, length in digital, use of color, brand awareness, benefits of products/services, etc.). One last bit of advice: don’t let “Ad Effectiveness Indices” become a crutch. Research is not absolute, but it can give us better direction. The public is the true judge.


I leave this in your hands to adopt, reject, or enhance. Remember, ad effectiveness is art and science.

For more information on the “Outcome Based Ad Effectiveness Index,” media and marketing industry observations, other actionable research methods, advertising and editorial insights, or book reviews, you can visit this blog for monthly updates.

I wish you the best of luck in your research and sales endeavors.