Saturday, June 19, 2010

The Improvement Gap Analysis & Method

The need for better satisfaction surveys

My goal was to create satisfaction surveys that would yield actionable results rather than just the old “attaboy” mentality. We see satisfaction studies everywhere: studies for employees (from HR on work, managers and diversity, and from department to department on support) and customers (on pricing, products, quality, services, etc.). It’s ridiculous. For more than ten years I’ve seen these surveys drawn up, delivered and analyzed (keep in mind, I use the word analyzed loosely here). I’m not sure a single one ever drove a hard decision that had a true impact.

In my first blog post about Newspaper Advertising Effectiveness I mentioned something about asking the right questions. Never has it been more important to ask the right questions when it comes to driving real strategic changes, whether it’s for products, prices, employees, etc. The old line of questioning gives leadership the opportunity to choose the areas they want to focus on. Example: an employee survey shows a decline in satisfaction with company communications/gatherings over the last year. What do leaders do? They choose to hold more company picnics, etc. Then, the next year, we see an increase in satisfaction with company communications/gatherings, and we celebrate the improvement. Makes sense, doesn’t it? Hang on. How do we know that non-work-related activities are even important to employees or, worse, whether they have any bearing on an employee’s overall satisfaction or loyalty? Current satisfaction surveys are easy and convenient (because they’ve always been the same), but for the most part they can be misguided. I’ve been quietly pushing a survey method that embraces a new line of thinking. When I look at current surveys, they already seem to have solutions in mind. I can’t tell you how many times I’ve seen surveys designed with the goal of rationalizing a decision that had already been made. It’s time to take a hard look in the mirror, and that starts by asking the “right” questions.

Note on satisfaction surveys and their intended purpose:

You don’t need satisfaction surveys to figure out that you have a poor product, service, manager or benefits, or that you have loyalty or growth issues. We need a survey to help us understand what’s causing those issues. I can share improvements in satisfaction scores until the cows come home, but if we’re losing customers and employees left and right, those results clearly don’t mean a thing. This is another indicator that we’re not asking the “right” questions.

Where we should start:

First: Ask respondents how important different factors are to them. You can include an open-ended response area to capture any factors you may not have included/listed.

Second: Ask respondents how satisfied they are with the execution/delivery of these same factors. Responses on importance and satisfaction allow us to plot each factor as a point on a grid (as seen in Exhibit 1). A zero-to-ten scale is used throughout my examples, but any scale should work. And, no, I’m not developing a quadrant analysis (see the Ad Effectiveness Index post for reasons why quadrants are not good for analyses).


Without importance measures, we always end up improving whatever has the lowest satisfaction. What we need to know is how important each factor is before we start developing improvement plans.
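To make the grid concrete, here’s a minimal sketch of what plotting Exhibit 1-style points might look like. The factor names and averages are made up, and the tooling choices (Python with matplotlib, 0-10 axes) are mine, not part of the original method.

    # Hypothetical example: average importance and satisfaction per factor,
    # plotted on an importance-satisfaction grid (both on 0-10 scales).
    import matplotlib.pyplot as plt

    factors = {
        # factor name: (avg importance, avg satisfaction) -- made-up numbers
        "Salary": (9.1, 6.8),
        "Manager": (8.4, 5.9),
        "Recognition": (5.2, 8.7),
        "Work space": (4.0, 4.5),
    }

    imp = [v[0] for v in factors.values()]
    sat = [v[1] for v in factors.values()]

    plt.scatter(imp, sat)
    for name, (x, y) in factors.items():
        plt.annotate(name, (x, y))

    plt.xlim(0, 10)
    plt.ylim(0, 10)
    plt.xlabel("Importance (0-10)")
    plt.ylabel("Satisfaction (0-10)")
    plt.title("Importance vs. satisfaction by factor")
    plt.show()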

Third: Establish a standard or goal (here’s where the magic begins). I typically ask people: if we’re using a 0-to-10 importance scale and something is rated a 10 in importance, what should we be aiming for in satisfaction? Easy, a 10. Now, what should we be aiming for in satisfaction if the importance rating is a 0? Here’s where I typically get one of two answers.

1.) Some would say a 10. We should be striving for a 10 in satisfaction no matter how important something is (see Exhibit 2). If this were the case, we wouldn’t need to ask the importance question. If we have a lot of things that are not very important and work really hard to knock them out of the park, aren’t we taking time and resources away from those things that matter most and need improvement? I’ve taken a look around me lately and I don’t have a lot of help to deliver 10s in everything, so I need to prioritize.


In this case Factor R would receive top priority for improvement, but it’s the second least important factor. I can understand setting a high standard, but a sweeping high standard without consideration for importance diminishes our ability to prioritize, especially with limited resources to improve areas of satisfaction.

2.) Some would say a 0 in satisfaction is acceptable for unimportant factors. If it’s not important, we don’t need to spend any time satisfying customers. The problem is we never want bad satisfaction, even if that factor isn’t as important. Plus, the factor is still important to some respondents, just not to the majority when we look at average scores (see Exhibit 3).


When we set the bar too low or treat dissatisfaction as acceptable, we open the door to neglecting the minority and delivering bad service, or no service at all, which can come back to haunt us in the long run.

My compromise is the middle ground. If something is not important at all and we’re going to provide the product or service anyway, we should still, at the very least, strive to deliver moderate/mediocre satisfaction (a 5 in this case). By setting the bottom mark at moderate satisfaction we set a tone that satisfaction is always important, but only to a point for different factors (see Exhibit 4).
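One way to encode this compromise is a simple goal function that interpolates from (importance 0, satisfaction 5) up to (importance 10, satisfaction 10). This is just a sketch on the 0-10 scales used in my examples; the function name and defaults are mine.

    def satisfaction_goal(importance, imp_max=10.0, sat_max=10.0, sat_floor=5.0):
        """Ideal satisfaction for a given importance rating: a straight line
        from (0, sat_floor) to (imp_max, sat_max)."""
        return sat_floor + (sat_max - sat_floor) * (importance / imp_max)

    satisfaction_goal(0)    # 5.0  -> even unimportant factors get moderate satisfaction
    satisfaction_goal(6)    # 8.0
    satisfaction_goal(10)   # 10.0 -> the most important factors deserve a 10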


Identifying improvement gaps:

Great, now we have a line. A maximum of 10 in importance equals a 10 in satisfaction, and a minimum of 0 in importance equals a 5 in satisfaction. I call this the “ideal level of satisfaction corresponding to the level of importance” line, or the “Satisfaction Goal Line.” If something is an X in importance, we now have an idea of what we would like to, or should, be achieving in satisfaction.
So why is this line important? It gives us a baseline for identifying areas of potential improvement. The further satisfaction results fall from the line, the greater the need for improvement (see Exhibit 5 for examples of variations in gaps).


Calculating and using gaps:

We now need to calculate the distance from the actual satisfaction response to the ideal level of satisfaction (see Exhibit 6 for the formula).
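Exhibit 6 isn’t reproduced here, so treat the following as my reading of the calculation based on the goal line just described: the gap is the actual satisfaction minus the ideal satisfaction for that factor’s importance, where the ideal is the floor plus importance divided by the slope (2 on equal 0-10 scales). The function name and defaults are my own shorthand, not the original formula.

    def improvement_gap(importance, satisfaction, slope=2.0, sat_floor=5.0):
        """Gap between actual satisfaction and the Satisfaction Goal Line.

        goal = sat_floor + importance / slope
        A negative gap falls short of the goal; a positive gap exceeds it.
        """
        goal = sat_floor + importance / slope
        return satisfaction - goal

    # Hypothetical factors on 0-10 scales
    improvement_gap(8, 6)   # 6 - (5 + 8/2) = -3.0 -> needs improvement
    improvement_gap(2, 9)   # 9 - (5 + 2/2) = +3.0 -> possibly over-servicing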


Your ability to calculate the slope of a line makes all the difference here. If your importance and satisfaction scales are equal and you use my maximum and minimum standards, the slope will always equal two; otherwise, you’ll need to calculate the slope yourself if you use a different line or unequal scales.

Here’s how the formula varies across different scales of equal range in importance and satisfaction (see Exhibit 7).
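Exhibit 7 isn’t shown either, but here’s how I’d generalize the slope for other scales, assuming the satisfaction floor always sits at the scale’s midpoint. The helper name is hypothetical; the point is simply that equal-range scales keep the slope at 2, while unequal scales need their own slope.

    def goal_line_slope(imp_min, imp_max, sat_min, sat_max):
        """Slope of the Satisfaction Goal Line for arbitrary scales, assuming the
        floor sits at the satisfaction midpoint.

        The gap then becomes: satisfaction - (sat_floor + (importance - imp_min) / slope)
        """
        sat_floor = (sat_min + sat_max) / 2.0
        return (imp_max - imp_min) / (sat_max - sat_floor)

    goal_line_slope(0, 10, 0, 10)   # 2.0 -> equal 0-10 scales
    goal_line_slope(1, 5, 1, 5)     # 2.0 -> equal 1-5 scales
    goal_line_slope(0, 10, 1, 5)    # 5.0 -> unequal scales need their own slope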


Now what? We can rank the importance scores, we can also rank satisfaction, but now we can rank gaps (the largest areas for improvement). Gaps are just a different lens for looking at satisfaction scores. This gives us an idea of the areas that need the most improvement and helps us prioritize our efforts. BUT the rank order isn’t ironclad for prioritizing. As I mentioned before, levels of importance need to be considered. Just because we need to improve something doesn’t necessarily mean we should make it priority one. The largest gap areas highest in importance should be our focus and priorities. If there are “positive” gaps, this may actually mean we are unnecessarily or over-servicing customers. Resources and efforts can be reduced in these positive gap areas without significant harm to our overall service and satisfaction.
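Here’s a quick sketch of that ranking step, reusing the improvement_gap sketch above with made-up factor averages. Sorting by gap surfaces the biggest shortfalls, checking importance alongside the gap keeps the priorities honest, and positive gaps flag possible over-servicing.

    # Hypothetical factor averages: (importance, satisfaction) on 0-10 scales
    scores = {
        "Salary":      (9.1, 6.8),
        "Manager":     (8.4, 5.9),
        "Recognition": (5.2, 8.7),
        "Work space":  (4.0, 4.5),
    }

    gaps = {name: improvement_gap(imp, sat) for name, (imp, sat) in scores.items()}

    # Largest shortfalls (most negative gaps) first
    for name, gap in sorted(gaps.items(), key=lambda kv: kv[1]):
        imp, sat = scores[name]
        note = "over-servicing?" if gap > 0 else ""
        print(f"{name:12} importance={imp:4.1f} satisfaction={sat:4.1f} gap={gap:+5.1f} {note}")

In this made-up run, Manager carries both the largest shortfall and high importance, so it outranks Work space even though their gaps are similar, while Recognition’s positive gap suggests effort could be pulled back.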

Some idealists would say that if we can increase the levels of importance for the areas we already do well in, we can improve overall satisfaction. I say good luck. Changing people’s perceptions or values is a lot harder than changing satisfaction levels. Let’s focus on increasing satisfaction in the largest gap areas. Better yet, choose the largest gap areas that are greatest in importance. This should yield a greater return in overall satisfaction.

Does the 0-10 scale need to be used? No. I prefer 0 to 10, but just about any scale should work. With some research platforms or delivery mechanisms, I’m forced to use a 1-10 scale. It’s not ideal, but it’s still directionally correct. I would recommend a minimum of a 5-point scale; the more points you can get away with using, the better direction the results should yield. Also try to use a scale with a true midpoint for respondents: 1-10 doesn’t have a whole-number midpoint (5.5), whereas 0-10 does (5).

Once large gap areas, high in importance, have been identified, the group needs to develop a strategic plan for reducing those gaps. Selecting one to five areas is ideal; once you start pushing closer to ten, you may find you’ve bitten off more than you can chew. The next task is a plan for moving the needle (closing the gap). Once a plan has been developed and put into action, we can measure whether or not the changes moved that needle. Again, the aim isn’t to get better satisfaction results on the next survey; it’s to improve actual satisfaction, which is better measured through increased productivity, reduced turnover and revenue growth. For media companies, an increase in audience size and frequency is the ultimate metric.

Example of a Strategic Implementation of the Improvement Gap Method

I wanted to give you an idea of what this process looks like in reality. The following case is a hypothetical example and does not reflect actual survey questions or responses.

1. Establishing an Objective: Identify areas or threats for possible employee dissatisfaction or potential turnover.

2. Developing the Questions/Survey:






How IMPORTANT to you are the following when it comes to working for Company X:
(Using an anchor scale where 0 = Not at all important and 10 = Extremely important)

• Location
• Company reputation
• Company growth
• Company structure
• Company leadership
• Company communications
• Health care benefits
• Vacation
• Hours
• Salary
• Manager
• Coworkers
• Career growth/development
• Work load
• Job autonomy
• Job security
• Work space
• Recognition
• Resources/Technology provided
• Others not listed that are “Extremely important” to you:

How SATISFIED are you with the following when it comes to working for Company X:
(Using an anchor scale where 0 = Not at all satisfied and 10 = Extremely satisfied)

• Location
• Company reputation
• Company growth
• Company structure
• Company leadership
• Company communications
• Health care benefits
• Vacation
• Hours
• Salary
• Manager
• Coworkers
• Career growth/development
• Work load
• Job autonomy
• Job security
• Work space
• Recognition
• Resources/Technology provided
• If you listed other areas of “Extreme importance,” how satisfied are you with them?

Include basic questions for:
• Job function
• Department
• Whether or not they are a manager
• Years with company
• Full-time vs. part-time

and if possible/necessary:
• Age
• Gender
• Race/Ethnicity
• Plus others

3. Tabulating results and charts (see Exhibits 8 and 9 for examples)
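As a sketch of the tabulation step, assume the raw responses are stored one row per respondent per factor (a layout I’m inventing for illustration); averaging by factor produces the importance and satisfaction scores that feed the grid and the gaps. pandas is my choice here, not part of the original write-up.

    import pandas as pd

    # Hypothetical raw responses: one row per respondent per factor
    responses = pd.DataFrame({
        "respondent":   [1, 1, 2, 2, 3, 3],
        "factor":       ["Salary", "Manager"] * 3,
        "importance":   [9, 8, 10, 9, 8, 8],
        "satisfaction": [7, 6, 6, 5, 7, 7],
        "department":   ["News", "News", "Ads", "Ads", "News", "News"],
    })

    # Average importance and satisfaction per factor feed the grid and gap calculations
    summary = responses.groupby("factor")[["importance", "satisfaction"]].mean()
    print(summary)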




Now, if we were using the “old” method we would have only looked at satisfaction scores (as seen ranked in Exhibit 10).


In some cases, management would have pointed to career growth/development as the area where they needed to improve, because it was the only factor that fell below a 5 (the midpoint) in satisfaction. Then they would have patted themselves on the back for such a strong overall satisfaction rating.

Others would have looked at the same satisfaction ranking and said we need to work on the three lowest scores. In this case, that would have been career growth/development, company communications and work space (see Exhibit 11). This is better than the previous mentality, but still misguided.


If we only looked at the gaps, this is what prioritizing would look like (see Exhibit 12).


As you can see, there are already differences, but we still need to consider importance.

4. Analyzing the results: Use gaps and importance rankings for evaluating and prioritizing areas of improvement (see Exhibit 13). It’s also wise to look at gaps across different departments, job descriptions, length of service, etc. [See a later blog post for improvements to the remainder of this post, or click here: Improvement Gap Analysis Amendment]

Again, as you can see, the focus changes when looking at the results through multiple lenses.
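To look at gaps across departments (or job functions, tenure, and so on), the same gap calculation can be applied row by row and then averaged by whatever slice you care about. This continues the hypothetical responses table and the improvement_gap sketch from earlier in the post.

    # Gap per response row, then averaged by department and factor
    responses["gap"] = responses.apply(
        lambda r: improvement_gap(r["importance"], r["satisfaction"]), axis=1
    )

    by_dept = (
        responses.groupby(["department", "factor"])["gap"]
        .mean()
        .sort_values()   # most negative (largest shortfall) first
    )
    print(by_dept)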

5. Developing a plan of action: After identifying the areas of focus highlighted in Exhibit 13, we develop a plan for shrinking/reducing those gaps. In this instance, resources and/or budgets allocated to “recognition” can probably be reduced and redirected to help improve other areas (salary, bonus or resources). Manager evaluations and training would need serious consideration in this case. Specific strategies may need to be addressed if there are differences in scores and rankings across departments, job functions, etc.

6. Continual tracking: After the strategic plan has been implemented, we should go back to the field to measure progress and help identify any new directives the company should take. When we do this, we have established an ongoing program for measuring and directing improvements, one where we continually learn and then adapt. This learn-adapt, learn-adapt, learn-adapt or “LA, LA, LA Concept” was coined by one of the greatest advertising copywriters and author of Tested Advertising Methods, John Caples.

I can attest to seeing this method used successfully for internal purposes: outlining strategic direction, finding product improvements and identifying new product opportunities for my company as well as others we work with. We’ve also found it helpful in uncovering areas of focus for advertising and branding efforts.

In my next posts, I’m going to review Daniel Pink’s Drive: The Surprising Truth About What Motivates Us and include a short write-up on the idea of brand, or branding, in the advertising world.

Additional Improvement Gap Suggestions:
Measuring levels of “importance” vs. “interest”


I recently came across a survey conducted many years ago for our Sports department. At first glance, it looked like the gap method described here. It asked readers how interested they were in different sports and then asked how well we were covering those sports (using satisfaction). The points were even plotted on a grid. Then the house of cards started to collapse. Three potential mistakes were made. First, the analysis was run on adults in general, not on our customers/readers. Second, after the points were plotted on the grid, a “quadrant analysis” was completed (see the Newspaper Advertising Effectiveness blog post for the issues with using quadrants for analyses). Third, while asking how interested readers were in different sports seems like the right direction, there was never a connection to our product. Confused?

As an example, let’s say NASCAR finished at the top of the list in interest. Yes, respondents were interested and their satisfaction was low, but was it important that we cover it in our product? Come to find out, seven years and probably more than 300 million pages in printing costs later, IT ISN’T important. Ouch! For one, I’m not sure our readers ever showed high levels of interest in NASCAR, but even if they did, it never occurred to anyone that those readers might prefer getting their NASCAR coverage from another source. Even worse, if it was non-readers who felt this way, we ended up changing our product to appeal to them. A lot of effort went into trying to acquire non-newspaper readers at the cost of losing our most loyal readers, who wanted better coverage of another sport. Potentially misguided results are what you’ll most likely get from general “interest” questions, although with some work and testing there may be a way to make them useful (maintaining some flexibility is always important). I’m going to stick with importance for now.