
Strategy | Feb 13, 2018

Benchmarking Airline Customer Experience: 7 Steps to Building a Framework

Allie Rubenstein

Recently, I worked with a small Credera team to develop a framework to benchmark the customer experience across major U.S. airlines. We decided not only to create a framework, but also to pressure-test it by flying and rating our experiences on different airlines. After reflecting on this exercise, we landed on seven key, repeatable steps for developing an effective airline customer experience framework:

1. Decide Which Competitors to Include and Exclude

We focused on two criteria when selecting airlines to compare: airlines that are 1) major carriers and 2) U.S.-based. We selected nine airlines that fit these parameters.

2. Understand the Entire Customer Journey

A customer’s journey with an airline starts well before they set foot on the aircraft. Our customer journey map started with the booking experience and ended with the post-trip communications a passenger receives from the airline. We used this journey map to ensure the completeness of our analysis. Our analysis focused on six primary ‘journeys,’ each of which includes additional ‘subjourneys’ that represent specific elements of the travel experience.

3. Prioritize the Most Important Journey Elements

Not all customer journey elements are created equal. For example, the average customer places a heavier emphasis on customer service quality than on lavatory cleanliness. With this in mind, we used extensive research and customer survey data to assess the relative importance of each airline element. Using this research and survey data, we weighted each element of the customer journey accordingly to increase the accuracy of our final scoring.

4. Consider Customer Types

The travel industry caters to two types of travelers—business and leisure—each of which has its own experience priorities. While business travelers place a premium on reliable in-flight WiFi and loyalty perks, leisure travelers typically value in-flight entertainment and variety of destination options. We identified the customer experience elements each of these traveler types deems important and weighted these elements accordingly. We weighted elements valuable to both business and leisure travelers most heavily.

5. Incorporate Qualitative Feedback

While most of our analysis used numeric scales to assess each framework component, quantitative analysis can only go so far. We supplemented our numeric scoring with qualitative customer feedback, including informal passenger interviews on specific elements of the airline experience (e.g., booking, boarding process). These interviews provided valuable insight into the customer experience, added to the validity of our assessment, and brought the airline customer experience to life.

6. Conduct Analysis

All team members booked several flights, each on a different airline. The assessment began when we started searching for a flight. Then, with notepads in hand, we boarded planes bound for all parts of the country. We assessed each element of the customer journey on a numerical scale, conducted interviews with fellow passengers, and took copious notes.

7. Consolidate and Calibrate Scores

After our travel adventures, the team met to compare results and share our own travel stories. Throughout the review process, each team member took the time to explain the logic behind the scores for each airline element. Discussing the rationale behind the scores was one of the most important steps of the consolidation process, as it promoted consistency in scoring across all evaluators.

As we evaluated each airline, we rated each element of the customer experience (e.g., variety of food and beverage options) on a scale of one to five (lowest to highest). Using these ratings, we developed scores for each subjourney (e.g., food and beverage). These subjourney scores were weighted and consolidated to create scores for each of the six journeys (e.g., boarding and in-flight). Then we calculated a composite score for each airline based on the weighted scores for each journey. We added rigor to our scoring by incorporating evaluation criteria for each element; for example, for “variety of food available for purchase,” a one indicates there is no food for purchase, a three signifies there are snack options for purchase, and a five means meals are available for purchase.
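The rollup described above can be sketched in a few lines of code. This is an illustrative sketch only: the element names, weights, and scores below are hypothetical, not the actual values or categories from our framework.

```python
# Hypothetical sketch of the scoring rollup: element ratings (1-5)
# roll up into weighted subjourney scores, subjourney scores into
# journey scores, and journey scores into a composite airline score.
# All names, weights, and ratings are illustrative.

def weighted_score(scores: dict, weights: dict) -> float:
    """Weighted average of scores; weights need not sum to 1."""
    total_weight = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total_weight

# Element ratings for one airline's "food and beverage" subjourney
element_ratings = {"variety_for_purchase": 3, "quality": 4, "service_speed": 5}
element_weights = {"variety_for_purchase": 0.5, "quality": 0.3, "service_speed": 0.2}

subjourney_score = weighted_score(element_ratings, element_weights)  # 3.7

# Subjourney scores roll up into journey scores the same way, and
# journey scores into the airline's composite score.
journey_scores = {"booking": 4.2, "boarding_and_in_flight": subjourney_score}
journey_weights = {"booking": 0.4, "boarding_and_in_flight": 0.6}

composite_score = weighted_score(journey_scores, journey_weights)  # 3.9
```

Because the same weighted-average function applies at every level, adjusting a weight (for example, to reflect business-traveler priorities) automatically flows through to the composite score.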

One of the strengths of our framework was that it featured both quantitative and qualitative analysis, making for a more holistic evaluation. At the same time, we identified an improvement area in the need to incorporate concrete metrics outside of the subjective travel experience (e.g., on-time arrival statistics, number of kiosks, etc.) into the framework. Noting this improvement and others like it, we iterated upon the framework and ultimately provided our client with an effective tool to benchmark the customer experience.

Interested in finding out how your customer experience stacks up to the competition? Reach out to us here to learn how we can help!