Trust Is the New Black (of brand metrics)

How does your brand measure the health of your relationship with your customers? If you answered: “Satisfaction,” then you are not keeping up with the latest in metrics fashion. Trust is the new black of brand metrics, and here’s why.

Digital thought leaders are calling the times in which we live the Relationship Era of marketing. They reference the need for Trust Agent networks. They insist that brands must now communicate touchpoint by touchpoint, on a personalized, one-to-one basis, building trust with every encounter.

This is valid. This is not, however, new — at least to the world outside of marketing.

Building relationships the good old-fashioned way

As marketing continues the transition from mass, single-message communication into the realm of customer-by-customer relationship-building, it is effectively embracing the discipline known in pre-digital times as Selling. Effective selling has always been about forging enduring, loyal relationships, and it relies on relationship-building principles established long before the advent of digital life.

At its heart, relationship-building is no more than doing the right thing by people. When you behave that way over time, people grow to trust you. When they trust you, they come back to you and continue to give you their business. You have proven (especially on occasions when you have done the right thing counter to your own interests) that you are — among other things — honest, fair, respectful, punctual, responsive, diligent, and responsible. What you have been is the best indicator of what you likely will be. You have earned trust. You will enjoy the rewards trust brings.

If brands are to adopt relationship-building as their digital marketing strategy, the brand itself must behave like a person – like a sales person working to earn the trust of a customer. The brand must deliver on its promises as diligently and consistently as the sales person (and the organization behind the sales person) must deliver on his or her promises.

The socially-engaged customer and viral consequences

Once the brand stops behaving in a manner that engenders trust and starts exhibiting behaviors that undermine trust, customers will become less loyal and more receptive to the enticements of competitors. When (in this viral, digital age) a brand experience does not live up to a socially engaged customer’s expectations, that customer’s Facebook friends, Twitter followers, blog readers, and fellow chat and forum participants all get to know about it within seconds. That’s not exactly healthy for the brand. A bad experience is bad enough, but if the experience constitutes a betrayal, then the viral consequences for the brand can be dire, sometimes catastrophic. Ask Domino’s or Taco Bell or Tiger or any other brand whose behavior has contravened everything its customers or fans have come to expect.

Nielsen’s BuzzMetrics division monitors social buzz (or The Conversation, as they call it) for major brands. Nielsen reports that the #1 “buzz” topics for consumer-serving brands (fast-food companies, cable companies, car companies, for example) are actually the good old (pre-digital) issues – bathroom hygiene in fast-food restaurants, customer service problems with cable or telephone companies, product defects for the car companies.

Nielsen advises its clients to make sure that they re-engineer their business processes in a way that allows them to take (especially) the negative feedback from the buzz and act on it. The brand needs to acknowledge the problem, thank the customer for bringing it to their attention, accept responsibility for it, fix it, communicate progress back to its customers, and make sure it doesn’t happen again. Just as a real person must do to salvage or repair a personal relationship they have mishandled.

Why trust is the new black

Relationships can survive isolated instances where one party fails to deliver on its promises. Brands cannot expect to sustain relationships with their customers, however, if they fail to correct problems and let news of that failure be broadcast across the social media landscape by a growing cadre of “betrayed” customers.

And the new millennium has already seen a parade of betrayals by business, especially and most recently by iconic brands from Wall Street. The Edelman Trust Barometer, which measures the public’s trust in a variety of areas, is barely above the lowest point in the survey’s 10-year history. The US public’s trust in the business sector is lower than that of almost every national public surveyed. This is important because trust drives the kind of consumer/customer behavior businesses want to see (repeat purchase, recommendation, stock purchase) and distrust drives exactly what they don’t want (churn, negative recommendation, and stock dumping).

As brands embrace Relationship Building as their digital marketing strategy, therefore, they would be wise to study, embrace, and adapt the principles developed in the analog world of selling and behave like a real person. With that understanding, they can use trust as the metric by which they measure the health of their customer experience across every touchpoint. That is why Trust is the new Black.

–Roger Beynon, Chief Strategy Officer

What’s In the Placement of a Consumer Survey? Everything.

Econsultancy recently posted a comprehensive review of best practices for e-commerce consumer surveys by Tim Leighton-Boyce.  It’s an excellent piece.  The writer is obviously a practitioner, since the advice reflects knowledge that can only have come the hard way.  One piece of that advice, however, is fundamentally flawed.  In the section “Where to place the survey”, Tim writes:

“Although there are great systems for allowing feedback surveys on every page of your site, I’m not in favour of using any form of pop-up which might distract your visitor from whatever they want to do.

Instead, my favourite type of e-commerce survey is one embedded in the order confirmation page. I like these because there is zero risk of distracting someone from placing an order since the survey is only offered once the sale is complete.”

Tim anticipates that this will raise objections, so he adds:

“The obvious objection is that this means you don’t get any survey entries from people who did not intend to buy or were unable to buy. That’s a common-sense point. But in reality it doesn’t seem to be a problem.

… In real life it turns out that people who have problems buying can be remarkably tenacious. Some will eventually find what they want, or make it through a tricky checkout, and then let you know all about the problems when they get to the survey comments form.”

It may, indeed, be a valid assumption that problems experienced by those who complete transactions are the same as the problems of those who abandon the site or fail to complete a transaction.  But how do you know for sure?  More importantly, how do you quantify the impact of those problems?  How do you measure the revenue loss they inflict?  How do you determine their root cause?  How do you set priorities in taking remedial action?

Failure data may be the most valuable kind you collect

As convenient as it may appear, surveying only those who emerge from the confirmation page necessarily skews the sample and presents a distorted picture of the user experience, especially the experience of visitors who fail.  Visit failure data may be the most valuable data a site can investigate because beneath that cumulative experience lie the root causes of conversion impediments.  Intercepting visitors at the start of their journey through your website ensures that you include those who fail as well as those who transact.  For most e-commerce sites, the proportion of those who do not purchase far exceeds those who do.  The process of identifying whom the site fails, where it fails them, and why it fails them offers the most direct route to continuous improvement.
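
What does an entry-point intercept look like in practice?  Here is a minimal sketch in Python, assuming a hypothetical session identifier and sampling rate (neither comes from the Econsultancy piece); the essential property is that the invitation decision is made when the visit begins, before its outcome is known:

```python
import hashlib

def should_invite(session_id: str, sample_rate: float = 0.05) -> bool:
    """Decide at the START of a visit whether to invite this session to
    the survey, so eventual converters and non-converters are sampled
    on equal terms."""
    # Hashing the session id gives a deterministic, evenly spread draw:
    # the same session always gets the same answer, and roughly
    # `sample_rate` of all sessions are invited.
    digest = hashlib.sha256(session_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform in [0, 1]
    return bucket < sample_rate

# Invite ~5% of arriving sessions, whether they go on to bounce,
# browse, abandon, or purchase.
for sid in ("a1b2c3", "d4e5f6", "0789ab"):
    print(sid, should_invite(sid))
```

Because the draw happens on arrival, the resulting sample contains failed visits in roughly the proportion in which they occur, which is exactly what a confirmation-page survey cannot offer.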

Behavioral and attitudinal feedback from hundreds of thousands of survey respondents over the last decade reveals patterns applicable to any e-commerce site.  This is what they look like:

The patterns start with a notional depiction of shopper behavior – the sequence of thoughts or actions shoppers evidence when shopping online.

These steps can be grouped into three basic user decision points:

Suitability – is this site likely to meet my needs?

Findability – how easily can I make my way to the product or information I seek?

Buyability – how easily can I reach certitude and then complete the transaction?

Visitors who fall out of the funnel at the site suitability level represent (in our classification) the problem of Bounce.

Visitors who fall out of the funnel at the findability level represent Opportunity Loss.

Visitors who fall out of the funnel at the buyability level represent the Abandonment problem.  (One way to derive these three buckets from session data is sketched below.)
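
A minimal sketch of that derivation, assuming hypothetical session fields (pages viewed, whether a product page or a purchase was reached) that stand in for whatever your analytics actually records:

```python
from dataclasses import dataclass

@dataclass
class Session:
    pages_viewed: int
    reached_product_page: bool  # found the product or information sought
    purchased: bool

def classify(s: Session) -> str:
    """Map a session to 'converted' or one of the three failure modes."""
    if s.purchased:
        return "converted"
    if s.pages_viewed <= 1:
        return "bounce"            # failed at the suitability level
    if not s.reached_product_page:
        return "opportunity loss"  # failed at the findability level
    return "abandonment"           # failed at the buyability level

print(classify(Session(1, False, False)))  # bounce
print(classify(Session(7, False, False)))  # opportunity loss
print(classify(Session(9, True, False)))   # abandonment
```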

These problems are what site owners must identify, quantify, analyze, and address if they are to systematically attack visit failure and its impact on conversion.  Sampling the visitor population only from those who successfully navigate their way through to the confirmation page makes this process inordinately difficult, if not impossible.

Continuous improvement is not just a tool; it is a philosophy and a strategy.  If a site is going to commit to a continuous improvement process, it should ask visitors to participate from the outset of their journey, so that it captures the full spectrum of site experiences and outcomes.  That’s where analysis starts and systematic improvement begins.

–Roger Beynon, CSO

What’s In a Metric? Well, it Depends.

A client of ours has undertaken a Findability initiative.  A site’s “findability” determines the ease with which visitors can get from the page on which they arrive at the site to the page(s) containing the products or information they seek.

Funding for the project is conditional upon each phase proving its impact and value.  How to measure that impact, therefore, has become a focal point of debate and, naturally enough, contention.  Yes, we’re all familiar with the adage that we can make numbers say whatever we wish, but the complications go much further than that.  Over 50% of all failed visits to e-commerce websites happen because of findability issues.  Findability issues can encompass the site’s architecture and navigation scheme, its taxonomy, or its site search and meta-tagging strategy.  Whatever the cause, poor findability frequently impedes the visitor’s quest to find what they seek.  If they can’t find it, they can’t buy it.  So there’s a direct hit on conversion and revenue.

Assessing the effectiveness of the findability project should be a simple matter, should it not, of measuring conversion before and after?  Assuming nothing else has changed on the site, any difference can be reasonably ascribed to the findability initiative, right?  The problem lies in the assumption that nothing else changes, because that’s absurd.  In the online world everything changes at the speed of light.  Acquisition strategies are constantly being refined, resulting in dozens, possibly hundreds of different campaigns driving new and existing visitors to the site.  Landing pages are being tinkered with to optimize conversion.  Products are being added or removed.  Promotions descend with bewildering frequency at different times and within different categories or across categories.  A single review can launch or destroy a product.  Items get moved into Sale or Clearance sections to make room for the next season’s inventory.  Pricing changes.  Recommendation engines place products in different contexts for different visitors.  A website never sleeps.  So the simple plan of undertaking a findability initiative and comparing pre/post conversion rates is, simply, not feasible.

We need to be nuanced in our approach to measurement and cognizant of matching the metrics to the measure.  Here’s what we mean.

Viewpoint

First, it’s essential to understand the viewpoint that a metric reflects.  Search success and search relevance, for example, should be metrics gathered from users through survey responses because the metrics reflect the viewpoint of the user.  The site’s metric for search success, however, often measures something completely different – the number of times the engine actually returns one or more results.  Similarly, the search function that ranks and scores results by “relevance” reflects the site’s definition of relevance, not the user’s.  Assigning the appropriate metric, therefore, depends on the viewpoint you wish to represent and what you want to do with the data.  What are you measuring, and what are you going to do with the results?

Thus, for the search aspects of the findability project, we might start by clarifying that we want to capture the user’s viewpoint of search success, because then we’ll know how effective it is from their point of view – which is what really matters.  One of our other clients followed the conventional way to measure search success – counting the number of times the engine produced one or more results.  Under that metric and from that viewpoint, the client reported search success at 99.8%.  When the client started asking visitors who used site search how successful their searches had been, the number fell – to 48%!  Be careful what you ask for!
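
To make the two viewpoints concrete, here is a small sketch computing both numbers side by side.  The log and survey structures are invented for illustration; the point is that the two “success rates” answer different questions:

```python
# Engine viewpoint: a search "succeeds" if it returns one or more results.
search_log = [
    {"query": "slim fit jeans", "results": 212},
    {"query": "bootcut 7421",   "results": 37},
    {"query": "jeens",          "results": 3},   # results, just not relevant ones
    {"query": "petite bootcut", "results": 0},
]
engine_success = sum(q["results"] > 0 for q in search_log) / len(search_log)

# User viewpoint: a search succeeds if the searcher says it did
# ("Did you find what you were looking for?") in an on-site survey.
survey_answers = [True, False, False, True]
user_success = sum(survey_answers) / len(survey_answers)

print(f"engine-reported search success: {engine_success:.0%}")  # 75%
print(f"user-reported search success:   {user_success:.0%}")    # 50%
```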

Dimension

Beyond viewpoint, it’s important to distinguish between attitudinal and behavioral metrics.  We all know that what people say is often different from what they do.  During the redesign of a national DIY site’s homepage, a shortcut button the designers had added got positive reviews in usability testing.  Subsequent path analysis demonstrated that the conversion rate of visitors clicking on the shortcut was 33% lower than that of visitors navigating through the left-hand menu.  Attitudinal data, in this instance, meant nothing in comparison to the behavioral data.  (The shortcut was never removed.  It had been the site manager’s idea.)
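
The behavioral half of that comparison is plain arithmetic over path data.  A sketch, with invented counts standing in for the path-analysis output:

```python
# Hypothetical path-analysis counts for homepage visitors.
shortcut = {"sessions": 12_000, "orders": 240}    # clicked the shortcut button
left_nav = {"sessions": 48_000, "orders": 1_440}  # used the left-hand menu

cr_shortcut = shortcut["orders"] / shortcut["sessions"]  # 2.0%
cr_left_nav = left_nav["orders"] / left_nav["sessions"]  # 3.0%

gap = (cr_left_nav - cr_shortcut) / cr_left_nav
print(f"shortcut path converts {gap:.0%} lower than the left-hand menu")  # 33%
```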

Relationship

The final step in metrics definition might be to explore the comparative value of direct versus indirect metrics.  Take a tree test for site taxonomy.  You run a tree test on a single category.  Say, for argument’s sake, that 85% of users look in the correct categories for the products you ask them to find.  So you take the 15% of products that were not located and rework the taxonomy according to the feedback of where users thought the products belonged.  Then you rerun the test with the new taxonomy.  Lo and behold, 94% of users find products this time.  You have a nine-point lift and a 10.6% improvement.  That’s a direct metric.  The cause-and-effect relationship between the taxonomy changes and the findability scores is irrefutable.  The new taxonomy is better than the old.  But what happens if conversion falls when you introduce the new taxonomy to production?  Can you be sure that the cause-and-effect relationship is valid?  Because so many other factors contribute to conversion, the relationship between the taxonomy change and the lower conversion is indirect and therefore less reliable.  You have contradictory metrics.  What do you do?  Leave the new taxonomy in place or revert to the old?  You leave it in place, because direct metrics are indisputable and trump indirect metrics.  Whatever the cause of the conversion problem, the new taxonomy is certainly not it.
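
The arithmetic behind “a nine-point lift and a 10.6% improvement,” for anyone who wants to reuse it:

```python
before, after = 0.85, 0.94              # share of users finding products
absolute_lift = after - before          # 0.09 -> nine percentage points
relative_lift = absolute_lift / before  # 0.09 / 0.85 -> 10.6%
print(f"absolute: {absolute_lift * 100:.0f} points, relative: {relative_lift:.1%}")
```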

While you leave it in place, however, you should monitor a few other metrics.  The first is the conversion of first-time visitors before and after the change.  This is as close to an apples-to-apples comparison of indirect metrics as you can get.  (You look at first-time visitors because visitors familiar with the site may or may not think the change is better, but it is different, and change can often provoke discomfort.)  The second is site visitor survey data: compare the pre-/post-change percentage of first-time visitors who cited Labeling issues as a reason for visit failure.
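
For the pre-/post-change comparison of first-time visitor conversion, a plain two-proportion z-test is enough to tell whether an observed difference is more than noise.  A sketch using only the standard library, with invented counts:

```python
import math

def two_proportion_z(orders_a: int, n_a: int, orders_b: int, n_b: int):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = orders_a / n_a, orders_b / n_b
    pooled = (orders_a + orders_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return p_a, p_b, z, p_value

# First-time visitors in comparable windows before and after the change.
p_pre, p_post, z, p = two_proportion_z(1_500, 50_000, 1_620, 50_000)
print(f"pre {p_pre:.2%} vs post {p_post:.2%} (z={z:.2f}, p={p:.3f})")
```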

The take-away from all this metric-mashing?  Be clear about what you intend to do with the metrics.  Form follows function in metrics just as much as in architecture.  Balance the viewpoints, clarify the roles of attitudinal and behavioral data, and assign direct and indirect metrics appropriately.  Monitor and adjust.  You’ll be fine.

Mobile Phones, In-Store Shopping, and a Little Thing Called Certitude

Pew Internet released a report late in January 2012 that quantified a behavior we either exhibit ourselves or see others exhibit ever more often – people using their smart phones as part of the in-store shopping process.  During the holiday shopping season:

  • 38% of smart phone owners called a friend for advice
  • 25% looked up prices online
  • 24% looked up product reviews online

We all understand why people do this – it’s to feel confident that we’re making the right decision and getting the best price.  The word that best describes this state we seek is Certitude, defined as “freedom from doubt.”

Back in the day
Back in the day, reaching certitude in a store used to be difficult unless you had already been to several other stores to check out prices.  This is the way most of us learned to shop.  Reaching certitude took time and effort.  Then the Web introduced online shopping and we no longer physically had to scour the local retail landscape to compare prices and availability.  Comparison shopping was far more efficiently done through a browser.  Online certitude remained elusive, however, because we could not feel, smell, or get a true multi-sensory impression of the product we sought.  And online merchants often failed (and still do) to provide all the information we needed in order for each of us to reach our own points of certitude.

When we shop packing smart phones, however, we have found the fastest path to certitude.  The ability to access the Web while we’re mobile – via bar-code scanning apps and QR links, in particular – has effectively allowed us to be many places at the same time.  Add the social dimension into the mix, and we have the equivalent of the certitude support previously provided by having a friend there to proffer an opinion.  Macy’s, Saks, and other retailers know their shoppers often send a photo from the fitting room, asking for feedback.

The implications of certitude for the online store
We each have our own pathways and our own levels of certitude, but the smart phone-equipped store shopper is likely to get there faster than the single-mode shopper.  And this realization raises a question for the managers of all e-commerce sites: Have you done everything you can to allow your visitors to reach their own level of certitude?

Answer that question first by looking at your internal search.  The fastest, surest way for a visitor to reach certitude online is by being able to type a product ID into the search box and have the results deliver exactly what’s being sought.  Take the case of jeans.  Women who like how a pair of jeans fits in the store will often go online later to buy more pairs in different colors.  They already know the jeans fit; they just need to see what other colors are available.  Simple, right?  No.  The product ID on a garment label may have nothing to do with the way the garment is referenced as online inventory.  The ID assigned by the manufacturer may be different from that assigned by the retailer.  Product descriptions themselves can and often do vary across channels.  Consistency in identification, therefore, is the first task in assuring findability via search and facilitating visitor certitude.
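
One low-tech way to attack the identification problem is an alias index: every identifier a shopper might copy off a label, hang tag, or receipt maps to a single canonical product, so a search for any of them resolves to the same record.  A sketch with invented identifiers:

```python
# Hypothetical alias table: the manufacturer's style code, the label
# wording, and the retailer's own SKU all point at one canonical record.
ALIASES = {
    "MFG-501-0193": "sku-88231",      # manufacturer style code on the label
    "ORIGINAL FIT 501": "sku-88231",  # wording printed on the hang tag
    "88231": "sku-88231",             # retailer SKU from the receipt
}

CATALOG = {
    "sku-88231": {"name": "Original Fit Jeans",
                  "colors": ["indigo", "black", "stone"]},
}

def lookup(query: str):
    """Resolve any known identifier to the canonical product record."""
    sku = ALIASES.get(query.strip().upper())
    return CATALOG.get(sku) if sku else None

print(lookup("mfg-501-0193"))  # the same record every identifier returns
```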

Once you’ve looked at search and meta-tagging, turn your attention to the browse path.  The “path to certitude” checklist depends on what’s being sold, but the typical pieces of information in a typical product-based site would include: features/benefits, demos/videos, specs/dimensions, colors/swatches, views/details, options/customization, comparisons, ratings/reviews.  If you haven’t provided information to address the certitude needs of every visitor, you have given many of them a reason to abandon your site.
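
That checklist can double as an automated audit.  A small sketch that flags products missing certitude content; the field names are stand-ins for whatever your product data actually carries:

```python
# Hypothetical certitude checklist for a product-based site.
CERTITUDE_FIELDS = ["features", "specs", "images",
                    "swatches", "options", "comparisons", "reviews"]

products = [
    {"id": "sku-1001", "features": "...", "specs": "...", "images": 6,
     "swatches": 4, "options": 2, "comparisons": True, "reviews": 37},
    {"id": "sku-1002", "features": "...", "specs": "...", "images": 1,
     "swatches": 0, "options": 0, "comparisons": False, "reviews": 0},
]

for p in products:
    missing = [f for f in CERTITUDE_FIELDS if not p.get(f)]
    if missing:
        print(f"{p['id']} lacks: {', '.join(missing)}")
# -> sku-1002 lacks: swatches, options, comparisons, reviews
```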

But there is a back-up plan.  It’s called policy.  A site like Zappos realizes that the key element in shoe-buying is fit.  Zappos also knows that it is impossible to convey fit as part of the online shopping experience, yet fit is essential for a shopper to reach certitude in shoe buying.  So what does Zappos do?  It eliminates all the risk involved in buying shoes that do not fit by offering free shipping and free returns – for a year!  Zappos’ success has proven that online shoppers do not have to reach certitude if the site’s policies shift the consequences of an erroneous decision from the shopper back to the site.

To take the issue of certitude full circle – which is to say, back to the store shopper armed with a smart phone – the site manager must also think of the mobile visitor’s need for certitude by optimizing the site for mobile access or, better still, creating a pure mobile site.

Takeaways
So, the takeaways for retail website managers are:

  • Check your products for consistency of identification and align your meta-tags appropriately.
  • Check that you have provided every piece of product information that any visitor type would need in order to reach certitude.
  • Review your policies to see if you have done all you reasonably (and logically) can to shorten the path to certitude by shifting the burden of risk onto your own shoulders.
  • Create a pure mobile experience to facilitate certitude for the mobile shopper, too.

–Todd Luckey, Senior Usability Analyst

–Roger Beynon, CSO

Start Measuring Your Customers’ Trust in Your Brand

January of each year sees publication of the Edelman Trust Barometer.  It is a fascinating study that shows the degree of trust people place in four institutions – government, business, media, and NGOs (non-governmental organizations).

The report highlights the dramatic reduction in trust in governments, in CEOs as spokespeople for their companies, and in banks and other financial institutions.  It points to technology companies as the most trusted business sector; it says that companies’ listening to their customers is the primary driver of trust; it speaks to people’s ever-growing trust in people they see as “like themselves.”

Government’s precipitous fall from grace has left a trust leadership vacuum.  Edelman’s interpretation of the results lays out the opportunity for business to take leadership in the general trust-rebuilding process.  Of the 16 actions business can take to build trust, “listening to the customer” ranks #1 – alongside delivering high-quality products or services.

Listening programs, in which companies construct elaborate systems for tracking and, often, responding to customer feedback, are already in place in many Fortune 500 companies.  Yet how often do you see trust as the subject of a question in customer surveys?  Rarely, if ever.

Trust, however, may be the most powerful positive emotion a company can reasonably hope to develop in its customers.  Trust is a far deeper emotion than satisfaction, for example, and the behaviors trust engenders are, from a brand’s perspective, the Holy Grail of customer loyalty and advocacy.  An older Edelman chart contrasts the behaviors people exhibit toward companies they trust versus those they distrust.

In order to build trust, companies must start by measuring it.  That would suggest, at a minimum, that they incorporate a trust metric into their primary surveys, including those they deploy online.  The sooner that happens, the faster they can understand which aspects of the customer experience undermine trust and which enhance it.  Armed with that data, the trust-building process and the benefits it promises can begin in earnest.
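
What might that metric look like in practice?  A minimal sketch, assuming a single trust question on a nine-point scale scored top-four-box; the wording, scale, and threshold here are illustrative assumptions, not Edelman’s instrument:

```python
# Hypothetical survey item: "How much do you trust <brand> to do what
# is right?" answered on a 1-9 scale.
responses = [9, 7, 4, 8, 6, 3, 9, 5, 7, 8]

def trust_score(scores, threshold=6):
    """Share of respondents scoring at or above the 'trust' threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

print(f"trust score: {trust_score(responses):.0%}")  # 70%
```

Tracked per touchpoint and over time, even a simple measure like this begins to show which experiences build trust and which erode it.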