Trust Is the New Black (of brand metrics)

How does your brand measure the health of your relationship with your customers? If you answered: “Satisfaction,” then you are not keeping up with the latest in metrics fashion. Trust is the new black of brand metrics, and here’s why.

Digital thought leaders are calling the times in which we live the Relationship Era of marketing. They reference the need for Trust Agent networks. They insist that brands must now communicate touchpoint by touchpoint, on a personalized, one-to-one basis, building trust with every encounter.

This is valid. This is not, however, new — at least to the world outside of marketing.

Building relationships the good old-fashioned way

As marketing continues the transition from mass, single-message communication into the realm of customer-by-customer relationship-building, it is effectively embracing the discipline known in pre-digital times as Selling. Effective selling has always been about forging enduring, loyal relationships, and it relies on relationship-building principles established long before the advent of digital life.

At its heart, relationship-building is no more than doing the right thing by people. When you behave that way over time, people grow to trust you. When they trust you, they come back to you and continue to give you their business. You have proven (especially on occasions when you have done the right thing counter to your own interests) that you are — among other things — honest, fair, respectful, punctual, responsive, diligent, and responsible. What you have been is the best indicator of what you likely will be. You have earned trust. You will enjoy the rewards trust brings.

If brands are to adopt relationship-building as their digital marketing strategy, the brand itself must behave like a person – like a salesperson working to earn the trust of a customer. The brand must deliver on its promises as diligently and consistently as the salesperson (and the organization behind the salesperson) must deliver on his or hers.

The socially-engaged customer and viral consequences

Once the brand stops behaving in a manner that engenders trust and starts exhibiting behaviors that undermine it, customers become less loyal and more receptive to the enticements of competitors. When (in this viral, digital age) a brand experience does not live up to a socially engaged customer’s expectations, that customer’s Facebook friends, Twitter followers, blog readers, and fellow chat and forum participants all know about it within seconds. That’s not exactly healthy for the brand. A bad experience is bad enough, but if the experience constitutes a betrayal, the viral consequences for the brand can be dire, sometimes catastrophic. Ask Domino’s or Taco Bell or Tiger or any other brand whose behavior has contravened everything its customers or fans have come to expect.

Nielsen’s BuzzMetrics division monitors social buzz (or The Conversation, as Nielsen calls it) for major brands. Nielsen reports that the #1 buzz topics for consumer-serving brands (fast food companies, cable companies, car companies, for example) are actually the good old pre-digital issues: bathroom hygiene in fast food restaurants, customer service problems at cable and telephone companies, product defects for the car companies.

Nielsen advises its clients to re-engineer their business processes so that they can take the negative feedback, especially, from the buzz and act on it. The brand needs to acknowledge the problem, thank the customer for bringing it to its attention, accept responsibility for it, fix it, communicate progress back to its customers, and make sure it doesn’t happen again, just as a real person must do to salvage or repair any personal relationship they have mishandled.

Why trust is the new black

Relationships can survive isolated instances in which one party fails to deliver on its promises. Brands cannot expect to sustain relationships with their customers, however, if they fail to correct problems while news of that failure is broadcast across the social media landscape by a growing cadre of “betrayed” customers.

And the new millennium has already seen a parade of betrayals by business, especially and most recently by iconic brands from Wall Street. The Edelman Trust Barometer, which measures the public’s trust in a variety of areas, is barely above the lowest point in the survey’s 10-year history. The US public’s trust in the business sector is lower than that of almost every national public surveyed. This is important because trust drives the kind of consumer/customer behavior businesses want to see (repeat purchase, recommendation, stock purchase) and distrust drives exactly what they don’t want (churn, negative recommendation, and stock dumping).

As brands embrace Relationship Building as their digital marketing strategy, therefore, they would be wise to study, embrace, and adapt the principles developed in the analog world of selling and behave like a real person. With that understanding, they can use trust as the metric by which they measure the health of their customer experience across every touchpoint. That is why trust is the new black.

–Roger Beynon, Chief Strategy Officer

Sizing Up the Competition: Getting the Most Out of Comparative Testing

A new year is upon us, and perhaps you’re thinking it’s time to size up the competition and dominate your space. At Usability Sciences, one of the services we offer is the Comparative Test. We use this methodology any time our clients want to find out where they stand among the competition and, more importantly, formulate a strategy for their next move. This article explains the methodology, how to get the most value out of it, and when it’s appropriate to use it.

How it works and what you’ll get:

The Comparative Test typically places your website/product against two to three competitors. Test participants are asked to perform the same core tasks on each product. As they work through the products, they are asked to provide qualitative feedback as well as complete a variety of surveys, including satisfaction and preference surveys.

During this process, two things occur: (a) users identify the likes and dislikes of each product and ultimately identify a preference, and (b) through observation, the analyst team uncovers the strengths and weaknesses of each product. When these two streams of findings are coupled, the analyst team is then able to create and recommend a Best-of-Breed model. In other words, if the best attributes from all competitors were merged into one product, the end result would be the best of breed.
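
To make that coupling concrete, here is a minimal sketch, in Python, of how preference votes and observed strengths might be merged into a Best-of-Breed summary. The products, features, and votes below are hypothetical, and the logic illustrates the idea rather than describing our actual analysis process:

    from collections import Counter

    # Hypothetical results from a three-product comparative test with 12 users.
    preferences = ["Product A", "Product A", "Product C", "Product A",
                   "Product B", "Product C", "Product A", "Product C",
                   "Product A", "Product B", "Product A", "Product C"]

    # Strengths observed by the analyst team for each product (illustrative).
    strengths = {
        "Product A": {"clear navigation", "fast checkout"},
        "Product B": {"rich product photos", "flexible payment options"},
        "Product C": {"helpful reviews", "clear navigation"},
    }

    # Tally stated preferences across participants.
    print("Preference tally:", dict(Counter(preferences)))

    # A naive Best-of-Breed model: the union of every product's observed
    # strengths, noting which product(s) contributed each attribute.
    best_of_breed = {}
    for product, features in strengths.items():
        for feature in sorted(features):
            best_of_breed.setdefault(feature, []).append(product)

    for feature, sources in sorted(best_of_breed.items()):
        print(f"{feature}: seen in {', '.join(sources)}")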

This will equip your team with the knowledge to answer important questions such as “What new features and functions should we incorporate into our design?”, “What’s working well with our product and what needs improvement?”, and “What ideas can we borrow from our competition and what ideas should we avoid?”

What we recommend:

We recommend comparing three products during the study (ideally, your product and two competitor products). This allows participants to spend a reasonable amount of time on each product and get a feel for how the products differ from one another. Occasionally, clients want to compare a greater number of products. This can be done, but the richness of insight into any given product may be compromised. Basically, more time spent per product equates to a deeper user experience and thus a more meaningful comparison.

We’ll work with you to create tasks that expose users to the products’ core features and functions. The goal is to ensure an apples-to-apples experience so that users can make valid comparisons. And don’t worry; if the products have unique features, we have ways to incorporate them into the study as well.

Typically, we recommend recruiting around 12 users for a study of this nature. Trends begin to emerge between users 4 and 8, allowing results to solidify comfortably between users 8 and 12.
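
As a rough illustration of why trends stabilize in that range, the classic problem-discovery model from the usability literature (Nielsen and Landauer) predicts the share of issues observed after n users as 1 - (1 - p)^n, where p is the chance that any one user encounters a given issue. The value of p below is an assumed figure for illustration, not a number from our studies:

    # Classic problem-discovery model: expected share of issues seen by
    # n users is 1 - (1 - p)^n. p = 0.30 is an assumed, illustrative value.
    p = 0.30
    for n in range(1, 13):
        discovered = 1 - (1 - p) ** n
        print(f"{n:2d} users: ~{discovered:.0%} of issues observed")

With p = 0.30, roughly three quarters of issues surface by user 4 and well over 90% by user 8, which tracks the emergence-then-solidify pattern described above.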

What you won’t get:

A common misconception is that a Comparative Test will yield recommendations for resolving usability issues. These typically won’t surface in this methodology because users are not given the time needed to truly explore trouble spots and provide in-depth suggestions. More importantly, moderator questioning is limited so as to avoid artificially amplifying issues, positive or negative, and coloring user impressions. Our moderators provide the playing field, and users are left to their own experience to draw conclusions.

You also won’t get metrics such as Time on Task, Success/Failure rates, etc. These metrics are derived in our more clinical and rigidly structured methodology known as the Competitive Test. That’s a topic for another discussion.

When it’s appropriate:

A Comparative Test is a valuable tool at a variety of stages in a product’s life. If you have yet to design a website/product, or have one at an early stage, this methodology can help identify the features and attributes users value most. If you’re preparing a redesign of an existing website/product, the methodology can help steer you in the right direction and, more importantly, away from bad decisions. And finally, if you have a mature product, the Comparative methodology is an excellent way to gauge where you stand against newer competitors and to identify enhancements and revisions for your own product.

Contact us to discuss how we might use this methodology to serve your needs and help put you ahead of your competition.

— Jason Vasilas, Senior User Experience Specialist

What’s In the Placement of a Consumer Survey? Everything.

Econsultancy recently posted a comprehensive review of best practices for e-commerce consumer surveys by Tim Leighton-Boyce.  It’s an excellent piece.  The writer is obviously a practitioner, since the advice reflects knowledge that can only have come the hard way.  One piece of that advice, however, is fundamentally flawed.  In the section “Where to place the survey”, Tim writes:

“Although there are great systems for allowing feedback surveys on every page of your site, I’m not in favour of using any form of pop-up which might distract your visitor from whatever they want to do.

Instead, my favourite type of e-commerce survey is one embedded in the order confirmation page. I like these because there is zero risk of distracting someone from placing an order since the survey is only offered once the sale is complete.”

Tim anticipates that this will raise objections, so he adds:

“The obvious objection is that this means you don’t get any survey entries from people who did not intend to buy or were unable to buy. That’s a common-sense point. But in reality it doesn’t seem to be a problem.

… In real life it turns out that people who have problems buying can be remarkably tenacious. Some will eventually find what they want, or make it through a tricky checkout, and then let you know all about the problems when they get to the survey comments form.”

It may, indeed, be a valid assumption that problems experienced by those who complete transactions are the same as the problems of those who abandon the site or fail to complete a transaction.  But how do you know for sure?  More importantly, how do you quantify the impact of those problems?  How do you measure the revenue loss they inflict?  How do you determine their root cause?  How do you set priorities in taking remedial action?

Failure data may be the most valuable kind you collect

As convenient as it may appear, surveying only those who emerge from the confirmation page necessarily skews the sample and presents a distorted picture of the user experience, especially the experience of visitors who fail.  Visit failure data may be the most valuable data a site can investigate because beneath that cumulative experience lie the root causes of conversion impediments.  Intercepting visitors at the start of their journey through your website ensures that you include those who fail as well as those who transact.  For most e-commerce sites, the proportion of those who do not purchase far exceeds those who do.  The process of identifying whom the site fails, where it fails them, and why it fails them offers the most direct route to continuous improvement.

Behavioral and attitudinal feedback from hundreds of thousands of survey respondents over the last decade reveals patterns applicable to any e-commerce site.  This is what they look like:

The patterns start with a notional depiction of shopper behavior: the sequence of thoughts and actions shoppers exhibit when buying online.

These steps can be grouped into three basic user decision points:

Suitability – is this site likely to meet my needs?

Findability – how easily can I make my way to the product or information I seek?

Buyability – how easily can I reach certitude and then complete the transaction?

Visitors who fall out of the funnel at the suitability level represent (in our classification) the problem of Bounce.

Visitors who fall out at the findability level represent the problem of Opportunity Loss.

Visitors who fall out at the buyability level represent the problem of Abandonment.

These problems are what site owners must identify, quantify, analyze, and address if they are to systematically attack visit failure and its impact on conversion.  Sampling the visitor population only from those who successfully navigate their way through to the confirmation page makes this process inordinately difficult, if not impossible.
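
To make the classification concrete, here is a minimal sketch of how intercepted visits might be bucketed and quantified. The record format and field names are hypothetical, not the schema of any real survey system:

    from collections import Counter

    # Hypothetical intercept-survey records: each visitor's outcome and the
    # furthest decision point reached before exiting.
    visits = [
        {"purchased": False, "furthest_stage": "suitability"},
        {"purchased": False, "furthest_stage": "findability"},
        {"purchased": False, "furthest_stage": "buyability"},
        {"purchased": True,  "furthest_stage": "buyability"},
        {"purchased": False, "furthest_stage": "suitability"},
        {"purchased": False, "furthest_stage": "findability"},
    ]

    FAILURE_LABELS = {
        "suitability": "Bounce",
        "findability": "Opportunity Loss",
        "buyability": "Abandonment",
    }

    def classify(visit):
        """Map a failed visit to its failure category; successes pass through."""
        if visit["purchased"]:
            return "Converted"
        return FAILURE_LABELS[visit["furthest_stage"]]

    tally = Counter(classify(v) for v in visits)
    total = len(visits)
    for category, count in tally.most_common():
        print(f"{category}: {count} ({count / total:.0%} of intercepted visits)")

Because the sample is drawn at the start of the journey rather than at the confirmation page, every category, including Converted, appears in proportion to its actual share of traffic.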

Continuous improvement is not just a tool; it is a philosophy and a strategy.  If a site is going to commit to a continuous improvement process, it should ask visitors to participate from the outset of their journey, so that it captures the full spectrum of site experiences and outcomes.  That’s where analysis starts and systematic improvement begins.

–Roger Beynon, CSO