How Many Respondents are Too Many in Online UX Research?

When planning studies for the usability lab, sooner or later the question gets asked, “How many users do we need to test?” Depending on the goals of the study, and whom you ask, you’ll get answers ranging from 5 to 30. Most experts agree that testing more than that is not the best use of a limited usability budget, since each additional test participant costs money to recruit, test, and compensate.

In the world of online user experience research, a similar question comes up: “How many respondents do we need to consider the survey complete?” In the online realm, additional survey respondents are expensive not so much in terms of money, but in terms of time. How long can you wait for more and more people to complete the survey?

How many responses you’ll need really depends on what you plan to do with the data. What are the main goals of the project? Do you intend to use the data to inform design decisions (e.g., for a site re-design effort), or do you intend to use the data as benchmarks/metrics and compare it to some other data set (e.g., data collected in previous or future rounds of research)? Closely related to the overarching goals of the project are the analyses you’d like to run on the data. Are you interested in analyzing click-stream data, or strictly in survey responses? Will you want to “slice and dice” the data numerous ways to see how different demographic groups respond, or how certain survey responses relate to others?

If you desire complex analyses, analysis of click-stream data, or multiple cross-tabulations of the various survey questions, then we recommend a minimum of 3,000 responses. This number of responses makes allowances for the large amount of variation that we see in site visitor behavior and helps to prevent any particular sub-group of respondents (e.g., first-time visitors) from being too small for meaningful analysis.

If you know at the outset that you are not interested in analyzing click-stream data and that your desired analysis of the survey responses does not involve complex or multiple cross-tabs, then answering the question of “How many responses do I need?” really boils down to two different scenarios:

  1. You need to compare this data set to another data set (e.g., from past or future rounds of research). If you will eventually have more than one data set AND you want to answer questions such as, “Did success increase from Round 1 to Round 2?” then we recommend gathering as much data as time allows. If you intend to compare the data across data sets, or if you just don’t know whether you’ll need to compare the data at some point in the future, then we usually recommend a minimum of 1,000 responses. Gathering 1,000 or more sessions for a survey gives you greater flexibility in terms of how you might use the data in the future. One thousand sessions provide a sufficiently narrow margin of error (±2.6% at a 90% confidence level) that you can draw conclusions about apparent differences between the two data sets and trust that those conclusions are reliable.
  2. You are primarily running the survey for qualitative purposes (for example, to inform design decisions, gather verbatim feedback, discover usability issues, etc.). If you know that you are not going to need to make numerical comparisons between two data sets, then you can feel reasonably comfortable with fewer sessions. The fewer sessions you gather, however, the wider your margin of error becomes. For example, at 90% confidence, here’s how the margin of error looks for various sample sizes below 1,000:
Margin of error by survey sample size (90% confidence, worst-case p = 0.5):

    Sample size    Margin of error
    100            ±8.2%
    200            ±5.8%
    300            ±4.7%
    400            ±4.1%
    500            ±3.7%
    700            ±3.1%
    900            ±2.7%
    1,000          ±2.6%

As you can see, with 400 survey responses, your margin of error is roughly ±4%. You can also see from the table that the relationship between the number of sessions and the margin of error is not linear, and there is a point of diminishing returns. If your research goals fall into this second category, you need to consider how wide a margin of error you are comfortable with and balance that against how long you can let the survey run.
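
For readers who want to reproduce these figures, here is a minimal sketch of the calculation, assuming the standard normal-approximation formula for the margin of error of a proportion, with the worst-case value p = 0.5 that tables like the one above conventionally use:

```python
# Minimal sketch: margin of error for a survey proportion at 90% confidence.
# Assumes the normal approximation with worst-case p = 0.5.
import math

Z_90 = 1.645  # z-score corresponding to a 90% confidence level

def margin_of_error(n: int, p: float = 0.5) -> float:
    """Return the margin of error (as a proportion) for a sample of size n."""
    return Z_90 * math.sqrt(p * (1 - p) / n)

for n in (100, 200, 400, 700, 1000, 3000):
    print(f"n = {n:>5}: ±{margin_of_error(n) * 100:.1f}%")
```

Running this reproduces the values above (±4.1% at 400 responses, ±2.6% at 1,000) and shows why 3,000 responses leave room for sub-group analysis: even a 10% slice of that sample still yields 300 responses, or about a ±4.7% margin.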

So, just as with lab-based studies, the answer to “How many responses do we need?” varies depending on the goals of your study. In general, though, it should fall between a few hundred and a few thousand.

Is Your Site Conversion Meeting Your Set Targets?

The success of a website is multi-faceted. At the most basic level, visitors who do not already know a site’s URL should still be able to reach it through techniques for maximizing site traffic, such as Search Engine Optimization (SEO) or Pay Per Click (PPC) advertising. Upon arrival, visitors should then find a compelling and usable website, ensuring a successful visit experience. Together, these fundamentals are essential to achieving conversions.

The following chart represents dollars spent on online advertising in recent years, as well as future spending projections.

Obviously, companies will continue to spend on Internet marketing, and they will continue to use SEO or PPC to drive traffic to their websites. But is it possible to reduce online advertising expenditures and still enjoy similar revenue?

Yes, and a key lies in how effectively that traffic is converted after arriving at your site. This is where usability analysis becomes vital.

When usability analysis steps in

An effective usability analysis can uncover the reasons visitors are leaving the site, why they are not purchasing and, in general, the problems they are encountering that keep them from converting.  Every additional conversion increases revenue, thereby improving ad expenditure ROI.

In a recent online user experience study conducted for an e-commerce website, the data revealed that 33% of visitors arrived at the site via organic search engine results. This indicates a certain level of success with the site’s Search Engine Optimization. However, of those arriving via a search engine, 41% reported that their site visit was unsuccessful because navigation was difficult and/or the organization was unclear. In other words, they didn’t find the site all that usable.

So, although search optimization effectively drove traffic to the site, the user experience fell short in converting visitors. For this particular client, visitors who arrived intending to purchase but failed represented a significant revenue loss, approaching $150k per day.
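
To illustrate how an estimate like that can be assembled, here is a back-of-the-envelope sketch. The 33% and 41% figures come from the study described above; the traffic, purchase-intent, and order-value numbers are hypothetical placeholders chosen only to show the arithmetic, not the client’s actual data:

```python
# Back-of-the-envelope sketch of a daily revenue-loss estimate.
# ORGANIC_SHARE and UNSUCCESSFUL_RATE come from the study; the other
# inputs are hypothetical illustrations, not the client's real numbers.

DAILY_VISITORS = 500_000       # hypothetical daily site traffic
ORGANIC_SHARE = 0.33           # arrived via organic search results (from study)
UNSUCCESSFUL_RATE = 0.41       # of those, reported an unsuccessful visit (from study)
PURCHASE_INTENT_RATE = 0.50    # hypothetical share of failed visits that intended to buy
AVG_ORDER_VALUE = 4.50         # hypothetical average order value, in dollars

failed_purchasers = (DAILY_VISITORS * ORGANIC_SHARE
                     * UNSUCCESSFUL_RATE * PURCHASE_INTENT_RATE)
daily_revenue_loss = failed_purchasers * AVG_ORDER_VALUE
print(f"Estimated daily revenue loss: ${daily_revenue_loss:,.0f}")
```

With these illustrative inputs the estimate lands near $152,000 per day, in the same ballpark as the figure reported for the client.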

Working together

Usability testing early in the website development cycle is most effective because it reduces development cost.  You can improve the user experience before your site is launched and have the confidence that you have delivered a quality product. But even if conducted later in the cycle, usability issues can be addressed and corrected for subsequent releases.

It is no secret that cost per click continues to increase over time, and money spent on SEO or PPC must be ongoing to keep driving traffic to the site. The benefits of site improvements recommended by a usability analysis, by contrast, are enjoyed year after year. Ongoing usability analysis will only continue to improve the conversion percentage, resulting in added revenue.
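
As a rough illustration of this trade-off, the sketch below compares cumulative recurring ad spend with the cumulative revenue attributable to a one-time usability investment. Every figure in it is hypothetical; the point is only the shape of the comparison, recurring cost versus a persistent benefit:

```python
# Simplified sketch: recurring ad spend vs. a one-time usability investment.
# All figures are hypothetical illustrations, not client data.

MONTHLY_AD_SPEND = 100_000        # recurring SEO/PPC cost, paid every month
USABILITY_PROJECT_COST = 150_000  # one-time analysis plus site fixes
BASE_MONTHLY_REVENUE = 400_000    # hypothetical baseline online revenue
CONVERSION_LIFT = 0.10            # hypothetical 10% relative lift from the fixes

for month in (6, 12, 24):
    ad_cost = MONTHLY_AD_SPEND * month
    lift_revenue = BASE_MONTHLY_REVENUE * CONVERSION_LIFT * month
    net = lift_revenue - USABILITY_PROJECT_COST
    print(f"Month {month:>2}: cumulative ad spend ${ad_cost:,}, "
          f"cumulative revenue from lift ${lift_revenue:,.0f} "
          f"(net of one-time cost: ${net:,.0f})")
```

The recurring line grows forever, while the one-time investment pays for itself and keeps paying, which is the essence of the argument above.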

The marriage of SEO/PPC and usability analysis is not only a winning combination but a required blend for success in the Internet marketing arena.

Can You Quantify Your Site Redesign’s ROI?

In today’s economic environment, it is critical to achieve a return on investment (ROI) for any budget that is spent. In the online environment, where the landscape changes so quickly, whether due to competitive challenges or the increasing savvy and changing needs of the online consumer, achieving an ROI can be especially challenging. Yet the pressure to spend money online to remain relevant and compelling is ongoing.

Brands may regularly find themselves needing to do a full site redesign, or a partial site refresh, as part of their online strategic marketing plan. However, any investment in a website should be justified against the bottom line, making it necessary to demonstrate the quantifiable impact of any changes that are made to the online property.

So, What Might You Do?

There is a multi-phased (Pre-test/Post-test) approach used primarily to measure the impact that changes, such as a full or partial redesign, have on a site. The current website is first researched to gather benchmark measurements. Keep in mind that by conducting research on the current website, real-time usability feedback and site-experience data can be leveraged to guide the redesign effort itself. The site is then updated (fully or partially revamped), and the new website is tested again.

Case Study

In a project for a client in the office supply category, Usability Sciences utilized this research design with dramatic results. The client brand team was gearing up for a major site redesign. In advance of this effort, Usability Sciences collaborated with the brand team on a two-phased research project that would serve two functions: 1) provide qualitative feedback on the current website that would direct the brand on where to focus their redesign efforts, and 2) provide quantitative benchmark measures of the performance of the site on key business indicators such as online conversion, success with site visit, and ease of use.

Phase I of the project, the pre-test, ran for six months. It used an online survey with website entry and exit questions to determine visit purpose and visit success, along with various demographic and key-performance questions. Usability Sciences then conducted an analysis; the resulting findings were instrumental in focusing the redesign on improving overall navigation and the checkout process. Redesign efforts continued, and the new site was released three months later.

The following month, Phase II of the research project launched: the post-test fielded the exact same survey on the new site, collecting data for the next four months. Upon conclusion, Usability Sciences conducted a second analysis, this time comparing pre-test responses to post-test responses.

  • Key Finding #1: The checkout process experienced a 28% increase in completed transactions. Changes to site checkout, such as shortening the number of steps in the process from start to finish, had a significant positive impact on the conversion rate for online transactions. Looking at checkout-process data, we determined that the pre-test measure of successful checkout completion was 25%, while the post-test measure was 32%: an increase of 7 percentage points, or a 28% relative lift. By enabling visitors to navigate the checkout process more successfully, the transactional mission of the site is being met to a greater degree, and the redesigned site delivers online conversion at a quantifiably higher rate.
  • Key Finding #2: Browsing navigation success improved by 5%. Changes to site navigation, such as establishing a consistent page layout and updating the look and feel of the toolbar, had a significant positive impact on visitor browsing behavior. Looking at visit-experience data, we determined that the pre-test success rate of those seeking products via the browse path (as opposed to the search tool) was 79%, while the post-test success rate was 83%: an increase of 4 percentage points, or a 5% relative lift. By enabling visitors to match products to their needs more successfully, the organizational mission of the site is being met to a greater degree, and the redesigned site delivers a more powerful online branding experience at a quantifiably higher rate. A short sketch after these findings shows how one might confirm that differences of this size exceed sampling noise.
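
When comparing pre-test and post-test proportions like these, it is worth checking that the difference is larger than sampling noise. The sketch below applies a standard two-proportion z-test; the 25% and 32% checkout rates come from the case study, while the sample sizes are hypothetical, since the write-up does not report them:

```python
# Minimal sketch: two-proportion z-test for a pre/post comparison.
# The 25% and 32% rates come from the case study; the sample sizes
# are hypothetical, since the write-up does not report them.
import math

def two_proportion_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """Return the z statistic for the difference between two proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

z = two_proportion_z(0.25, 1500, 0.32, 1500)  # hypothetical n = 1,500 per phase
print(f"z = {z:.2f}")  # |z| > 1.645 is significant at the 90% level
```

At these assumed sample sizes the z statistic is well above the 1.645 threshold, so a 7-percentage-point difference would comfortably clear sampling noise; with much smaller samples, the same difference might not.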

Parting Thoughts

In closing, as you face the need to demonstrate return on investment for each of your online initiatives, a good rule of thumb to remember is that you can’t manage what you don’t measure. The Pre-test/Post-test methodology is an excellent tool to keep in your measurement toolbox for times when you are introducing a new online strategy (new look and feel, new messaging, usability enhancements) and need to demonstrate the quantifiable impact of your approach.