Online Research Company First Visit Checklist

Research that enables the improvement of website design, content, and overall usability has proven particularly valuable as industries worldwide become increasingly reliant on their websites for financial success.

Online research surveys are an example of this type of research. An invitation to participate in a survey is presented to a company's website visitors. The data produced by those who accept the invitation is especially valuable because it comes directly from the company's own customers or constituents.

Note: There are multiple online avenues for posing questions to site visitors; some are free or very inexpensive, while others, at the other end of the spectrum, are priced according to the value of their results.

If your research initiatives include determining information such as:

  • customer satisfaction/dissatisfaction
  • visitor purpose when coming to the website
  • ease or difficulty of completing a multi-step transaction on the site
  • user suggestions for site improvement
  • customer expectations about what can be achieved on your website
  • likelihood that visitors will recommend the website to friends and colleagues
  • relative success or failure of each website visit,


you probably will want to contact a recognized expert in the field of online survey research.

You can speed the process along if you have the following questions answered before you meet with your online survey partner company.

1) Site traffic numbers (yearly averages and daily unique visitors)

2) Point of contact for the survey building process in your company

3) How many stakeholders will need to be included in the process at your company

4) Target timeframes for launch of the survey on the site, length of data collection, and receipt of agreed-upon deliverables. Is there an event or deadline by which results of the survey need to be presented to company management?

5) Is your site entirely public, or are some parts secure?

6) How will a survey be tested prior to launch on your site? Do you have your own staging/testing environment, or is this handled for your company by an outside entity? Can you supply the URLs of the testing/staging environment?

7) Can you supply the research company with access to a 'dummy' or 'test' account (user name and password) for any 'funnels' such as an online checkout process, travel reservations process, etc.?

8) What sort of behavioral information do you wish to derive from the survey data collection? For example, behavior based on visiting specific content on the site, using a site tool, or meeting site registration requirements?

9) Will you need one final report, or more frequent interim reports/updates?

10) Can you supply look-and-feel specifics such as logo requirements, color ID numbers, fonts, etc.?

11) Is there a third-party vendor you will want to have access to certain parts of the data produced by your survey? Are you ready to supply their requirements for merging the survey data with their reports to you?

12) Will there be privacy issues to be addressed with participants in the survey (is your company in the pharmaceutical arena, or some other industry where privacy issues are regulated by governmental agencies)?

13) Do you want your company employees to be blocked from the survey?

14) Is there a specific format you will need for the deliverables for the research (export of raw data collection, PowerPoint presentation, Tableau scorecards, etc.)?

If you arrive at your first meeting with your survey research partner with this information already prepared, you will be much more effective in moving the project along, and much closer to getting the data you want to enhance your company’s website!


–Pat Bentley, Project Manager, Online Experience Services


How Many Respondents are Too Many in Online UX Research?

When planning studies for the usability lab, sooner or later the question gets asked, "How many users do we need to test?" Depending on the goals of the study, and whom you ask, you'll get answers ranging from 5 to 30. Most experts agree that testing more than that is not the best use of your limited usability budget, since each additional test participant costs money to recruit, test, and compensate.

In the world of online user experience research, a similar question comes up: “How many respondents do we need to consider the survey complete?” In the online realm, additional survey respondents are expensive not so much in terms of money, but in terms of time. How long can you wait for more and more people to complete the survey?

How many responses you’ll need really depends on what you plan to do with the data. What are the main goals of the project? Do you intend to use the data to inform design decisions (e.g., for a site re-design effort) or do you intend to use the data as benchmarks/metrics and compare it to some other data set (e.g., data collected in previous rounds or future rounds of research)? Related to the overarching goals of the research project are the analyses you’d like to have done on the data. Are you interested in analyzing click-stream data or are you strictly interested in survey responses? Will you want to “slice and dice” the data numerous ways to see how different demographic groups respond or how certain survey responses relate to other survey responses?

If you desire complex analyses, analysis of click-stream data, or multiple cross-tabulations of the various survey questions, then we recommend a minimum of 3,000 responses. This number of responses makes allowances for the large amount of variation that we see in site visitor behavior and helps to prevent any particular sub-group of respondents (e.g., first-time visitors) from being too small for meaningful analysis.

If you know at the outset that you are not interested in analyzing click-stream data and that your desired analysis of the survey responses does not involve complex or multiple cross-tabs, then answering the question of “How many responses do I need?” really boils down to two different scenarios:

  1. You need to compare this data set to another data set (e.g., from past or future rounds of research). If you will eventually have more than one data set AND you want to answer questions such as, "Did success increase from Round 1 to Round 2?" then we'd recommend gathering as much data as time allows. If you intend to compare the data across data sets OR if you just don't know whether you'll need to compare the data at some point in the future, then we usually recommend a minimum of 1,000 responses. Gathering 1,000 or more sessions for a survey gives you greater flexibility in terms of how you might use the data in the future. One thousand sessions provide a sufficiently narrow margin of error (±2.6% at a 90% confidence level) that you can draw conclusions about apparent differences between the two data sets and trust that those conclusions are reliable.
  2. You are running the survey primarily for qualitative purposes (for example, to inform design decisions, gather verbatim feedback, discover usability issues, etc.). If you know that you are not going to need to make numerical comparisons between two data sets, then you can feel reasonably comfortable with fewer sessions. The fewer sessions you gather, the wider your margin of error becomes. For example, at 90% confidence, here's how the margin of error looks for various sample sizes below 1,000:
[Table: margin of error by survey sample size, at 90% confidence]

As you can see, with 400 survey responses, your margin of error is somewhere close to ±4%. You can also see from the table that the relationship between number of sessions and margin of error is not linear, and there is a point of diminishing returns. If your research goals fall in this second category, you need to consider how wide a margin of error you feel comfortable with and balance that with how long you have available to let the survey run.
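The figures above follow from the standard margin-of-error formula for a proportion, z·sqrt(p(1−p)/n), using z ≈ 1.645 for 90% confidence and the worst case p = 0.5. A minimal sketch (the function name and the list of sample sizes printed are illustrative, not taken from the original table):

```python
import math

def margin_of_error(n, z=1.645, p=0.5):
    """Worst-case margin of error for a proportion at ~90% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

# 400 responses -> roughly +/-4%; 1,000 responses -> +/-2.6%,
# matching the figures quoted in the text.
for n in (100, 200, 400, 700, 1000):
    print(f"{n:>5} responses -> +/- {margin_of_error(n) * 100:.1f}%")
```

Note the square root in the denominator: halving the margin of error requires roughly quadrupling the sample size, which is the point of diminishing returns described above.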

So, just as with lab-based studies, the answer to “How many responses do we need?” varies depending on the goals of your study. In general, though, it should fall between a few hundred and a few thousand.

Is Your Site Meeting the Needs of Your Future Customers?

Did you know that over a quarter of your site visitors are probably first-time visitors? Did you know that first-time visitors are less successful than all other visitors to your site? Understanding who your first-time visitors are, why they visited your site, and how successful they were during that first visit is critical to providing a satisfying user experience and to driving the future landscape of the site. Consider this example: when analyzing the results of almost 9,000 visitor sessions from one of our online retail customers, we found that 25 percent were first-time visitors to the site. Furthermore, first-time visitors had a lower success rate (41 percent) than all other visitors to the site (51 percent).

In looking at their purchase history, 84 percent of first-time visitors indicated that they had made a purchase in the customer's store. This clearly identifies these visitors not only as current in-store customers but also as a key segment of the future online customer base. We also learned that 86 percent of first-time visitors indicated they were the primary shopper for their household.

Perhaps most surprising was that the core demographic of the first-time visitor was predominantly females aged 25-45, while the frequent site customers were males aged 35-54. The site was catering to the frequent customer but not paying enough attention to first-time visitors and their needs. These are all key metrics that need to be taken into consideration to ensure the growth of the site. The takeaway is that you should continue to study your first-time visitor population, including their basic needs, when creating future designs and modifications to your site. They are your potential future customers.

Can You Quantify Your Site Redesign’s ROI?

In today's economic environment, it is critical to achieve a return on investment (ROI) for any budget that is spent. In the online environment, where the landscape changes so quickly (whether due to competitive challenges or the increasing savvy and changing needs of the online consumer), achieving an ROI can be especially challenging. But the pressure to spend money online to remain relevant and compelling is ongoing.

Brands may regularly find themselves needing to do a full site redesign, or a partial site refresh, as part of their online strategic marketing plan. However, any investment in a website should be justified against the bottom line, making it necessary to demonstrate the quantifiable impact of any changes that are made to the online property.

So, What Might You Do?

There is a multi-phased (pre-test/post-test) approach, primarily used to measure the impact that changes, such as a full or partial redesign, have on a site. The current website is first researched to gather benchmark measurements. Keep in mind that by conducting research against the current website, real-time usability feedback and site experience data can be leveraged to guide the site redesign effort itself. The site is then updated (fully or partially revamped) and the new website is tested again.

Case Study

In a project for a client in the office supply category, Usability Sciences utilized this research design with dramatic results. The client brand team was gearing up for a major site redesign. In advance of this effort, Usability Sciences collaborated with the brand team on a two-phased research project that would serve two functions: 1) provide qualitative feedback on the current website that would direct the brand on where to focus their redesign efforts, and 2) provide quantitative benchmark measures of the performance of the site on key business indicators such as online conversion, success with site visit, and ease of use.

Phase I of the project, the pre-test, ran for six months. An online survey was fielded that included website entry and exit questions to determine visit purpose and visit success, along with various other demographic and key performance questions. Usability Sciences then conducted an analysis, and the resulting findings became instrumental in focusing efforts on improving overall navigation and the checkout process. Redesign efforts continued, and the new site was released three months later.

The following month, Phase II of the research project was launched: the post-test began by fielding the exact same survey on the new site, collecting data for the next four months. Upon conclusion, Usability Sciences conducted a second analysis, this time with the intention of comparing pre-test responses to post-test responses.
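Whether a pre/post difference like this is statistically reliable can be checked with a standard two-proportion z-test. The sketch below is illustrative only: the article does not report per-phase sample sizes, so the 1,000-session figures are hypothetical placeholders, and only the mechanics of the test are meant to carry over.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-proportion z-test; returns the z statistic and two-sided p-value."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical sample sizes of 1,000 sessions per phase (not from the article),
# applied to the checkout completion rates reported below (25% pre, 32% post).
z, p = two_proportion_z(0.25, 1000, 0.32, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

At these assumed sample sizes the difference would be comfortably significant (z well above 1.96); with much smaller samples the same percentage gap might not be.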

  • Key Finding #1: The checkout process experienced a 28% increase in completed transactions. Changes to site checkout, such as shortening the number of steps in the process from start to finish, had a significant positive impact on the conversion rate for online transactions. By looking at checkout process data, we determined that the pre-test measure of successful completion of checkout was 25%, while the post-test measure was 32%, an increase of 7 percentage points, or a 28% relative lift. By enabling visitors to more successfully navigate the checkout process, the transactional mission of the site is being met to a greater degree, and the redesigned site delivers online conversion at a quantifiably higher rate.
  • Key Finding #2: Browsing navigation success improved by 5%. Changes to site navigation, such as establishing a consistent page layout and implementing changes to the look and feel of the toolbar, had a significant positive impact on visitor browsing behaviors. By looking at visit experience data, we determined that the pre-test success rate of those seeking products using the browse path (as opposed to the search tool) was 79%, while the post-test success rate was 83%, an increase of 4 percentage points, or a 5% relative lift. By enabling visitors to more successfully match products to their needs, the organizational mission of the site is being met to a greater degree, and the redesigned site delivers a more powerful online branding experience at a quantifiably higher rate.
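The two findings each distinguish a percentage-point change from a relative lift, and it is easy to conflate them. This small sketch simply reproduces the arithmetic from the figures above:

```python
def lift(pre, post):
    """Return (percentage-point change, relative lift as a percent)."""
    return (post - pre) * 100, (post - pre) / pre * 100

pp, rel = lift(0.25, 0.32)  # checkout completion, pre vs post
print(f"checkout: +{pp:.0f} points, {rel:.0f}% relative lift")  # +7 points, 28%

pp, rel = lift(0.79, 0.83)  # browse-path success, pre vs post
print(f"browse:   +{pp:.0f} points, {rel:.0f}% relative lift")  # +4 points, 5%
```

The same percentage-point gain yields a much larger relative lift when the baseline is small, which is why the checkout change (from a 25% baseline) reads as a 28% lift while the browse change (from a 79% baseline) reads as only 5%.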

Parting Thoughts

In closing, as you face your own business needs of having to demonstrate return on investment with each of your online initiatives, a good rule of thumb to remember is that you can’t manage what you don’t measure. The Pre-test/Post-test methodology is an excellent tool to keep in your measurement toolbox for times when you are introducing a new online strategy (new look and feel, new messaging, usability enhancements) and you need to demonstrate the quantifiable impact of your approach.