Also known (or better known) as “A/B/n Testing,” A/B Testing is the process of scientifically testing different digital components in a controlled manner: you create one or more variations of a particular element, expose users randomly (and typically evenly) to those variations, and measure what differences, if any, the variations produce in user behavior. You can A/B Test individual web components – like buttons, copy, or layout – whole pages, or even other digital experiences, like email CTAs. A/B Testing is the basic building block of many other Conversion Optimization tools, like Multivariate Testing, Multi-Page Testing, and Dynamic Website Personalization.
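The random-and-consistent exposure above is often implemented by hashing a user ID into a variant bucket. Here is a minimal sketch (the experiment name, user IDs, and variant labels are hypothetical):

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants: list[str]) -> str:
    """Deterministically assign a user to one variant.

    Hashing the user ID together with the experiment name yields a
    roughly uniform bucket, so exposure is random across users but
    stable for any one user across visits.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# A/B/n example: split users evenly across a control and two challengers.
variants = ["control", "green_button", "red_button"]
print(assign_variant("user-42", "cta-test", variants))
```

Because the assignment is deterministic, a returning user always sees the same variation, which keeps the measured behavior attributable to that variation.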
A quantitative or qualitative method that asks users to organize items into groups and assign categories to each group. This method helps create or refine the information architecture of a site by exposing users’ mental models.
Analyzing the record of screens or pages that users click on and see as they use a site or software product; it requires the site to be instrumented properly or the application to have telemetry data collection enabled. (Web Analytics, Click-through, Scroll Tracking, Heatmaps, etc.)
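Once the instrumentation is in place, a common first analysis is counting page-to-page transitions to surface frequent paths. A minimal sketch with made-up session data:

```python
from collections import Counter

# Hypothetical click log: the ordered pages seen in each session.
sessions = [
    ["/home", "/pricing", "/signup"],
    ["/home", "/blog", "/home", "/pricing"],
    ["/home", "/pricing", "/signup"],
]

# Count page-to-page transitions to see common paths through the site.
transitions = Counter(
    (a, b) for pages in sessions for a, b in zip(pages, pages[1:])
)
print(transitions.most_common(1))  # [(('/home', '/pricing'), 3)]
```

The same tally, broken down by variant or segment, is the raw material for click-through rates and funnel drop-off analysis.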
A researcher shares an approximation of a product or service that captures the key essence (the value proposition) of a new concept or product in order to determine if it meets the needs of the target audience; it can be done one-on-one or with larger numbers of participants, and either in person or online.
Shorthanded as CRO, Conversion Rate Optimization is the practice of using a number of tactics – like website experimentation and dynamic website personalization – to improve digital experience and/or performance scientifically and methodically. CRO typically requires qualitative and quantitative customer insights, digital analytics, and a tool to experiment and test in a controlled, scientific manner.
Also known as Voice of Customer (VoC) Feedback, Customer Feedback is an important part of the Customer Experience discipline: because we design experiences with customers in mind, getting their feedback – good, bad, or otherwise – is critical. Typically Customer Feedback is gathered as open-ended and/or closed-ended responses through a feedback link, web form, customer email, live chat dialog, phone transcript, and the list goes on.
Also called preference tests, these are studies in which participants are offered different visual design alternatives and asked to provide their preference and describe why, often associating each alternative with a set of attributes selected from a closed list; these studies can be both qualitative and quantitative.
Participants are given a mechanism (diary or camera) to record and describe aspects of their lives that are relevant to a product or service, or simply core to the target audience; diary studies are typically longitudinal and can only be done for data that is easily recorded by participants.
A website UX tactic wherein web content is automatically – and usually without notice – personalized to increase relevance in a user's website experience. This is typically done implicitly (without the user actively providing feedback on how the site should be personalized) and can range widely, from region-based content to tailored product recommendations, or even inserting the user's name or other information into the web experience. The hardest part of Dynamic Website Personalization is usually determining where to start.
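A common place to start is a small set of implicit rules driven by signals the site already has. A minimal sketch (the signals, copy, and rules here are all hypothetical):

```python
def personalize_hero(region: str, referrer: str) -> str:
    """Pick hero copy implicitly from signals the site already has,
    with no active input from the user."""
    if referrer and "pricing" in referrer:
        return "Compare plans"          # returning shopper: push conversion
    if region == "EU":
        return "Free EU-wide shipping"  # region-based content
    return "Welcome"                    # default experience

print(personalize_hero("EU", ""))  # "Free EU-wide shipping"
```

Rules like these are easy to reason about and A/B test against the default experience before investing in heavier recommendation machinery.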
A survey in which participants are recruited from an email message.
Researchers meet with and study participants in their natural environment, where they would most likely encounter the product or service in question. These can be especially useful for products or services that are often bound to a specific setting, like household robot vacuums or in-store kiosks. If done out of the home, this could be achieved with pre-scheduling participants or intercept recruiting.
Eyetracking is a highly sophisticated method for understanding where your website or application is grabbing and keeping your users' attention. Eyetracking technology is configured to precisely measure where participants look as they perform tasks or interact naturally with websites, applications, physical products, or environments.
Groups of 3–12 participants are led through a discussion about a set of topics, giving verbal and written feedback through discussion and exercises. These gatherings can be a great way of getting users of different levels of brand affinity, awareness, and preference into a room to see what they think, understand how they use the product, and get live feedback. Often Focus Groups are recorded – both audio and video – to get additional context clues like gestures or facial expressions or to help clarify who said what and when. Careful moderation is required to ensure all participants are providing value and to reduce groupthink mentalities.
A survey module that is triggered during a user's session on a website or application; it is typically favored when trying to get insights from web visitors who may or may not be customers.
Participants are brought into a lab (which could be as simple as an office or conference room) to meet one-on-one with a UX researcher and are given a set of scenarios that lead to tasks and usage of specific interest within a product or service. This can be extremely useful in watching and understanding how and why users flow through a website, navigate an app, or otherwise use the internet. In addition to the objectives outlined, this gives UX researchers an opportunity to ask follow-up questions about how and why a user does what they do, and to gain other customer-specific insights. Often the "ah-ha's" are found off-script.
Multi-Page Tests are like an A/B Test, but on a macro level: instead of testing individual components on one page, app, or email, Multi-Page Tests are a way of testing entirely different user flows across several pages in a controlled, scientific manner. They can also build a baseline for understanding personalized user flows and for testing new versions of critical paths, like the sign-up or checkout flow. Depending on the granularity of the changes, we may or may not be able to infer exactly which changes are contributing to the changes in behavior, but depending on the goals of the experiment this may or may not be a concern.
Multivariate Tests (or MVTs for short) are like A/B Tests in that they are a tactic for scientifically testing different digital components in a controlled manner, but in MVTs you create several variations of several elements on a page as a way of understanding how those elements interact, and you measure all the different combinations of changes. They take a lot more energy and planning to build and more time and traffic to run, but are a worthwhile investment for measuring and understanding the combined effects of different experiences.
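The reason MVTs need more traffic becomes obvious when you enumerate the cells: a full-factorial test covers every combination of every variation. A small sketch with hypothetical elements:

```python
from itertools import product

# Hypothetical page elements, each with a few variations.
headline = ["Save time", "Save money"]
button_color = ["green", "red", "blue"]
hero_image = ["photo", "illustration"]

# A full-factorial MVT tests every combination, so the number of cells
# is the product of the variation counts: 2 * 3 * 2 = 12.
cells = list(product(headline, button_color, hero_image))
print(len(cells))  # 12 combinations to split traffic across
```

With twelve cells instead of two, each combination receives roughly a twelfth of the traffic, which is why MVTs take longer to reach a trustworthy sample size.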
A UX design method in which participants – typically actual or potential customers – are given design elements or creative materials in order to construct their ideal website or app experience in a concrete way that expresses what matters most to them and why. This can help UX designers break through "creator's blindness," and it is a way to dispel assumptions about – and better understand – the end user's needs and preferences in the experience in question.
Also known as a clickable prototype, these UX design outputs are typically made by grouping wireframes together with click, tap, and other UX functionality, allowing clients, usability testers, or other stakeholders to emulate the designed website user flow. These are paramount for understanding and refining an intended user experience design and are typically met with resounding "ah-ha's" from all involved. But this comes with a cost: the inclusion of interaction and high-fidelity nature of prototypes makes them a high-effort deliverable, so revisions should be kept to a minimum and are therefore typically late-stage deliverables.
Usability studies in which participants are provided with a prompt and a script and asked to conduct a usability study remotely, without moderation from a proctor or UX researcher, at their convenience. Typically this is achieved with remote usability study software, which can provide the required audio and screen recording and streamline participant recruiting. However, the lack of moderation can result in participants veering off-course or failing to understand and perform the tasks, resulting in poor-quality or even unusable studies, so careful planning and approach are critical.
Session replay is a video-like reproduction of a user’s interactions on a website or web application, exactly or as close as possible to how the user actually experienced it. Session replay tools capture things like mouse movements, clicks, typing, scrolling, swiping, and tapping, and are a fantastic way to add color to otherwise flat data, letting you observe where customers are clicking, struggling, and/or falling out of the funnel.
Similar to A/B/n testing, Split Page Tests (also known as Split URL Tests) are macro-level: you evenly split traffic between or among two or more different page versions and measure the overall impact on the shared page goals. This can be a great way to pit two designs against one another in a winner-take-all showdown. Or if you want to confidently burn a page down and start over, do it strategically with a Split Page Test. Know this: whenever we test multiple pages we become less sure of exactly which elements are contributing to the changes in behavior – just that there are changes and we’ve observed them.
A UX researcher meets with a participant one-on-one to discuss in depth what the participant thinks about the topic in question, usually a specific product, service, or experience. Without specificity, the quality of feedback will similarly be less specific and therefore less useful.
A skeletal sketch of a website or app meant to convey features, functionality, and layout. Wireframes are intentionally low fidelity to keep them low effort as revisions and iterations are welcome if not required for good UX design. That said, the low-fidelity, box-and-line look typically means they are not ideal for usability testing or user feedback, as it may create more confusion than insights.