Idea validation

Idea validation answers the fundamental question of product and service design:

Will people actually use it?

You might have a spine-tingling idea for a new service that looks like genius on paper – but will it actually attract people? Will it make you money or improve customer experience? Before you risk your budget on it, we can test the concept.

Why validate your ideas?

Ideas about what to put on websites or which services to create are underpinned by assumptions. But assumptions are sometimes false, and launching a service, or even a whole business, based on a shaky assumption is a costly mistake.

Idea validation means testing the assumption: answering that basic ‘will people actually use it’ question before we build your thing, helping you avoid your own costly mistake.

How we do it

We use agile testing methods to try out complex ideas in simple, quick ways on the people who matter, the end users. These experiments use ‘minimum viable products’ to gather data and feedback to help us evaluate whether your idea will actually meet your customers’ needs.

A Minimum Viable Product is a version of a product or service that allows us to extract the maximum learning with the minimum effort. That means creating the smallest, simplest version of an idea that can prove or disprove whether there’s a demand for it. There are several ways of doing this.

Minimum viable product options

Prototypes

We can create simplified, prototype versions of websites and services and then use these to carry out a variety of tests with users. Prototypes can be anything from a model of a product with a hand-sketched screen to a lo-fi version of software. Prototyping is an integral part of our User Interface Design and Service Design consultancy services.

Simulate the service manually

This is also known as the Wizard of Oz method – before building a technically complex and expensive service or product, we create an illusion of the end product by working it manually from ‘behind the curtain’.

Imagine a large DIY chain had the idea of increasing online sales by introducing an AI-powered advice chatbot on their website. They think it would increase conversions, but they don’t know for sure, and it would be expensive to build.

We could do something like this to try it out:

  1. Install an off-the-shelf live chat plugin on their site and use split testing to show it to a segment of customers (a simple sketch of this assignment step follows the list).
  2. Assign a person to pretend to be the bot for a trial period. The ‘wizard’ speaks to customers as the bot, conforming to an agreed set of rules that the bot would use.
  3. Track the results: do customers shown the bot buy more or less, or return products more or less often, for example? This test is drastically cheaper than building an actual AI bot, and it gives us valuable learning about the effect the service would have.
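
To make step 1 concrete, here is a minimal sketch of how visitors could be split, assuming each visitor already has a stable ID (for example from a first-party cookie). The hashing approach, the 20% exposure figure and the two helpers at the bottom are illustrative assumptions, not a fixed recipe.

```typescript
// Minimal sketch: decide whether a visitor sees the trial chat widget.
// Assumptions: each visitor has a stable ID (e.g. from a first-party cookie);
// the 20% exposure rate and the declared helpers are illustrative only.
function hashToUnitInterval(id: string): number {
  // FNV-1a hash mapped onto [0, 1), so the same visitor always gets the same value.
  let hash = 0x811c9dc5;
  for (let i = 0; i < id.length; i++) {
    hash ^= id.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return (hash >>> 0) / 2 ** 32;
}

function shouldShowChatBot(visitorId: string, exposure = 0.2): boolean {
  return hashToUnitInterval(visitorId) < exposure;
}

// Hypothetical site-specific helpers, standing in for real cookie and plugin code.
declare function getVisitorId(): string;
declare function loadChatWidget(): void;

// Visitors in the test bucket get the ‘wizard’-operated chat widget.
if (shouldShowChatBot(getVisitorId())) {
  loadChatWidget();
}
```

Because the assignment is deterministic, a returning visitor always lands in the same group, which keeps the comparison between the two segments clean.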

Smoke Tests

A smoke test involves displaying information about a new service and tracking demand, without actually offering the service itself.

Imagine our DIY chain again; we might put a link on their website to ‘Live DIY Advice’. When a customer clicks the link, a message tells them the service isn’t available yet and asks what they wanted to use the service for. We are able to track demand for the service via the number of click-throughs, get information about where on the site customers are clicking through from, and gather feedback about the kind of information customers want.
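
As a rough sketch, the link handler for a smoke test only needs to record the click and where it came from before showing the ‘not available yet’ message. The /api/track endpoint, the payload fields and the prompt helper below are assumptions for illustration, not a specific product’s API.

```typescript
// Minimal smoke-test sketch: record demand for a service that doesn't exist yet.
// The /api/track endpoint and its payload shape are illustrative assumptions.
async function recordSmokeTestClick(feature: string): Promise<void> {
  await fetch("/api/track", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      event: "smoke_test_click",
      feature,                                // e.g. "live-diy-advice"
      referrerPath: window.location.pathname, // which page the click came from
      timestamp: new Date().toISOString(),
    }),
  });
}

// Hypothetical dialog asking what the visitor wanted the service for.
declare function showComingSoonPrompt(): void;

document.querySelector("#live-diy-advice-link")?.addEventListener("click", async (event) => {
  event.preventDefault();
  await recordSmokeTestClick("live-diy-advice");
  showComingSoonPrompt();
});
```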

This method can be used to test anything from new features to entire business concepts. Dropbox used it to demonstrate demand for their file-sharing service – instead of building a costly prototype, they made a video apparently showing their non-existent service working. Their beta waiting list shot from 5,000 people to 75,000 people overnight.

Sell it First

When you’re considering charging for a product or service, it’s essential to know how much people will pay for it. This experiment uses a version of a smoke test, but includes pricing information.

For example, we could split-test an online DIY support service using landing pages offering the option to register interest at different price points: free, £3.99 per month and £5.99 per month. The responses would demonstrate demand at each of the different price points.
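
A minimal sketch of how that could work, assuming each visitor has a stable ID: the price variant is assigned deterministically so the same visitor always sees the same price, and any registration of interest is logged against that variant. The variant labels and the recordEvent helper are illustrative assumptions.

```typescript
// Sketch of a pricing smoke test: each visitor is shown one price point, and any
// 'register interest' sign-up is logged against that variant.
type PriceVariant = "free" | "£3.99/month" | "£5.99/month";
const VARIANTS: PriceVariant[] = ["free", "£3.99/month", "£5.99/month"];

// Hypothetical analytics helper.
declare function recordEvent(name: string, data: Record<string, string>): void;

function assignPriceVariant(visitorId: string): PriceVariant {
  // Deterministic assignment: the same visitor always sees the same price point.
  let sum = 0;
  for (const ch of visitorId) sum += ch.codePointAt(0) ?? 0;
  return VARIANTS[sum % VARIANTS.length];
}

function onRegisterInterest(visitorId: string): void {
  // Comparing sign-up rates per variant shows demand at each price point.
  recordEvent("register_interest", { variant: assignPriceVariant(visitorId) });
}
```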

Slice

Creating a new service often involves building features or versions for many different customer segments, making the process complex and costly. Slice testing avoids this complexity by testing the whole concept on just one cross-section of a customer base.

If our DIY chain wanted to branch out and offer a directory site for tradespeople, with bespoke pages for a wide range of trades from plumbers to interior designers, we might launch a version aimed only at interior designers, with just their version of the portfolio available.

We could use this to learn how users interact with the service as a whole, allowing us to make changes before scaling up.

Split Testing (Multivariate & A/B)

It’s often hard to make choices during the design process. Should that button go at the top of the page or the bottom? Should that box be orange or green? Will your users respond better to this tagline, or that one? Which of these images is more powerful? These choices are often small but crucial, meaning the difference between persuading a user to do what you want and losing them along the way. Trusting your instinct alone risks making the wrong choice and losing conversions. Split testing is a way of handing the choice over to your users themselves.

How we do it

We validate multiple design options by splitting them between different groups of users and comparing the results.
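
As a simple sketch of the comparison step, assuming visitors and conversions have already been counted per variant (the figures below are made up for illustration): conversion rates can be compared directly, with a basic two-proportion z-test as a sanity check that the difference isn’t just noise.

```typescript
// Minimal sketch of comparing two design variants once visitors and conversions
// have been counted for each; the figures below are made up for illustration.
interface VariantResult {
  name: string;
  visitors: number;
  conversions: number;
}

function conversionRate(v: VariantResult): number {
  return v.conversions / v.visitors;
}

// Two-proportion z-test: a rough check that the difference between variants is
// likely to be more than noise before acting on it.
function zScore(a: VariantResult, b: VariantResult): number {
  const pooled = (a.conversions + b.conversions) / (a.visitors + b.visitors);
  const standardError = Math.sqrt(pooled * (1 - pooled) * (1 / a.visitors + 1 / b.visitors));
  return (conversionRate(a) - conversionRate(b)) / standardError;
}

const control: VariantResult = { name: "orange button", visitors: 4200, conversions: 180 };
const variant: VariantResult = { name: "green button", visitors: 4150, conversions: 215 };

console.log(conversionRate(control), conversionRate(variant), zScore(variant, control));
// A |z| above roughly 1.96 suggests the difference is unlikely to be chance alone.
```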