Beta testing considerations


In terms of beta testing a restaurant review site:

a) where would be the best place to find beta testers,

b) generally how many beta testers would be sufficient,

c) what are important aspects to focus on during beta testing,

d) what methods of measurement do you use, and

e) what feedback would be most important to obtain?

If any clarification is needed I'm happy to provide more specific details.

Thank you

Testing Beta

asked Jun 3 '11 at 05:59
509 points

1 Answer


This question has a lot of components, but I'll take a stab at it.

a) Recruitment for a restaurant review site beta is a little tricky. Generally speaking, there are many ways to approach it. You could use sites like LaunchRock (I think you saw my answer on this yesterday) to build some awareness socially. You could also work on engaging with bloggers, offering them invite codes they can give away to their readers. And you could canvass discussion forums and other places online where potential users gather to discuss food and restaurants. All of these can help you find beta testers.

The additional hurdle you're facing with your product is the network effect. This principle says that the value of some goods or services depends heavily on the number of other people who also use them. In the case of a restaurant review site, unless you have a large number of people supplying reviews within a geographic area, your review coverage will be sparse and the site's usefulness limited. Very location-specific startups like this often do localized launches, so you might want to target a single geographic area for all of your beta testers, so that the reviews they write during the test help kick-start the network effect.

b) This answer follows on from the first. In a normal beta test, you might need only 20 active testers to feel confident that product quality is where it needs to be and that you're getting enough feedback. Here, you'll want to target many more testers so you can gather more reviews and build a stronger sense of community.

I have to throw in my usual warning, though, about beta participation. You'll want to recruit roughly 3 times as many people as you think you need, because participation rates in beta tests are often surprisingly low.
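To make that sizing rule concrete, here's a quick sketch. The factor of 3 is just the rule of thumb above; the multiplier you actually use is a judgment call based on how engaged you expect your recruits to be:

```python
# Rough beta recruitment sizing: participation rates are often
# surprisingly low, so recruit a multiple of the active testers
# you actually need.

def recruits_needed(active_testers_wanted, overrecruit_factor=3):
    """Estimate how many people to recruit so that, after drop-off,
    you still end up with the desired number of active testers.
    The default factor of 3 is a rule of thumb, not a hard figure."""
    return active_testers_wanted * overrecruit_factor

print(recruits_needed(20))   # 60 recruits for 20 active testers
print(recruits_needed(200))  # 600 for a larger, community-building beta
```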

c) The important aspects to focus on include:

  • Quality/bugs
  • Usability/UI/UX
  • Feature coverage/feature requests
  • Competitive analysis/market research

d) Measuring and tracking is where things get messy. There are some really good tools out there for managing beta tests and collecting user feedback (disclosure: I work for a company that does this), but these tools are usually out of reach for bootstrapped startups. So what you find yourself up against is staying organized while using a bunch of different tools that weren't designed with beta testing in mind. It's not an impossible task, but it's a pain, which is one reason why many web betas focus more on generating interest than on actual testing.

You'll probably want to consider something like Zendesk or Get Satisfaction for collecting in-site feedback. You'll also want some way of sending out surveys (Google Apps is supposed to have an interesting way of embedding forms into emails, though I haven't tried it personally) and a way of assigning tasks. Without dedicated tools, these are usually tracked via email and spreadsheets. Spreadsheets have their benefits (who doesn't love pivot tables?), but the manual nature of all this is tedious, and it's easy for things to fall through the cracks.
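If you do start with spreadsheets and email, even a tiny script can keep incoming feedback organized in the meantime. Here's a minimal sketch, with illustrative categories and fields that aren't tied to any of the tools above; note the browser field, which matters for reproducing bugs:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class FeedbackItem:
    tester: str
    category: str   # e.g. "bug", "usability", "feature-request"
    browser: str    # important for reproducing bug reports
    note: str

# Items as they might arrive from surveys or in-site feedback forms
items = [
    FeedbackItem("alice", "bug", "Firefox 4", "Review form won't submit"),
    FeedbackItem("bob", "usability", "Chrome 11", "Search filters hard to find"),
    FeedbackItem("carol", "bug", "IE 8", "Map widget doesn't render"),
]

# A pivot-table-style summary: counts per category
summary = Counter(item.category for item in items)
print(summary)  # Counter({'bug': 2, 'usability': 1})
```

Even this much structure makes it easier to spot which categories are piling up before you graduate to a dedicated tool.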

If you're working with a team, you'll probably be using some sort of bug tracking software, which will help you stay on top of bug-related feedback coming in through your beta.

When it comes to UI/UX, that's not really my area of expertise, but I do like Crazy Egg for testing and validating design ideas and for seeing how your actual users behave. You could also experiment with Google Analytics' in-page analytics, though it isn't quite as helpful.

Overall, I'm not sure what you wanted to measure specifically, but if you had other things in mind, let me know.

e) To me, this feels like a repeat of (c). Could you clarify?

answered Jun 3 '11 at 08:25
Adam Wright
241 points
  • I should probably add that for tracking bugs during your beta, it'll be important to identify what browser is being used. – Adam Wright 10 years ago
  • thanks for your response. There's a lot of valuable info in there. We are using LaunchRock, will be focusing on one major North American city through beta and launch, and will be expanding to other major cities slowly. You're absolutely right re: the network effect. The intended difference between (e) and (c) is that user feedback is obviously one big part of beta testing. However, which pieces of info are valuable to get as feedback? This is asked more so to give us a direction in our surveys, questionnaires, and discussions with the beta testers. – Sam 10 years ago
  • Sorry for the delay—I've been traveling. What I'd suggest is an exercise that targets your thinking on possible feedback. To start, make a list of all the things you're certain of with your product (e.g., killer features you nailed, bugs you're sure you fixed, etc.). You want to test all those because a lack of objectivity can come back to haunt you. Next, make a list of the things you hope you've done successfully (e.g., features/ux that incrementally improve on competitors, other bugs, etc.). Finally, you want a list of the things you feel uncertain of. Hope the brainstorming helps! – Adam Wright 10 years ago
