Advice for conducting a pilot


I have a chance to conduct an on-site pilot of my desktop software. How should I prepare for the pilot? What factors should I consider? What are the best practices for conducting a pilot?


asked Oct 22 '09 at 00:51
D Thrasher
894 points

2 Answers


In your pilot, are you testing for usability or just looking for system bugs? If you include usability, there is a lot to consider. More formal information is available in this Wikipedia article, but here are a few points off the top of my head:

  1. How many testers will you have? If this is the first of many test sessions, you do not want more than 5-10 people, according to Jakob Nielsen.
  2. What tasks are you looking to test? Make sure you have a script/task list for testers to follow that closely mirrors what the end users will be doing on a daily basis; otherwise you will not get useful feedback.
  3. How are you going to monitor the software usage? There are many usability-testing software packages that track keystrokes, clicks, etc., and that pair with a screen-mounted webcam to sync the video feed with the key/click log. This is great for usability testing. If you instead want people to walk around and ask questions, you will need to train them first. There are also companies/consultants that specialize in this sort of thing; a Google search should bring up a few in your area.
  4. What physical environment will your testers be in? It should be comfortable and relatively free of distractions.
  5. Who from your team will be present at the test?
  6. What level of guidance over and above the task list will you provide?
  7. When bugs are encountered, how will you be logging them?
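
For point 7, one lightweight option is a shared, append-only log that observers fill in during the session. Here is a minimal sketch in Python; the file name, field names, and severity values are my own assumptions, not anything prescribed by a particular tool:

```python
import csv
from datetime import datetime, timezone

LOG_FIELDS = ["timestamp", "tester", "task", "severity", "description"]

def log_issue(path, tester, task, severity, description):
    """Append one observed issue to a CSV log, stamped with UTC time."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if f.tell() == 0:          # new file: write the header row first
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "tester": tester,
            "task": task,
            "severity": severity,
            "description": description,
        })

# Example: an observer notes a crash during task 3 of the test script.
log_issue("pilot_issues.csv", "tester-04", "task-3", "high",
          "App crashed when saving with an empty filename")
```

Tagging each issue with the tester and the task from your script makes it easy afterwards to see which tasks generated the most problems.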

Despite the URL pointing to a US Government site, this page also has relevant usability testing background information.

answered Oct 22 '09 at 02:06
Rob Allen
631 points
  • This is a great checklist. The link looks promising as well. – D Thrasher 14 years ago
  • And in answer to your question, we'll be looking for system bugs as well as usability with a small group of testers. I don't have the rest of the details yet! – D Thrasher 14 years ago


Rob makes some excellent points. Pilots can be tricky. The best advice I ever got on conducting pilots (or any kind of comparison experiment) was to do a Design of Experiments (DOE).

DOE is pretty formal and uses statistics to compare different treatments. For your pilot (experiment), you probably don't need all the fancy math. What you do need is to work through these steps:

  1. Define the present state. This goes a long way in figuring out what you might affect.
  2. Define the desired state, with your solution in place. This has to be quantifiable: saving X minutes, reducing defects, or whatever. Get agreement on what success is.
  3. Define metrics to measure.
  4. Baseline the present state and confirm the measured metrics.
  5. Apply the treatment and measure the metrics.
  6. Analyze the results.
  7. Repeat as necessary with different treatments or measurements.

This is clearly a shorter list than the whole method, but it is a good snapshot of how to go about proving that your solution adds value. There may be debate over what counts as "hard" savings as opposed to "soft" savings. Have those discussions up front and drive toward a definition of success that everyone agrees on.
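
To make steps 4-6 concrete, here is a small sketch of the baseline-vs-treatment comparison in Python. The numbers are invented task-completion times in minutes, and "percent time saved" stands in for whatever success metric you actually agree on:

```python
import statistics

def summarize(name, samples):
    """Compute and print the mean and standard deviation of one data set."""
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    print(f"{name}: mean={mean:.1f} min, stdev={stdev:.1f} min (n={len(samples)})")
    return mean

# Step 4: baseline measurements of the present state (hypothetical data).
baseline = [12.4, 11.8, 13.1, 12.9, 12.2, 13.5]
# Step 5: the same metric measured after applying the treatment (new software).
treatment = [9.8, 10.2, 9.5, 10.9, 9.9, 10.4]

base_mean = summarize("baseline", baseline)
treat_mean = summarize("treatment", treatment)

# Step 6: analyze the results against the agreed definition of success.
savings = (base_mean - treat_mean) / base_mean * 100
print(f"time saved: {savings:.0f}%")
```

Even this much gives you a defensible before/after comparison; if the difference is small relative to the spread in the data, that is your cue to bring in the more formal DOE statistics.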

answered Oct 22 '09 at 02:29
Jarie Bolander
11,421 points
  • Excellent point. We definitely need to define the criteria for a successful test and have the means to gather metrics. – D Thrasher 14 years ago
