Mineta Analyzes Passenger Survey Options
New research from the Mineta Transportation Institute (MTI) provides insight into the most effective passenger survey methods for larger transit agencies. The Federal Transit Administration (FTA) now requires these surveys every five years to ensure participation from under-represented populations. But which survey methods – paper, online, or computer tablet – are most reliable, efficient, and cost-effective? The new report, Comparing Data Quality and Cost from Three Modes of On-Board Transit Passenger Surveys, tests each method and presents the results. The principal investigator was Hilary Nixon, PhD, who worked with Asha Weinstein Agrawal, PhD, Stephen Granger-Bevan, and Gregory Newmark, PhD. The report is available for free download at http://transweb.sjsu.edu/project/1206.html
“Because of the FTA directive, transit agencies have an even stronger interest than before in identifying which survey methods minimize costs while still gathering high-quality data,” said Dr. Nixon. “They have no reasonable way to determine the best method, so MTI took on that task. Although it might seem that online surveys could be a good method in a connected society, we found that paper surveys may still be the best option for most survey types.”
To design an appropriate experimental survey for this study, the research team interviewed both transit survey experts and transit agency staff who manage such surveys.
The 81-page report includes several tables that detail the results from each tested method, as well as copies of the survey instruments. The study findings suggest several general recommendations for current survey practice:
- Online surveys administered via an invitation distributed on the transit vehicle are not a good option.
- The old-fashioned, low-tech paper survey may still be the best option for many bus passenger surveys.
- Changes in survey results that accompany changes in survey methods should be interpreted with caution.
- Using a new survey method, especially one relying on more complex technologies, may create unexpected glitches.
The analysis focused on several key questions:
- Did return and completion rates vary by survey mode?
- Did the percentage of respondents skipping or providing unusable information for particular questions or question types vary by survey mode?
- Did respondents' socio-demographic characteristics vary by survey mode?
- Did respondents' reported travel behavior vary by survey mode?
- Did customer satisfaction levels vary by survey mode?
- What was the cost per complete survey by mode?