A Study of Crowd Ability and its Influence on Crowdsourced Evaluation of Design Concepts
MICHIGAN UNIV ANN ARBOR DEPT OF MECHANICAL ENGINEERING
Crowdsourced evaluation is a promising method for evaluating attributes of a design that require human input, such as the maintainability of a vehicle. The challenge is to correctly estimate design scores using a massive and diverse crowd, particularly when only a minority of evaluators give correct evaluations. As an alternative to simple averaging, this paper introduces a Bayesian network approach that models the human evaluation process and estimates design scores while accounting for each evaluator's ability to assess the design. Simulation results indicate that the proposed method is preferable to averaging because it identifies the experts in the crowd, under the assumptions that (1) experts do exist and (2) only experts give consistent evaluations. These assumptions, however, do not always hold, as indicated by the results of a human study. Clusters of consistent yet incorrect human evaluators are shown to exist alongside the cluster of experts. This suggests that additional data, such as evaluators' backgrounds, are needed to isolate the correct cluster of experts for design evaluation tasks.
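The abstract does not specify the paper's Bayesian network, but the core idea it describes (weighting evaluators by estimated ability instead of averaging everyone equally) can be sketched with a simple iterative scheme. The sketch below is an assumption-laden illustration, not the paper's method: it simulates a crowd with a few low-noise "experts" and many high-noise "novices" (all counts and noise levels are hypothetical), then alternates between estimating item scores and per-evaluator noise variances, so reliable evaluators end up with more weight.

```python
import random

random.seed(0)

# Hypothetical crowd: 3 low-noise experts, 12 high-noise novices, 30 designs.
N_ITEMS, N_EXPERTS, N_NOVICES = 30, 3, 12
true_scores = [random.uniform(0.0, 10.0) for _ in range(N_ITEMS)]
sigmas = [0.1] * N_EXPERTS + [1.5] * N_NOVICES  # per-rater noise (assumed)
ratings = [[s + random.gauss(0.0, sig) for sig in sigmas] for s in true_scores]

def plain_average(ratings):
    """Baseline: treat every evaluator equally."""
    return [sum(row) / len(row) for row in ratings]

def ability_weighted(ratings, iters=20):
    """Alternate between estimating item scores and per-rater variances,
    then average ratings weighted by each rater's estimated precision."""
    est = plain_average(ratings)
    n_raters = len(ratings[0])
    for _ in range(iters):
        # Estimate each rater's noise variance against current score estimates.
        var = []
        for j in range(n_raters):
            v = sum((row[j] - e) ** 2 for row, e in zip(ratings, est))
            var.append(max(v / len(ratings), 1e-6))  # guard against zero
        # Re-estimate scores as precision-weighted averages.
        w = [1.0 / v for v in var]
        wsum = sum(w)
        est = [sum(row[j] * w[j] for j in range(n_raters)) / wsum
               for row in ratings]
    return est, var

def rmse(a, b):
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

est_plain = plain_average(ratings)
est_weighted, var = ability_weighted(ratings)
rmse_plain = rmse(est_plain, true_scores)
rmse_weighted = rmse(est_weighted, true_scores)
```

Because the simulated novices are noisy but unbiased, the weighted estimate beats plain averaging and the low-variance experts are identifiable. Note this toy setup cannot reproduce the failure mode the abstract reports: a cluster of evaluators who are consistent (low variance) yet systematically wrong would also earn high weight, which is exactly why the abstract argues for extra data such as evaluator backgrounds.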
- Surface Transportation and Equipment