DOI: 10.1145/2961111.2962594

Who Should Take This Task?: Dynamic Decision Support for Crowd Workers

Published: 08 September 2016

ABSTRACT

Context: The success of crowdsourced software development (CSD) depends on a large crowd of trustworthy software workers who register for tasks of interest and submit work in exchange for financial gain. A preliminary analysis of software worker behavior reveals an alarming task-quitting rate of 82.9%.

Goal: The objective of this study is to empirically investigate worker decision factors and provide better decision support in order to improve the success and efficiency of CSD.

Method: We propose a novel problem formulation, DCW-DS, and an analytics-based decision support methodology to guide workers in accepting offered development tasks. DCW-DS is evaluated using more than one year of real-world data from TopCoder, the leading CSD platform.
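
The paper's actual data schema is not given here; purely as a hedged illustration of how the (worker, task) registration history behind such a decision support method might be framed, the sketch below defines a hypothetical record layout. All field names and the example values are assumptions, not the TopCoder schema used in the study.

```python
# Illustrative framing only: a hypothetical record layout for the kind of
# (worker, task) registration history that an analytics-based decision
# support method could learn from. Field names are assumptions, not the
# actual TopCoder data schema used in the paper.
from dataclasses import dataclass

@dataclass
class RegistrationRecord:
    worker_id: str
    task_id: str
    past_win_rate: float        # worker's historical win ratio
    past_quit_rate: float       # worker's historical quit ratio
    open_registrations: int     # tasks the worker is currently registered for
    days_to_deadline: int       # time left on the offered task
    outcome: str                # e.g. "winner" or "quitter"

example = RegistrationRecord("w42", "t1001", 0.35, 0.60, 3, 14, "quitter")
```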

Results: Applying Random Forest based machine learning with dynamic model updates, we can predict whether a worker is a likely quitter with 99% average precision and 99% average recall. Similarly, we achieve 78% average precision and 88% average recall for the winner class. For workers who follow only the top three task recommendations, the average quitting rate drops below 6%.
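
The models, features, and evaluation protocol of the paper are not reproduced here; as a hedged sketch only, the code below trains a Random Forest on synthetic (worker, task) features, reports per-class precision and recall, and ranks a worker's open tasks by predicted win probability to pick a top three. The feature columns, labels, and ranking criterion are illustrative assumptions, not the authors' actual setup.

```python
# Hedged sketch only: a Random Forest quitter/winner classifier and a top-3
# task ranking step, on synthetic data. Features, labels, and the ranking
# criterion are assumptions for illustration, not the paper's actual method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical feature columns per registration:
# [past_win_rate, past_quit_rate, open_registrations, days_to_deadline]
X = rng.random((1000, 4))
y = rng.integers(0, 2, size=1000)      # 1 = quitter, 0 = winner (synthetic)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

print("quitter precision:", precision_score(y_test, pred, pos_label=1))
print("quitter recall:   ", recall_score(y_test, pred, pos_label=1))
print("winner precision: ", precision_score(y_test, pred, pos_label=0))
print("winner recall:    ", recall_score(y_test, pred, pos_label=0))

def recommend_top_tasks(model, task_features, task_ids, k=3):
    """Rank open tasks by the predicted probability of the 'winner' class (0)."""
    win_col = list(model.classes_).index(0)
    win_prob = model.predict_proba(task_features)[:, win_col]
    ranked = sorted(zip(task_ids, win_prob), key=lambda t: t[1], reverse=True)
    return ranked[:k]

open_tasks = rng.random((10, 4))       # hypothetical open-task feature rows
print(recommend_top_tasks(clf, open_tasks, task_ids=list(range(10))))
```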

Conclusions: Overall, the proposed method can be used to improve the total task success rate and to reduce the quitting rate of the tasks workers take on.

  • Published in

    ESEM '16: Proceedings of the 10th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement
    September 2016
    457 pages
    ISBN: 9781450344272
    DOI: 10.1145/2961111

    Copyright © 2016 ACM

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Qualifiers

    • research-article
    • Research
    • Refereed limited

    Acceptance Rates

    ESEM '16 Paper Acceptance Rate: 27 of 122 submissions, 22%. Overall Acceptance Rate: 130 of 594 submissions, 22%.
