
Investigating participants’ attributes for participant estimation in knowledge-intensive crowdsourcing: a fuzzy DEMATEL based approach

Published in: Electronic Commerce Research

Abstract

In knowledge-intensive crowdsourcing (KI-C), estimating suitable participants is essential for ensuring the quality of crowdsourced task outcomes. Participants’ attributes (PAs) are the main decision factors, serving as criteria for evaluating and estimating potential participants. In practice, multiple interdependent PAs affect participant estimation, so identifying these PAs and measuring their interrelationships is an essential first step in estimating participants in KI-C. Accordingly, this study first identifies PAs for participant estimation in KI-C by integrating the PAs reported in related academic studies with those used on practical KI-C sites. It then develops an integrated method combining the 2-tuple linguistic model with the decision making trial and evaluation laboratory (DEMATEL) method to describe and measure the causal relationships among the identified PAs. The identified PAs offer a common list of criteria for participant estimation in KI-C and help enrich studies in this field. Moreover, measuring the PAs’ relationships in terms of causality and prominence can help requesters and managers of KI-C sites understand and manage those PAs in practice.
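The causality and prominence measures mentioned above come from the standard DEMATEL computation: a direct-influence matrix of expert judgments is normalized, the total-relation matrix T = D(I − D)⁻¹ is derived, and each factor's row sum R and column sum C give its prominence (R + C) and causality (R − C). The sketch below illustrates this core calculation on an invented three-attribute example; the attribute scores and the row-sum normalization variant are assumptions for illustration, not the paper's actual data or exact procedure (the paper additionally fuzzifies judgments via the 2-tuple linguistic model before this step).

```python
import numpy as np

def dematel(A):
    """Plain (crisp) DEMATEL: returns total-relation matrix,
    prominence (R + C), and causality (R - C)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    # Normalize by the largest row sum (a common DEMATEL scaling choice).
    D = A / A.sum(axis=1).max()
    # Total-relation matrix T = D (I - D)^(-1).
    T = D @ np.linalg.inv(np.eye(n) - D)
    R = T.sum(axis=1)  # influence each attribute exerts on the others
    C = T.sum(axis=0)  # influence each attribute receives
    return T, R + C, R - C

# Toy direct-influence matrix for three hypothetical participant
# attributes (e.g. skill, reputation, motivation), scored 0-4.
A = [[0, 3, 2],
     [1, 0, 3],
     [2, 1, 0]]
T, prominence, causality = dematel(A)
# High prominence marks the most important attributes; a positive
# causality value marks a net cause, a negative one a net effect.
```

Attributes with positive causality drive the others and deserve priority when a requester intervenes; prominence ranks overall importance, which is how the paper's results are meant to be read by KI-C site managers.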



Acknowledgements

The authors would like to thank the editor and three anonymous reviewers for their constructive and helpful comments on earlier versions of this article. Additionally, this work was supported by the National Natural Science Foundation of China (71802002, 71671001, 71801002, 71701003), Anhui Education Department (KJ2019A0137).

Corresponding author

Correspondence to Xuefeng Zhang.


Cite this article

Zhang, X., Gong, B., Cao, Y. et al. Investigating participants’ attributes for participant estimation in knowledge-intensive crowdsourcing: a fuzzy DEMATEL based approach. Electron Commer Res 22, 811–842 (2022). https://doi.org/10.1007/s10660-020-09408-1
