This paper presents RiPPLE (Recommendation in Personalised Peer-Learning Environments), a platform that recommends learning activities to students, personalised to each student's knowledge state and drawn from a pool of crowdsourced activities created by educators and by the students themselves. RiPPLE integrates insights from crowdsourcing, the learning sciences, and adaptive learning, aiming to narrow the gap between these large bodies of research while providing a practical, platform-based implementation that instructors can easily adopt in their courses. The paper provides a design overview of RiPPLE, which can be used as a standalone tool or embedded in any learning management system (LMS) or online platform that supports the Learning Tools Interoperability (LTI) standard. The platform has been evaluated in a pilot with 453 students in an introductory course at The University of Queensland. Initial results suggest that use of RiPPLE led to measurable learning gains and that students perceived the platform as beneficial to their learning.
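The core mechanism described above — matching activities from a crowdsourced pool to a student's estimated knowledge state — can be sketched as follows. This is a hypothetical illustration, not RiPPLE's actual recommendation algorithm: the per-topic mastery representation, the gap-plus-difficulty scoring rule, and all names are assumptions made for the example.

```python
# Hypothetical sketch of knowledge-gap-based recommendation.
# Each student has an estimated mastery level per topic (0..1);
# activities in the crowdsourced pool are tagged with a topic and
# a difficulty (0..1). Activities targeting the student's weakest
# topics, at a difficulty close to their current mastery, rank first.

from dataclasses import dataclass

@dataclass
class Activity:
    id: str
    topic: str
    difficulty: float  # 0 = easy, 1 = hard

def recommend(mastery: dict, pool: list, n: int = 3) -> list:
    """Return the n activities best matched to the student's knowledge gaps."""
    def score(a: Activity) -> float:
        m = mastery.get(a.topic, 0.0)
        gap = 1.0 - m                       # larger gap -> higher priority
        fit = 1.0 - abs(a.difficulty - m)   # prefer difficulty near mastery
        return gap + fit
    return sorted(pool, key=score, reverse=True)[:n]

mastery = {"loops": 0.8, "recursion": 0.3}
pool = [
    Activity("q1", "loops", 0.7),
    Activity("q2", "recursion", 0.35),
    Activity("q3", "recursion", 0.9),
]
print([a.id for a in recommend(mastery, pool, n=2)])  # "recursion" item near the student's level ranks first
```

In a deployed system the mastery estimates would come from a learner model updated as students answer questions (e.g., an Elo-style or Bayesian update), rather than being supplied as literals.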
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.