2023-09-14, 09:20– (Asia/Tokyo), Terrsa Hall B
It is common for educational institutions to collect surveys on course modules so that students can provide open-ended qualitative feedback to course managers and lecturers. However, it is very difficult to make sense of students’ needs from a pedagogical point of view when reading hundreds of seemingly diverse student responses.
Generally, it is more useful to have qualitative feedback tagged against a pedagogy-driven taxonomy that is well understood by educators. Such a taxonomy includes sub-topics for assessments, projects, practicals, assignments, content, teaching plan, pace, difficulty and student preferences. For example, a sub-topic under student preferences could capture dissatisfaction with a specific methodology, such as e-learning for flipped classrooms. Some students provide lengthy feedback that must be tagged with multiple categories from the taxonomy.
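One way to picture such a taxonomy is as a mapping from topics to sub-topics. A minimal sketch in Python follows; only the top-level topics come from the text above, and the sub-topic names are purely illustrative assumptions:

```python
# Hypothetical pedagogy-driven taxonomy. Top-level topics follow the
# abstract; the sub-topics under each are invented for illustration.
TAXONOMY = {
    "assessments": ["exam difficulty", "grading clarity"],
    "projects": ["workload", "group dynamics"],
    "practicals": ["lab equipment", "hands-on time"],
    "assignments": ["deadlines", "instructions"],
    "content": ["relevance", "depth"],
    "teaching plan": ["structure", "sequencing"],
    "pace": ["too fast", "too slow"],
    "difficulty": ["too hard", "too easy"],
    "student preferences": ["e-learning for flipped classrooms",
                            "in-person tutorials"],
}

def flatten(taxonomy):
    """Return (topic, sub_topic) pairs -- the label set a tagger assigns.

    A single lengthy response may receive several of these pairs,
    matching the multi-category tagging described above.
    """
    return [(topic, sub) for topic, subs in taxonomy.items() for sub in subs]
```

Keeping the taxonomy as plain data makes it easy for educators to review and revise without touching the tagging code.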
This paper explores the use of Large Language Models (LLMs) with few-shot learning methodologies to automatically tag students' qualitative feedback. LLMs such as GPT can attain good results with task-agnostic, few-shot learning. This means that, relative to non-LLM approaches, fewer sample survey responses are required for each topic in the taxonomy for the LLM to learn. The LLM can be further improved in future learning iterations, without the need for human-driven tagging, by deploying active learning strategies such as BALD (Bayesian Active Learning by Disagreement).
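The BALD acquisition rule mentioned above scores each unlabelled response by how much the model's stochastic predictions disagree with one another: the entropy of the averaged prediction minus the average per-pass entropy. A minimal numerical sketch, assuming per-pass class probabilities (e.g. from MC dropout) are already available:

```python
import numpy as np

def bald_scores(mc_probs):
    """BALD acquisition scores from T stochastic forward passes.

    mc_probs: array of shape (T, N, C) -- T passes, N responses,
    C taxonomy labels, each row a probability distribution.
    Returns one score per response; high scores mark responses
    worth labelling next, because the passes disagree on them.
    """
    mean_p = mc_probs.mean(axis=0)  # (N, C) averaged prediction
    # Entropy of the mean prediction (total predictive uncertainty)
    h_mean = -np.sum(mean_p * np.log(mean_p + 1e-12), axis=1)
    # Mean entropy across passes (uncertainty each pass already has)
    h_each = -np.sum(mc_probs * np.log(mc_probs + 1e-12), axis=2).mean(axis=0)
    return h_mean - h_each
```

Responses on which every pass agrees score near zero; responses where passes contradict each other score highest, so each labelling round concentrates human effort where the model is most confused.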
Using the proposed methodologies, qualitative survey responses can be automatically tagged and organised into pedagogically meaningful topics. The tagged feedback can then be merged with other relevant information, such as student demographics and subject grades, and rendered visually as dashboards for easy understanding.
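As a sketch of the merging step, assuming hypothetical column names (`student_id`, `topic`, `grade`, `cohort`) that the actual institutional schema would replace, tagged responses can be joined with grade records and summarised into the kind of table a dashboard tool would visualise:

```python
import pandas as pd

# Toy tagged feedback: one row per (student, assigned topic).
tags = pd.DataFrame({
    "student_id": [1, 1, 2],
    "topic": ["pace", "assessments", "pace"],
})

# Toy grade records with cohort information.
grades = pd.DataFrame({
    "student_id": [1, 2],
    "grade": ["A", "C"],
    "cohort": ["2023S1", "2023S1"],
})

# Merge tags with grades, then count topic mentions per cohort --
# a summary table ready for visual rendering as a dashboard.
merged = tags.merge(grades, on="student_id", how="left")
summary = merged.pivot_table(index="cohort", columns="topic",
                             values="student_id", aggfunc="count")
```

The same pivot could slice by demographics or grade band instead of cohort, depending on what the dashboard needs to show.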
Qualitative feedback, with its rich information, when properly organised and visually rendered as dashboards, could be foundational in helping stakeholders improve course design, student engagement and pedagogical approach across semesters. These stakeholders form a diverse group, including lecturers, pedagogy designers and program administrators.
Open-ended surveys on course modules are a rich source of information for educators to improve course design and teaching practices. However, it is not easy to make sense of students' needs by reading hundreds of seemingly diverse student comments. Pedagogy-driven taxonomies, together with Large Language Models (LLMs), can be used to automatically tag student survey data according to these taxonomies. Tagged student responses, when merged with other relevant student data, are rendered visually as dashboards for easy understanding.
Student Survey, Topic Extraction, Pedagogical Taxonomies, Large Language Models, Natural Language Processing, Dashboarding
Chong Wei is a Data Scientist at Temasek Polytechnic. In this role, he looks for ways to help educators use data to improve teaching and learning. Chong Wei is rooted in the belief that the use of data should go hand in hand with empathy.