TY - CPAPER
T1 - Topic Modeling to Detect Student Expressions of Understanding in Collaborative Problem-Solving Dialogues
AU - Willis, Angelica
AU - Evens, Ashana
AU - Kim, Jung Hee
AU - Bryant, Kelvin
AU - Glass, Michael
PY - 2017
AB - When students are working together solving a problem, can a computer gauge how often they show understanding? In the COMPS project, students in small groups engage in typed-chat problem-solving dialogues, and the instructors can oversee and join the conversations. This project applies topic modeling toward real-time computer assessment of the degree of understanding that the students exhibit, with the aim of posting an assessment of the state of the conversation to an instructor dashboard. Working from transcripts, we have manually annotated dialogue turns where students exhibit understanding of the exercise’s concepts. Here we report on training machine classifiers to recognize these instances. From each dialogue turn, some domain-independent features are extracted directly; representative examples are discourse marker words, emoticons, and timing information. Other features have been synthesized in an unsupervised way by topic modeling, which derives latent bag-of-words features by clustering with Latent Dirichlet Allocation. Named entity recognition is applied before topic modeling to remove particular features such as names. We have trained classifiers to identify turns annotated as either a) showing understanding of a concept before somebody else has enunciated it within the discussion, or b) showing understanding after the concept has been expressed. The dashboard will indicate the number of turns classified into each category. The “fail-safe” mode is for the computer to avoid a false report that the students do not need assistance. Examples of dialogue, along with a description and evaluation of the classifiers, are reported here.
M3 - Conference contribution
BT - Unknown book
ER -