CPC G10L 15/22 (2013.01) [G09B 5/04 (2013.01); G10L 15/10 (2013.01); G10L 15/20 (2013.01); G10L 2015/227 (2013.01)] | 8 Claims |
1. A learning support device comprising:
at least one memory configured to store instructions; and
at least one processor configured to execute the instructions to:
acquire sound data of a plurality of utterances made by a plurality of learners organized into a plurality of groups;
for each utterance in the sound data, extract a letter string representing content of the utterance;
for each utterance in the sound data, identify the learner who has made the utterance;
for each utterance in the sound data, identify an emotion of the learner when making the utterance, based on at least one of the sound data and data of a moving image captured together with the sound data,
wherein in a case in which the emotion is identified based on the sound data, the emotion is identified by using a first learned model created by machine learning using utterance sound data in which emotion information representing emotions of users who have made the utterances is known, and
wherein in a case in which the emotion is identified based on the data of the moving image captured together with the sound data, the emotion is identified by using a second learned model created by machine learning using moving image data in which the emotion information representing the emotions of the users who have made the utterances is known;
generate evaluation information representing evaluation for each learner based on the letter string and the emotion information;
for each group, output, on a display, a column containing:
the utterances made by the learners in the group to which the column corresponds, the utterances being organized in time-series order;
for each utterance, an indication of the emotion of the learner when making the utterance; and
for each utterance, a name of the learner who made the utterance, adjacent to the utterance; and
output, on the display, the evaluation information,
wherein generation of the evaluation information includes identifying an utterance that has triggered a change in the utterances in a group based on the letter string and the emotion information and using the identified utterance for generation of evaluation information of a learner associated with the identified utterance, and
wherein the utterance that has triggered the change is identified by:
identifying an amount of change in an utterance quantity and in the emotion identified for each utterance, based on the letter string extracted for each utterance and the emotion information; and
identifying, as the utterance that has triggered the change, an utterance immediately before a time when the amount of change exceeds a predetermined value.
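The trigger-identification step in the final two limitations can be sketched as follows. This is a minimal illustration, not the claimed implementation: it assumes a scalar emotion score per utterance and uses letter-string length as the utterance quantity; the names `Utterance` and `find_trigger` and the threshold values are hypothetical and do not appear in the claim.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str    # hypothetical data model; the claim does not fix field names
    text: str       # the extracted letter string for this utterance
    emotion: float  # assumed scalar emotion score in [0, 1]

def find_trigger(utterances, quantity_threshold=20, emotion_threshold=0.5):
    """Scan time-ordered utterances and return the utterance immediately
    before the first point where the amount of change in utterance quantity
    (here, letter-string length) or in the identified emotion exceeds a
    predetermined value, mirroring the claim's trigger-identification step."""
    for i in range(1, len(utterances)):
        quantity_change = abs(len(utterances[i].text) - len(utterances[i - 1].text))
        emotion_change = abs(utterances[i].emotion - utterances[i - 1].emotion)
        if quantity_change > quantity_threshold or emotion_change > emotion_threshold:
            # The utterance immediately before the detected change is the trigger.
            return utterances[i - 1]
    return None  # no change exceeded the predetermined values
```

In this sketch the "amount of change" is evaluated between consecutive utterances; a real system might instead aggregate quantity and emotion over sliding time windows before thresholding.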