The Algorithmization of Learning: Learning Analytics vs. Self-Determined Study?

Learning analytics continues to be seen as a promising new technology in the education sector. The automated analysis of learning deficits is said to have the potential both to massively reduce failure rates and to improve teaching. But what’s the truth behind the hype? A discussion piece.

Despite the massive progress in digital teaching during the COVID semesters, which relied on distance learning wherever possible, many educational technologies in Germany are still in their infancy – learning analytics being one example. That in itself is not necessarily a bad thing. In fact, the opportunities and risks of new technologies should be adequately examined and discussed before they are deployed across the board. The same applies, for instance, to the approval of vaccines or drugs for the treatment of new diseases.

Before we talk about the opportunities and risks of learning analytics, a brief introduction: The term learning analytics generally describes technologies that use algorithms to automatically evaluate the learning data of participants in digital courses. Ideally, this evaluation allows conclusions to be drawn about the learning progress and learning deficits of the course participants. The prerequisite is that at least a large part of the learning process takes place with digital media, since the only data available for analysis is the digital metadata that accumulates automatically when digital learning platforms are used. This is why purely online courses with a large amount of digital content are particularly well suited to learning analytics.

Let’s leave the issue of data protection aside for the moment. Roughly speaking, the collected and algorithmically analyzed data can now be used in two scenarios:

  1. The analysis of the learning status of individual course participants, with the purpose of giving specific feedback along the lines of: “You still have gaps in section A, section B is fully worked through, another section is still completely untouched – your statistical probability of passing the exam is x percent.” (A toy sketch of such feedback follows after this list.)
  2. Feedback to the instructor on which student seems to be struggling at which point in the course (when the data of individual course participants is evaluated specifically), or on which parts of the course students are having difficulties (when the data of all course participants is evaluated anonymously).
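
To make the first scenario concrete, here is a minimal sketch in Python. The section names, the gap threshold, and the logistic pass-probability model are all made-up assumptions for illustration; a real learning analytics system would work on far richer data:

```python
# Toy sketch of scenario 1: per-section quiz scores become specific feedback
# plus a naive pass probability. All data and coefficients are invented.
import math

# Hypothetical fraction of quiz questions answered correctly per section.
quiz_scores = {"Section A": 0.55, "Section B": 1.0, "Section C": 0.0}

def feedback(scores, gap_threshold=0.7):
    for section, score in scores.items():
        if score == 0.0:
            print(f"{section}: still completely untouched")
        elif score < gap_threshold:
            print(f"{section}: you still have gaps ({score:.0%} correct)")
        else:
            print(f"{section}: fully worked through")
    # Toy logistic model mapping the average score to a pass probability;
    # the coefficients are assumed, not calibrated on real data.
    avg = sum(scores.values()) / len(scores)
    prob = 1 / (1 + math.exp(-8 * (avg - 0.5)))
    print(f"Estimated probability of passing the exam: {prob:.0%}")

feedback(quiz_scores)
```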

Of course, these scenarios are idealizations that can only be found with some limitations in reality. For instance, an algorithm could theoretically record that I open a text and scroll through it. But how intensively I actually read and internalize it could only be determined, if at all, by measuring brain waves. For the purposes of this example, however, let’s assume a “perfect” online course with well-designed quizzes checking the learning status after each unit: How should these scenarios be viewed?

In the first scenario, the student ideally receives specific feedback on what still needs to be learned in order to pass the exam. This is convenient, as it actually reduces the risk of failing. But isn’t it also an essential part of studying to be able to judge for myself what I need to learn, and how intensively, in order to pass an exam? Isn’t this a skill that is even more important than knowledge of the learning content itself? Of course, this also involves the risk of failing one or two exams. But shouldn’t universities be a place where one can learn from failure? After all, if you learn that neither at school nor during your studies, the first failure in your professional life may come too late. It is also probably better for me to work out for myself what I have not yet understood than to have an algorithm pre-chew it for me. In this respect, the supposed advantage of a lower failure rate quickly turns out to be a disadvantage.

The situation is different in the second scenario: the use of learning analytics to improve teaching. With one caveat: an analysis of learning data broken down to individual course participants is not only highly problematic under data protection law. It is also questionable how many students would be comfortable with their lecturer being able to evaluate in detail which of them only started preparing for an exam the day before it took place.

In the case of anonymized evaluation, however, the learning data can give the instructor very helpful indications of where didactic adjustments to the course would pay off. If, for example, the instructor sees that all participants get stuck in module 2b, he or she can adjust module 2b specifically. This would certainly benefit the quality of teaching without jeopardizing self-determined study.
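
What such an anonymized evaluation could look like is sketched below; the module names, the event data, and the simple completion-rate heuristic are invented stand-ins for a real analytics pipeline:

```python
# Toy sketch of scenario 2: anonymized completion events are aggregated per
# module, without any student identifiers. All data here is invented.
from collections import defaultdict
from statistics import mean

# Hypothetical anonymized events: (module, unit completed?).
events = [
    ("Module 1", True), ("Module 1", True), ("Module 1", True),
    ("Module 2a", True), ("Module 2a", True), ("Module 2a", False),
    ("Module 2b", False), ("Module 2b", False), ("Module 2b", False),
]

completion = defaultdict(list)
for module, completed in events:
    completion[module].append(completed)

for module, results in sorted(completion.items()):
    rate = mean(results)  # booleans count as 0/1
    flag = "  <- participants get stuck here" if rate < 0.5 else ""
    print(f"{module}: {rate:.0%} completion{flag}")
```

Because the events carry no student identifiers, the instructor only sees where the course stalls, not who stalls.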

Conclusion: Learning analytics is anything but free of prerequisites. Without digitally and didactically well-prepared courses, the accumulated learning data is unlikely to be meaningful enough. Using learning analytics as a “pre-chewer” of educational content is certainly not in the spirit of the ideal of higher education. However, learning analytics can certainly contribute to improving the quality of digital, didactically sound teaching.
