Something that struck me about the recent “large language models” is how their answers resemble those of students who work (deliberately or unawares) by memorization: students who learn that specific patterns appearing in a question call for a particular answer. Such students can be extremely skilled at this, but they don’t really understand why those answers are meaningful or correct. Their lack of understanding becomes clear upon closer examination and a little digging, exactly as it does with large-language-model algorithms.
(This is one reason why I personally don’t consider these algorithms to have any genuine “knowledge”; not yet at least. But that’s not my point here.)
Now, these algorithms manage to answer many exam questions almost correctly. I am speaking of maths and the sciences in my country (Norway) at the bachelor level, but I suspect the same can be said of other subjects and of most countries (correct me if I’m wrong). In my opinion this shows that our current education and examination system is focusing on the wrong thing: not knowledge, but parroting. As Gibson asked in 1964:
Do we, in our schools and colleges, foster the spirit of inquiry, of skepticism, of adventurous thinking, of acquiring experience and reflecting on it? Or do we place a premium on docility, giving major recognition to the ability of the student to return verbatim in examinations that which he has been fed?
I think this is a huge problem, one that has been with us, and that we have been aware of, for a long time. But current AI developments throw it into sharp relief and urge us to ask (again) some questions. These are the points I’d like to hear your opinions and views on:
- Some institutions where I live are planning to use these AI algorithms to generate or correct exams (or to help do so). I think this strategy completely misses the point: shouldn’t we instead prepare exams that these AIs cannot answer, exams that require understanding rather than language patterns?
- If we keep teaching and grading students on the basis of memorization, won’t they become unemployable in the coming years? Their “knowledge” can be obtained more cheaply and quickly from an AI.
- Assuming you agree at least partially with the theses above, which education levels seem to be the most affected?
- Doesn’t the current packing and compression of so many subjects into a very short time, at the bachelor level, make the situation even worse?
These are personal (though strongly held) views, of course. I respect others’ views and would be happy to hear arguments and counter-arguments!