ChatGPT can be great for learning new stuff when you do not know where to begin. Better than a traditional search engine, at least.
@hanabatake @goatsarah Yes, but be careful. I tested ChatGPT by asking about topics I already know (such as geometry) and it often gave me wrong information.
@goatsarah @QuasiMagia @hanabatake Good to know. I used it for programming and sometimes I need to correct the code a bit. I think it’s fine for development because incorrect information quickly reveals itself.
I agree that it sucks at mathematics. However, when I asked it what questions one should expect on an exam or in a coding job interview, it gave good things to look out for. It does not provide good answers, but it helps turn unknown unknowns into known unknowns (https://english.stackexchange.com/questions/35284/known-unknown-vs-unknown-known)
@hanabatake Yes, even for coding it does not give good answers, but it helps to understand some things.
LLMs are worse than useless. I posted some examples here and here.
Learning new stuff isn’t a good thing if the stuff you’re learning is wrong.
I think it’s probably worse than traditional search engines, simply because it takes information from them already and repackages it into conversational language.
The problem is that by being conversational it increases not just the clarity of the information it conveys, but its credibility as well. And that credibility is not always earned.