I use it all the time. It's a good partner to challenge me when I'm looking for other points of view: "I believe X due to Y. Challenge my point of view."

It helps me explore a topic fast, so that I know the lingo to search for it myself. I use it for making low stakes decisions where it often succeeds, such as shopping and research for shopping. I validate the results every time.

Is it a net negative for society? Not sure, maybe. Will it go away? No. So we should embrace it, but not the big-tech AI; smaller LLMs instead.

  • snoons · 2 days ago

    Yes, small local LLMs run on your own systems negate the insane economic and environmental cost of corporate LLMs; however, there is still the question of validity, and of the long-term effect that 'outsourcing' certain thought processes will have on users.

    The results given by an LLM are definitive and might miss nuance you would get by researching it yourself. Perhaps, for example, you wanted to learn about a topic, so you ask your LLM and it tells you everything it can find that is correct and verifiable; however, it completely disregards the work done by a researcher that turned out to be incorrect. It ignores this work because it's wrong, but by reading it you might learn other things, like the unique and still completely valid methodology the researcher used, which the LLM ignored because the results were wrong. 1

    That being said, there are also points where using an LLM might have been useful. You might remember a while ago some grad students uploaded a pre-print paper about a room-temperature superconductor they had created; it turned out they had just created a special sort of copper alloy that wasn't superconductive, but just had special magnetic properties. They would have known about this if they had read a paper on the same alloy published in the 1970s. An LLM might have helped them there; however, their supervisor should have known about that paper too, so… ¯\_(ツ)_/¯

    As well, there is the issue of atrophy. I'm not sure if you use your LLM to write emails and whatnot, but if one 'outsources' their reading and writing ability, they slowly lose that ability. I'm not sure they'll completely lose it, unlikely IMO, but it will certainly wane, and one will become dependent on it until such time as they start to read and write by themselves again. It's a bit like not reading books: there is a difference between the vernacular of someone who reads a lot and someone who doesn't read at all. The brain is very fluid in this respect, and the 'flows' are important.

    I recall a bizarre thread in the Steam discussion forums regarding a certain game; the user had used an LLM to create a post about the rough parts of the game (it was still in development). The post was well articulated, of course, and there weren't any mistakes in the grammar… but when the user was writing comments by themselves without the LLM, well, let's just say the contrast was extreme. They simply couldn't articulate anything very well by themselves, and likely had never written anything longer than a paragraph. They were using a corporate LLM, ofc, but the difference is the same in this respect.


    1. It's a common issue in scientific literature: if a researcher's theory turns out to be wrong, they'll retract the paper; however, the paper is still useful. Much like a team of people making a map of some maze who always erase all the parts of the map that lead to a dead end.