• jj4211@lemmy.world · 3 days ago (edited)

    I had some files that I knew contained duplicates; the contents didn’t exactly match and the filenames weren’t identical, but you could tell by looking whether they were the same.

    It would have been very tedious to go through all of them by hand, but an LLM was able to identify a “good enough” number of duplicates and only made a few mistakes. It greatly sped up the manual work required to clean up the collection.
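
    Roughly, the idea is to ask a model for a yes/no judgment on each filename pair. This is a minimal sketch of that kind of approach, not my exact setup; it assumes the OpenAI Python SDK, and the model name and prompt wording are placeholders.

    ```python
    # Minimal sketch: ask an LLM whether two filenames likely refer to
    # the same file. SDK, model, and prompt are illustrative assumptions.
    from itertools import combinations

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def likely_same(name_a: str, name_b: str) -> bool:
        """Yes/no judgment on one filename pair."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model choice
            messages=[{
                "role": "user",
                "content": (
                    "Do these two filenames likely refer to the same file? "
                    f"Answer yes or no only.\n1: {name_a}\n2: {name_b}"
                ),
            }],
        )
        return resp.choices[0].message.content.strip().lower().startswith("yes")

    names = ["Holiday_2019 (1).jpg", "holiday-2019.jpeg", "invoice_03.pdf"]
    for a, b in combinations(names, 2):
        if likely_same(a, b):
            print(f"probable duplicate: {a} <-> {b}")
    ```

    Pairwise prompting costs O(n²) API calls; batching a whole list of filenames into one request and asking for groups would cut that down considerably.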

    But that’s very far from most of the advertised scenarios, and it’s not compelling from a “make lots of money” perspective.

      • jj4211@lemmy.world · 3 days ago

        This was after applying various mechanisms of the traditional kind. Admittedly, there was one domain-specific strategy that wasn’t applied that would have caught a few more of them, but not all.
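
        By “mechanisms of the traditional kind” I mean things like fuzzy matching on normalized filenames. Here’s a minimal sketch of that baseline using Python’s standard-library difflib; the normalization rules and the threshold are illustrative, not my exact pipeline.

        ```python
        # Sketch of a traditional baseline: fuzzy filename matching with
        # difflib. Normalization rules and threshold are illustrative.
        import re
        from difflib import SequenceMatcher
        from itertools import combinations

        def normalize(name: str) -> str:
            """Lowercase and strip separators and copy markers."""
            name = re.sub(r"\(\d+\)|copy", "", name.lower())
            return re.sub(r"[\s_\-.]+", " ", name).strip()

        def similarity(a: str, b: str) -> float:
            return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

        names = ["Holiday_2019 (1).jpg", "holiday-2019.jpeg", "invoice_03.pdf"]
        for a, b in combinations(names, 2):
            if similarity(a, b) > 0.8:  # illustrative threshold
                print(f"candidate duplicate: {a} <-> {b}")
        ```

        String-ratio approaches like this catch the mechanical variants (case, separators, “(1)” suffixes) but miss the pairs where you need actual judgment, which is the gap the LLM pass filled.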

        The point is that I had a task that was hard to code up, but trivial yet tedious for a human. AI approaches can bridge that gap sometimes.

        In terms of energy consumption, it wouldn’t be so bad if these approaches weren’t so horribly overused. That’s the problem now: 99% of usage is garbage. If it settled down to 3 or 4% of current usage, it would still be just as useful, but no one would bat an eye at the energy demand.

        As with a lot of other bubble things, my favorite part is probably going to be its life after the bubble pops, when the actually useful use cases remain and the stupid stuff dies out.