• IronKrill
    1 year ago

    While this is an important thing to understand about AI, it’s an overstated issue once understood. For most questions I ask AI, it doesn’t matter if it’s correct as long as it pulls some half-useful info to get me on track (e.g. programming). For other questions, I only ask it if I need to figure out where to look next, which it will usually do just fine.

    The first page of my search results is all AI generated garbage articles anyway, at least I know what I am getting with GPT and can take it as such.

    • Womble@lemmy.world
      1 year ago

      Yup, as long as you are aware that it could be wrong and look at it critically, LLMs at GPT scale are very useful tools. The best way I’ve heard it described is as having a lightning-fast intern who often gets things wrong but will always give it a go.

      So long as you’re calibrated to “how might this be wrong” when looking at the results, it is exceptionally useful.