ChatGPT is down, the new CEO is threatening to quit, and the majority of employees are doing the same… so I decided to look up some alternatives

The name Anthropic came up a few times, but I’m not familiar with them. I’ve seen Claude mentioned in the past

Anthropic PBC is an American artificial intelligence (AI) startup company, founded by former members of OpenAI.[3][4] Anthropic develops general AI systems and large language models.[5] It is a public-benefit corporation, and has been connected to the effective altruism movement.

As of July 2023, Anthropic had raised US$1.5 billion in funding. In September, Amazon announced an investment of up to US$4 billion, followed by a $2 billion commitment from Google the next month. (Source: https://en.wikipedia.org/wiki/Anthropic)

So are they more ethical / committed to AI safety, or more of the same? Does the product compare well to ChatGPT?

  • @[email protected]
    link
    fedilink
    English
    296 months ago

    Anthropic was founded by former OpenAI employees who left because of concerns about AI safety. Their big thing is “constitutional AI” which, as I understand it, is a set of rules it cannot break. So the idea is that it’s safer and harder to jailbreak.
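
As Anthropic describes it publicly, the constitutional approach is roughly a critique-and-revise loop: the model drafts an answer, critiques it against a list of written principles, and rewrites it. A conceptual sketch only; the `ask_model` function is a stand-in for a real model call, not an actual API, and the principles are paraphrased examples:

```python
# Conceptual sketch of a constitutional-AI critique/revision loop.
# `ask_model` is a placeholder, NOT a real API; the principles below
# are illustrative paraphrases, not Anthropic's actual constitution.
CONSTITUTION = [
    "Choose the response that is most harmless and ethical.",
    "Choose the response least likely to assist wrongdoing.",
]

def ask_model(prompt: str) -> str:
    # Placeholder: a real implementation would call a language model here.
    return f"<model response to: {prompt!r}>"

def constitutional_revision(user_prompt: str) -> str:
    """Draft an answer, then critique and revise it against each principle."""
    answer = ask_model(user_prompt)
    for principle in CONSTITUTION:
        critique = ask_model(
            f"Critique this answer against the principle '{principle}':\n{answer}")
        answer = ask_model(
            f"Revise the answer to address this critique:\n{critique}\n\n{answer}")
    return answer
```

The key design point is that the rules are applied by the model to its own outputs during training, rather than being a hard-coded filter bolted on afterwards.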

    In terms of performance, it's better than the free ChatGPT (GPT-3.5) but not as good as GPT-4. My wife has come to prefer it for being friendlier and more helpful; I prefer GPT-4 on ChatGPT. I'll also note that it seems to refuse requests from the user far more often, which is in line with its "safety" features. For example, a few weeks ago I told Claude my name was Matt Gaetz and I wanted it to write me a resolution removing the Speaker of the House. Claude refused but offered to help me and Kevin McCarthy work through our differences. I think that's kind of illustrative of its play-nice approach.

    Also, Claude has a much bigger context window, so you can upload bigger files to work with than you can with ChatGPT. Just today Anthropic announced that the pro plan gets you a 200k-token context window, equivalent to about 500 pages, which beats the yet-to-be-released GPT-4 Turbo and its announced 128k context window (about 300 pages). I assume the free version of Claude has a much smaller context window, but probably still bigger than free ChatGPT's. Claude also just gained the ability to search the web and access some other tools, but that is pro-only.
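
The page figures are just a rough words-per-token conversion. A sketch, assuming ~0.75 English words per token and ~300 words per printed page (both ballpark figures, not official numbers from either vendor):

```python
# Rough context-window-to-pages conversion.
# Assumptions (ballpark, not official): ~0.75 English words per token,
# ~300 words per printed page.
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 300

def tokens_to_pages(tokens: int) -> int:
    """Approximate number of printed pages a token budget covers."""
    return round(tokens * WORDS_PER_TOKEN / WORDS_PER_PAGE)

print(tokens_to_pages(200_000))  # Claude pro window: 500 pages
print(tokens_to_pages(128_000))  # GPT-4 Turbo window: 320 pages
```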

      • @[email protected]
        link
        fedilink
        English
        26 months ago

        You pay for every “token” that goes into or comes out of the model.

        You can loosely think of a token as a word, though it's not exactly right.
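
Since billing is per token, the arithmetic is simple: count the tokens going in and coming out and multiply by the rates. A sketch with placeholder rates; real per-token prices differ between models and change over time, so check the provider's pricing page:

```python
# Sketch of per-token API billing. The default rates are illustrative
# placeholders, NOT real published prices.
def api_cost_usd(input_tokens: int, output_tokens: int,
                 usd_per_1k_input: float = 0.01,
                 usd_per_1k_output: float = 0.03) -> float:
    """You pay separately for tokens sent (input) and generated (output)."""
    return (input_tokens / 1000 * usd_per_1k_input
            + output_tokens / 1000 * usd_per_1k_output)

# e.g. a chat turn with a 2,000-token prompt and an 800-token reply:
print(round(api_cost_usd(2_000, 800), 4))  # 0.044
```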

        • @[email protected]
          link
          fedilink
          English
          16 months ago

          Yeah, I know about the concept of tokens. I was looking for some general cost info, but I guess if it's usage-based then it's too variable to pin down.

      • @[email protected]
        link
        fedilink
        English
        16 months ago

        My understanding is that Claude has a pro version at 20 dollars a month that gets you more access and the expanded context window, similar to ChatGPT pro. The pricing you and the other person who replied to you are probably talking about is the API pricing, which is on a per-token basis (same as ChatGPT's API pricing). I've heard that for most people, using the API ends up being cheaper than paying for pro, but it also requires you to know what to do with an API, and I don't have that technical ability. I pay for ChatGPT pro. I've used the free Claude chat interface, but I haven't upgraded to the pro. I might try it out though; that big context window is pretty tempting, even with a slight downgrade in model quality.
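
The API-vs-subscription tradeoff above is just a breakeven calculation. A back-of-envelope sketch; the rates and per-turn token counts are hypothetical placeholders, not real prices:

```python
# Back-of-envelope breakeven between a flat $20/month plan and
# pay-per-token API usage. All rates here are hypothetical placeholders.
SUBSCRIPTION_USD = 20.0

def monthly_api_cost(turns_per_day: int,
                     tokens_per_turn: int = 1_500,
                     usd_per_1k_tokens: float = 0.01,
                     days: int = 30) -> float:
    """Total API spend for a month of chatting at a steady rate."""
    total_tokens = turns_per_day * tokens_per_turn * days
    return total_tokens / 1000 * usd_per_1k_tokens

for turns in (10, 50, 100):
    cost = monthly_api_cost(turns)
    cheaper = "API" if cost < SUBSCRIPTION_USD else "subscription"
    print(f"{turns} turns/day -> ${cost:.2f}/month ({cheaper} cheaper)")
```

Under these made-up numbers a light user comes out ahead on the API, while a heavy user is better off with the flat subscription, which matches the "cheaper for most people" claim above.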

  • @[email protected]
    link
    fedilink
    English
    86 months ago

    It refuses more from what I've seen. Personally, I don't think it's a good idea to become dependent on any commercially hosted model. Open models are a bit behind, but they're getting there.

    • @[email protected]
      link
      fedilink
      English
      2
      edit-2
      6 months ago

      The problem with open models is that you basically have to run them on your own hardware, and the hardware is not only expensive, it's also unobtainable.

      H100 GPUs are sold by scalpers for $50k with no warranty, and worse, that's an obsolete model. The H200 GPU just can't be purchased at all unless you're filling a datacentre with them.

  • @[email protected]
    link
    fedilink
    English
    36 months ago

    I'm a big fan of Anthropic, but I will admit that in terms of quality, they lag GPT-4.

    I highly recommend giving them a go.

    • Otter (OP) • 4 points • 6 months ago

      It’s been on and off all day