• new function calling capability in the Chat Completions API (see the sketch after this list)
  • updated and more steerable versions of gpt-4 and gpt-3.5-turbo
  • new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
  • 75% cost reduction on our state-of-the-art embeddings model
  • 25% cost reduction on input tokens for gpt-3.5-turbo
  • announcing the deprecation timeline for the gpt-3.5-turbo-0301 and gpt-4-0314 models
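
A minimal sketch of what the new function calling flow looks like, assuming the pre-1.0 `openai` Python package and a hypothetical local `get_current_weather` function (not part of the API). The model doesn't execute anything itself; it returns a `function_call` with a name and JSON arguments, and the caller runs the function and sends the result back for a final answer.

```python
# Sketch of function calling in the Chat Completions API (assumes pre-1.0 openai package,
# API key in OPENAI_API_KEY). get_current_weather is a hypothetical stand-in function.
import json
import openai

def get_current_weather(location, unit="celsius"):
    # Stub standing in for a real weather lookup.
    return json.dumps({"location": location, "temperature": 22, "unit": unit})

functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]

messages = [{"role": "user", "content": "What's the weather in Berlin?"}]

# First call: the model may reply with a function_call instead of free text.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=messages,
    functions=functions,
    function_call="auto",
)
message = response["choices"][0]["message"]

if message.get("function_call"):
    # Arguments arrive as a JSON string; the caller parses and runs the function.
    args = json.loads(message["function_call"]["arguments"])
    result = get_current_weather(**args)

    # Second call: feed the function result back so the model can answer in prose.
    messages.append(message)
    messages.append({"role": "function", "name": "get_current_weather", "content": result})
    followup = openai.ChatCompletion.create(model="gpt-3.5-turbo-0613", messages=messages)
    print(followup["choices"][0]["message"]["content"])
else:
    print(message["content"])
```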
  • Hellsadvocate@kbin.social · 1 year ago

    I just want GPT-4 without a limit. I wonder how the 16k GPT-3.5 performs against the 32k Claude Instant. Or even the 100k…