OpenAI announced these API updates 3 days ago:

  • new function calling capability in the Chat Completions API (see the sketch after this list)
  • updated and more steerable versions of gpt-4 and gpt-3.5-turbo
  • new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
  • 75% cost reduction on our state-of-the-art embeddings model
  • 25% cost reduction on input tokens for gpt-3.5-turbo
  • announcing the deprecation timeline for the gpt-3.5-turbo-0301 and gpt-4-0314 models
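
Here is a rough idea of what the new function calling flow looks like. This is a minimal sketch assuming the openai Python SDK from around that release (openai.ChatCompletion.create with the functions / function_call parameters); the get_current_weather function and its JSON schema are made up for illustration and are not part of the announcement itself.

```python
# Minimal sketch of the announced function calling flow, assuming the openai
# Python SDK of that era (openai.ChatCompletion.create with `functions` and
# `function_call`). get_current_weather and its schema are hypothetical.
import json
import openai

def get_current_weather(location: str, unit: str = "celsius") -> str:
    # Placeholder; a real app would call an actual weather service here.
    return json.dumps({"location": location, "temperature": "22", "unit": unit})

functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name, e.g. Berlin"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]

messages = [{"role": "user", "content": "What's the weather like in Berlin?"}]

# First call: the model may answer with a function_call instead of plain text.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=messages,
    functions=functions,
    function_call="auto",
)
message = response["choices"][0]["message"]

if message.get("function_call"):
    # The model only proposes the call; our code executes it.
    args = json.loads(message["function_call"]["arguments"])
    result = get_current_weather(**args)
    # Second call: feed the result back so the model can answer in prose.
    messages.append(message)
    messages.append({"role": "function", "name": "get_current_weather", "content": result})
    followup = openai.ChatCompletion.create(model="gpt-3.5-turbo-0613", messages=messages)
    print(followup["choices"][0]["message"]["content"])
else:
    print(message["content"])
```

The two-call pattern is the key point: the model only proposes the function and its arguments, while your own code runs it and passes the result back for the final answer.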
  Sparking@lemm.ee · 1 year ago

    Well yeah, I do get that they conduct open research, but I still think it is disingenuous for the company not to release their source code, or at least their LLM, if we are being somewhat charitable and allowing that their specific tooling and API infrastructure should stay proprietary so they can maintain a business. There is no guarantee that the code running on the other end of their API adheres to any of the research they have revealed!

    I’m not too worried though, because other LLMs and parameter sets have already gone open source, so the cat is out of the bag. I also don’t really believe in the commercial viability of LLMs, because there is no way to automate verification that they are generating correct content, so whatever.