I’ve tested it with ChatGPT, meta.ai, and two locally run models. What I’m using it for isn’t particularly complex, so they’ve all worked. Even if the big hosted models went away, the ones running on my own hardware would still save me time; they just take a little longer, on the order of 10-20 seconds per response rather than the 2-5 I see from the hosted ones.
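If you want to measure that gap yourself, here’s a rough sketch of how you could time a local model. This assumes an Ollama-style server on its default port; the model name and prompt are placeholders, so swap in whatever runtime and model you actually use:

    # Minimal latency check against a local Ollama-style endpoint.
    # Model name and prompt are illustrative placeholders.
    import time
    import requests

    def time_local(prompt: str, model: str = "llama3") -> float:
        start = time.perf_counter()
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        return time.perf_counter() - start

    print(f"local round trip: {time_local('Summarize this text: ...'):.1f}s")

Run the same prompt against the hosted API you’re comparing with and you’ll get numbers in the same ballpark as mine.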
The big versions of the open-source models can be run on rented servers for less than $300 per month, so I’m not worried about your hypothetical situation at all.