Yup. Look up the calculus and linear algebra that neural networks use to train. It's an insane amount of calculations, so many that it takes hundreds of processing units to crunch them at a reasonable speed. All that to get simple math questions wrong.
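For anyone curious what that actually looks like, here's a rough sketch of a single training step for one layer (made-up sizes, nothing real): the forward pass is a big matrix multiply (the linear algebra), and the update comes from the chain rule (the calculus). Real models repeat this across billions of parameters and many layers, which is where the absurd compute bill comes from.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 512))          # a batch of 64 inputs
W = rng.standard_normal((512, 512)) * 0.01  # one layer's weights
y_true = rng.standard_normal((64, 512))     # targets (fake, for illustration)

# Forward pass: one big matrix multiply (linear algebra).
y_pred = x @ W

# Squared-error loss and its gradient via the chain rule (calculus).
loss = ((y_pred - y_true) ** 2).mean()
grad_y = 2 * (y_pred - y_true) / y_pred.size
grad_W = x.T @ grad_y                       # another big matrix multiply

# Gradient descent update to the weights.
W -= 0.01 * grad_W
```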
All that to hallucinate every response in ways that make people feel like they know what they are talking about.
Which they don't, and LLMs never will, unless you hard-code responses in, which defeats the whole point.
They should just use lookup tables
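For contrast, this is what the lookup-table version of "doing math" looks like (a throwaway sketch, obviously not a serious proposal for a general system): exact, instant, no linear algebra, but only for questions already in the table.

```python
# Precomputed answers: exact every time, but only for what's stored.
addition_table = {(a, b): a + b for a in range(10) for b in range(10)}

print(addition_table[(2, 3)])   # 5, deterministically

# Anything outside the table raises KeyError instead of guessing,
# which is the trade-off versus a model that always produces something.
```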