Generative AI Doesn’t Have a Coherent Understanding of the World, MIT Researchers Find
Long-time Slashdot reader Geoffrey.landis writes: Despite its impressive output, a recent study from MIT suggests generative AI doesn’t have a coherent understanding of the world. While the best-performing large language models have surprising capabilities that make it seem like the models are implicitly learn …


I like this comment from the Slashdot thread linked above:

LLMs don’t have an understanding of anything. They can only regurgitate derivations of what they’ve been trained on and can’t apply that to something new in the same ways that humans or even other animals can. The models are just so large that the illusion is impressive.

So true.


@eldersnake@we.loveprivacy.club With enough data and enough computing power you can simulate almost anything, or create grand illusions that appear so real they’re hard to tell apart from the real thing 😅 – But yes, at the end of the day, LLMs today are just large probabilistic models, stochastic parrots.
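
For what it’s worth, the “large probabilistic model” framing is easy to show in miniature: at each step the model just samples the next token from a learned conditional distribution. The toy sketch below makes that concrete; the bigram table is entirely made up for illustration, whereas real models condition on long contexts with billions of parameters.

```python
import random

# Toy illustration of the "stochastic parrot" idea: generation is just
# repeated sampling of the next token from a probability distribution
# conditioned on what came before. The table below is hypothetical.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "model": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"sat": 0.5, "ran": 0.5},
    "model": {"predicts": 0.8, "sat": 0.1, "ran": 0.1},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
    "predicts": {"tokens": 1.0},
}

def generate(start: str, max_tokens: int = 5) -> list[str]:
    """Sample a continuation token by token; no 'understanding' involved."""
    tokens = [start]
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(tokens[-1])
        if not dist:
            break
        words, weights = zip(*dist.items())
        tokens.append(random.choices(words, weights=weights, k=1)[0])
    return tokens

print(" ".join(generate("the")))  # e.g. "the cat sat down"
```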


They are pretty good at auto-complete, though. If you wire up Continue.dev in VSCode with a local Ollama-powered Codestral model, it’s pretty decent. The same goes for the open-source-friendly Codeium.
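
Just to illustrate what “local Ollama-powered” means under the hood, here’s a minimal sketch of calling a local Ollama server directly for a completion. The model name and prompt are only examples, and it assumes the Codestral model has already been pulled; Continue.dev does this wiring (plus prompt templating and editor integration) for you.

```python
import requests

# Minimal sketch: ask a locally running Ollama server for a completion.
# Assumes `ollama serve` is running on the default port and the model has
# been pulled beforehand (e.g. `ollama pull codestral`).
OLLAMA_URL = "http://localhost:11434/api/generate"

def complete(prompt: str, model: str = "codestral") -> str:
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(complete("def fibonacci(n):"))
```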

