Do you not understand how “answer unavailable” is a better answer than taking a small percentage of the strips of paper at random and filling in the rest with words that sound relevant?
It’s like Mad Libs.
Right. They’re text generators. That’s the technology. It can’t do what you’re demanding because that’s not how it works. LLMs aren’t magic answer machines. They don’t know when to say “answer not available”. They don’t know what they’re being asked. They don’t know anything.
You know that “answer unavailable” is better because you have real intelligence. An LLM is just some mathematical functions, so it can’t do that. If it could, it would be much closer to actually being AI.
It doesn’t, though, any more than you have access to the information in a pile of 10 million shredded documents.
Right, in this case that we’re talking about…
That is what LLMs do in EVERY conversation. Most of the time you don’t notice it, because it fits your expectations.
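A toy sketch of what “text generator” means here, assuming nothing beyond a tiny bigram model (real LLMs use neural networks over tokens, but the generate-the-next-word loop is the same basic idea): the program below only ever continues with words that followed the current word in its training text. It has no concept of truth and no way to say “answer unavailable”; it just produces locally plausible continuations.

```python
import random

# Tiny training text: the only "knowledge" the generator has.
corpus = (
    "the model predicts the next word the model has no idea "
    "what the words mean the model just continues the text"
).split()

# Count which words follow which word in the training text.
followers = {}
for prev, nxt in zip(corpus, corpus[1:]):
    followers.setdefault(prev, []).append(nxt)

def generate(start, length, seed=0):
    """Generate `length` words by repeatedly picking a plausible next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = followers.get(words[-1])
        if not options:
            break  # dead end: this word never appeared mid-sentence in training
        words.append(rng.choice(options))  # no truth check, just plausibility
    return " ".join(words)

print(generate("the", 8))
```

Every output reads like the training text, yet nothing in the loop checks whether the sentence is true, answerable, or even asked for; that is the point of the “strips of paper” analogy above.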