Do you think these LLMs have self-consciousness, as in metacognition?
I've been thinking:
Current AI and LLMs do not understand words, their meaning, or the knowledge behind them. They see only numbers, because that's the only language machines currently work in. Tokenization assigns an ID to each word (or word piece), and in pretraining the model learns which tokens tend to occur together. So "love" might be 567 and "you" 78. But 567 doesn't capture what love fundamentally means: its definition, true use, synonyms, homonyms, or associations. The model just statistically picks the token IDs that best match the surrounding tokens to produce the best output for your query. Current LLMs do this very accurately, with seemingly good responses, but still with no understanding of what they said or why they said it.
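To make the point concrete, here is a minimal toy sketch of that pipeline. The vocabulary, the token IDs (567 and 78 are just the numbers used above), and the bigram counts are all made up for illustration; real tokenizers use subword pieces and real models use learned neural weights, not count tables, but the principle is the same: the model manipulates IDs and co-occurrence statistics, never definitions.

```python
# Toy tokenizer: words become opaque integer IDs.
# Vocabulary and IDs are invented for this example.
vocab = {"i": 12, "love": 567, "you": 78}

def encode(text):
    """Map each word to its integer token ID."""
    return [vocab[w] for w in text.lower().split()]

print(encode("i love you"))  # [12, 567, 78]

# The model never sees "love", only 567. Any link between 567 and 78
# comes from co-occurrence statistics learned in pretraining,
# sketched here as a hypothetical bigram count table.
bigram_counts = {(12, 567): 90, (567, 78): 80, (567, 12): 5}

def next_token(prev):
    """Return the ID that most often followed `prev` in 'training' data."""
    candidates = {nxt: c for (p, nxt), c in bigram_counts.items() if p == prev}
    return max(candidates, key=candidates.get)

print(next_token(567))  # 78, i.e. "you" -- a statistical guess, not comprehension
```

Nothing in this sketch encodes what love *is*; the "knowledge" is entirely in which numbers followed which other numbers.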
It's like a human reading aloud, in front of an audience, a scientific paper he didn't write, in a field he isn't trained in. Yes, it sounds brilliant, but he doesn't even know what he's saying. Is he therefore very intelligent and a good reasoner?
machine learning
artificial intelligence
large language models
philosophy
asked 18 days ago by kmal