RebelOfBabylon on Nostr: Anybody else feel like the language we use with LLMs and neural nets in general is ...
Anybody else feel like the language we use with LLMs and neural nets in general is way too anthropomorphic? "It doesn't know X", " It understands Y". "it said Z". These things are glorified, n-dimensional curve fitting algorithms. They do not think. #asknostr
Published at 2024-06-27 22:21:08

Event JSON
{
  "id": "a9cb1efe0e26bd3f214c947cf2b1993e6f1cc083f4da48e68e67702994bc385c",
  "pubkey": "d06e6018c1fcf7d80d4f18ae7ea669fa10f84389f95f6d1bdcea9727cb266c33",
  "created_at": 1719526868,
  "kind": 1,
  "tags": [
    [
      "t",
      "asknostr"
    ]
  ],
  "content": "Anybody else feel like the language we use with LLMs and neural nets in general is way too anthropomorphic? \"It doesn't know X\", \" It understands Y\". \"it said Z\". These things are glorified, n-dimensional curve fitting algorithms. They do not think. #asknostr",
  "sig": "250e5f1dca9aa6390dfe1ae32f96418be77d6f71a782eb24b9a7460860c3612874285c2c0086d3056e0026b9261120f0a79fb2e4e2a33814acf21dffbba2c1e6"
}
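For context on the event record above: per NIP-01, the "id" field is the SHA-256 hash of a canonical serialization of the event, namely the JSON array `[0, pubkey, created_at, kind, tags, content]` with no whitespace. The sketch below shows one way to compute it in Python; note that `json.dumps` only approximates NIP-01's exact character-escaping rules, so edge cases (e.g. control characters in content) may serialize differently.

```python
import hashlib
import json

def nostr_event_id(event: dict) -> str:
    """Compute a Nostr event id per NIP-01: sha256 over the
    canonical array [0, pubkey, created_at, kind, tags, content]."""
    serialized = json.dumps(
        [
            0,
            event["pubkey"],
            event["created_at"],
            event["kind"],
            event["tags"],
            event["content"],
        ],
        separators=(",", ":"),  # no extra whitespace
        ensure_ascii=False,     # keep UTF-8 characters as-is
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()
```

Run against the fields of the event above, this should reproduce its "id" value, assuming the escaping matches. Verifying the "sig" field additionally requires a BIP-340 Schnorr/secp256k1 library, which is outside this sketch.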