Event JSON
{
"id": "4bdf42922d09d97c05e2785618eab5f7b31ca531dbcf9b9885b5577ef7d786e1",
"pubkey": "2dde2ae210f1bf2d7649fe8f8b9ff5b7180e7e0c464ea129a23ff6cbb7e36550",
"created_at": 1700522010,
"kind": 1,
"tags": [
[
"p",
"b73da01f25ad5427c97bf64f7384f1e20da70799597dc362dfc16e3a15666816",
"wss://relay.mostr.pub"
],
[
"p",
"6418f30ce8c6b149acce5861f221a1c5eddac515c62067ebe83091ca6c4ac111",
"wss://relay.mostr.pub"
],
[
"e",
"989830b61f9ab380bbc278287533dc4543f2b1850e5b9918f8613ea1031c2bfd",
"wss://relay.mostr.pub",
"reply"
],
[
"proxy",
"https://mastodon.nzoss.nz/users/lightweight/statuses/111445410462690268",
"activitypub"
]
],
"content": "nostr:npub1ku76q8e9442z0jtm7e8h8p83ugx6wpuet97uxcklc9hr59txdqtqvastrj seems a lot of people, especially in education and edtech, are believing the hype, and contributing to it. As I see it, LLMs (I avoid the term AI as a rule, as it's inaccurate) are simply able to make grammatically plausible interpolative regurgitations of information within a 'knowledge space', defined by a training data set. But no one seems to be curating the results \u0026 ensuring that the output is substantially (and even in nuance) accurate. Much probably isn't.",
"sig": "c8221298e1f1973fa5af2661d0d0c2a97e84a1b04d5db942e988372757c173c16cc884db1ede54cb7b4ab336e6fa0fdca70c2546ab996187c793405f2371167e"
}
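
The `id` field in an event like the one above is not arbitrary: under NIP-01 it is the lowercase hex SHA-256 of a canonical JSON serialization of the array `[0, pubkey, created_at, kind, tags, content]`, and the `sig` is a Schnorr signature over that same digest. A minimal sketch of the id computation in Python (assuming the standard NIP-01 serialization; the single-tag list passed below is an illustrative stand-in, not the event's full tag set, so the resulting digest will not match the `id` shown above):

```python
import hashlib
import json

def compute_event_id(pubkey, created_at, kind, tags, content):
    """Return the NIP-01 event id: sha256 hex of the canonical serialization."""
    # NIP-01: serialize [0, pubkey, created_at, kind, tags, content] as a
    # UTF-8 JSON array with no extra whitespace, then hash it with SHA-256.
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),   # no spaces after ',' or ':'
        ensure_ascii=False,      # keep UTF-8 characters unescaped
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Illustrative call using this event's pubkey/created_at/kind but a
# shortened tag list and placeholder content (assumption for brevity):
event_id = compute_event_id(
    "2dde2ae210f1bf2d7649fe8f8b9ff5b7180e7e0c464ea129a23ff6cbb7e36550",
    1700522010,
    1,
    [["p", "b73da01f25ad5427c97bf64f7384f1e20da70799597dc362dfc16e3a15666816",
      "wss://relay.mostr.pub"]],
    "example content",
)
print(event_id)  # a 64-character lowercase hex digest
```

A relay or client recomputes this digest from the received fields and rejects the event if it does not equal `id`, which is why none of the fields in the JSON above can be edited after signing.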