Event JSON
{
  "id": "1b842408d9612d34c61bed2020c236eb448c92119cd71ed5805ad324d7a19e6f",
  "pubkey": "3533b555ee56f35ac639b764e85164893b4050569f7ee943e038ee75939e9ef9",
  "created_at": 1728990901,
  "kind": 1,
  "tags": [
    [
      "p",
      "6ce67fdd99d149c21e821627ff749b17c755e9aeb8294bdb1558931590399ea9",
      "wss://relay.mostr.pub"
    ],
    [
      "p",
      "8ca0240eaf6bc332736f59eb486087e2113d7b599910cea6bd90b6a21460134f",
      "wss://relay.mostr.pub"
    ],
    [
      "e",
      "5adc93a3989e5fe47488f53a4c955ae7d832b928b1f38ff4cceda01340ecb4a6",
      "wss://relay.mostr.pub",
      "reply"
    ],
    [
      "proxy",
      "https://mas.to/users/lewriley/statuses/113311147721343546",
      "activitypub"
    ]
  ],
  "content": "nostr:npub1dnn8lhve69yuy85zzcnl7aymzlr4t6dwhq55hkc4tzf3typen65sveyyg8 A striking thing about this paper and those cited within it is that they take a weird (to me) stance of investigating the ability of LLMs to reason, when it is already clear that they aren't built to do that. I am not trained in the field and have probably missed something crucial. Can someone explain to me why \"can LLMs reason\" even a worthy question?",
  "sig": "68e1fa306b725f026e53b2b908bd2b6541b680ca368c6a7e36987a26b95c149cd27a7ee0648c64d61329bb95c7520646dc6657f14ea3c39e2ae8e5d571e913df"
}
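
The "id" above is not arbitrary: for a Nostr event it is derived from the other fields. Below is a minimal Python sketch for recomputing it, assuming this event follows the standard NIP-01 serialization (SHA-256 of the JSON array [0, pubkey, created_at, kind, tags, content] with no extra whitespace); the function name nip01_event_id is illustrative, and signature ("sig") verification, which would need a Schnorr/secp256k1 library, is omitted.

    import hashlib
    import json

    def nip01_event_id(event: dict) -> str:
        # Serialize [0, pubkey, created_at, kind, tags, content] per NIP-01:
        # UTF-8, no whitespace between tokens, non-ASCII left unescaped.
        serialized = json.dumps(
            [
                0,
                event["pubkey"],
                event["created_at"],
                event["kind"],
                event["tags"],
                event["content"],
            ],
            separators=(",", ":"),
            ensure_ascii=False,
        )
        # The event id is the lowercase hex SHA-256 of that serialization.
        return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

    # Usage: parse the JSON above into `event`, then compare
    # nip01_event_id(event) against event["id"].

If the recomputed hash does not match "id", either the event was altered after signing or the serialization rules assumed here do not apply.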