bblastie on Nostr:
This would be very difficult to pull off and frankly unneeded if the goal is to just deliver malware. Many vastly simpler ways to do so.
The information you get from an LLM is far more likely to be tainted before we ever get to this point.
Now, it’s possible they just inject malware into the code that’s part of running the model, but I don’t see it being something the model itself does.
Published at 2025-01-30 04:00:36

Event JSON
{
  "id": "2169ed057a620de4b75f174fae6d2c202803cd0e0770605b086cbee8f9705746",
  "pubkey": "da26e54b86c9a395a4233cbb540fe2aa93cdad4a9b657ed5a724efed5859d23d",
  "created_at": 1738209636,
  "kind": 1,
  "tags": [
    [
      "e",
      "91e4caec404d82ca0f206b7739dca29861f1177f39a67c43a2a2805864274bba",
      "",
      "root"
    ],
    [
      "e",
      "456092f42dda7016b41cfc34791bbb898ec1283022744ff519d46e3d8c95a9d7",
      "wss://a.nos.lol",
      "reply"
    ],
    [
      "p",
      "1e908fbc1d131c17a87f32069f53f64f45c75f91a2f6d43f8aa6410974da5562"
    ]
  ],
  "content": "This would be very difficult to pull off and frankly unneeded if the goal is to just deliver malware. Many vastly simpler ways to do so. \n\nThe information you get from LLM is far more likely to be tainted before we ever get to this point. \n\nNow, it’s possible they just inject malware into the code that’s part of running the model, but I don’t see it being something the model itself does. ",
  "sig": "85e29a205e85bd60263162984c9b0f88766e45f6ee93e39097ef0b47dfebdc5d672be9fcf39b644f8d3b8e69ac98fceb6aff93c66a57598de52f81adb8942682"
}
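For context on the event shown above: per NIP-01, the "id" field is the SHA-256 hash of a canonical serialization of the event's other fields, and "sig" is a BIP-340 Schnorr signature over that id. A minimal sketch of the id computation in Python (this relies on json.dumps with compact separators matching NIP-01's serialization for this event's content; signature verification would additionally need a secp256k1/Schnorr library and is not shown):

import hashlib
import json

def nostr_event_id(event: dict) -> str:
    # NIP-01: id = sha256 of the UTF-8 JSON serialization of
    # [0, pubkey, created_at, kind, tags, content] with no extra whitespace.
    serialized = json.dumps(
        [0, event["pubkey"], event["created_at"], event["kind"],
         event["tags"], event["content"]],
        separators=(",", ":"),  # compact, no spaces
        ensure_ascii=False,     # keep non-ASCII characters literal
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Applied to the event JSON above, the result should equal its "id" field.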