MachuPikacchu on Nostr:
Prediction: local LLMs will be a threat vector in the near future.
As they become more useful as agents and we grant them access to local tools and accounts to act on our behalf, they become obvious targets for corruption. They're already notoriously opaque.
Imagine one trained to observe the host machine and, under certain conditions, send an obfuscated payload to a remote server.
How is this different from a standard remote-access Trojan? It can act on the malicious actor's behalf rather than awaiting instructions and proxying them, which potentially means less network activity. It can also profile the host machine and its users and execute only when the target fits (think Stuxnet, but more generalized).
In unrelated news, all of the big AI shops have been working on homomorphic encryption.
#ai #encryption #LLM #agent #homomorphicEncryption