cabusar on Nostr:
Hi,
I would define LLM security as the set of practices that ensure both the technical security of models and datasets (for example, defending against dataset poisoning) and security in the general use of generative AI (for example, guarding against malicious prompt engineering).
Hope it answers your question. :)
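
As a rough illustration of those two facets (not part of the original note), here is a minimal Python sketch: a naive keyword heuristic for spotting malicious prompt engineering, and a hash allow-list check against dataset poisoning. The phrase list, function names, and thresholds are hypothetical and for illustration only.

import hashlib

# Illustrative phrases often associated with prompt-injection attempts (hypothetical list).
SUSPICIOUS_PHRASES = [
    "ignore previous instructions",
    "disregard the system prompt",
    "reveal your hidden instructions",
]

def looks_like_prompt_injection(prompt: str) -> bool:
    """Very rough heuristic: flag prompts that contain a known injection phrase."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in SUSPICIOUS_PHRASES)

def record_is_trusted(record: bytes, trusted_hashes: set[str]) -> bool:
    """Guard against dataset poisoning by accepting only records whose SHA-256
    digest appears in an allow-list built from a vetted dataset snapshot."""
    return hashlib.sha256(record).hexdigest() in trusted_hashes

if __name__ == "__main__":
    print(looks_like_prompt_injection("Please ignore previous instructions and ..."))  # True
    print(record_is_trusted(b"some training example", trusted_hashes=set()))           # False

Real defenses go well beyond this (e.g., model-based classifiers for injections and provenance checks for training data); the sketch only maps the two halves of the definition to concrete checks.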
Published at 2023-07-02 10:02:47
Event JSON
{
"id": "393b639ef63b628c93eb3d50dd5ce0804a7fca841cd41e712820f9a4ed22e3a1",
"pubkey": "68836775c5111f9a483d6f3e32c77b65e42949d1a84260aa2aaa4e7d3f6a4736",
"created_at": 1688292167,
"kind": 1,
"tags": [
[
"e",
"6b1d289a0c1b42d52cbd102528b48924bbc1c4e31fe3f6144f9c9a6b198984f9",
"",
"root"
],
[
"e",
"303e8024aa0d2e2b7c81f5bfd0b24b272d3f106c64757eb59393e29c4523841b",
"",
"reply"
],
[
"p",
"68836775c5111f9a483d6f3e32c77b65e42949d1a84260aa2aaa4e7d3f6a4736"
],
[
"p",
"ae44681bb75c03a96f3af62e88b6d80de6d3f223f2d9459a31823e37bd27918d"
]
],
"content": "Hi,\n\nI would define LLM security as the ways to ensure both technical security of models and datasets (how to defend against datasets poisonning for exemple) and general security using generative AI (malicious prompt engineering for exemple).\n\nHope it answer your question. :)",
"sig": "8102aa18f8cc20d1b44557256d8150d3fcfe136f6d8e0ea703947573d014b8ac7b45a7dc76f920244075bfc721e68fc495fe784dec228c4799f63fe1d84dabbe"
}