Malte Engeler on Nostr: I am reading a lot of legal academic papers about individuals rights against ...
I am reading a lot of legal academic papers about individuals' rights against „malfunctioning AI“ at the moment. Some authors suggest that companies that use „AI“ - especially LLM-powered systems - should only be liable if the violation of rights is built into the „AI“.
I am wondering (and this is a serious question): Is there such a thing as an unbiased LLM? Isn't the creation of defamatory or inaccurate texts a fundamental feature (not a bug) of all LLMs?
Published at 2024-03-23 15:00:20
Event JSON
{
  "id": "31b11812c575bd1cb9f1b8989bda5f76f8861bef21c6499085cd8bea9c1a939e",
  "pubkey": "97b7416af6549a9a115317ef79eb8be793e1f8a2a08d5dc00eedf490a4751b31",
  "created_at": 1711206020,
  "kind": 1,
  "tags": [
    [
      "proxy",
      "https://legal.social/users/malteengeler/statuses/112145597739890565",
      "activitypub"
    ]
  ],
  "content": "I am reading a lot of legal academic papers about individuals rights against „malfunctioning AI“ atm. Some authers suggest that companies that use „AI“ - especially LLM powered - should only be liable in case the violation of rights is build into the „AI“. \n\nI am wondering (and this is a serious question): Is there something like an unbiased LLM. Isn‘t the creation of defamatory or inaccurate texts a fundamental feature (not a bug) of all LLM?",
  "sig": "e3ff217fe24c58480785b829080ba1e29053c132cd9aa0ab54eb1b521495455eb5621717281fd19ad667c270eac1bd0f0077ba41c688da5461dada5c582a937c"
}
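
For readers curious about the raw event above: in the Nostr protocol (NIP-01), the "id" field is the SHA-256 hash of a compact JSON serialization of the other fields, and "sig" is a Schnorr signature over that id made with the key behind "pubkey". Below is a minimal, illustrative Python sketch of that id derivation. It copies the field values from the event above, abbreviates the content string for readability, and relies on json.dumps approximating NIP-01's escaping rules, so treat it as a sketch rather than a reference implementation.

    # Illustrative sketch: recomputing a Nostr event id per NIP-01.
    # The content string is abbreviated here; the hash only matches the "id"
    # above if the full content from the event JSON is pasted in verbatim.
    import hashlib
    import json

    pubkey = "97b7416af6549a9a115317ef79eb8be793e1f8a2a08d5dc00eedf490a4751b31"
    created_at = 1711206020
    kind = 1
    tags = [
        [
            "proxy",
            "https://legal.social/users/malteengeler/statuses/112145597739890565",
            "activitypub",
        ]
    ]
    content = "I am reading a lot of legal academic papers ..."  # abbreviated; use the full string

    # NIP-01: serialize [0, pubkey, created_at, kind, tags, content] as compact
    # UTF-8 JSON (no whitespace between tokens) and hash it with SHA-256.
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    event_id = hashlib.sha256(serialized.encode("utf-8")).hexdigest()
    print(event_id)

Verifying the "sig" field would additionally require a BIP-340 Schnorr check of the signature against the id and pubkey, which needs an external secp256k1 library and is omitted here.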