Angelo Veltens on Nostr:
I also played around with running an #LLM on the #RaspberryPi 5. Using #Ollama with the gemma2:2b model turns out to give quite good performance (both speed- and quality-wise), even for the #german language. Using #HomeAssistant I can actually talk to it by voice, and it responds within a few seconds. (I configured a system prompt that constrains it to short answers.)
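For anyone wanting to reproduce the setup, here is a minimal sketch of querying the model through Ollama's local REST API (default port 11434), assuming Ollama is running and gemma2:2b has been pulled. The system-prompt wording and the ask helper are illustrative; the post does not quote the actual prompt:

import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint

# Illustrative system prompt; the post's actual wording is not given.
SYSTEM_PROMPT = "Answer in at most two short sentences."

def ask(question: str) -> str:
    # Send a single non-streaming chat request to the local gemma2:2b model.
    response = requests.post(
        OLLAMA_URL,
        json={
            "model": "gemma2:2b",
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": question},
            ],
            "stream": False,  # return the complete reply as one JSON object
        },
        timeout=120,  # a 2B model on a Raspberry Pi 5 may take a few seconds
    )
    response.raise_for_status()
    return response.json()["message"]["content"]

# Example: a German question ("What will the weather be like today?").
print(ask("Wie wird das Wetter heute?"))

In the post's setup, Home Assistant's voice pipeline plays this client role, forwarding transcribed speech to Ollama and reading the reply back aloud.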
Published at 2024-12-08 18:53:39

Event JSON
{
  "id": "6c7cdf16e06520c2a49ba6fc7f736904d885d67ac7de7912658e2bc095919275",
  "pubkey": "64a3e3ff33c9b037d432cb31509d43095e4384896824912e560047ce064f39e9",
  "created_at": 1733684019,
  "kind": 1,
  "tags": [
    ["t", "LLM"],
    ["t", "raspberrypi"],
    ["t", "ollama"],
    ["t", "german"],
    ["t", "homeassistant"],
    ["proxy", "https://social.veltens.org/users/angelo/statuses/113618715923768972", "activitypub"]
  ],
  "content": "I also played arround with running an #LLM on the #RaspberryPi 5. Using #Ollama with the gemma2:2b model turns out to give quite a good performance (speed and quality wise), even for #german language. Using #HomeAssistant I can actually talk to it using voice and it responds within a few seconds. (I configured a system prompt that constraints it to give short answers)",
  "sig": "aedb256fe7e2b0330eed030666f6cbafa107df0ccd6fddf8ef4235575f4926a865d241d0ee5048a85a0b6a6bec7ad08fd2d6b0c56de2d5ac113084c820555df2"
}
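For reference, the id field in the event above is not arbitrary: per Nostr's NIP-01, it is the SHA-256 hash of a canonical JSON serialization of the event's fields. A minimal Python sketch of that derivation (verifying the sig is a separate Schnorr-signature check, not shown here):

import hashlib
import json

def nostr_event_id(event: dict) -> str:
    # NIP-01: id = SHA-256 of the JSON array
    # [0, pubkey, created_at, kind, tags, content],
    # serialized with no extra whitespace and UTF-8 encoded.
    serialized = json.dumps(
        [0, event["pubkey"], event["created_at"], event["kind"],
         event["tags"], event["content"]],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

Running this over the event above should reproduce the id shown.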