Simon B. Støvring on Nostr:
I've been spending time recently getting LLMs running locally on iPhones and Macs. Two days ago, I got Llama 3.2 running locally using Apple’s MLX framework, and today I implemented function calling.
In this example, I’m using two LLMs: one to identify which functions to call based on the user’s query, and another to generate responses based on the outputs of those functions.
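Roughly, the flow looks like this. This is only a minimal sketch of the idea in plain Swift: `LanguageModel`, `ToolFunction`, `FunctionCall`, and `FunctionCallingPipeline` are placeholder names I'm using for illustration, not types from MLX or from my project; the MLX-backed Llama model would sit behind the `LanguageModel` protocol.

```swift
import Foundation

// Stand-in for whatever wrapper drives the MLX-backed model (hypothetical).
protocol LanguageModel {
    func complete(prompt: String) async throws -> String
}

// A callable function the first model can choose from.
struct ToolFunction {
    let name: String
    let description: String
    let run: ([String: String]) async throws -> String
}

// The function call the first model decided on, decoded from its JSON reply.
struct FunctionCall: Decodable {
    let name: String
    let arguments: [String: String]
}

struct ToolNotFound: Error {}

struct FunctionCallingPipeline {
    let functionSelector: LanguageModel   // picks which function to call
    let responder: LanguageModel          // writes the final answer
    let tools: [ToolFunction]

    func answer(_ query: String) async throws -> String {
        // 1. Ask the first model which function to call, as JSON.
        let toolList = tools
            .map { "- \($0.name): \($0.description)" }
            .joined(separator: "\n")
        let selectionPrompt = """
        You can call these functions:
        \(toolList)
        Reply with JSON of the form {"name": ..., "arguments": {...}} for the user query:
        \(query)
        """
        let selection = try await functionSelector.complete(prompt: selectionPrompt)
        let call = try JSONDecoder().decode(FunctionCall.self, from: Data(selection.utf8))

        // 2. Execute the chosen function.
        guard let tool = tools.first(where: { $0.name == call.name }) else {
            throw ToolNotFound()
        }
        let output = try await tool.run(call.arguments)

        // 3. Ask the second model to phrase a response from the function output.
        let responsePrompt = """
        The user asked: \(query)
        The function \(call.name) returned: \(output)
        Write a helpful answer based on this result.
        """
        return try await responder.complete(prompt: responsePrompt)
    }
}
```

Conforming the model to that protocol is the only integration point in this sketch, so the selector and responder slots can be filled with differently sized or quantized models.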
Exploring local LLMs has been quite the roller coaster, but I'm pleased with the results I'm seeing right now.