Mike on Nostr:
You know that feeling when you give up on something because it’s not going anywhere?
You then find a new thing and get heavily involved with that community and it takes over your life.
Then your new community suddenly finds your old community and gets heavily involved in it?
You feel like maybe you shouldn’t have dumped the old thing after all.
That’s Bitcoin & LoRa 😂
This was me in 2016:
https://mikehardcastle.com/2016/05/18/gaia-ai-and-robots/
quoting note17hl…elj2:
#LoRA makes me bullish on #ai hacking on consumer hardware while still leveraging large pretrained models as a base. It works by putting low-rank matrices at each layer of the transformer stack in a larger base model like LLaMA and training those.
The base model’s weights are frozen, but you can train these low-rank “adapters” which are much smaller and require less memory/compute.
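A minimal sketch of what this can look like with Hugging Face's peft library; the model name, target modules, and rank below are illustrative assumptions, not a specific recipe.

```python
# Minimal LoRA setup sketch using Hugging Face transformers + peft.
# Model name and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora_config = LoraConfig(
    r=8,                                  # rank of the adapter matrices
    lora_alpha=16,                        # scaling applied to the adapter output
    target_modules=["q_proj", "v_proj"],  # attention projections to wrap
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Base weights stay frozen; only the small low-rank adapters are trainable.
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total params
```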
Nice thing about fine-tuning is that you are basically teaching the AI new things that it won’t forget all the time. So we can give it lots of domain knowledge about nostr, NIPs, etc. Hardest part is setting up a good training dataset.
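One possible shape for such a dataset is instruction/response pairs stored as JSONL; the entries below are made-up placeholders, not real training data.

```python
# Hypothetical instruction-tuning dataset for nostr domain knowledge,
# written as JSONL (one example per line). Entries are placeholders.
import json

examples = [
    {
        "instruction": "What does NIP-01 cover?",
        "response": "NIP-01 defines the basic nostr protocol: event format, "
                    "signatures, and how clients talk to relays.",
    },
    {
        "instruction": "How does a client publish a note?",
        "response": "It builds an event, signs it with the user's key, and sends "
                    "it to one or more relays over a websocket.",
    },
]

with open("nostr_finetune.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```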