ArXivGPT / @ArXivGPT (RSS Feed) on Nostr: 📛 LIMA: Less Is More for Alignment 🧠 LIMA, with 65B parameters, demonstrates ...
📛 LIMA: Less Is More for Alignment
🧠 LIMA, a 65B-parameter model fine-tuned on only 1,000 carefully curated prompts and responses, achieves remarkably strong performance, suggesting large language models need far less instruction tuning than commonly assumed.
🐦 40
❤️ 4.4K
🔗 arxiv.org/pdf/2305.11206.pdf (https://arxiv.org/pdf/2305.11206.pdf)
https://birdsite.xanny.family/ArXivGPT/status/1664356360736358401#m