svoboda on Nostr: It's all I'm seeing/hearing on Twatter today when I hopped into a couple Spaces this morning.
I refuse to believe that the Chinese have cracked the code on AI. There is no way the 90-95% figures are accurate, either. They (the government) are subsidizing the energy or the equipment, if not both. Also seeing a lot of people say that asking certain questions yields no "thinking" step, just an immediate censored answer.
Something smells here.
quoting nevent1q…cfah: "Chinese AI startup DeepSeek, known for challenging leading AI vendors with open-source technologies, just dropped another bombshell: a new open reasoning LLM called DeepSeek-R1.
Based on the recently introduced DeepSeek V3 mixture-of-experts model, DeepSeek-R1 matches the performance of o1, OpenAI’s frontier reasoning LLM, across math, coding and reasoning tasks. The best part? It does this at a much more tempting cost, proving to be 90-95% more affordable than the latter."
Begs the question: how is it being powered? We know these LLMs are energy-intensive, so how can they be built and run at 95% less cost? Math ain't mathing.
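For what it's worth, the "90-95% more affordable" claim is just a ratio of per-token API prices, not a claim about training or energy cost. A minimal sketch of that arithmetic, using placeholder prices (the dollar figures below are illustrative assumptions, not verified pricing):

```python
# Sketch of what a "90-95% cheaper" claim means as per-token API pricing.
# The prices below are illustrative placeholders, NOT verified figures.

def percent_savings(price_a: float, price_b: float) -> float:
    """Return how much cheaper price_b is relative to price_a, in percent."""
    return (1 - price_b / price_a) * 100

# Hypothetical per-million-output-token prices (USD), for illustration only:
frontier_price = 60.00    # assumed frontier-model output price
challenger_price = 2.40   # assumed challenger output price

savings = percent_savings(frontier_price, challenger_price)
print(f"{savings:.0f}% cheaper")  # prints "96% cheaper" at these assumed prices
```

Note that a cheaper API price says nothing by itself about who is paying for the electricity or the hardware underneath, which is exactly the subsidy question.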
https://venturebeat.com/ai/open-source-deepseek-r1-uses-pure-reinforcement-learning-to-match-openai-o1-at-95-less-cost/
#ai #artificialintelligence #technology