Baldur Bjarnason on Nostr
More context: the consensus (see https://klu.ai/blog/gpt-4-llm for example) seems to be that a GPT-4 ChatGPT inferencing operation runs on a cluster of 128 A100 GPUs. Each cluster uses around 50 000 watts, not counting the power requirements of the CPUs and other hardware.
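A rough back-of-the-envelope check of that figure (my own sketch, not from the post or the linked article; the 400 W value is the nominal TDP of an A100 SXM module and is an assumption here):

# Sanity check of the ~50 kW per-cluster figure quoted above.
# Assumption: each A100 (SXM) draws roughly its 400 W TDP under inference
# load; CPUs, networking, and cooling are excluded, matching the post's caveat.

GPUS_PER_CLUSTER = 128
A100_TDP_WATTS = 400  # assumed per-GPU draw

cluster_gpu_power_w = GPUS_PER_CLUSTER * A100_TDP_WATTS
print(f"GPU-only cluster draw: {cluster_gpu_power_w} W")  # 51200 W, i.e. ~50 kW

So 128 GPUs at roughly 400 W each lands at about 51 kW, consistent with the "around 50 000 watts" claim before accounting for the rest of the hardware.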
Published at 2024-07-03 14:41:14

Event JSON
{
  "id": "b8d50a6804b171006bb16904e4424ba5ada51f71278965db91dfae7050c4720a",
  "pubkey": "11f94b00429b537972e1e4b4858c9a4226382961ef5995e3b77ff20bf92899d3",
  "created_at": 1720017674,
  "kind": 1,
  "tags": [
    [
      "e",
      "6661709d670fbbe6abd8d26eb2ae074e98d1f273982a82ed9fa29c75be045396",
      "wss://relay.mostr.pub",
      "reply"
    ],
    [
      "proxy",
      "https://toot.cafe/users/baldur/statuses/112723078284935210",
      "activitypub"
    ]
  ],
  "content": "More context: the consensus (see https://klu.ai/blog/gpt-4-llm for example) seems to be that a GPT-4 ChatGPT inferencing operation runs on a cluster of 128 A100 GPUs. Each cluster uses around 50 000 watts, not counting the power requirements of the CPUs and other hardware.",
  "sig": "b064587b99b244090226bd92ec4003c6de40154f97ec13a1d84d27548e0dd3314e876ca3a3f5cb8c012f18ea380b7ba93aacdcb46e61eb31993ba5d3fa562d22"
}
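For anyone reading the raw event, the "id" field is not arbitrary: per NIP-01 it is the SHA-256 hash of a canonical serialization of the event's fields. A minimal sketch of that check, assuming the JSON above has been saved as event.json (the filename is mine):

# Recompute a Nostr event id per NIP-01: the id is the SHA-256 hash of the
# UTF-8 JSON serialization (no whitespace) of
#   [0, pubkey, created_at, kind, tags, content]
import hashlib
import json

with open("event.json") as f:  # assumed filename for the JSON above
    event = json.load(f)

serialized = json.dumps(
    [0, event["pubkey"], event["created_at"], event["kind"],
     event["tags"], event["content"]],
    separators=(",", ":"),
    ensure_ascii=False,
)
computed_id = hashlib.sha256(serialized.encode("utf-8")).hexdigest()
print(computed_id == event["id"])  # True if the id matches the event's content

Checking the "sig" field additionally requires a Schnorr signature verification over secp256k1 against the pubkey, which is omitted here.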