Daniel Spiewak on Nostr
In particular, AI and inefficiency are often discussed in the same breath, particularly with respect to energy consumption. Clearly, training is incredibly expensive, and the transformer which powers DLSS probably gobbled a huge amount of energy coming into existence. But now that we *have* it, its inferred approximation of the shader computation function seems to be about an order of magnitude *less* energy-hungry than the traditional computation analogue. That's really interesting.
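The amortization argument above can be made concrete with a back-of-envelope break-even calculation: a one-time training cost divided by the per-frame energy saved at inference time. Every number below is a hypothetical placeholder chosen only to illustrate the arithmetic, not a measurement from the post; the only figure taken from the post is the order-of-magnitude (~10x) inference saving.

```python
# Hedged sketch: all constants are illustrative placeholders, not real DLSS data.
train_energy_j = 1e12        # assumed one-time training energy cost (joules)
shader_j_per_frame = 5e-3    # assumed traditional shader energy per frame
dlss_j_per_frame = 5e-4      # ~10x less, per the post's order-of-magnitude claim

# Energy saved every time inference replaces the traditional shader pass
savings_per_frame = shader_j_per_frame - dlss_j_per_frame

# Frames of inference needed before the training cost is paid back
break_even_frames = train_energy_j / savings_per_frame
print(f"Break-even after ~{break_even_frames:.2e} frames")
# prints ~2.22e+14 with these placeholder numbers
```

The point of the sketch is that the training cost is fixed while the savings scale with deployment, so with enough aggregate frames rendered across all users, the net energy balance can flip in inference's favor.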
Published at 2025-01-07 15:49:29

Event JSON
{
  "id": "6c432983a7001500962cbc2dbbd887b3d5686d9d05d546884595fbb3178079dc",
  "pubkey": "9a64dd44256e6741e56390a24c93311b2f8fe69dd81379b18b58fb9fec304a83",
  "created_at": 1736264969,
  "kind": 1,
  "tags": [
    [
      "e",
      "a243684fc7d2c48899f06ce5c0d6f22f2d5bee79ea92692514a0b08be9aa5ba5",
      "wss://relay.mostr.pub",
      "reply"
    ],
    [
      "proxy",
      "https://fosstodon.org/users/djspiewak/statuses/113787861022632458",
      "activitypub"
    ]
  ],
  "content": "In particular, AI and inefficiency are often discussed in the same breath, particularly with respect to energy consumption. Clearly, training is incredibly expensive, and the transformer which powers DLSS probably gobbled a huge amount of energy coming into existence. But now that we *have* it, its inferred approximation of the shader computation function seems to be about an order of magnitude *less* energy-hungry than the traditional computation analogue. That's really interesting.",
  "sig": "a25506f88ce8b4116d31fba5b0a79d0fedd98f6d199706372c55348f69f2b20e24d2b7789a9e370c98a0ff8765eb38ff4afe8379ae3edd1e66598d2d85a4cf34"
}