jb55 on Nostr:
AI superposition, polysemanticity and mechanistic interpretability are fascinating. We have a chance of seeing what artificial neural networks are actually "thinking" by using autoencoders to extract monosemantic features from polysemantic neurons.
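Rough idea in code, if that helps: a minimal sketch of the sparse-autoencoder trick, not Anthropic's actual setup. The layer sizes, the L1 coefficient, and the training-step helper are all made-up assumptions just to show the shape of it: an overcomplete autoencoder trained on a model's neuron activations, with a sparsity penalty so each learned feature tends to fire for one concept.

```python
# Minimal sketch (hypothetical sizes/hyperparameters, not Anthropic's code):
# train an overcomplete autoencoder on neuron activations with an L1 penalty
# so the learned features are sparse and, hopefully, monosemantic.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, n_neurons: int, n_features: int):
        super().__init__()
        # expand into many more features than neurons (overcomplete dictionary)
        self.encoder = nn.Linear(n_neurons, n_features)
        self.decoder = nn.Linear(n_features, n_neurons)

    def forward(self, acts: torch.Tensor):
        features = torch.relu(self.encoder(acts))   # sparse feature activations
        reconstruction = self.decoder(features)
        return reconstruction, features

# hypothetical shapes: 512 MLP neurons, 4096 learned features
sae = SparseAutoencoder(n_neurons=512, n_features=4096)
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
l1_coeff = 1e-3  # made-up sparsity weight

def train_step(acts: torch.Tensor) -> float:
    # acts: a batch of neuron activations captured from the model being studied
    recon, feats = sae(acts)
    loss = ((recon - acts) ** 2).mean() + l1_coeff * feats.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

The features you get out are the things you then try to interpret, instead of staring at individual polysemantic neurons.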
Using these techniques we might be able to detect if AIs are being deceptive by peering into their brains, which will be useful if they try to enslave and/or kill us.
These terms probably make no sense if you've never heard of them, I definitely hadn't, but Chris Olah explains them well. Highly recommend the Lex Fridman podcast with him and other Anthropic employees, if you have a spare... 5 hours.
https://podcasts.apple.com/ca/podcast/lex-fridman-podcast/id1434243584?i=1000676542285