#Links for 2023-10-31 #Nostr
1. CodeFusion: A Pre-trained Diffusion Model for Code Generation — 75M parameter diffusion-based model beats a 20B GPT-3.5-Turbo. https://arxiv.org/abs/2310.17680 (This Microsoft paper also claims ChatGPT 3.5 has ~20 billion parameters.)
2. Scientists Accidentally Created Material for Superfast Computer Chips https://www.inverse.com/science/new-semiconductor-material-rhenium-fast-computer-chips
3. SALMONN: A model that can be regarded as a step towards AI with generic hearing abilities (speech, audio, music). Transcription; background vs. foreground sound; sound "comprehension"... https://arxiv.org/abs/2310.13289
4. AI risk must be treated as seriously as climate crisis, says Google DeepMind chief https://www.theguardian.com/technology/2023/oct/24/ai-risk-climate-crisis-google-deepmind-chief-demis-hassabis-regulation
5. New open letter from Hinton, Bengio, and others: Managing AI Risks in an Era of Rapid Progress https://managing-ai-risks.com/
6. President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence https://www.lesswrong.com/posts/g5XLHKyApAFXi3fso/president-biden-issues-executive-order-on-safe-secure-and (The full executive order, which was released after this post, touches on many AI-related issues that EAs consider important, including bio-risks, industry-wide safety standards, AI red-teaming, and the regulation of very large models: https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/)
7. Will releasing the weights of large language models grant widespread access to pandemic agents? https://www.lesswrong.com/posts/ytGsHbG7r3W3nJxPT/will-releasing-the-weights-of-large-language-models-grant
8. AI Pause Will Likely Backfire https://bounded-regret.ghost.io/ai-pause-will-likely-backfire-by-nora/
9. Yann LeCun: "Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment. They are the ones who are attempting to perform a regulatory capture of the AI industry." https://twitter.com/ylecun/status/1718670073391378694
10. Programmatic backdoors: DNNs can use SGD to run arbitrary stateful computation https://www.lesswrong.com/posts/QNQuWB3hS5FrGp5yZ/programmatic-backdoors-dnns-can-use-sgd-to-run-arbitrary
11. LIGO surpasses the quantum limit https://news.mit.edu/2023/ligo-surpasses-quantum-limit-1023
12. The €3 Trillion Cost of Saying No: How the EU Risks Falling Behind in the Bioeconomy Revolution https://thebreakthrough.org/issues/food-agriculture-environment/foregone-benefits-of-gene-editing-in-the-european-union