Terence Tao on Nostr:
#AI #misinformation comes in many forms. One source is malicious actors deliberately using AI to generate text, images, and other media to manipulate others; another is the AI hallucinating plausible-looking nonsense that is then accepted as truth. But a third category comes from AI itself being a poorly understood technology, allowing implausible stories about it to go viral before they can be fact-checked.
A good example from this final category was the recent story about a US Air Force drone "killing" its operator in a simulated test on the grounds that the operator (who had the final authority on whether to fire) was hindering its primary mission of killing as many targets as possible. As it turns out, this was a *hypothetical* scenario presented by an Air Force colonel at a conference hosted by the Royal Aeronautical Society to illustrate the AI alignment problem, not an actual simulation; nevertheless, the story rapidly went viral, with some versions even going so far as to say (or at least suggest) that a drone operator was actually killed in real life.
In hindsight, this particular scenario was quite implausible - it required the AI piloting the drone to have a far greater degree of autonomy and theory of mind (and far greater processing power) than the task at hand required, and for many obvious guardrails and safety features that one would naturally place on such an experimental military weapon to be either easily circumvented or completely absent. But the resonance of the story certainly illustrates the widespread unease with, and unfamiliarity with, the actual capabilities of this technology.
https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test