ElectronicsQuestions /
npub152j…tfmu
2023-12-09 22:10:39
in reply to nevent1q…3n4f

Doesn't seem too bad, compared to what the EU usually does. No mention of anything regarding AI run on home machines, except possibly that "software" would have to make it clear that AI-generated images are AI-generated, and possibly some manufacturer assurances in "high risk" areas such as "hiring and education". Not sure how to interpret the text: does it apply to downloadable software or just software running on someone's server, and what about open source with random private contributors? Use of facial recognition by police and governments would be restricted outside of certain exemptions. (Great, in my opinion, unless those exemptions are an elastic clause that can be used to bypass most of the restrictions, or the restrictions themselves are weak - it wouldn't surprise me...)

Big players like ChatGPT seem to be regulated harder, but as far as I can tell, not much beyond what they are already doing themselves.

Could cause problems for some kinds of companies that want to use AI in their business, though. The article is a bit vague about exactly what is regulated, but the focus seems to be AI that is used to analyze, and would have an impact on, people other than the ones commanding it. (A double-edged sword in my opinion, until I know what it means more specifically - I value privacy very highly and wouldn't want to be automatically excluded in innumerable situations, for example, but as usual the devil may be in the details, so that excluding people like us turns out to be perfectly OK, maybe even mandatory, etc... I haven't read the actual law text, and even if I did, it would probably not answer my questions. The EU has a tendency to make laws first, and only afterwards take a look at what they mean...)

Some excerpts:

"European policymakers focused on A.I.’s riskiest uses by companies and governments, including those for law enforcement and the operation of crucial services like water and energy. Makers of the largest general-purpose A.I. systems, like those powering the ChatGPT chatbot, would face new transparency requirements. Chatbots and software that creates manipulated images such as “deepfakes” would have to make clear that what people were seeing was generated by A.I., according to E.U. officials and earlier drafts of the law."

"Use of facial recognition software by police and governments would be restricted outside of certain safety and national security exemptions. Companies that violated the regulations could face fines of up to 7 percent of global sales."

"Companies that make A.I. tools that pose the most potential harm to individuals and society, such as in hiring and education, would need to provide regulators with proof of risk assessments, breakdowns of what data was used to train the systems and assurances that the software did not cause harm like perpetuating racial biases. Human oversight would also be required in creating and deploying the systems."

"Some practices, such as the indiscriminate scraping of images from the internet to create a facial recognition database, would be banned outright."

"The new regulations will be closely watched globally. They will affect not only major A.I. developers like Google, Meta, Microsoft and OpenAI, but other businesses that are expected to use the technology in areas such as education, health care and banking. Governments are also turning more to A.I. in criminal justice and the allocation of public benefits."
Author Public Key
npub152jgfxwyqfpj5d448pq5sdevv37t66xshm8gpvkszp6kc0za7dyshftfmu