Description
Since artificial intelligence (AI) and algorithmic decisions increasingly influence our everyday lives, the European Parliament is currently working on several own-initiative reports – texts that call on the Commission to present a legislative proposal. I am the rapporteur – in other words, the author of the report – in my Committee on the Internal Market and Consumer Protection. I have already published my draft opinion (see the discussion tab) and will present it in the committee on 18 May 2020 (livestream). The other political groups, as well as I myself, now have until 19 May 2020 to propose amendments to my draft. This is why I am calling on you to provide me with any comments or suggestions you have on the current text!
For further background information on the report and the subject in general, check out my blogpost.
LATEST COMMENTS
-
I would add that there is a need to model operational accountability and impact assessment so as to deter public and private entities from implementing AI systems frivolously (as in Sweden, where "the Swedish data protection regulator recently banned facial recognition in schools, based on the principle of data minimization, which requires that the minimum amount of sensitive data should be collected to fulfill a defined purpose. Collecting facial data in schools that could be used for a number of unforeseen purposes, it ruled, was neither necessary nor a proportionate way to register attendance."). I would want to empower the people who may be adversely affected by AI by recommending the mandatory creation of simple, transparent, time-boxed mitigation channels, such that humans can easily request human review of AI decisions that may have been made in error. See the CIGI summary here: "Policies around AI systems must focus on ensuring that those directly impacted have a meaningful say in whether these systems are used at all, and in whose interest." The article also references: "The Algorithmic Accountability Act of 2019, a proposed bill in the United States Congress, attempts to regulate AI with an accountability framework. This legislation requires companies to evaluate the privacy and security of consumer data as well as the social impact of their technology, and includes specific requirements to assess discrimination, bias, fairness and safety. Similarly, the Canadian Treasury Board's 2019 Directive on Automated Decision-Making requires federal government agencies to conduct an algorithmic impact assessment of any AI tools they use. The assessment process includes ongoing testing and monitoring for 'unintended data biases and other factors that may unfairly impact the outcomes.'" (Centre for International Governance Innovation: https://www.cigionline.org/articles/artificial-intelligence-policies-must-focus-impact-and-accountability)
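To make the two proposals above concrete, here is a minimal sketch, in Python, of a mandatory impact-assessment record and a time-boxed human-review channel. Every name, field and the 14-day window are illustrative assumptions, not anything from the CIGI article, the Canadian directive or the draft opinion.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class ImpactAssessment:
    """Must exist and be approved before an AI system may be deployed."""
    system_name: str
    purpose: str                 # the 'defined purpose' of data minimisation
    data_collected: list[str]    # what is gathered, so necessity can be tested
    risks_identified: list[str]  # discrimination, bias, fairness, safety, ...
    mitigations: list[str]
    approved: bool = False

def can_deploy(assessment: ImpactAssessment) -> bool:
    """Deter frivolous deployment: sign-off plus a mitigation for every risk."""
    return assessment.approved and len(assessment.mitigations) >= len(assessment.risks_identified)

@dataclass
class ReviewRequest:
    """A simple, transparent, time-boxed channel to contest an AI decision."""
    decision_id: str
    reason: str
    filed_at: datetime = field(default_factory=datetime.utcnow)

    @property
    def deadline(self) -> datetime:
        # 'Time-boxed': the operator must respond within an assumed 14 days.
        return self.filed_at + timedelta(days=14)
```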
-
Dear Alexandra, dear team, all your suggestions are highly valuable, thank you very much for this initial report! For my MA thesis, I am currently analysing biometric and emotional AI. A point that struck me was the absence of GDPR protection for emotional datasets (e.g. in voice assistants, FRT or sensors) when they take the form of pseudonymous aggregate data. However, I am not sure to what extent the ePrivacy Directive would/will tackle this issue. Below I include a paragraph from my draft (!) thesis to illustrate the point: "The large-scale collection of information on individuals' emotional states is among the most concerning developments in relation to biometric and emotional AI technology. In fact, privacy protection of individuals is not always guaranteed. Information about emotional states, gathered with biometric technology, can be highly valuable even if individuals cannot be singled out. In particular, aggregate datasets on emotional behaviour can be gathered without containing 'personal' or 'sensitive' information. Consider the case of a surveillance camera in a public space: the video material collected by the camera could be analysed by an algorithm in order to 'read' facial expressions and detect people's emotions. This example raises a range of issues. First, it is not clear whether the data was collected with consent. Second, although information about emotions is surely rather personal, because the camera cannot link it to an individual, the emotions are not considered 'personal' data. Third, if a certain person were regularly filmed and their emotions tracked frequently, their safety, consumer and/or fundamental rights would be at stake. The tracking of emotions via facial recognition is thus a particularly critical use case of AI technology, based on large-scale datasets and algorithmic infrastructures in combination with biometric technological artefacts." The main idea/source is Andrew McStay's book "Emotional AI: The Rise of Empathic Media" (2018). Any questions, don't hesitate to contact me. Wishing you the best of success with the report! All the best, Rosanna
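To illustrate the gap described above, here is a minimal sketch of a pipeline that retains only hourly aggregate emotion counts from a camera feed; detect_emotion() is a hypothetical stand-in for a real classifier. Nothing stored can single out an individual, which is exactly why such datasets can fall outside the GDPR's notion of personal data.

```python
from collections import Counter
from datetime import datetime

# (hour bucket, emotion label) pairs: no IDs, no images, no biometric templates
emotion_log: list[tuple[str, str]] = []

def detect_emotion(frame) -> str:
    """Hypothetical stand-in for an emotion recognition model."""
    return "neutral"  # a real system would run e.g. a facial-expression classifier

def process_frame(frame) -> None:
    emotion = detect_emotion(frame)
    hour = datetime.now().strftime("%Y-%m-%d %H:00")
    emotion_log.append((hour, emotion))  # the frame itself is discarded

def hourly_summary() -> Counter:
    # Output such as {'2020-05-17 14:00/anger': 311} is arguably not
    # 'personal data', since no one can be singled out - the gap at issue.
    return Counter(f"{h}/{e}" for h, e in emotion_log)
```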
-
Do not create an "AI authority" with regulatory powers at either the Member State or EU level. Regulation should instead happen within existing enforcement bodies, with an additional level of coordination. To support them with resources and expertise, create independent centres of expertise on AI at the national level to monitor, assess, conduct research on, report on, and provide advice to government and industry, in coordination with regulators, civil society and academia, about the human rights and societal implications of the use of algorithms, automated decision-making systems and AI.
-
Agreed that both the private and public sectors should ensure a high level of effective remedies. Just noting that there are areas where the public sector's obligations are absolute, and that the public sector is also responsible for ensuring that the robustness of remedies, as described, is enforced in the private sector as well.
-
You could specifically mention rule-based systems, as they can have effects just as serious as those of machine-learning-based systems; I would suggest including them in the list after machine learning. I also think there is no need to mention deep learning specifically, as it is covered under machine learning.
-
Dear Alexandra, first of all, great work so far. I really appreciate your approach of ensuring a high level of consumer protection. However, to reach this goal, I recommend NOT differentiating between consumers and professionals. The reason can be seen in the case law of consumer law and unfair competition law: consumers usually do not enforce their consumer rights, as the legal process can be complicated and expensive, so the level of compliance is low. In contrast, in cases where the rules apply to all subjects (consumers and professionals), the companies addressed by the provisions are more likely to comply with them, for the simple reason that the risk of facing suit is much higher. To sum up: to ensure a high level of consumer protection, it is necessary to create a recognisable legal risk, and the best way to do so is to set companies against each other. From the perspective of the companies addressed (users of algorithms), this proposal is also advantageous, because the implementation would be much more manageable. Moreover, SMEs as such are also worthy of protection. Imagine, for instance, a small café discriminated against by Google's algorithms; it would have no chance of defence. Greetings from Berlin, Johannes Stuve (LAG Digitales Berlin/BAG Digitales)
-
I only found out about all this from a Facebook post of your article/report. I am an American political scientist/futurist and a big fan of the EU, and I opposed Brexit. However, this is probably the kind of legislation that the Tories used to get Brits to want out of the EU. While most EU regulatory regimes are superior to American ones, this particular "advance" has been one I've worried about since I became somewhat well known through an article I wrote in The Futurist in 1981 called "Teledemocracy." I recall a conversation I had with the late great futurist Alvin Toffler on this very subject. It was during our watching Kubrick's 2001, and the subject was AI, although that term had not yet been invented. You may know or recall that in that motion picture, the mission was programmed so that "HAL" the computer was in charge. His program had been altered, but the crew was unaware of the alteration in the programming. It took human ingenuity to finally get HAL to obey the human crew before disaster ended the mission. So the question was: can you trust AI without allowing humans to make the final decision when human life is affected? This was the futurist's question, as old as Socrates' question: "Who will guard the guardians?" The answer, my friend, is blowing in the wind, and this report is the wind, not the answer. As Alvin Toffler told me then, and it is particularly applicable now, "Experts serve humans, not humans the experts." The same must be the mantra applicable to AI. AI cannot be allowed to make any such decisions. It is only to present an orderly set of options to human "representatives" of the people. What you are headed for is the worst dystopian "government" of the machines, by the machines and for the machines. Stop this madness. There is no way to program empathy into AI. Without it, all decisions made by AI are "artificial" by definition. If the EU passes such a law or regulation, then whatever country I lived in within the EU, I'd ask for an immediate "Exit." Dr. Ted Becker
DRAFT OPINION
with recommendations to the Commission on the framework of ethical aspects of artificial intelligence, robotics and related technologies
SUGGESTIONS
The Committee on the Internal Market and Consumer Protection calls on the Committee on Legal Affairs, as the committee responsible:
- to incorporate the following suggestions into its motion for a resolution:
1. Underlines the importance of an EU regulatory framework being applicable where consumers within the Union are users of or subject to an algorithmic system, irrespective of the place of establishment of the entities that develop, sell or employ the system;
2. Notes that the framework should apply to algorithmic systems, including the fields of artificial intelligence, machine learning, deep learning, automated decision-making processes and robotics;
3. Stresses that any future regulation should follow a differentiated risk-based approach, based on the potential harm for the individual as well as for society at large, taking into account the specific use context of the algorithmic system; legal obligations should gradually increase with the identified risk level; in the lowest risk category there should be no additional legal obligations; algorithmic systems that may harm an individual, impact an individual's access to resources, or concern their participation in society shall not be deemed to be in the lowest risk category; this risk-based approach should follow clear and transparent rules;
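As one way to read the paragraph above, here is a minimal sketch of a differentiated risk-tiering scheme; the tier names, the obligations attached to each tier and the classification logic are assumptions for illustration, not terms from the draft.

```python
from enum import IntEnum

class RiskTier(IntEnum):
    MINIMAL = 0  # lowest category: no additional legal obligations
    HIGH = 1     # a real framework would define several intermediate tiers

OBLIGATIONS = {
    RiskTier.MINIMAL: [],
    RiskTier.HIGH: ["transparency notice", "impact assessment", "human review channel"],
}

def classify(may_harm_individual: bool,
             affects_access_to_resources: bool,
             concerns_participation_in_society: bool) -> RiskTier:
    # Per the paragraph: systems meeting any of these criteria shall not
    # be deemed to be in the lowest risk category.
    if may_harm_individual or affects_access_to_resources or concerns_participation_in_society:
        return RiskTier.HIGH
    return RiskTier.MINIMAL

# Example: a credit-scoring system affects access to resources
print(OBLIGATIONS[classify(False, True, False)])
```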
4. Underlines the importance of an ethical and regulatory framework including, in particular, provisions on the quality of data sets used in algorithmic systems, especially regarding the representativeness of the training data used and the de-biasing of data sets, as well as on the algorithms themselves and on data and aggregation standards;
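As a minimal sketch of the representativeness and de-biasing provisions above, the following compares a training set's group shares with a reference population and derives simple inverse-frequency reweighting factors; the groups and shares are invented, and reweighting is only one of several de-biasing techniques.

```python
from collections import Counter

def representativeness(samples: list[str], reference: dict[str, float]) -> dict[str, float]:
    """Ratio of each group's share in the data to its reference share;
    1.0 is perfectly representative, values far from 1.0 flag skew."""
    counts = Counter(samples)
    total = len(samples)
    return {g: (counts[g] / total) / share for g, share in reference.items()}

def reweighting_factors(ratios: dict[str, float]) -> dict[str, float]:
    """Inverse weights: up-weight under-represented groups during training."""
    return {g: 1.0 / r for g, r in ratios.items() if r > 0}

data = ["A"] * 80 + ["B"] * 20                # group B is under-sampled
reference = {"A": 0.6, "B": 0.4}              # assumed population shares
ratios = representativeness(data, reference)  # {'A': 1.33, 'B': 0.5}
print(reweighting_factors(ratios))            # {'A': 0.75, 'B': 2.0}
```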
Consumer protection: transparency and explainability of algorithms
5. Believes that consumers should be adequately informed in a timely, impartial, easily readable, standardised and accessible manner about the existence, process, rationale, reasoning and possible outcome of algorithmic systems, about how to reach a human with decision-making powers, and about how the system's decisions can be checked, meaningfully contested and corrected;
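One possible concrete reading of the paragraph above is a machine-readable decision notice whose fields map onto the items listed; the field names and example values below are invented for illustration, not a format proposed by the draft.

```python
from dataclasses import dataclass

@dataclass
class DecisionNotice:
    existence: str            # disclosure that an algorithmic system was used
    process: str              # how the system reaches a decision
    rationale: str            # why this outcome, in plain language
    possible_outcomes: list[str]
    human_contact: str        # how to reach a human with decision-making powers
    contest_procedure: str    # how to check, contest and correct the decision

notice = DecisionNotice(
    existence="Your application was scored by an automated system.",
    process="The system weighs income, payment history and open credit lines.",
    rationale="Declined: payment history below the acceptance threshold.",
    possible_outcomes=["approved", "declined", "manual review"],
    human_contact="credit-review@example.org",
    contest_procedure="Reply within 30 days to request a human review.",
)
```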
6. Recalls the importance of ensuring the availability of effective remedies for consumers and calls on the Member States to ensure that accessible, affordable, independent and effective procedures are available to guarantee an impartial review of all claims of violations of consumer rights through the use of algorithmic systems, whether stemming from public or private sector actors;
7. Stresses that where public money contributes to the development or implementation of an algorithmic system, the code, the generated data (insofar as it is non-personal) and the trained model should be public by default, in order to enable transparency and reuse, to maximise the achievement of the Single Market, and to avoid market fragmentation, among other goals;
8. Underlines the importance of ensuring that the interests of marginalised and vulnerable consumers and groups are adequately taken into account and represented in any future regulatory framework; notes that, for the purpose of analysing the impacts of algorithmic systems on consumers, access to data should be extended to appropriate parties, notably independent researchers, the media and civil society organisations, while fully respecting Union data protection and privacy law; recalls the importance of training consumers and giving them basic skills for dealing with algorithmic systems, in order to protect them from potential risks and detriment to their rights;
9. Underlines the importance of training highly skilled professionals in this area and ensuring the mutual recognition of such qualifications across the Union;
10. Calls for the Union to establish a European market surveillance structure for algorithmic systems, issuing guidance, opinions and expertise to Member States' authorities;
11. Notes that it is essential for the software documentation, the algorithms and the data sets used to be fully accessible to market surveillance authorities, while respecting Union law; invites the Commission to assess whether additional prerogatives should be given to market surveillance authorities in this respect;
12. Calls for the designation by each Member State of a competent national authority for monitoring the application of the provisions;
13. Calls for the establishment of a European market surveillance board for algorithmic systems, to ensure a level playing field and to avoid fragmentation of the internal market; the board should decide, by qualified majority and secret vote, in cases of diverging decisions on algorithmic systems used in more than one Member State, as well as at the request of a majority of the national authorities;
– to incorporate the following recommendations into the annex to its motion for a resolution: