Discuto
0 days remaining (ends 17 May)
Description
Since artificial intelligence (AI) and algorithmic decisions increasingly influence our everyday lives, the European Parliament is currently working on several own-initiative reports – texts in which Parliament asks the Commission to present a legislative proposal. I am the rapporteur – in other words, the author of the report – in my committee, the Committee on the Internal Market and Consumer Protection (IMCO). I have already published my draft opinion (see the discussion tab) and will present it in the committee on 18 May 2020 (livestream). The other political groups, as well as I myself, now have until 19 May 2020 to propose amendments to my draft. This is why I am calling on you to provide me with any comments or suggestions you have on the current text!
For further background information on the report and the subject in general, check out my blog post.
LATEST COMMENTS
-
I would add that there is a need to mandate operational accountability and impact assessment, so as to deter public and private entities from deploying AI systems frivolously. Sweden offers an example: "the Swedish data protection regulator recently banned facial recognition in schools, based on the principle of data minimization, which requires that the minimum amount of sensitive data should be collected to fulfill a defined purpose. Collecting facial data in schools that could be used for a number of unforeseen purposes, it ruled, was neither necessary nor a proportionate way to register attendance."

I would also want to empower the people who may be adversely affected by AI, by recommending the mandatory creation of simple, transparent, time-boxed mitigation channels, so that humans can easily request human review of AI decisions that may have been made in error (a rough sketch of what such a channel might look like follows below). See the CIGI summary here: "Policies around AI systems must focus on ensuring that those directly impacted have a meaningful say in whether these systems are used at all, and in whose interest."

The report also references: "The Algorithmic Accountability Act of 2019, a proposed bill in the United States Congress, attempts to regulate AI with an accountability framework. This legislation requires companies to evaluate the privacy and security of consumer data as well as the social impact of their technology, and includes specific requirements to assess discrimination, bias, fairness and safety. Similarly, the Canadian Treasury Board's 2019 Directive on Automated Decision-Making requires federal government agencies to conduct an algorithmic impact assessment of any AI tools they use. The assessment process includes ongoing testing and monitoring for 'unintended data biases and other factors that may unfairly impact the outcomes.'"

Source: Centre for International Governance Innovation, https://www.cigionline.org/articles/artificial-intelligence-policies-must-focus-impact-and-accountability
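To make the "time-boxed mitigation channel" idea concrete, here is a minimal, purely illustrative Python sketch. Everything in it is my own assumption, not taken from the report or the cited legislation: the names ReviewRequest and REVIEW_DEADLINE are hypothetical, and the 14-day window is an arbitrary placeholder for whatever deadline a regulation would actually set.

    # Purely illustrative sketch: names and the 14-day window are assumptions.
    from dataclasses import dataclass, field
    from datetime import datetime, timedelta
    from typing import Optional

    REVIEW_DEADLINE = timedelta(days=14)  # assumed time box for a human response

    @dataclass
    class ReviewRequest:
        """A person's request for human review of an automated decision."""
        decision_id: str   # which automated decision is being contested
        requester: str     # contact handle of the affected person
        reason: str        # free-text explanation of the suspected error
        submitted_at: datetime = field(default_factory=datetime.utcnow)
        resolved: bool = False

        @property
        def due_by(self) -> datetime:
            # The operator must complete a human review before this moment.
            return self.submitted_at + REVIEW_DEADLINE

        def is_overdue(self, now: Optional[datetime] = None) -> bool:
            # True if the time box elapsed without a resolution.
            now = now or datetime.utcnow()
            return not self.resolved and now > self.due_by

The only design point this sketch tries to capture is that the deadline is attached to the request itself, so overdue reviews become trivially auditable by a regulator.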
-
Dear Alexandra, dear team, all your suggestions are highly valuable, thank you very much for this initial report! For my MA thesis, I am currently analysing biometric and emotional AI. A point that struck me was the absence of GDPR protection for emotional datasets (e.g. in voice assistants, facial recognition technology (FRT) or sensors) that take the form of pseudonymous aggregate data. However, I am not sure to what extent the ePrivacy Directive would/will tackle this issue.

Below I include a paragraph from my draft (!) thesis to illustrate the point: "The large-scale collection of information on individuals' emotional states is among the most concerning developments in relation to biometric and emotional AI technology. In fact, privacy protection of individuals is not always granted. Information about emotional states, gathered with biometric technology, can be highly valuable even if individuals cannot be singled out. In particular, aggregate datasets on emotional behaviour can be gathered without containing 'personal' or 'sensitive' information. Consider the case of a surveillance camera in a public space: the video material collected by the camera could be analysed by an algorithm in order to 'read' facial expressions and detect people's emotions. This example raises a range of issues. First, it is not clear whether the data was collected with consent. Second, although information about emotions is surely rather personal, because the camera cannot link it to an individual, these emotions are not considered 'personal' data. Third, if a certain person were regularly filmed and their emotions tracked frequently, their safety, consumer and/or fundamental rights would be at stake. The tracking of emotions via facial recognition is thus a particularly critical use case of AI technology, based on large-scale datasets and algorithmic infrastructures in combination with biometric technological artefacts."

The main idea/source is Andrew McStay's book "Emotional AI: The Rise of Empathic Media" (2018). If you have any questions, don't hesitate to contact me. Wishing you the best of success with the report! All the best, Rosanna
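The surveillance-camera example above can be made concrete with a small sketch. The following Python is purely illustrative: classify_emotion is a hypothetical stand-in for a trained facial-expression model (no real library is assumed), and the point is simply that the retained aggregate contains no identities and no raw footage, which is exactly why it may fall outside the GDPR's notion of 'personal' data.

    # Purely illustrative. classify_emotion is a hypothetical placeholder
    # for a trained facial-expression model; no real library is assumed.
    from collections import Counter
    from typing import Iterable

    def classify_emotion(face_crop: bytes) -> str:
        """Stand-in for a real model; returns one emotion label per face crop."""
        # A deployed system would run model inference here. We return a
        # fixed label so the sketch stays self-contained and runnable.
        return "neutral"

    def aggregate_emotions(face_crops: Iterable[bytes]) -> Counter:
        """Count emotion labels; retain no identities and no raw images."""
        counts = Counter()
        for crop in face_crops:
            counts[classify_emotion(crop)] += 1  # label only, crop discarded
        return counts

    # Example: three detected faces yield an aggregate with no identities.
    print(aggregate_emotions([b"face1", b"face2", b"face3"]))
    # Counter({'neutral': 3})

The output dataset here is just a histogram of emotion labels, yet, as Rosanna argues, it can still be commercially valuable and still affect the people being filmed.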