Ethics of personalized information filtering
Ansgar Koene, Elvira Perez, Christopher J. Carter, Ramona Statache, Svenja Adolphs, Claire O'Malley, Tom Rodden, and Derek McAuley
HORIZON Digital Economy Research, University of Nottingham

Overview
• Why personalize information filtering?
• Concerns raised by personalized filtering
• Responsibility to address these concerns
• Call for future research

Information overload!
[Figure: estimated data production in 2012 (www.domo.com)]

Good information service = good filtering
Information services, e.g. internet search, news feeds, etc., are:
• free-to-use => no competition on price
• rich in results => no competition on quantity
• Competition is therefore on quality of service
• Quality = relevance = appropriate filtering

Why personalized filtering?
John and Jane Average have: 2.43 children, 0.47 dogs & 0.46 cats, 0.67 houses & 0.73 cars.
• John and Jane Average do not exist
• Results based on population averages are crude approximations
• Personalized filtering is a natural step in the evolution of information services

Personalized filter/recommender systems
• Content-based – results similar to past results the user liked
• Collaborative – results that similar users liked (people with statistically similar tastes/interests); a minimal sketch appears at the end of this section
• Community-based – results that people in the same social network liked (people who are linked on a social network, e.g. 'friends')

Concerns regarding personalization
• Social consequences: self-reinforcing information filtering – the 'filter bubble' effect
• Privacy: personalization involves profiling of individual behaviour/interests
• Agency: the filtering algorithm decides which segment of the available information the user gets to see
• Manipulation: people's actions/choices depend on the information they are exposed to

User profiling: privacy
User profiling involves mining of data about:
• the user's past behaviour when interacting with the service
• user behaviour on other services
  o through 'tracking cookies'
  o through data purchased from other services
• the social network of the user, and the behaviour of the people within that social network

User profiling: (un)informed consent
Informed consent for profile building:
- is buried in long, difficult-to-understand Terms & Conditions that users click 'accept' on, usually without reading them
- the same consent is applied for years without explicit renewal

User profiling: security issues
The profile summarizes user behaviour patterns; its purpose is to predict the interests of the user. Access to this information can facilitate:
- phishing
- social engineering for hacking

Agency: user vs. algorithm
Filter algorithms provide competitive advantage, so details about them are often trade secrets.
• Users don't know how the information they are presented with was selected → no real informed consent
• Service users have no 'manual' override for the settings of the information filtering algorithms
• It is difficult for service users to know which information they are missing, because it was filtered out

Manipulation: conflict of interest
Information filtering, or ranking, implicitly manipulates choice behaviour. Many online information services are 'free-to-use': the service is paid for by advertising revenue, not by users directly.
⇒ Potential conflict of interest: promote advertisements vs. match user interests
Advertising inherently tries to manipulate consumer behaviour. Personalized filtering can also be used for political spin, propaganda, etc.
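To make the conflict of interest above concrete, here is a minimal sketch in Python. The blended score, the ad_weight parameter and the example results are invented for illustration; no real service publishes its ranking function.

```python
# Illustrative sketch (not any provider's actual algorithm): a ranking
# score that blends relevance to the user with revenue to the service.
from dataclasses import dataclass

@dataclass
class Result:
    title: str
    relevance: float   # estimated match to the user's interests (0..1)
    ad_revenue: float  # payment to the service if clicked (0..1)

def rank(results, ad_weight=0.0):
    """Order results by a blend of user relevance and ad revenue.

    ad_weight = 0.0 ranks purely on the user's interests; any
    ad_weight > 0 quietly trades user interest for revenue, which is
    the conflict of interest described in the slide above.
    """
    score = lambda r: (1 - ad_weight) * r.relevance + ad_weight * r.ad_revenue
    return sorted(results, key=score, reverse=True)

results = [Result("Independent review", 0.9, 0.0),
           Result("Sponsored product page", 0.4, 0.9)]
print([r.title for r in rank(results, ad_weight=0.0)])  # user-first order
print([r.title for r in rank(results, ad_weight=0.7)])  # revenue-first order
```

With ad_weight = 0 the most relevant result comes first; at ad_weight = 0.7 the sponsored result outranks it, even though the user's interests have not changed. The manipulation is invisible to the user precisely because the weighting is internal to the algorithm.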
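Returning to the 'collaborative' recommender type listed earlier, the sketch below shows the core idea under strong simplifying assumptions: a toy ratings matrix, cosine similarity between users, and a single nearest neighbour. Real systems use matrix factorization or learned models, but the principle, recommend what statistically similar users liked, is the same.

```python
# Minimal sketch of user-based collaborative filtering.
# The ratings matrix and its values are invented for illustration.
import numpy as np

ratings = np.array([  # rows: users, columns: items (0 = not rated)
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
])

def recommend(user, k=1):
    u = ratings[user]
    # cosine similarity between this user and every other user
    sims = ratings @ u / (np.linalg.norm(ratings, axis=1) * np.linalg.norm(u) + 1e-9)
    sims[user] = -1                    # exclude the user themself
    peer = np.argmax(sims)             # most statistically similar user
    unseen = np.where(u == 0)[0]       # items the user has not yet rated
    # return the peer's top-rated items among those the user has not seen
    return unseen[np.argsort(ratings[peer, unseen])[::-1][:k]]

print(recommend(0))  # items user 0 might like, based on user 1's ratings
```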
Evidence of public concern
• 2011 FTC investigation of Google for search bias
• EU competition regulation vs. Google
• Netflix Prize competition de-anonymization

Summary of concerns
• Personalized filtering is a natural development in the evolution of information services.
• Personalization involves user profiling, which raises concerns relating to privacy and data protection.
• The lack of transparency concerning the filtering algorithms raises concerns over agency & manipulation.
• The RRI framework demands that researchers developing personalization filtering algorithms take responsibility to address these concerns.

Responsible Research & Innovation
• Identify and study the socio-psychological impact of personalized filtering (filter bubbles?)
• Help people understand and regulate the trade-off between privacy/personal data and the convenience of personalized information services
• Develop methods to make personalization systems more transparent
• Engage with corporate information service providers to reinforce ethical practices

Call for future research
Technical development of tools:
• A black-box testing kit for probing the characteristics of the user profiles used in recommender systems.
• A recommendation bias detection system for identifying user manipulation.
• A two-layer recommender architecture that decouples delivery of non-personalized information by service providers from a user-owned/controlled system for personalized ranking of the information (a toy sketch follows at the end of this deck).

Call for future research
Psycho-social research on the impact of personalized filtering:
• The general exploration-exploitation trade-off in action selection
• Attitudes towards trust and critical evaluation of information
Cybersecurity:
• Protection against misuse of personalized recommender systems for phishing-related social engineering

Call for future research
Policy:
• Development of guidelines for responsible innovation and use of personalization systems that protect users' privacy and freedom of access to information.
Public engagement:
• Develop educational material to help people understand how personalized search engine results, news feeds and other recommender systems are filtered, so that they can better evaluate the information they receive.

Thank you for your attention
ansgar.koene@nottingham.ac.uk
http://casma.wp.horizon.ac.uk/
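As a toy illustration of the two-layer recommender architecture proposed in the 'Call for future research' slide: the provider-side function, the user-side ranker, and the scoring rule below are all assumptions made for this sketch, not an existing system.

```python
# Toy sketch of the proposed two-layer architecture: the provider
# returns an unpersonalized candidate set, and ranking against the
# user's profile happens in a component the user owns and can inspect.

def provider_search(query):
    """Layer 1 (service side): deliver candidate results, no profiling."""
    corpus = ["cycling news", "cycling shoes shop", "local election results"]
    return [doc for doc in corpus if any(w in doc for w in query.split())]

class UserRanker:
    """Layer 2 (user side): the interest profile never leaves the user."""
    def __init__(self, interests):
        self.interests = interests  # e.g. {"news": 2.0, "shop": 0.1}

    def rank(self, docs):
        score = lambda d: sum(w for term, w in self.interests.items() if term in d)
        return sorted(docs, key=score, reverse=True)

ranker = UserRanker({"news": 2.0, "shop": 0.1})
print(ranker.rank(provider_search("cycling")))
```

The point of the split is that the interest profile stays on the user's device, where it can be inspected and edited, while the provider only ever sees the query, addressing both the privacy and the agency concerns raised earlier.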