Note: Once you have registered, we will provide the link 24 hours ahead of the event.
Orange Silicon Valley (OSV), the US subsidiary of Orange, one of the world’s leading telecommunications operators, is hosting a series of panel discussions in the coming months on topics related to Responsible Artificial Intelligence. The first in the series is June 17th and begins with a philosophical reflection on what Responsible AI is and which emerging factors contribute to making AI a fairer, more transparent, and more approachable domain.
Currently, there is substantial concern about the power and ubiquity of automated systems in our daily lives. The AI revolution may already be upon us, but at Orange we are focused on “giving everyone the keys to a more responsible digital world”. Beyond the hype of AI, machine learning (ML) algorithms have been getting a lot of press recently, raising alarm at the bias that can creep in through data and algorithms. This bias is made all the more problematic by hiding behind ML’s aura of neutral rationality.
Our panel discussion will be an introduction to the forces at play in Responsible AI and aims to frame its relevance to members of the OSV community by sharing insights from researchers and thought leaders in this field.
- Dr. Alexa Hagerty, Research Associate, Centre for the Study of Existential Risk, University of Cambridge, will speak about cultural power frameworks and science and technology studies (STS), and reflect on the natural versus the cultural forces at play in Responsible AI.
- Dr. Frédérique Krupa, Director, Human Machine Design Lab, L’École de Design Nantes Atlantique, will introduce cognitive and machine bias: where and how bias enters the ML development process, how it can be mitigated, and whether we should mitigate it.
- Andrew Smart, Senior Research Scientist, Google AI, will speak about translating complex social phenomena into random variables in AI models and datasets; he also views ethical trade-offs not as technical problems to solve, but as broader conversations about values.
Dr. Alexa Hagerty is an anthropologist and STS scholar whose research investigates the societal impacts of AI with a focus on risk, responsible innovation, and affected communities. She is an Associate Researcher at the Centre for the Study of Existential Risk and an Associate Fellow at the Leverhulme Centre for the Future of Intelligence, University of Cambridge. She received her PhD in Anthropology from Stanford University and her Master’s from the University of California, Berkeley.
Dr. Frédérique Krupa is a UX/UI designer, research methodology specialist, and director of the Human Machine Design Lab at L’École de Design Nantes Atlantique. She has been a professor at prestigious design schools including Parsons School of Design, RISD, University of the Arts, ENSAD, and Paris College of Art, where she founded and chaired the Master of Fine Arts (MFA) in Transdisciplinary New Media. She completed her design doctorate at the Sorbonne with her PhD dissertation “Girl games: gender, design and technology in the service of women’s recruitment in ICT?”, followed by a postdoc at the computer science school 42, specialising in machine learning algorithms and AI ethics. Her current research focuses on establishing ML UX methods: a holistic, transdisciplinary, and systemic approach to fair and accurate machine learning data, algorithms, and predictive models.
Andrew Smart is an associate principal researcher on the Ethical ML team at Google. His research interests include counterfactual causality in fair machine learning, the governance and human oversight of AI development within Google, and critical social science in machine learning. His recent work includes co-authored papers on algorithmic audits, critical race theory and AI, and the epistemology of machine learning. He has a background in anthropology, philosophy, and cognitive science, and has worked as a research scientist in aerospace, healthcare, and the tech industry.
Orange Silicon Valley experts:
Dr. Sarah Luger received her Master’s and PhD in Artificial Intelligence and Natural Language Processing from the University of Edinburgh. She worked at IBM’s T.J. Watson Research Labs on the Watson Jeopardy Challenge and, since her return to the Bay Area, has built NLP teams at numerous start-ups specializing in language technology and human-in-the-loop AI. She runs the AAAI Rigorous Evaluation of AI Systems Workshop held at the Human Computation conference, holds patents in this domain, and has most recently published research supporting machine translation for African languages. At Orange Silicon Valley, she is a member of the Big Data and AI experts group and leads the Responsible AI project.
Lance Pham is a Senior Network Architect at Orange Silicon Valley who works on early prototyping and analysis of advanced IP, SDX, and AIOps technologies for enterprise and provider services. Prior to OSV, he worked at a Xerox subsidiary startup building telco VPN services and at the Khosla Ventures-backed CoSine Communications.