Events @ C4E

  • Wed, May 29, 2019
    Ethics of AI in Context: Emerging Scholars
    Robotic Agents and the Evolving Nature of "Social" (w/ Shane Saunderson)

    Robotic Agents and the Evolving Nature of “Social”

    Building on The Media Equation (1996), which highlighted the reflexive way humans treat computers and other technologies as social actors, this seminar will explore the depths and implications of robots and other anthropomorphized technologies as they adopt increasingly humanlike traits. As these technologies learn to mimic and replicate the nuances of our interactions, what responsibility do we have for their deployment and transparency in use? How will the creation of increasingly humanlike technologies change the ways in which we work, play, and live? Even if we could create artificial people, should we?

    ☛ please register here

    Shane Saunderson
    University of Toronto
    Mechanical and Industrial Engineering

    Shane Saunderson received a B.Eng. in mechanical engineering from McGill University in 2005 and an M.B.A. in technology and innovation from Ryerson University in 2011. He is currently a Ph.D. candidate studying social Human-Robot Interaction under Prof. Goldie Nejat within the Autonomous Systems and Biomechatronics Laboratory (ASBLab) in the Department of Mechanical and Industrial Engineering at the University of Toronto. Shane holds a Vanier Canada Graduate Scholarship and is a Junior Fellow with Massey College. His research focuses on the psychological influence of robots during social interactions, with particular interest in topics such as persuasion, trust, and leadership.

    04:00 PM - 05:30 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Wed, Jun 5, 2019
    Ethics of AI in Context
    What Society Must Require from AI (w/ Ron Baecker)

    What Society Must Require from AI

    Presented By: Ron Baecker, Professor Emeritus and Bell Chair in Human-Computer Interaction

    ☛ register here

    Artificial intelligence (AI) algorithms, especially machine learning (ML) programs, are now being employed or proposed for use in:
    a) scanning résumés to weed out job applicants;
    b) evaluating risks children face in their families;
    c) informing judicial decisions about bail, sentencing, and parole;
    d) diagnosing medical conditions, and not just classifying medical images;
    e) identifying faces in the crowd for the police;
    f) caring for seniors;
    g) driving autonomous vehicles; and
    h) guiding and directing drones in eliminating terrorists.

    I will propose what society must require of algorithms that affect human welfare, health, life, and death. I shall discuss concepts including reliability, openness, transparency, explainability, trustworthiness, responsibility, accountability, empathy, compassion, fairness, and justice. The results will aid researchers in prioritizing problems for AI and HCI research, and will assist policy makers and citizens in determining when and how AI technology should be deployed.


    Ron Baecker is Emeritus Professor of Computer Science and Bell Chair in Human-Computer Interaction at the University of Toronto.
    He co-founded the Dynamic Graphics Project, and founded the university’s Knowledge Media Design Institute and its Technologies for Aging Gracefully lab (TAGlab). Recently, he has been a research lead in AGE-WELL, Canada’s technology and aging network.
    He has been named one of the 60 Pioneers of Computer Graphics by ACM SIGGRAPH, has been elected to the CHI (Computer-Human Interaction) Academy by ACM SIGCHI, has been named an ACM Fellow, and has received the Canadian Human Computer Communications Society Achievement Award and a Canadian Digital Media Pioneer Award.
    He is the author of five books, including Computers and Society: Modern Perspectives (Oxford University Press, 2019), and is the founding Editor of the Synthesis Lectures on Assistive, Rehabilitative, and Health-preserving Technologies (Morgan & Claypool Publishers).

    This is a joint lecture with the Department of Computer Science and the Centre for Ethics.

    04:30 PM - 06:30 PM
    Department of Computer Science, University of Toronto
    Bahen Centre for Information Technology, Room 1130

  • Wed, Jun 12, 2019
    Ethics of AI in Context: Emerging Scholars
    Humanistic Management of Artificial Intelligence (w/ Ryan Khurana)

    Humanistic Management of Artificial Intelligence

    Artificial intelligence is challenging the dominant paradigm of scientific management by increasing the importance of judgement in decision-making, which has historically been undervalued. The highly specialised jobs and process-driven bureaucratic structures that dominate large organizations favour excessive automation. This would reduce the number of roles available for qualified workers while simultaneously increasing the risk of catastrophic prediction failure, as humans are likely to be prematurely removed “from the loop.” In order to secure the productivity benefits promised by artificial intelligence while avoiding large-scale failure, a program of humanistic management that allows for error and values qualitative judgement needs to be adopted.

    ☛ please register here

    Ryan Khurana is the Executive Director of the Institute for Advancing Prosperity, a technology policy think tank in Toronto. Prior to this, he held roles in technology policy at the Competitive Enterprise Institute in Washington, D.C., and at the Institute of Economic Affairs in London, UK.

    04:00 PM - 05:30 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Mon, Jun 17, 2019
    Author Meets Critics, Ethics of AI in Context
    Mark Kingwell, Wish I Were Here: Boredom and the Interface

    Wish I Were Here: Boredom and the Interface (McGill 2019)

    Mark Kingwell
    Department of Philosophy
    University of Toronto

    Lauren Bialystok (Social Justice Education, OISE, University of Toronto)
    Molly Sauter (Communication Studies, McGill University)
    Ira Wells (Victoria College, University of Toronto)

    ☛ register here

    Offering a timely meditation on the profound effects of constant immersion in technology, also known as the Interface, Wish I Were Here draws on philosophical analysis of boredom and happiness to examine the pressing issues of screen addiction and the lure of online outrage. Without moralizing, Mark Kingwell takes seriously the possibility that current conditions of life and connection are creating hollowed-out human selves, divorced from their own external world. While scrolling, swiping, and clicking suggest purposeful action, such as choosing and connecting with others, Kingwell argues that repeated flicks of the finger provide merely the shadow of meaning by reducing us to scattered data fragments, Twitter feeds, Instagram posts, shopping preferences, and text trends captured by algorithms.

    Written in accessible language that references both classical philosophers and contemporary critics, Wish I Were Here turns to philosophy for a cure to the widespread unease that something is amiss in modern waking life.

    04:15 PM - 06:15 PM
    Centre for Ethics, University of Toronto
    Rm 200, Larkin Building

  • Thu, Jun 27, 2019
    Events on Campus
    Media Ethics: Human Ecology in a Connected World

    The 20th Annual Convention of the Media Ecology Association
    International Conference
    Toronto, 27-30 June 2019

    12:00 AM - 11:59 PM
    St Michael's College
    81 St. Mary Street

  • Wed, Jul 10, 2019
    Ethics of AI in Context: Emerging Scholars
    AI and Medical Education (w/ Nishila Mehta & AIMSS)

    AI and Medical Education

    Today, emerging technologies like artificial intelligence, gene editing, nanotechnology, and blockchain are being explored as ways to fundamentally “disrupt” medicine and healthcare. Despite the promise of such technologies, implementing them has presented countless unintended challenges. First and foremost, given the Hippocratic duty of healthcare providers to ‘do no harm’, it is essential that the role of these emerging technologies in medicine be carefully scrutinized by practitioners who understand and can think critically about them. Artificial intelligence (AI) can be broadly defined as the ability of a machine to perform human-like tasks after learning from experience. AI is poised to introduce significant changes to medicine and healthcare. Physicians will be expected to navigate these changes and use new technologies in a competent and ethical manner. Currently, curricular and extracurricular opportunities addressing AI in medicine across Ontario medical schools are sparse or nonexistent. Failing to prepare future physicians to respond and adapt to novel AI applications in medicine may lead to dire consequences, including but not limited to decreased quality of care, exploitation of patient data, and widened health disparities. It is crucial that physicians, as patient advocates, are equipped with the skills and knowledge base to be a voice in the evolving dialogue surrounding the integration of AI into healthcare.

    ☛ please register here

    Nishila Mehta is a first-year medical student at the University of Toronto, and a recent graduate of York University’s Global Health program with a specialization in eHealth. She has diverse interests in health technology, quality improvement, and health equity, and has explored these by leading several research projects at hospital and university sites. Her interest in AI ethics grew out of her undergraduate degree, where she observed the widespread societal consequences that emerging technologies could have. She has spent her year as an inaugural research fellow in Ethics of AI at the Centre for Ethics exploring the implications of artificial intelligence for medical education and global health equity. She has also worked alongside a student group at the Faculty of Medicine, the Artificial Intelligence in Medicine Student Society (AIMSS), to further explore how AI can be integrated into medical education.

    The Artificial Intelligence in Medicine Society (AIMSS) is a group for medical students at the University of Toronto. It was established in 2017 after students noticed the growing impact of machine learning and artificial intelligence on the healthcare field. Our mission is to provide medical students with insight on how AI is being applied to healthcare as well as the challenges it raises (especially ethically), connect students with opportunities and resources in the Toronto health-tech space, and advocate for greater integration of AI into the medical curriculum to prepare future doctors for the healthcare environment of tomorrow. We do this through speaker series, interactive workshops, and publishing in the scientific and popular literature. Our latest paper is a position paper endorsed by the Ontario Medical Students’ Association which outlines how we can better prepare medical students for AI in healthcare. It can be accessed here:

    04:00 PM - 05:30 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Wed, Jul 24, 2019
    Ethics of AI in Context: Emerging Scholars
    The Labour Behind AI: Micro-Work and the Platform Economy (w/ Julian Posada)

    The Labour Behind AI: Micro-Work and the Platform Economy

    From data collection and annotation to AI impersonation, platforms like Amazon Mechanical Turk and Upwork fragment and outsource tasks to millions of workers around the globe, many of them situated in developing countries. This seminar focuses on how human labour in the platform economy helps to create and maintain AI systems. It positions platforms as organizational paradigms and retraces their historical evolution within contemporary neo-liberal capitalism. While “micro-work” platforms generate employment in developing countries, they are often disengaged from the traditional social role of enterprises and do not provide any social or economic protections to their workers. Due to their international nature, effective regulation of these platforms is challenging. However, this seminar concludes by presenting potential alternatives that could improve the working conditions of workers in the global platform economy, such as the implementation of ethical work principles and the empowerment of workers through co-operation.

    ☛ please register here

    Julian Posada
    University of Toronto

    Faculty of Information

    Julian Posada is a Ph.D. student at the Faculty of Information of the University of Toronto and a Junior Fellow of Massey College. His research focuses on alternative forms of organization, fair labour, and worker co-operation in the platform economy. Previously, he worked for the French National Centre for Scientific Research (CNRS) and holds a master’s degree in economic sociology from the School for Advanced Studies in the Social Sciences (EHESS) and a bachelor’s degree in the Humanities from Sorbonne University.

    04:00 PM - 05:30 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Wed, Aug 7, 2019
    Ethics of AI in Context: Emerging Scholars
    Business-as-Trust: Corporate Social Responsibility in the Era of AI (w/ Michael Motala)

    Business-as-Trust: Corporate Social Responsibility in the Era of AI

    The new Artificial Intelligence-powered social technology economy has disrupted local and global markets with bewildering speed. From hoteling to online dating to urban transportation, GPS-enabled location-based apps like Uber, Facebook, Amazon, and Airbnb have broken down conventional axes of economic regulation, social interaction, and commercial power. Why and how must we reimagine the normative and practical foundations of corporate social responsibility and business ethics? Current approaches such as the stockholder and stakeholder theory of corporate social responsibility are vague, abstract, indeterminate, and have little relevance to the modern economy. To move past this impasse, this lecture, which is based on a forthcoming book entitled The New Business Ethics (Routledge, 2019), argues we must reimagine corporate social responsibility in five critical ways: as a practical process of decision-making and accountability that exists to foster and maintain trust in enterprise; as a dynamic and process-relational system of interconnected institutions and agents; as a discourse ethics concerned with articulating a new universal pragmatics; as an actor-centric model of market-state relations; and as a new social constitution of the digital economy grounded in the principles of responsibility, transparency, and accountability.

    ☛ please register here

    Michael Motala
    University of Toronto
    Political Science

    Michael Motala is an Ethics of Artificial Intelligence Graduate Research Fellow at the University of Toronto’s Centre for Ethics, and a PhD student studying political science. Michael’s research interests lie at the intersection of law, economics, political science, and pragmatist moral philosophy. He holds degrees from Columbia University, Osgoode Hall Law School, the London School of Economics and Political Science, and the University of Toronto’s Trinity College.

    04:00 PM - 05:30 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Wed, Aug 21, 2019
    Ethics of AI in Context: Emerging Scholars
    Automated Violence: Who Will Guard the Guards? (w/ Daniella Barreto & Nicole Leaver)

    Automated Violence: Who Will Guard the Guards?

    We will discuss automated decision-making systems (ADMs) being deployed by police services, with a specific focus on the RCMP’s “Project Wide Awake.” The surveillance program, launched without a privacy impact assessment, has garnered little media attention despite the chilling precedent it sets for privacy rights in Canada. In June 2017, the RCMP acquired and launched a social media surveillance program specifically targeting Black Lives Matter activists in Vancouver, BC. Our discussion will highlight some of the key features of the project and unpack a series of questions, including: What was the objective of collecting data on BLM activists? Was the data disclosed to any other databases or third parties, such as the Canadian Security Intelligence Service (CSIS) database? And has this data been used to train ADMs? We will highlight how mass surveillance programs can exacerbate discriminatory and violent policing behaviours when data collection mechanisms and ADMs go unvetted and unchecked.

    ☛ please register here

    Daniella Barreto
    Amnesty International Canada
    Digital Activism Coordinator

    Daniella Barreto is a public health researcher and anti-racist queer activist. She holds an M.Sc. in population and public health and continues advocacy work with sex workers and people living with HIV. She is a co-founder of RUDE: The Podcast, a professional photographer, and a Nuance writing fellow. She is currently Digital Activism Coordinator at Amnesty International Canada.

    Nicole Leaver
    Artificial Intelligence Impact Alliance
    Public Sector Technology Researcher

    Nicole Leaver is a progressive policy researcher and graduate student at the Fletcher School of Law and Diplomacy, Tufts University. Her current research focuses on automated decision-making systems and inequality in Canada. She is a public sector technology researcher at the Artificial Intelligence Impact Alliance and a co-founder of RUDE: The Podcast.


    04:00 PM - 05:30 PM
    Centre for Ethics, University of Toronto
    200 Larkin

Past Events