Events @ C4E

  • Mon, Sep 30, 2019
    Perspectives on Ethics
    Luvell Anderson, Navigating Racial Satire (Perspectives on Ethics)

    Navigating Racial Satire

    What has to go wrong for racial satire to be racist? In 2014, Stephen Colbert came under fire for a tweet sent out on behalf of his show The Colbert Report. The tweet in question, “I am willing to show @Asian community I care by introducing the Ching-Chong-Ding-Dong Foundation for Sensitivity to Orientals or Whatever,” sparked a Twitter response from writer and hashtag activist Suey Park. The tweet was a brief recap of a joke Colbert told on the show as a satirical response to Daniel Snyder’s creation of a charitable organization for Native Americans while retaining a racial slur for that same group as the name of his football team. We typically think of humor as a non-serious context, and such contexts affect how we interpret utterances: normally, we don’t take humorous utterances as straightforward assertions. In fact, some responses to the charge of racism against Colbert’s satirical performance claimed that recognizing it as satire was enough to exonerate the humor of the charge. But if this is so, what explains why charges of racism against satire sometimes persist? In this talk I critically explore candidate views of racist satire. I also draw a distinction between satire that is offensive and satire that is racist.

    Luvell Anderson
    Syracuse University
    Philosophy

    04:00 PM - 06:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Tue, Oct 1, 2019
    Ethics of AI in Context
    Marzyeh Ghassemi, Can Machines Learn from Our Mistakes? (Ethics of AI in Context)

    Can Machines Learn from Our Mistakes?

    Healthcare runs on human-based algorithms that routinely misdiagnose, mistreat, and mislead patients about their care. But what if mistakes aren’t bad? What if we could learn from these mistakes? And what does artificial intelligence have to do with it? Marzyeh Ghassemi’s talk will delve into how the machine learning revolution can be applied in a healthcare setting to improve medical care and create actionable insights in human health.

    Marzyeh Ghassemi
    University of Toronto
    Computer Science

    04:00 PM - 06:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Wed, Oct 2, 2019
    Ethics at Noon
    Elena Comay del Junco, Aristotle and the Ethics of Nature (Ethics@Noon)

    Aristotle and the Ethics of Nature

    Aristotle holds certain natural beings to have greater or lesser degrees of value or perfection. This raises the question of what ethical entailments such a hierarchy might have. I argue for three main points: first, that there is no sense in which an ethical approach to the natural world can be straightforwardly derived from Aristotle’s form of natural hierarchy, since it does not entail viewing “lower” species instrumentally. Moreover, such a hierarchy is in fact fully compatible with strict limits on interspecies exploitation. Second, the one passage in which Aristotle seems to ground the exploitation of non-human nature by humans in his natural philosophy conflicts with his larger theoretical commitments. Third and finally, Aristotle himself – even if he is often unclear and self-contradictory – provides powerful materials for an ethics of nature.

    Elena Comay del Junco
    University of Toronto
    Centre for Ethics

    12:30 PM - 02:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Wed, Oct 2, 2019
    Ethics & the Arts, Ethics & Film: Lights, Camera, Ethics!, Ethics in the City
    Citizen Jane: Battle for the City (2016) (Ethics in the City Films)

    Writer and urban activist Jane Jacobs fights to save historic New York City during the ruthless redevelopment era of urban planner Robert Moses in the 1960s. Citizen Jane is a timely tale of what can happen when engaged citizens fight the power for the sake of a better world. Arguably no one did more to shape our understanding of the modern American city than Jane Jacobs, the visionary activist and writer who fought to preserve urban communities in the face of destructive development projects. Director Matt Tyrnauer (Valentino: The Last Emperor; Where’s My Roy Cohn?) vividly brings to life Jacobs’ 1960s showdown with ruthless construction kingpin Robert Moses over his plan to raze lower Manhattan to make way for a highway, a dramatic struggle over the very soul of the neighborhood.

    Join us for a screening plus discussion (and cookies)!

    06:00 PM - 08:00 PM
    Centre for Ethics, University of Toronto
    Rm 200, Larkin Building

  • Fri, Oct 4, 2019
    Ethics & the Arts, Events in the Community
    Not My Utopia: A Screening

    Not My Utopia: A Screening

    Not My Utopia examines the technological status quo and looks towards other possible futures. Through inter-generational inquiry, this screening aims to provoke personal conversations between makers and audience about how to re-imagine the unfolding future. The present-day urgency of Zeesy Power’s Smart City PSAs pushes back against libertarian utopias sold as the only answer to urban crises. Megan May Daalder’s documentary series Children of the Singularity questions assumptions of youth as passive consumers of the technologies and systems developed by their parents and invites them to be the preeminent philosophers of the future. Collectively, this screening looks directly at the widespread gains and losses that are the legacy of technological development.

    Presented by Pleasure Dome, a Toronto-based, non-profit, artist-run presentation organization and publisher dedicated to experimental media.

    07:30 PM - 10:00 PM
    Ryerson Image Arts Centre
    122 Bond St, RM 307

  • Mon, Oct 7, 2019
    Perspectives on Ethics
    John Basl, Artifact Welfare?: A Problem of Exclusion for Biocentrism (Perspectives on Ethics)

    Artifact Welfare?: A Problem of Exclusion for Biocentrism

    Biocentrism is the view that all and only living things have moral status or are deserving of direct moral concern. The project of defending Biocentrism includes adopting some strategy for excluding various kinds of things – biotic communities, ecosystems, species, and artifacts – from the domain of direct moral concern. This talk aims to showcase the failures of this strategy of exclusion specifically in the case of artifacts. The standard line for the Biocentrist is to argue that these things fail to meet the conditions for having a welfare or well-being, a necessary condition for having moral status of the relevant kind. The Biocentrist has, for good reason, typically adopted a view of non-sentient welfare that is teleological, grounding the welfare of non-sentient organisms in their goal-directed behaviors; where pushed to articulate an account of goal-directedness, they have typically appealed to an etiological account of function or teleology. When it comes to excluding artifacts, the reason artifacts are taken to lack a welfare is that, while goal-directed, their goal-directedness is derivative of our goals; whereas natural selection grounds genuine teleology, artificial selection does not. I explain why this appeal to natural selection can’t do the work the Biocentrist requires and consider a range of alternatives, finding each lacking.

    John Basl
    Northeastern University
    Philosophy

    04:00 PM - 06:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Tue, Oct 8, 2019
    Ethics of AI in Context
    John Basl & Jeff Behrends, Why Everyone Has It Wrong About the Ethics of Autonomous Vehicles (Ethics of AI in Context)

    Why Everyone Has It Wrong About the Ethics of Autonomous Vehicles

    Many of those thinking about the ethics of autonomous vehicles believe there are important lessons to be learned by attending to so-called Trolley Cases, while a growing opposition is dismissive of their supposed significance. The optimists about the value of these cases think that because AVs might find themselves in circumstances that are similar to Trolley Cases, we can draw on them to ensure ethical driving behavior. The pessimists are convinced that these cases have nothing to teach us, either because they believe that the AV and trolley cases are in fact very dissimilar, or because they are distrustful of the use of thought experiments in ethics generally.
    Something has been lost in the moral discourse between the optimists and the pessimists. We too think that we should be pessimistic about the ways optimists have leveraged Trolley Cases to draw conclusions about how to program autonomous vehicles, but the typical defenses of pessimism fail to recognize how the tools of moral philosophy can and should be fruitfully applied to AV design. In this talk we first explain what’s wrong with typical arguments for dismissing the value of Trolley Cases and then argue that moral philosophers have erred by overlooking the significance of machine learning techniques in AV applications, highlighting how best to proceed.

    John Basl
    Northeastern University
    Philosophy

    Jeff Behrends
    Harvard University
    Philosophy

    04:00 PM - 06:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Wed, Oct 9, 2019
    Ethics at Noon, Ethics of AI in Context
    Jeff Behrends, Ethics Education in Computer Science: The Embedded EthiCS Approach (Ethics@Noon)

    Ethics Education in Computer Science: The Embedded EthiCS Approach

    While scholarship on integrating ethical content into Computer Science curricula dates at least to the 1980s, recent moral crises in the tech industry have given rise to a period of intense interest in ethics education for computer scientists, both within academia and among the public at large. There can be little doubt at this point that a responsible education in computer science should equip students with some set of ethical knowledge and skills. But identifying precisely what that set ought to look like, and then designing a feasible curriculum to achieve it, are difficult tasks for a variety of reasons. At Harvard University, the Embedded EthiCS program marries expertise from the Computer Science and Philosophy faculties in an attempt to provide meaningful educational outcomes for students without significant investments of time by Computer Science faculty members or a disruptive restructuring of the Computer Science curriculum. This talk will explain the basic structure of the program and address its early successes and challenges.

    Jeff Behrends
    Harvard University
    Philosophy

    12:30 PM - 02:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Wed, Oct 16, 2019
    Ethics at Noon
    Emma McClure, Microaffirmations, Privilege, and a Duty to Redistribute (Ethics@Noon)

    Microaffirmations, Privilege, and a Duty to Redistribute

    Microaffirmations are the inverse of microaggressions: seemingly small acknowledgements that can accumulate into large positive impacts. Mary Rowe first proposed microaffirmations as a way for privileged people to consciously counter microaggressions. We could practice giving small supports to members of marginalized groups until these behaviors become habitual and replace our propensity towards microaggressions.
    Recent psychological discussions have uncritically adopted this conceptualization, but I point out the pitfalls of continuing along this path. The current discussion elides the fact that privileged people constantly receive small supports. Indeed, privilege is partially constituted by being the recipient of unceasing microaffirmations. Moreover, the feminist relational autonomy literature has shown that everyone—privileged and marginalized alike—requires social support in order to develop and maintain our autonomous capacities.
    Thus, microaffirmations should not be thought of as providing vulnerable members of marginalized groups special treatment that we do not offer to anyone else. Instead, changing our microaffirmative practices would involve ending the special treatment we currently give by default to members of privileged groups. Ultimately, I argue for an imperfect moral duty to redistribute microaffirmations by supporting marginalized people and challenging privileged people’s assumed superiority.

    Emma McClure
    University of Toronto
    Centre for Ethics

    12:30 PM - 02:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Mon, Oct 28, 2019
    Perspectives on Ethics
    Kathryn Norlock, Do I Really Consent to Twitter's Terms of Service? (Perspectives on Ethics)

    Do I Really Consent to Twitter’s Terms of Service?

    Seemingly consent-capable social media users cannot fully appreciate the stakes of the gambles that we take in social media. The risks that I focus on include negatively transformative experiences stemming from negativity bias, to which most humans are prone, and which results in our remembering insults and hostility far more easily than compliments or kindness. Our abilities to satisfy risk-related consent standards require self-monitoring of the impact of negative experiences, and these abilities are undermined by our own online habituation and our desires to return to ludic loops of variable reward. I conclude that we can’t even implicitly consent, let alone click the consent checkbox for meaningfully explicit consent.

    Kathryn Norlock
    Trent University
    Philosophy

    04:00 PM - 06:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Wed, Oct 30, 2019
    Ethics at Noon
    Michael Lambek (Ethics@Noon)

    Michael Lambek
    University of Toronto
    Anthropology

    12:30 PM - 02:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Tue, Nov 12, 2019
    Ethics of AI in Context
    Kristen Thomasen, Out of Their Cages and Into the City: Robots, Regulation, and the Changing Nature of Public Spaces (Ethics of AI in Context)

    Out of Their Cages and Into the City: Robots, Regulation, and the Changing Nature of Public Spaces

    Robots are an increasingly common feature in North American public spaces. From regulations permitting broader drone use in public airspace and autonomous vehicle testing on public roads, to delivery robots roaming sidewalks in some major U.S. cities, to the announcement of Sidewalk Toronto – a plan to convert waterfront space in one of North America’s largest cities into a robotics-filled smart community – the laws regulating North American public spaces are opening up to robots.
    In many of these examples, the growing presence of robots in public space is associated with opportunities to improve human lives through intelligent urban design, environmental efficiency, and greater transportation accessibility. However, the introduction of robots into public space has also raised concerns about, for example, the commercialization of these spaces by the companies that deploy robots; increasing surveillance that will negatively impact physical and data privacy; or the potential marginalization or exclusion of some members of society in favour of those who can pay to access, use, or support the new technologies available in these spaces.
    The laws that permit, regulate, or prohibit robotic systems in public spaces will in many ways determine how this new technology impacts the space and the people who inhabit that space. This raises the questions: how should regulators approach the task of regulating robots in public spaces? And should any special considerations apply to the regulation of robots because of the public nature of the spaces they occupy? This presentation will argue that the laws that regulate robots deployed in public space will affect the public nature of that space, potentially to the benefit of some human inhabitants of the space over others. For these reasons, it will argue that special considerations should apply to the regulation of robots that will operate in public space, and will highlight some of these considerations.

    Kristen Thomasen
    University of Windsor
    Law

    04:00 PM - 06:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Wed, Nov 13, 2019
    Ethics & the Arts, Ethics & Film: Lights, Camera, Ethics!, Ethics in the City
    The Land of Many Palaces (2015) (Ethics in the City Films)

    In Ordos, China, thousands of farmers are being relocated into a new city under a government plan to modernize the region. “The Land of Many Palaces” follows a government official whose job is to convince these farmers that their lives will be better off in the city, and a farmer in one of the last remaining villages in the region who is pressured to move. The film explores a process that will take shape on an enormous scale across China: the central government has announced plans to relocate 250,000,000 farmers to cities across the nation over the next 20 years.

    06:00 PM - 08:00 PM
    Centre for Ethics, University of Toronto
    Rm 200, Larkin Building

  • Mon, Nov 18, 2019
    Perspectives on Ethics
    Sunit Das (Perspectives on Ethics)

    Sunit Das
    University of Toronto Faculty of Medicine, Division of Neurosurgery, St. Michael’s Hospital & Centre for Ethics, University of Toronto

    04:00 PM - 06:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Tue, Nov 19, 2019
    Ethics of AI in Context
    Anna Goldenberg (Ethics of AI in Context)

    Anna Goldenberg
    University of Toronto
    Computer Science

    04:00 PM - 06:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Wed, Nov 20, 2019
    Ethics at Noon
    Natasha Hay, The Ethics of Study: Walter Benjamin’s Counter-Pedagogy and the Communicability of Historical Violence (Ethics@Noon)

    The Ethics of Study: Walter Benjamin’s Counter-Pedagogy and the Communicability of Historical Violence

    I will investigate some ways in which the ethical practice of study, the use of language, and the critique of force, authority, or violence (Gewalt) come together in Walter Benjamin’s reflections on pedagogical strategies in the research seminar. Deeply concerned with the histories of violence that state power perpetuates and occludes in the civic institutions that structure social life, Benjamin was even more attuned to the modalities of this historical violence inscribed in the languages of cultural texts. His concept of history will bring out both the emancipatory and the counter-revolutionary power of certain practices of study that enter into relation with the irreconcilable ambiguity of these archives in which “there is no document of culture that is not at the same time a document of barbarism.” Reading some key publications from Benjamin’s participation in the student movement in conjunction with his early writings on language and translation, I will focus particularly on the ethical significance of silence and listening for the construction of a linguistic medium of study that is capable of letting itself be addressed by and perhaps in turn redressing the semiotic effects of structural violence. The guiding purpose of this talk will be to elucidate the ethical stakes of the communicability of histories of violence that is resistant to and can radically alter the paradigms in which the research seminar functions as a privileged site for knowing mastery over objects of reference and as an ‘ideal speech situation’ for intersubjective discourse.

    Natasha Hay
    University of Toronto
    Comparative Literature

    12:30 PM - 02:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Tue, Nov 26, 2019
    Ethics of AI in Context
    Daniel Greene, Making Ethics in Machine Learning (Ethics of AI in Context)

    Making Ethics in Machine Learning

    Machine learning systems are implemented by all the big tech companies in everything from ad auctions to photo-tagging, and are supplementing or replacing human decision making in a host of more mundane, but possibly more consequential, areas like loans, bail, policing, and hiring. And we’ve already seen plenty of dangerous failures, from risk assessment tools systematically rating black arrestees as riskier than white ones to hiring algorithms that learned to reject women. There’s a broad consensus across industry, academe, government, and civil society that there is a problem here, one that presents a deep challenge to core democratic values, but there is much debate over what kind of problem it is and how it might be solved. Taking a sociological approach to the current boom in ethical AI and machine learning initiatives that promise to save us from the machines, this talk explores how this problem becomes a problem, for whom, and with what solutions. Comparing today’s high-profile ethics manifestos with earlier moments in the history of technology allows us to see a nascent consensus around an approach we term ‘ethical design.’ At the same time, the recent surge in labor activism inside tech companies and anti-racist organizing outside them suggests how this expert-driven vision for more humane systems might be replaced or augmented with something more revolutionary. This talk draws on research conducted with Anna Lauren Hoffmann (UW), Luke Stark (MSR Montreal), and designer Geneviève Patterson.

    Daniel Greene
    University of Maryland
    iSchool

    04:00 PM - 06:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Thu, Dec 5, 2019
    Events on Campus, Ethics of AI in Context
    Barbara J. Grosz, From Ethical Challenges of Intelligent Systems to Embedding Ethics in Computer Science Education

    From Ethical Challenges of Intelligent Systems to Embedding Ethics in Computer Science Education

    Computing technologies have become pervasive in daily life, sometimes bringing unintended but harmful consequences. For students to learn to think not only about what technology they could create, but also about whether they should create that technology and to recognize the ethical considerations that should constrain their design, computer science curricula must expand to include ethical reasoning about the societal value and impact of these technologies. This talk will describe Harvard’s Embedded EthiCS initiative, a novel approach to integrating ethics into computer science education that incorporates ethical reasoning throughout courses in the standard computer science curriculum. It changes existing courses rather than requiring wholly new courses. The talk will begin with a short description of my experiences teaching the course “Intelligent Systems: Design and Ethical Challenges,” which inspired the design of Embedded EthiCS. It will then describe the goals behind the design, the way the program works, lessons learned, and challenges to sustainable implementations of such a program across different types of academic institutions.

    Barbara J. Grosz
    Higgins Research Professor of Natural Sciences
    Harvard University

    11:10 AM - 01:00 PM
    Department of Computer Science, University of Toronto
    St. George Street

  • Mon, Jan 13, 2020
    Perspectives on Ethics
    Yannik Thiem (Perspectives on Ethics)

    Yannik Thiem
    Columbia University
    Philosophy

    04:00 PM - 06:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Tue, Jan 14, 2020
    Ethics of AI in Context
    Zack Lipton (Ethics of AI in Context)

    Zack Lipton
    Carnegie Mellon University
    Business

    04:00 PM - 06:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Wed, Jan 15, 2020
    Ethics at Noon
    Nikolas Kompridis (Ethics@Noon)

    Nikolas Kompridis

    12:30 PM - 02:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Mon, Jan 20, 2020
    Perspectives on Ethics
    Alia Al-Saji (Perspectives on Ethics)

    Alia Al-Saji
    McGill University
    Philosophy

    04:00 PM - 06:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Tue, Jan 28, 2020
    Ethics of AI in Context
    Parisa Moosavi (Ethics of AI in Context)

    Parisa Moosavi
    York University
    Philosophy

    04:00 PM - 06:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Wed, Jan 29, 2020
    Ethics at Noon
    Josée Johnston (Ethics@Noon)

    Josée Johnston
    University of Toronto
    Sociology

    12:30 PM - 02:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Mon, Feb 3, 2020
    Perspectives on Ethics
    Hasana Sharp (Perspectives on Ethics)

    Hasana Sharp
    McGill University
    Philosophy

    04:00 PM - 06:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Wed, Feb 5, 2020
    Ethics & the Arts, Ethics & Film: Lights, Camera, Ethics!, Ethics in the City
    The Last Black Man in San Francisco (2019) (Ethics in the City Films)

    Jimmie Fails is in love with a Victorian house built by his grandfather in San Francisco’s Fillmore District. When the house’s current occupants leave for good, Jimmie and his friend Mont attempt to repair and reclaim the place that Jimmie most considers home, despite its prohibitive price tag and place in a gentrified, rapidly changing neighbourhood. Based on a true story, Joe Talbot’s directorial debut is a love letter to a disappearing side of San Francisco and a touching look at how communities are made — and kept alive — by the people who care for them.

    06:00 PM - 08:00 PM
    Centre for Ethics, University of Toronto
    Rm 200, Larkin Building

  • Wed, Feb 12, 2020
    Ethics at Noon
    Anna Su (Ethics@Noon)

    Anna Su
    University of Toronto
    Law

    12:30 PM - 02:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Mon, Feb 24, 2020
    Perspectives on Ethics
    Ashwini Vasanthakumar (Perspectives on Ethics)

    Ashwini Vasanthakumar
    Queen’s University
    Law

    04:00 PM - 06:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Tue, Feb 25, 2020
    Ethics of AI in Context
    Ida Koivisto (Ethics of AI in Context)

    Ida Koivisto
    University of Helsinki
    Law

    04:00 PM - 06:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Wed, Mar 4, 2020
    Ethics at Noon
    Christina Starmans (Ethics@Noon)

    Christina Starmans
    University of Toronto
    Psychology

    12:30 PM - 02:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Wed, Mar 4, 2020
    Ethics & the Arts, Ethics & Film: Lights, Camera, Ethics!, Ethics in the City
    My Winnipeg (2007) (Ethics in the City Films)

    Guy Maddin blends fact and fiction, documentary and drama, reality and myth in this dreamy black-and-white tour of Winnipeg. Widely regarded as Maddin’s best film, My Winnipeg won the award for Best Canadian Feature Film when it premiered at the 2007 Toronto International Film Festival (TIFF). A 2015 poll conducted by TIFF named it one of the Top 10 Canadian films of all time, while another in 2016 listed it as one of 150 essential works in Canadian cinema history.

    06:00 PM - 08:00 PM
    Centre for Ethics, University of Toronto
    Rm 200, Larkin Building

  • Mon, Mar 9, 2020
    Perspectives on Ethics
    Denise Ferreira da Silva (Perspectives on Ethics)

    Denise Ferreira da Silva
    University of British Columbia
    Institute for Gender, Race, Sexuality and Social Justice

    04:00 PM - 06:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Wed, Mar 18, 2020
    Ethics at Noon
    Teresa Heffernan, The Immortality Industry and the Ethics of Death (Ethics@Noon)

    The Immortality Industry and the Ethics of Death

    Teresa Heffernan
    St. Mary’s University
    English

    12:30 PM - 02:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Mon, Mar 23, 2020
    Perspectives on Ethics
    Sally Haslanger (Perspectives on Ethics)

    Sally Haslanger
    MIT
    Linguistics & Philosophy

    04:00 PM - 06:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Tue, Mar 31, 2020
    Ethics of AI in Context
    Azim Shariff (Ethics of AI in Context)

    Azim Shariff
    University of British Columbia
    Psychology

    04:00 PM - 06:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

  • Wed, Apr 1, 2020
    Ethics at Noon
    Lauren Bialystok (Ethics@Noon)

    Lauren Bialystok
    University of Toronto
    Social Justice Education

    12:30 PM - 02:00 PM
    Centre for Ethics, University of Toronto
    200 Larkin

Past Events