Speaking

Moral AI and How We Get There

Keynotes and talks tailored for your group. Great for organizational leaders, tech teams, and the general public.

Is Moral AI Just About Killer Robots? No, And Why You Should Care

A talk for the general public about morality and artificial intelligence.

Artificial Empathy: The Campaign That May Shape The Next Generation

A talk for all audiences about empathy and artificial intelligence.

DeKaBo 2024

Data Visualization

Workshops that introduce the top three strategies for turning data visualizations built for exploration and analysis into visualizations that empower efficient, data-driven decision-making. My approach to data visualization marries insights from decades of cognitive neuroscience and decision science with modern product-development methodologies to offer a pragmatic set of strategies anyone can learn and master. These workshops are particularly well suited to people in scientific or technical fields who want to turbocharge their data storytelling and communication for non-technical audiences, but they are relevant to people of all backgrounds.

Invited Speaker at UNC Symposium on AI and Society on April 25, 2024

About the talk

Artificial intelligence (AI) is entering more and more areas of our lives. Each new application raises pressing ethical issues. AI pessimists are worried about potential abuses. AI optimists are hopeful about potential benefits. Both are correct, in our view. To show why, we will survey some of the good news and bad news about safety, privacy, justice, and responsibility in AI. Then we will propose ways to make AI more moral by building human morality into AI systems and AI companies. This talk summarizes the main points of our recent book with Vincent Conitzer.

herCAREER Academy

About the talk

What if a machine determined who gets an organ and who does not? Can artificial intelligence be fair? Prof. Dr. Jana Schaich Borg has spent 24 years researching how individuals make social decisions and how those decisions affect others. For the past seven years, her focus has been on the interactions of humans and AI. There are two major questions here: How could we build morality into an AI system so that it interacts with society in a way that feels aligned with our human values? And how do we make sure that we as a society employ AI in a way that is in line with our values? Beyond these big scientific and societal questions, the conversation with Prof. Dr. Schaich Borg will dive into the threats and opportunities that arise from AI, the prototypes of moral machines she is working on in her lab, and the role women play, and must play, in shaping a future where more and more decisions will be made by AI systems.

About the speaker

Prof. Dr. Jana Schaich Borg is an Associate Research Professor at the Social Science Research Institute at Duke University. She uses neuroscience, computational modeling, and new technologies to study how we make social decisions that influence or are influenced by other people. As a neuroscientist, she analyzes the data she collects as a data scientist in interdisciplinary teams. Dr. Schaich Borg's current research projects focus on developing moral artificial intelligence and understanding social bonding, empathy, and human decision-making. Building on this research, she is involved in developing practical strategies for the ethical development of artificial intelligence, and she is skilled at breaking down the implications of complex analytical problems and communicating them to broad audiences in an understandable way. Together with Walter Sinnott-Armstrong and Vincent Conitzer, she wrote the book "Moral AI – And How We Get There". Its chapters address questions such as: What is AI? Is there safe AI? Can AI be fair? And: Can AI incorporate human morality?

Invited speaker at “Empathy, Morality, and AI” conference at Pennsylvania State University on April 9, 2024

Invited speaker for the Moral Psychology Research Group (MPRG), an invitation-only interdisciplinary group of world scholars in moral judgment on April 6, 2024

Invited speaker for the National Academies of Science, Engineering, and Medicine workshop on Exploring the Bidirectional Relationship Between Artificial Intelligence and Neuroscience on March 26, 2024

About the talk

There is a longstanding, bidirectional relationship between neuroscience and computer science, especially in the development of artificial intelligence (AI). As AI continues to expand, the consistent engagement of neuroscientists in conversations about its uses, regulation, implications, and the public's concerns could shape the field's trajectory. Anticipating the need for these cross-disciplinary conversations, the National Academies' Forum on Neuroscience and Nervous System Disorders hosted a 1.5-day workshop in March 2024 that convened a diverse group of experts to examine the current and potential uses of AI in neuroscience and to consider strategies for enhancing public and regulatory understanding of how AI is used.

Invited Speaker at “AI meets Moral Philosophy and Moral Psychology” workshop at Conference on Neural Information Processing Systems (NeurIPS) on December 15, 2023

Invited plenary speaker for Duke Women’s Weekend on March 3, 2023

Invited panelist for “Understanding Bias and Fairness in AI-enabled Healthcare Software”, hosted by the Duke-Margolis Center for Health Policy on December 17, 2021

About the talk

Duke-Margolis is hosting a virtual public meeting entitled Understanding Bias and Fairness in AI-enabled Healthcare Software. This meeting will convene stakeholders across disciplines for conversations on the ways in which bias can affect artificial intelligence (AI) in healthcare software and on how to promote fairness in AI software, including methods to test for and prevent bias throughout the development process. The meeting will also discuss whether AI itself can play a role in reducing existing biases in the healthcare setting. The expert speakers will include a diverse array of computer scientists, bioethicists, statisticians, anthropologists, and federal regulators.

Session 1 of the Lifelong Learning Series on Artificial Intelligence at Duke University on September 29, 2021

Invited speaker for the International Association of Judges, Legal and Moral Implications of Artificial Intelligence, in May 2020

Invited speaker for the Women in Data Science Regional Conference in April 2020

Invited speaker for the Distinguished Lecture on the Ethics of AI, North Carolina State University, Raleigh, NC on April 15, 2019

Invited speaker for the Presidential Symposium at the Eurotransplant Annual Conference, Leiden, the Netherlands on October 4, 2018

Duke Center of Cognitive Neuroscience Seminar Series in Durham, NC on May 4, 2018

Science Café, North Carolina Museum of Natural Sciences on October 12, 2017

Invited panelist, Washington, DC on June 9, 2016

Invited speaker for the Statistical and Applied Mathematical Sciences Institute (SAMSI), Research Triangle Park, NC on April 9, 2016

Invited speaker for the Coursera Partners Conference, The Hague, Netherlands on March 21, 2016

Association for Psychological Science Annual Convention, New York City, NY on May 24, 2015

Invited speaker for the Moral Psychology Research Group, an invitation-only interdisciplinary group of world scholars in moral judgment, New Orleans, LA on November 8, 2014

Invited speaker at Duke Psychiatry and Behavioral Sciences Grand Rounds, Duke University School of Medicine, Durham, NC on August 21, 2014

Invited speaker for the Moral Psychology Research Group, Durham, NC on April 26, 2014

Invited speaker at the Morality and the Cognitive Sciences Conference, Riga, Latvia on May 7, 2011

Invited speaker at the Fourth International Legal Ethics Conference, Stanford, CA on July 17, 2010

Invited speaker for “Neuroimaging and the Law” workshop, Halifax, Canada on May 20, 2010

Invited speaker at the MacArthur Foundation Psychopathy and the Law Symposium (a pre-conference to the Society for the Scientific Study of Psychopathy annual meeting), New Orleans, LA on April 16, 2009

Invited panelist, Stanford Technology Law Review, Stanford Law School, Stanford, CA on February 27, 2009

Invited speaker at the MacArthur Foundation Law and Neuroscience Project meeting, Stanford, CA on January 24, 2008

Speaker at Neural Systems of Social Behavior Conference, Austin, TX on May 12, 2007

About the talk

The emotion of disgust can be partitioned into three distinct functional domains: pathogen disgust, sexual disgust, and moral disgust. Using adaptationist logic, we propose that disgust first evolved to mediate the avoidance of disease-causing agents, and then was co-opted as new selective pressures arose to guide decisions regarding mating behavior and, ultimately, other social interactions. We discuss findings from our fMRI study investigating the possible neural correlates of these proposed domains. Specifically, our study explored whether: (i) pathogen, sexual, and moral disgust activate common neural systems, and (ii) these three domains also entrain separate cognitive and behavioral systems specific to their respective evolved functions. Fifty male participants completed a set of surveys, and afterwards were scanned while performing a memory task that presented neutral statements, statements describing pathogen-related acts (pathogen disgust), statements describing incestuous acts (sexual disgust), and statements describing non-sexual socio-moral transgressions (moral disgust). Conjunction analyses indicated that pathogen, sexual, and moral disgust indeed activate common neural systems, and planned comparisons provided evidence that each functional disgust domain also has additional, unique neural correlates. Self-report data revealed distinct patterns of reactions to pathogen, sexual, and moral disgust, providing additional support for our proposed model. We will discuss these and other related findings, as well as consider the implications our data have for the study of morality.

Psychopharmacology/Neuroscience Grand Rounds, Hartford Hospital, CT on February 23, 2006

Invited speaker for the Program of Neuroscience Seminar series, Princeton University, NJ on September 25, 2005

Invited speaker at the Center for Bioethics, Columbia University, NY on February 24, 2005
