Day 1

8:00 AM – 10:00 AM

Trustworthy AI for a Human-Centered Future
Co-Chairs: Iliana Maifeld-Carucci (Johns Hopkins University) and Christina Strobel (Hamburg University of Technology)
Description

This session explores the nature of the trust required for a human value-centered future of AI, with contributions from both policy and technical perspectives. Attentive to concerns about the aims and distribution of power, the contributing authors take an empathetic and inter-relational approach to the use of AI systems, centered on their measurable ethical trajectories and societal impacts and grounded in clearly defined metrics and standards. Re-centering the benefits of AI on communities and individuals, while honoring concerns about security, privacy, and explainability at several levels, supplies critical components of the trustworthiness required to secure an ethical, human-centered AI future. These and other human-centered ethical concerns are explored in this session.

  • Towards Fair and Explainable AI: Addressing bias in data using causal models and GANs — Amirarsalan Rajabi and Ozlem Ozmen Garibay (University of Central Florida)

    “Can we trust machine learning models to make fair decisions? This question becomes more relevant as these algorithms become more pervasive in many aspects of our lives and our society. While the main objective of artificial intelligence (AI) algorithms is traditionally to increase accuracy, the AI community is gradually focusing more on evaluating and developing algorithms to ensure fairness. This work explores the usefulness of adversarial learning, specifically generative adversarial networks (GANs), in addressing the problem of fairness. We show that the proposed model is able to produce synthetic tabular data to augment the original dataset in order to improve demographic parity, while maintaining data utility. In doing so, our work increases algorithmic fairness while maintaining accuracy.” (See the sketch of the demographic parity metric following this list.)
  • Human-machine interfaces: an HCAI perspective — Brent Winslow (Design Interactive)

    “HCAI requires access to human data to progress. Human-machine interfaces, initially developed to treat injuries and disorders, continue to advance and represent a rich data source for human-AI interaction. Such interfaces represent a range of technologies from direct biological interfaces to digital phenotyping technologies leveraging data from existing sources. In order to be beneficial, such approaches should be stable, evidence-based, individualized, widely available, secure, ethical, and respectful of privacy. This presentation will explore the scale of current and emerging technologies for human-machine interfaces and provide recommendations for ethical and fair development and implementation of this technology.”
  • Are Care-Dependent Less Averse to Care Robots? — Anja Bodenschatz (University of Cologne)

    “The world population is ageing. One idea to cushion discrepancies between supply and demand in the elderly care sector is the deployment of robotic care. However, how well care robots are accepted remains an open question. Empirical evidence is scarce and yields mixed results. We conducted a quantitative vignette study to measure expected comfort levels of participants for scenarios with human and robotic caregivers and control for the care-dependency of participants. We find that structural differences in the expected comfort levels of our participants for different care scenarios depend on the actual care-dependence of the participants. Our findings imply that care-dependent people are less averse to care robots than often assumed.”
  • Applying Human Cognition to Assured Autonomy — Monica Lopez (Johns Hopkins University)

    “The scaled deployment of semi- and fully autonomous systems undeniably depends on assured autonomy. This reality, however, has become far more complex than expected because it necessarily demands an integrated tripartite solution not yet achieved: consensus-based standards and compliance across industry, scientific innovation within artificial intelligence R&D of explainability, and robust end-user education. In this paper I present my human-centered approach to the design, development, and deployment of autonomous systems and break down how human factors such as cognitive and behavioral insights into how we think, feel, act, plan, make decisions, and problem-solve are foundational to assuring autonomy.”
  • Uncovering AI Black Boxes with Machine Teaching — Hernisa Kacorri (University of Maryland, College Park)

    “As artificial intelligence (AI) becomes more present in everyday applications, so do our efforts to better capture, understand, and imagine this coexistence. Machine teaching lies at the core of these efforts as it enables end-users and domain experts with no machine learning expertise to build better intuition around AI-infused systems. Beyond helping to democratize machine learning, it offers an opportunity for a deeper understanding of how people perceive and interact with such systems to inform the design of AI interfaces and algorithms for a more human-centered future. We share insights from a series of studies on how different user groups conceptualize, experience, and reflect on their engagement with machine teaching.”
  • Acceptance of Artificial Intelligence in Cars – A Survey Approach — Christina Strobel (Hamburg University of Technology)

    “The exploratory descriptive survey analyzes acceptance of different automated systems used in partly and fully autonomous cars, and whether there is a difference between the level of acceptance for someone’s own use and desire for others to use them. The survey reports answers from 199 respondents to an online questionnaire run on Amazon Mechanical Turk (Amazon MTurk). The majority of respondents express high or very high acceptance of partly automated systems; however, when it comes to full automation, the acceptance rate drops significantly. Moreover, the acceptance rate for roughly half of the systems does not differ significantly for the respondent’s own use and use by others.”
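
As referenced in the Rajabi and Garibay abstract above, demographic parity measures the gap in positive-outcome rates between demographic groups. The following is a minimal sketch, assuming binary predictions and a binary protected attribute; the data and function name are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1); group: binary group membership (0/1).
    A gap of 0 means demographic parity holds exactly.
    """
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical usage: one would compare this gap before and after
# augmenting the training data with GAN-generated synthetic rows
# (the augmentation itself is beyond this sketch).
y_pred = np.array([1, 0, 1, 1, 0, 0])
group = np.array([0, 0, 0, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))  # |2/3 - 1/3| = 0.333...
```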

10:00 AM – 10:15 AM

Break

10:15 AM – 12:45 PM

AI, Decision-Making, and the Impact on Humans
Co-Chairs: Salvatore Andolina (University of Palermo) and Joseph Konstan (University of Minnesota)
Description

AI algorithms are making and supporting decisions in ways that increasingly affect many aspects of people's lives. Both autonomous systems and algorithm-in-the-loop decision-support systems use AI algorithms and data-driven models to provide or deny access to credit, healthcare, and other essential resources, while steering life-changing decisions in criminal justice, education, and other aspects of everyday life. Too often these systems are built without consideration of the human factors associated with their use: models are too often opaque; recommendations are too hard to interpret or interrogate; and systems are unaware of the human values at stake and of the consequences of their calculations.

This session brings together researchers with diverse backgrounds to discuss the impact of AI decision-making and decision-support algorithms on humans with a special focus on how to integrate human-centered principles into the algorithms and their surrounding systems.

  • I Disagree! Aligning Artificial Intelligence With The Messy Reality of Societal Disagreement — Michael Bernstein (Stanford University)

    “Machine learning classifiers for human-facing tasks such as comment toxicity and misinformation often score highly on metrics such as ROC AUC but are received poorly in practice. Why this gap? Today, metrics such as ROC AUC, precision, and recall are used to measure technical performance; however, human-computer interaction observes that evaluation of human-facing systems should account for people’s reactions to the system. In this work, we introduce a transformation that more closely aligns machine learning classification metrics with the values and methods of user-facing performance measures. The disagreement deconvolution takes in any multi-annotator (e.g., crowdsourced) dataset, disentangles stable opinions from noise by estimating intra-annotator consistency, and compares each test set prediction to the individual stable opinions from each annotator. Applying the disagreement deconvolution to existing social computing datasets, we find that current metrics dramatically overstate the performance of many human-facing machine learning tasks: for example, performance on a comment toxicity task is corrected from .95 to .73 ROC AUC.” (A simplified sketch of the deconvolution idea follows this list.)
  • Human-Centered Recommendations: Actionable, Controllable, and Impactful — Salvatore Andolina (University of Palermo)

    “Everyday decision-making can benefit greatly from systems that recommend relevant and useful information at the right time. In this talk, we present a human-centered AI perspective into the design of such systems. We review an example of the design and evaluation of a system that captures context across application boundaries and recommends actionable entities related to the current task. The system has been evaluated with real-world tasks, demonstrating that the recommendations had an impact on the tasks and led to high user satisfaction and a feeling of control. We reflect on the implications for the emerging field of Human-Centered AI.”
  • Human-Centered Approaches to Supporting AI Fairness in Practice — Michael Madaio (Microsoft)

    “AI/ML industry practitioners are increasingly asked to adhere to principles for fair and ethical AI, but they are often not equipped with processes or tools designed to support them in developing more fair AI systems. Although such resources do exist, they are often not designed with or by practitioners, leading to challenges in their adoption and use. In this talk, I will discuss qualitative and design research with industry practitioners around designing checklists to support fairness throughout the AI design lifecycle, and designing disaggregated evaluations of fair system performance. I will share insights into how fairness resources may be better designed to support AI practitioners, as well as insights into how the organizational contexts of AI teams may impact fairness efforts.”
  • HCAI: Exploring Augmentation and Assistance in the Small and the Large — Elizabeth Churchill (Google)

    “Computation, technology, and the use of AI techniques such as machine learning, natural language processing, machine vision, and data mining have unquestionably changed the way we live in the world. While much recent discourse has, appropriately, focused on the (often negative) ethical, legal, and societal ramifications of unfettered scaling “in the large”, there are many ways in which our lives are made better through the use of such techniques to create adaptive/accommodative experiences “in the small”. Further investment is needed around understanding how AI techniques can create more positive inclusive experiences, without longer term negative consequences. What does this mean for HCI/UX as a discipline? Some first thoughts I will share: a shift to thinking about AI techniques as being “design materials”; changing design processes to model scaling; directly surfacing/addressing conflicting development incentives; and exploring what “intelligent co-design” means to achieve impact in the small while considering consequences in the large for people and society.”
  • A Quantum Leap for Fairness: Quantum Bayesian Approach for Fair Decision Making — Ece Mutlu and Ozlem Ozmen Garibay (University of Central Florida)

    “Fair causal learning approaches enable us to model cause-and-effect knowledge structures to discover the sources of bias, and to prevent unfair decision-making by amplifying the transparency and explainability of artificial intelligence (AI) algorithms. These studies assume that the underlying probabilistic model of the world is known, whereas it is well known that humans do not obey classical probability rules in making decisions. Decision-making usually involves, to some degree, emotional changes, subconscious feelings, and subjective biases, yielding uncertainty in the underlying probabilistic models. To tackle this problem, we introduce a quantum Bayesian fairness approach. In this work, we show that the quantum Bayesian perspective is useful in creating well-performing and fair decision rules even under high uncertainty.”
  • The Role of Human Cognitive Motivation in Human-AI Collaboration on Decision-Making Tasks — Krzysztof Z. Gajos (Harvard University)

    “People supported by AI-powered decision support tools frequently over-rely on the AI: they accept an AI’s suggestion even when that suggestion is wrong. Adding explanations to the AI suggestions does not appear to reduce the over-reliance, and some studies suggest that it might even increase it. Our research suggests that human cognitive motivation moderates the effectiveness of explainable AI solutions. Specifically, even in high-stakes domains, people rarely engage analytically with each individual AI recommendation and explanation, and instead develop general heuristics about whether and when to follow the AI suggestions. We show that interventions applied at decision-making time to disrupt heuristic reasoning can increase people’s cognitive engagement with the AI’s output and consequently reduce (but not entirely eliminate) human over-reliance on the AI. Our research also points to two shortcomings in how we are pursuing the explainable AI research agenda. First, the commonly used evaluation methods rely on proxy tasks that artificially focus people’s attention on the AI models, leading to misleading (overly optimistic) results. Second, by insufficiently examining the sociotechnical contexts, we may be solving problems that are technically the most obvious but not the most valuable to the key stakeholders.”
  • Auditing and Assurance of Algorithms: Towards a Framework to Ensure Ethical Algorithmic Practices in Artificial Intelligence — Ramya Akula and Ivan Garibay (University of Central Florida)

    “Algorithms are more widely used in business, and enterprises are increasingly concerned that their AI algorithms might cause significant reputational or financial damage. From autonomous vehicles and banking to medical care, housing, and legal decisions, there will soon be “gazillions” of AI algorithms that make decisions with limited human interference. One of the primary reasons for using such algorithms is to solve problems beyond human-level capacity. Auditing and Assurance of Algorithms is an emerging field that seeks to professionalize and industrialize AI algorithms. This paper aims to analyze the critical areas required for auditing and assurance and to spark discussion in practice.”
  • Toward Bounded Autonomy: Challenges and Vision — Joseph A. Konstan (University of Minnesota)

    “Autonomy underlies both ambitions for and fears of intelligent systems. The same autonomy that eliminates drudgery (e.g., having a robot vacuum the floor when it sees a need) also creates significant threats (e.g., autonomous weapons). In this talk we examine the question of whether it is possible to establish meaningful and enforceable bounds on autonomy, allowing humans to delegate authority only to the extent that they are comfortable. We review a set of examples of bounded autonomy and bounded delegation of authority, and identify key research challenges for the field of human-centered artificial intelligence.”
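
The disagreement deconvolution described in the Bernstein abstract above can be given as a drastically simplified sketch: assuming a dataset in which each annotator labels some items more than once (the data below are hypothetical, and the paper's actual estimator models intra-annotator inconsistency probabilistically rather than by per-annotator majority vote), the classifier is scored against each annotator's stable opinion instead of a single aggregate label.

```python
from collections import Counter

# Hypothetical data: labels[item][annotator] -> that annotator's repeated labels
labels = {
    "comment_1": {"ann_a": [1, 1, 0], "ann_b": [0, 0, 0]},
    "comment_2": {"ann_a": [1, 1, 1], "ann_b": [1, 0, 1]},
}
predictions = {"comment_1": 1, "comment_2": 1}  # classifier outputs

def stable_opinion(repeats):
    """Majority vote over one annotator's repeated labels; this stands in
    for the paper's noise-aware estimate of the annotator's true opinion."""
    return Counter(repeats).most_common(1)[0][0]

# Score against stable opinions: each (item, annotator) pair counts once.
hits, total = 0, 0
for item, by_annotator in labels.items():
    for repeats in by_annotator.values():
        hits += predictions[item] == stable_opinion(repeats)
        total += 1
print(hits / total)  # 0.75: agrees with ann_a on both items, ann_b on one
```

Scoring against per-annotator stable opinions, rather than a single majority-vote label, is what surfaces the gap between reported and human-facing performance that the abstract describes.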

12:45 PM – 1:15 PM

Break

1:15 PM – 3:15 PM

Human-AI Collaboration
Co-Chairs: Roger Azevedo (University of Central Florida) and Joseph Kider (University of Central Florida)
Description

The central challenge motivating this session is the advancement of use-inspired human-AI teaming and symbiosis: enhancing the development, use, and transfer of skills for resolving complex real-life situations, where soft skills such as emotional regulation, empathy, listening, negotiating, communicating, and collaborating in teams are essential for effectively addressing societal challenges.

Meeting this central challenge requires fundamentally reframing the role of human-AI teaming and symbiosis, moving towards a future where AI agents are viewed as social, collaborative, and intelligent partners in acquiring and using knowledge and skills to perform more effectively, engagingly, and equitably. For example, we envision AI team members and partners (e.g., intelligent virtual humans) capable of collaborating with teams of humans across disciplines, adopting a variety of roles and strategies aimed at observing, identifying, and understanding humans in order to support their knowledge and skills. In addition, we envision AI team members who will act as social modelers and collaborators while performing complex tasks, using metacognitive and meta-reasoning strategies to augment humans’ understanding and their own, while using and transferring their knowledge and skills.

  • Considerations for Development and Evaluation of Social Intelligence in Artificial Agents — Jessica Williams, Florian Jentsch, & Stephen M. Fiore (University of Central Florida)

    “In this paper we discuss the development of artificial theory of mind as foundational to an agent’s ability to collaborate with human team members. Agents imbued with artificial social intelligence (ASI) will require various capabilities to gather the social data needed to inform artificial theory of mind models of their human counterparts. We draw from social signals theorizing and discuss a framework to guide consideration of core features of ASI. We discuss how the social signal processing domain contributes to the development of ASI by forming a foundation on which to help agents model and interpret the nuances, variability, evolution, and combinations of social cues necessary to support team coordination.”
  • Human and Artificial Intelligence and Safety at Work — Waldemar Karwowski (University of Central Florida)

    “This presentation discusses the future of hybrid human and artificial intelligent systems and safety at work from the digital (technological) life perspective. To fully engage the human factors community in a meaningful and robust discussion about the design of future workplaces, a new approach to safety is needed, one focusing on the implications of technological life and tied to humanity-centered AI. The potential implications of this framework for the safety of future workplace design, in the context of the fusion of artificial and human intelligence, consciousness, and technological life, are presented.”
  • Preventing Repeated AI Harms by Sharing AI Failures — Sean McGregor (Syntiant)

    “Mature industrial sectors (e.g., aviation) collect their real world failures in incident databases to improve design and process, but the AI industry lacks similar systematization. As a result, companies repeatedly make the same mistakes in the design, development, and deployment of intelligent systems. The AI Incident Database (AIID) is the start of formal record keeping of AI harms realized in the real world. The AIID dataset highlights several issues in human-machine collaboration through an analytic web front end for more than 1,000 incident reports archived to date. Insights from the project’s data and collaboration architecture will be presented.”
  • Human-Machine Teaming — Isaac Arthur (Science and Futurism with Isaac Arthur)

    “This presentation from futurist Isaac Arthur discusses possible scenarios for human-machine teaming, in terms of the civilizations that might develop around certain types of human-AI collaboration. We’ll look at which directions modern progress with AI might be taking us, and at how challenges from cultural expectations and worries from science fiction about the role of robots might shape the future of human-AI interaction.”
  • A Situation Awareness Perspective on Human-Machine Collaboration: Tensions and Opportunities — Constantinos K. Coursaris (HEC Montreal)

    “The rise of automation and artificial intelligence (AI) within the organization has come with many benefits, but also with concerns related to user empowerment and agency. For AI to be utilized optimally, a contextual use perspective is needed so as to inform the value proposition of human-agent collaboration in the organization. To this end, we propose a situation awareness (SA) approach as a promising lens, which would in turn promote a human-centered AI design and development perspective. This paper introduces the theoretical basis for SA and discusses three major tensions and the associated opportunities for developing SA in human-agent collaboration.”

3:15 PM – 3:30 PM

Break

3:30 PM – 5:30 PM

Exploring a Human-Centered Future for AI
Co-Chairs: Sean Koon (Kaiser Permanente) and Ivan Garibay (University of Central Florida)
Description

The modern tools that we collectively term “AI” will create unprecedented opportunities across a host of domains, augmenting human capabilities and automating many tasks in the work and daily activities of people.

There is growing interest in ensuring not only that AI’s benefits are widely experienced but also that the potential impacts on individuals, groups, and society in general are carefully considered and mitigated.

While this need for “Human-Centered AI” (HCAI) may be gaining a groundswell of support, the pathway is not altogether clear. Future efforts may draw from existing HCI research and human-centered design principles, but will also demand a fresh approach to the many open questions regarding how AI can best work for all humans.

This session begins an inquiry into the fundamental considerations for an evolution of HCAI. Presenters discuss imperatives, possibilities, and challenges in creating AI interactions that support both human potential and human well-being.

  • Developing Distinctive Aims and Characteristics for HCAI — Sean Koon (Kaiser Permanente)

    “As AI finds new applications across a range of industries there are new concerns regarding safety, autonomy, bias, well-being, ethics, and more. An emphasis on Human-Centered Artificial Intelligence (HCAI) has been proposed with the idea that these potential impacts might be systematically considered and mitigated, keeping human values at the forefront and humans in control. This presentation discusses HCAI’s potential development as an area of scientific and academic focus, HCAI’s need to demonstrate its relevance to industry and regulators, and some of the initial open questions for human-centered interaction design of AI-enhanced tools.”
  • Ambient systems for well-being: the role of Human-Centred AI — Margherita Antona (Foundation for Research and Technology – Hellas)

    “AI offers the potential to provide context-dependent support for human wellbeing in intelligent technological environments. This talk will present the experience acquired in the development of two ambient systems that monitor the physiological and psychological status of their inhabitants and propose interventions targeted at improving wellbeing (stress reduction and sleep quality improvement), with a focus on human-centered design challenges as well as personalization issues.”
  • Universal Access in AI-enabled Environments — Constantine Stephanidis (University of Crete)

    “In the emerging intelligent environments, AI technology will be a vital component of daily activities, catering for human needs, well-being, and prosperity. Despite its potential contributions, however, AI has been criticized regarding ethics, privacy, transparency, and fairness. A major concern pertains to the bias and exclusion that may be introduced by AI algorithms against individuals or entire social groups, including persons with disabilities, older adults, and vulnerable individuals. In this respect, Universal Access is expected to constitute a pillar of HCAI efforts, seeking to address such risks of exclusion in a systematic manner.”
  • Human-Centered AI: Reliable, Safe & Trustworthy — Ben Shneiderman (University of Maryland)

    “A new synthesis is emerging that integrates AI technologies with HCI approaches to produce Human-Centered AI (HCAI). Advocates of this new synthesis seek to amplify, augment, and enhance human abilities, so as to empower people, build their self-efficacy, support creativity, recognize responsibility, and promote social connections. These passionate advocates of HCAI are devoted to furthering human values, rights, justice, and dignity, by building reliable, safe, and trustworthy systems.

    The talk offers three ideas:

    • HCAI framework, which shows how it is possible to have both high levels of human control AND high levels of automation
    • Design metaphors emphasizing powerful supertools, active appliances, tele-operated devices, and information abundant displays
    • Governance structures to guide software engineering teams, safety culture lessons for managers, independent oversight to build trust, and government regulation to accelerate innovation”
  • Human-centered AI: challenges and opportunities for the HCI community — Wei Xu (Zhejiang University)

    “While AI has benefited humans, it may also harm humans if not appropriately developed. To enable the development of human-centered AI systems, we conducted a high-level literature review and comprehensive analysis of current work in developing AI systems from an HCI perspective. Our review and analysis highlight the new changes introduced by AI technology and the new challenges HCI professionals face when humans interact with AI systems. We also identified seven main unique issues in developing AI systems that HCI professionals have not encountered when developing non-AI computing systems. To further enable the implementation of the human-centered AI approach (HCAI), we identified many opportunities for HCI professionals to play a key role in addressing these issues. Finally, we propose enhanced HCI methods for HCI professionals to effectively address the new challenges and issues, and provide strategic recommendations for HCI professionals to more effectively influence the development of HCAI systems. In conclusion, we believe that with the HCAI approach, HCI professionals can be more effective in addressing the unique challenges and issues of AI systems and develop human-centered AI systems.”
  • Ethical AI for Social Good — Ramya Akula and Ivan Garibay (University of Central Florida)

    “The concept of AI for Social Good (AI4SG) is gaining momentum in both information societies and the AI community. With continued advances, AI-based solutions can address societal issues effectively. To date, however, there is only a rudimentary grasp of what makes AI socially beneficial in principle, what constitutes AI4SG in reality, and what policies and regulations are needed to ensure it. This paper fills this vacuum by addressing the ethical aspects that are critical for future AI4SG efforts.”

Day 2

10:00 AM – 12:00 PM

HCAI PRIORITIES WORKSHOP, Session I

12:00 PM – 12:30 PM

Break

12:30 PM – 2:30 PM

HCAI PRIORITIES WORKSHOP, Session II

2:30 PM – 3:00 PM

Summary and Closing