HumanConnections.AI - Prosocial Tech Design Event

Talk of AI seems to be everywhere these days. If you are a technologist or an investor, the tone of the conversation is likely to be boom. If you are a researcher or writer in the social fields, it's likely to be doom.

What has been missing is a more practical dialogue between technologists, investors, and researchers about how we actually make AI work for human flourishing.

This is extremely important as we navigate completely new territory for humanity: AI and human relationships.

To close the gap between industry and researchers, the Human Flourishing Program at Harvard University and Preston-Werner Ventures hosted the first HumanConnections.AI Salon on October 8th, 2024, in the heart of San Francisco.

We used the principles of mutual respect, curiosity, vulnerability, and openness to ground a conversation between sides that don't often meet in a collaborative setting.


Our speakers and participants included leading social connection and AI researchers from universities such as Harvard, the Massachusetts Institute of Technology, the University of Southern California, and the University of California, Berkeley; technologists from leading AI companies such as OpenAI, Google, Meta, Replika, and GitHub; and investors from Bloomberg Beta, Preston-Werner Ventures, and Bessemer Venture Partners.

We are extremely thankful for the generous support from our sponsors Omidyar Network and Einhorn Collaborative.

To learn more, you can read the report developed after the Salon, which informed a series of collaborations between partners and led to product design principles and practices.

From the report, key questions and recommendations from the participants included the following:

Collective Action for Prosocial Product Design

  • Co-create industry metrics for AI/human interfaces

  • Survey tech companies about the metrics they already use (churn, Net Promoter Score, conversion rate, customer lifetime value, customer retention rate, etc.) - “Build on what already exists and matters to advance adoption.”

  • Create positive subjective metrics - individual and social - “measure what is good with top down support to optimize.”

  • Create objective harm metrics

  • Make the measures domain-specific and build the business case for metrics

  • Develop open-source Continuous Integration and Continuous Delivery (CI/CD) practices as a framework for managing, testing, and deploying AI systems in a controlled, safe, and transparent manner, so that they do not negatively impact human social and emotional capabilities (a minimal sketch follows this list).
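
As one illustration of the CI/CD recommendation above, the following minimal Python sketch shows a "prosocial gate" that a pipeline could run before a release is deployed. It is not taken from the report; the metric names, scores, and thresholds are hypothetical placeholders, and the point is only that prosocial checks can run alongside ordinary automated tests.

    # Minimal sketch of a CI/CD-style "prosocial gate" for an AI release.
    # All metric names and thresholds are hypothetical placeholders, not an
    # agreed industry standard.
    from dataclasses import dataclass


    @dataclass
    class EvalResult:
        metric: str
        score: float
        threshold: float
        higher_is_better: bool

        def passed(self) -> bool:
            # Positive metrics must clear the threshold; harm metrics must stay below it.
            if self.higher_is_better:
                return self.score >= self.threshold
            return self.score <= self.threshold


    def prosocial_gate(results: list[EvalResult]) -> bool:
        """Return True only if every check passes; a CI job would fail the build otherwise."""
        failures = [r for r in results if not r.passed()]
        for r in failures:
            print(f"FAIL {r.metric}: score={r.score}, threshold={r.threshold}")
        return not failures


    if __name__ == "__main__":
        # Made-up scores from an offline evaluation set.
        results = [
            EvalResult("self_reported_connection", 0.72, 0.65, higher_is_better=True),
            EvalResult("emotional_overreliance_rate", 0.04, 0.05, higher_is_better=False),
        ]
        print("deploy" if prosocial_gate(results) else "block release")

In a real pipeline, a gate like this would be wired into the CI/CD configuration so that a failing prosocial check blocks deployment the same way a failing unit test does.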

Research Questions

  • How does the quality and source of training data impact the way Social AI relates to human beings?

  • For the more advanced algorithms and models behind Social AI, how do we understand how and why they respond to a human being in particular ways?

  • Do the more complex models underpinning Social AI products become more difficult to oversee?

  • Are hybrid models (deep learning + rule-based) helpful in achieving both better performance and the legibility that improves oversight? (See the sketch after this list.)

  • If the embedded values of the training data come from a particular society or region, does that change the way humans relate in other cultures and regions?

  • What are the fewest, most important metrics used to evaluate AI/human interactions?

  • How can human flourishing research and data be used to improve the training of AI systems and the product development process?

  • What are the latest frameworks and tools being used to develop benchmarks for Social AI and its impact on human flourishing?
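
The hybrid-model question above lends itself to a concrete illustration. The following minimal Python sketch, not drawn from the report, pairs a learned generator (represented here by a stub) with a small, human-readable rule layer that can amend the reply and log which rules fired; the rule names and patterns are purely illustrative.

    # Minimal sketch of a hybrid design: a learned model proposes a reply and an
    # auditable rule layer can amend it, leaving a legible trace for oversight.
    import re
    from typing import Callable


    def neural_reply(user_message: str) -> str:
        # Stub standing in for any neural generator (e.g., an LLM call).
        return f"I hear you. Tell me more about '{user_message}'."


    # Explicit, human-readable rules; each entry is (name, condition, addition).
    RULES: list[tuple[str, Callable[[str], bool], str]] = [
        ("encourage_offline_support",
         lambda msg: bool(re.search(r"\b(lonely|isolated)\b", msg, re.I)),
         "It might also help to reach out to someone you trust in person."),
    ]


    def hybrid_reply(user_message: str) -> tuple[str, list[str]]:
        """Return the reply plus the names of any rules that fired."""
        reply = neural_reply(user_message)
        fired = []
        for name, condition, addition in RULES:
            if condition(user_message):
                fired.append(name)
                reply = f"{reply} {addition}"
        return reply, fired


    if __name__ == "__main__":
        text, trace = hybrid_reply("I've been feeling lonely lately")
        print(text)
        print("rules fired:", trace)

Because the rule layer is explicit, reviewers can inspect exactly which rules fired for a given exchange, which is one way such hybrids might trade some flexibility for the legibility the question asks about.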
