The future of AI is human-centred. We provide human input to the open-source AI community.


Mission

There’s a race for human feedback. As AI has become a new paradigm for modern software, human feedback has emerged as an essential element of the AI tech stack. We are here to build public input into AI models used in critical domains such as healthcare, governance, and democracy, and to act as an independent, third-party custodian of a global database of human feedback, so that AI builders everywhere have an authoritative and democratic source of human feedback data.

Ways We Work

Human Input

Advance the integration of public input into AI by working with the nonprofit sector on AI model delivery and by supporting open-source and policy work in human-centric systems: AI governance, democracy, and healthcare.

Datasets

Collect, generate, and curate human feedback datasets and inputs across the model training and development stack to accelerate the development of resilient and better-aligned open-source AI systems.

Education

Educate newcomers to the ML field, as well as broader business, policy, and public institutions, on advances in cutting-edge AI safety research and open science, making AI more accessible to all.

Work Streams

Democratic Input to AI

We’re looking to work with civic groups, academia, and the open-source AI community to release tooling and support thought leadership that advances civic consensus and the use of AI in the public interest.

Foundation as a Data Custodian

We aim to create an authoritative and democratic global database of human feedback by generating and curating datasets and by providing AI companies with front-end tools to gather user feedback on LLM outputs.

Education + Implementation

We educate newcomers to the ML field through the LLM Reading Group, making AI more accessible to all. We also offer AI Seminars and advisory services, and we help nonprofits build and implement ML models with support from our partner network.

LLM Reading Group + Events

  • Register for the Spring edition of the LLM Reading Group! The full schedule (and some other events) is below.
  • March 5, 2024: The Linear Representation Hypothesis and the Geometry of Large Language Models with Kiho Park, Google DeepMind, opening the Spring session of the LLM Reading Group.
  • March 19, 2024: Who Are the Publics Engaging in AI? with Renee Sieber, McGill University.
  • March 21, 2024: Spotlight AI at FITC, New Frontier For People And Pixels: Creativity and Prediction Machines, with Elena Yunusov and Ryan Kelln, Human Feedback Foundation.
  • April 2, 2024: Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models with Matt Dahl, Yale, and Varun Magesh and Mirac Suzgun, Stanford.
  • April 10–11, 2024: Data Universe, Building Trustworthy AI That Puts Humans First, with Elena Yunusov, Human Feedback Foundation, New York City.
  • April 16, 2024: Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning with Marzieh Fadaee and Ahmet Ustun, Cohere For AI.
  • April 30, 2024: How Well Can LLMs Negotiate? Negotiation Arena Platform and Analysis with Federico Bianchi, Stanford.
  • May 14, 2024: Detecting LLM Generated Text in Computing Education: A Comparative Study for ChatGPT Cases with Michael Liut, University of Toronto.
  • May 30, 2024: Special event TBA at Microsoft Reactor, Toronto.

Resources

  1. Read: Introduction to Human Feedback Foundation, currently the best general overview of the Foundation and our work streams.
  2. Join an active community of 1,300+ members through the LLM Reading Group, where we host scholars from Google DeepMind, McGill University, Yale University, Stanford University, the University of Toronto, and Cohere For AI. Mark your calendars! Invite friends.
  3. Catch up on OpenAI: Democratic Input to AI, the team’s submission and white paper, a deep dive into some of the team’s thinking about the future of democracy and governance.
  4. Share the Human Feedback Foundation: AI Community Resources working document and save hours of research.
  5. Subscribe to our mailing list to catch up on all things open ML and get invited to the LLM Reading Group events, AI Pub Night, and more.