Public AI Incidents


Please help define the field of AI Incident research by contributing to the initial set of publicly known cases in which deployed AI systems caused, or very nearly caused, harm in the real world ("incidents").

A few examples: an autonomous car kills a pedestrian; a trading algorithm triggers a market "flash crash" in which billions of dollars change hands; a chatbot advises a mentally ill person to self-harm.

When in doubt about whether an event qualifies as an incident, please submit it! This project is intended to converge on a shared definition of "AI Incident" through exploration of the candidate incidents submitted by the partner community.

An initial set of 1,000 incident records has been collected to date and is indexed in the password-protected area of the site. For more details or to collaborate, please email us.