
The Dartmouth Summer Research Project on Artificial Intelligence, held in Hanover, New Hampshire in 1956, established artificial intelligence as a formal field of study; AI ethics and safety, by contrast, did not gain institutional attention until decades later, when researchers at institutions such as Stanford University and MIT began systematic investigations into algorithmic bias and decision-making transparency.
Modern AI ethics addresses algorithmic fairness, transparency, accountability, and the prevention of harmful outputs. Researchers and policymakers, notably within the European Union, have developed regulatory frameworks intended to ensure that AI systems meet safety standards before deployment in critical sectors such as healthcare and criminal justice.