Dan Hendrycks | |
---|---|
Born | 1994 or 1995 (age 28–29) |
Education | University of Chicago (B.S., 2018); UC Berkeley (Ph.D., 2022) |
Scientific career | |
Fields | Machine learning, AI safety |
Institutions | UC Berkeley; Center for AI Safety |
Dan Hendrycks (born 1994 or 1995[1]) is an American machine learning researcher. He serves as the director of the Center for AI Safety.
Hendrycks was raised in a Christian evangelical household in Marshfield, Missouri.[2][3] He received a B.S. from the University of Chicago in 2018 and a Ph.D. in computer science from the University of California, Berkeley in 2022.[4]
Hendrycks' research focuses on topics that include machine learning safety, machine ethics, and robustness.
He credits his participation in 80,000 Hours, a program linked to the effective altruism (EA) movement, with directing his career toward AI safety, though he has denied being an advocate of EA.[2]
In February 2022, Hendrycks co-authored recommendations for the US National Institute of Standards and Technology (NIST) to inform the management of risks from artificial intelligence.[5][6]
In September 2022, Hendrycks wrote a paper providing a framework for analyzing the impact of AI research on societal risks.[7][8] He later published a paper in March 2023 examining how natural selection and competitive pressures could shape the goals of artificial agents.[9][10][11] This was followed by "An Overview of Catastrophic AI Risks", which discusses four categories of risks: malicious use, AI race dynamics, organizational risks, and rogue AI agents.[12][13]
Hendrycks is the safety adviser to xAI, an AI startup founded by Elon Musk in 2023. To avoid potential conflicts of interest, he receives a symbolic one-dollar salary and holds no equity in the company.[1][14]
In 2024, Hendrycks published a 568-page book titled "Introduction to AI Safety, Ethics, and Society", based on courseware he had previously developed.[15]