| Dan Hendrycks | |
|---|---|
| Education | University of Chicago (B.S., 2018); UC Berkeley (Ph.D., 2022) |
| Fields | Machine learning safety, machine ethics, robustness |
| Institutions | UC Berkeley; Center for AI Safety |
Dan Hendrycks is an American machine learning researcher. He serves as the director of the Center for AI Safety.
Hendrycks received a B.S. from the University of Chicago in 2018 and a Ph.D. in computer science from the University of California, Berkeley, in 2022.[1]
Hendrycks's research focuses on machine learning safety, machine ethics, and robustness.
In February 2022, Hendrycks co-authored recommendations for the US National Institute of Standards and Technology (NIST) to inform the management of risks from artificial intelligence.[2][3]
In September 2022, Hendrycks published a paper providing a framework for analyzing the impact of AI research on societal risks.[4][5] In March 2023, he published a paper examining how natural selection and competitive pressures could shape the goals of artificial agents.[6][7][8] This was followed by "An Overview of Catastrophic AI Risks", which discusses four categories of risk: malicious use, AI race dynamics, organizational risks, and rogue AI agents.[9][10]