The CSIS Wadhwani Center for AI and Advanced Technologies is pleased to host Elizabeth Kelly, Director of the United States Artificial Intelligence Safety Institute at the National Institute of Standards and Technology (NIST) within the U.S. Department of Commerce. This event will be livestreamed on July 31 at 10:00 AM ET. Director Kelly will be joined by Gregory C. Allen, Director of the Wadhwani Center for AI and Advanced Technologies at CSIS.
The U.S. AI Safety Institute (AISI) was announced by Vice President Kamala Harris at the UK AI Safety Summit in November 2023. The institute was established to advance the science, practice, and adoption of AI safety in the face of risks including those to national security, public safety, and individual rights. Director Kelly will discuss the U.S. AISI’s recently released Strategic Vision, its activities under President Biden’s AI Executive Order, and its approach to the AISI global network announced at the AI Seoul Summit.
Prior to becoming Director of the U.S. AISI, Elizabeth Kelly served as a Special Assistant to the President for Economic Policy at the White House National Economic Council, where she was a driving force behind President Biden's AI Executive Order.
This event is made possible by general support to CSIS.
———————————————
A nonpartisan institution, CSIS is the top national security think tank in the world.
Visit www.csis.org to find more of our work as we bring bipartisan solutions to the world’s greatest challenges.
Want to see more videos and virtual events? Subscribe to this channel and turn on notifications: https://cs.is/2dCfTve
Follow CSIS on:
• Twitter: www.twitter.com/csis
• Facebook: www.facebook.com/CSIS.org
• Instagram: www.instagram.com/csis/
date: 2024-07-31 15:24:50
duration: 00:49:14
author: UCr5jq6MC_VCe1c5ciIZtk_w
A Conversation with Elizabeth Kelly, the Director of the U.S. AI Safety Institute
Elizabeth Kelly, the inaugural director of the U.S. AI Safety Institute, joined the Center for Strategic and International Studies (CSIS) to discuss the institute’s mission and goals. The AI Safety Institute aims to advance the science of AI safety by assessing and mitigating the risks associated with advanced AI models.
Kelly explained that the institute is focused on testing and evaluating AI models to ensure their safety and reliability. The institute is working closely with leading companies to develop guidelines for testing and evaluation, as well as conducting research to better understand how AI models work and how to mitigate their risks.
The institute is also partnering with international organizations, including the UK AI Safety Institute, to share knowledge and best practices. Kelly emphasized the importance of international collaboration in addressing the global challenges posed by AI.
In the next 12 months, the AI Safety Institute plans to begin testing frontier models prior to deployment, release guidance on potential misuse of dual-use foundation models, and update guidance on synthetic content tools and techniques.
Kelly highlighted a convening the institute will host in November, which will bring together technical experts to discuss benchmarks, capabilities, and risk mitigations. The event will also include a broader range of stakeholders from academia, industry, and civil society.
Through this research and its international partnerships, the AI Safety Institute aims to ensure that AI is developed and deployed in a responsible and safe manner.