
Speakers and Talks

photo of Amy Steagall

Amy Steagall

Chief Information Security Officer  |  Stanford University IT

Amy Steagall is the Chief Information Security Officer at Stanford University, overseeing Stanford’s efforts to protect its computing and information assets and to comply with information-related laws, regulations, and policies.

Prior to joining the Stanford team in 2020, Amy retired from the United States Air Force after 25 years of distinguished and honorable service. Throughout her career, she led large organizations and teams, culminating in her role at the Joint Force Headquarters-Cyber at NSA Texas. There she steered cyber mission teams operating in complex security environments and specializing in offensive and defensive cyberspace operations.

Amy holds a Bachelor of Science in Cybersecurity Management and Policy from the University of Maryland.

When: Thursday, October 26 | 9:15 - 9:30 a.m.

Session: Welcoming Remarks

photo of David Barnes

David M. Barnes

Brigadier General, US Army (Retired)

Chief Artificial Intelligence Ethics Officer  |  US Army Artificial Intelligence Integration Center

Recently named one of Forbes’ “Top 15 AI Ethicists,” BG(R) Dave Barnes empowers senior business and government leaders seeking to develop, implement, and lead strategic and responsible digital transformation, aligning business strategy with efforts to mitigate the ethical, legal, and societal risks of AI that affect their business and investment decisions.

He retired from the US Army after 32 years of distinguished service, most recently serving as Professor, United States Military Academy (PUSMA), in the Department of English and Philosophy at West Point, NY, and as Chief AI Ethics Officer for the US Army’s Artificial Intelligence (AI) Integration Center (AI2C).

As Chief AI Ethics Officer, he advises the Army on incorporating ethics, law, and policy into Army AI design, development, testing, and employment, directly contributing to the development of the DoD AI Ethical Principles and the 2020 US Army AI Strategy. He directs the effort to develop and operationalize an Army Responsible AI strategy.

He has provided expert assistance to DARPA, the Chief Digital and Artificial Intelligence Office (CDAO), Defense Innovation Board (DIB), National Security Commission on AI, OSD Autonomy Community of Interest (CoI), OSD Biotechnology CoI’s Ethical, Legal, and Social Implications (ELSI) Subcommittee, and others. He also served as the US Army’s Senior Service Representative for the USSOCOM Commander’s 2019 Comprehensive Review of Culture and Ethics.

He is an internationally renowned expert in responsible and assured AI, human-machine teaming, and autonomy. He is a member of the Editorial Boards for AI and Ethics and the Journal of Military Ethics, and a Non-resident Fellow at the Stockholm Centre for the Ethics of War and Peace. He is the author of The Ethics of Military Privatization: The US Armed Contractor Phenomenon and multiple articles on the ethics of armed conflict and the ethics of emerging technology; he has been an invited panelist and speaker at over 100 national and international events.

BG(R) Barnes graduated from the United States Military Academy with a BS in Aero-Mechanical Engineering. He holds an MA in philosophy from the University of Massachusetts, Amherst, and a PhD in philosophy from the University of Colorado, Boulder.

When: Thursday, October 26 | 9:30 - 10 a.m.

Session: We are the LIMFAC: How Human Decisions Shape Every AI System

LIMFAC is military shorthand for “limiting factor,” and when it comes to AI and AI ethics, we are the LIMFAC. While much recent news has focused on technological failures of AI or on existential worries, the reality is that it is really about people: people are the ones developing, deploying, and using this technology today, and they will be the ones doing so in the future. Each of our decisions matters, regardless of where we sit in the AI lifecycle, whether as AI experts, lawmakers, or consumers. Even an individual optimizing code brings her particular worldview, consciously or not, into her optimization decisions. In a sense, she is making a values-laden decision that can ripple downstream through the development process. Furthermore, an AI system is more than data, algorithms, and computing power. We are part of that AI team; thus, we are the LIMFAC.
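To make the abstract’s point concrete, here is a minimal, hypothetical sketch (ours, not the speaker’s): even a single tuning constant in a fraud model is a human, values-laden choice about who bears the cost of the model’s errors.

    # Hypothetical illustration (not from the session): the threshold below is
    # a human choice, not a property of the model. Lowering it catches more
    # fraud but wrongly holds more legitimate transactions; raising it does
    # the reverse.
    FRAUD_THRESHOLD = 0.5  # a values-laden decision about who bears error costs

    def flag_transaction(fraud_score: float) -> bool:
        """Hold a transaction for human review when the model's score is high."""
        return fraud_score >= FRAUD_THRESHOLD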
photo of Amina Al Sherif

Amina Al Sherif

Technical Engineering Lead in Machine Learning and Generative AI  |  Google

Amina Al Sherif is a technical engineering lead in machine learning and generative AI at Google. She immigrated to the United States in 2010 and is a first-generation Arab American. She has spent twelve years in the Department of Defense, serving as an Army officer in the Reserves and the North Carolina National Guard as a tactical cyber operator and cyber targeter.

Amina has previously worked as an early-stage startup executive focused on artificial intelligence innovation and products. Prior to her startup experience, Amina worked at Google as a Cloud Engineer bringing the innovation and power of Google’s capabilities to the Department of Defense and Intelligence Community. She focused on big data and machine learning, gaming and simulation, and data privacy and security in cloud computing and analytics.

Amina holds a BA in Linguistics and Arabic from the University of Mississippi and a Master’s in Professional Studies in Cybersecurity and Information Sciences from Penn State University. She is currently pursuing a Ph.D. in astrophysics at the University of Texas at San Antonio.

When: Thursday, October 26 | 10:30 - 11:15 a.m.

Session: Supercharging Security with Generative AI

Amina will delve into how the security landscape is changing with the arrival of AI and how Google is thinking about using GenAI to solve security challenges from an enterprise and product perspective. She will also cover the emerging field of GenAI SecOps and demonstrate how GenAI can be used to unlock data and insights in the security space.
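As one illustration of the kind of workflow the abstract gestures at, here is a rough sketch of LLM-assisted alert triage. It is our assumption, not Google’s implementation; the llm() stub stands in for whatever generative-model API you use.

    def llm(prompt: str) -> str:
        # Stand-in for a real generative-model call (via your provider's SDK);
        # deliberately left unimplemented in this sketch.
        raise NotImplementedError("wire up your generative-model API here")

    def triage_alert(raw_log: str) -> str:
        # Ask the model to turn an opaque log excerpt into an analyst-ready summary.
        prompt = (
            "You are a SOC analyst. Summarize the log excerpt below, rate its "
            "severity (low/medium/high), and suggest one next investigative step:\n\n"
            + raw_log
        )
        return llm(prompt)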
photo of Tina Thorstenson

Tina Thorstenson

Executive Strategist, Industry Business Unit  |  CrowdStrike

Tina Thorstenson is a long-time CISO and now an Executive Strategist leading the Industry Business Unit at CrowdStrike. She has spent decades designing innovative approaches to protecting organizations, most recently at Arizona State University.

When: Thursday, October 26 | 1 - 1:15 p.m.

Session: Lightning Talk: ChatGPT: Be on the Inside When the Machines Take Over the World

An interactive discussion on AI, and specifically on ChatGPT’s capabilities and use cases. Is the immense potential of AI worth the risk? Tina Thorstenson, former Deputy CIO & CISO for ASU, will deliver a dynamic lightning talk about the things that can keep security teams up at night, concerns specific to higher education, and the immediate benefits for every organization and its community.
photo of Derek Chen

Derek Chen

Trust and Safety Manager  |  OpenAI

Derek Chen is a Trust and Safety Manager at OpenAI, focused on proactively detecting and responding to new abuses of OpenAI’s products. He was the first hire for Trust and Safety, and previously worked at Google, BCG, and Meta (a YC-backed AR/VR startup).

When: Thursday, October 26 | 1:15 - 1:30 p.m.

Session: Lightning Talk: Safety Detection and Response at OpenAI

Derek will talk about what safety looks like at OpenAI, what kinds of misuse we're seeing, and where generative models can help mitigate abuse.
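For a taste of the building blocks involved, here is a minimal sketch that screens user content with OpenAI’s public moderation endpoint, assuming the openai Python SDK (v1) and an OPENAI_API_KEY in the environment. This is our illustration of automated abuse detection, not a description of OpenAI’s internal tooling.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def is_flagged(text: str) -> bool:
        """Screen user-generated content with OpenAI's public moderation endpoint."""
        response = client.moderations.create(input=text)
        return response.results[0].flagged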
photo of Scott Hellman

Scott Hellman

Supervisory Special Agent  |  The FBI San Francisco Division

Supervisory Special Agent Scott Hellman has been investigating criminal and national security cybercrime with the FBI for 15 years. He holds a bachelor’s degree in chemistry and a J.D., and now leads a team of cybercrime investigators in the San Francisco Bay Area.

When: Thursday, October 26 | 1:30 - 1:45 p.m.

Session: Lightning Talk: FBI 2023 Cyber Threat Briefing in 15 Minutes

FBI Special Agent Scott Hellman will touch on current cybercrime trends, how AI is being used to attack and defend, and what red flags you should look for to better protect yourself.
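As a toy illustration of “red flags” (ours, not the FBI’s guidance), a crude keyword screen like the following shows the kind of signals worth watching for; real phishing detection is of course far more involved.

    import re

    # Hypothetical red-flag patterns often cited for phishing emails.
    RED_FLAGS = [
        r"urgent|immediately|within 24 hours",  # manufactured urgency
        r"verify your (account|password)",      # credential-harvesting language
        r"wire transfer|gift card",             # common payment-fraud asks
    ]

    def red_flag_count(email_text: str) -> int:
        """Count how many red-flag patterns appear in an email body."""
        text = email_text.lower()
        return sum(bool(re.search(pattern, text)) for pattern in RED_FLAGS)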
photo of Jack Cable

Jack Cable

Senior Technical Advisor  |  CISA

Jack Cable is a Senior Technical Advisor at CISA. Prior to that, Jack worked as a TechCongress Fellow for the Senate Homeland Security and Governmental Affairs Committee, advising Chairman Gary Peters on cybersecurity policy, including election security and open source software security. He previously worked as a Security Architect at Krebs Stamos Group. Jack also served as an Election Security Technical Advisor at CISA, where he created Crossfeed, a pilot to scan election assets nationwide. Jack is a top bug bounty hacker, having identified over 350 vulnerabilities in hundreds of companies. After placing first in the Hack the Air Force bug bounty challenge, he began working at the Pentagon’s Defense Digital Service. Jack holds a bachelor’s degree in Computer Science from Stanford University and has published academic research on election security, ransomware, and cloud security.

When: Thursday, October 26 | 1:45 - 2 p.m.

Session: Lightning Talk: Artificial Intelligence Needs to be Secure by Design

Jack will talk about how Artificial Intelligence, as software, needs to be Secure by Design, and how this fits into CISA’s Secure by Design initiative.
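One textbook example of the Secure by Design idea (our illustration, not taken from the talk): parameterized SQL queries remove injection risk by construction, rather than trusting every caller to sanitize input.

    import sqlite3

    def find_user(conn: sqlite3.Connection, username: str) -> list:
        # Insecure by default would be string formatting:
        #   conn.execute(f"SELECT * FROM users WHERE name = '{username}'")
        # Secure by design: the driver keeps user data separate from query code.
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (username,)
        ).fetchall()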