New Responsible AI Guide for Stanford Offers Current Best Practices
As you might already know, generative artificial intelligence (AI) is a fast-growing field with the potential to affect our everyday lives.
Sometimes called GenAI, generative AI refers to systems built on algorithms that can generate text, images, video, audio, and 3D models in response to prompts. Popular examples include ChatGPT and Google Bard.
With the rise of GenAI, we all have an opportunity—and a responsibility—to work together to shape our way forward. That’s why security and privacy experts at Stanford, including the Information Security Office (ISO), are offering considerations for generative AI on a new site: Responsible AI at Stanford (responsibleai.stanford.edu).
What you’ll find on the Responsible AI site
The new Responsible AI at Stanford site provides our community with guidance based on emerging best practices for today’s security and privacy context.
The site is not exhaustive, but it synthesizes current best practices and is designed to provide signposts for our work and decision-making.
The new site, developed through cross-disciplinary collaboration across Stanford, helps you get to know the current GenAI landscape so that you can innovate, build, and explore more responsibly.
Soon, the Responsible AI at Stanford site will also include more opportunities for engagement and discovery, with connections to specific bodies of research and work at Stanford.
Explore the Responsible AI site to consider how you might move forward in today’s new AI landscape.