Innovation is integral to our mission and vision at Stanford. As you experiment with GenAI, continue to keep in mind Stanford’s commitment to data security and privacy.
To understand potential use cases, we recommend the framework in Mark McCormack's 2023 article, "EDUCAUSE QuickPoll Results: Adopting and Adapting to Generative AI in Higher Ed Tech." The article introduces four categories that can guide our use of generative AI: dreaming, drudgery, design, and development.
Each category suggests potential tools to use. You can also experiment with this guidance in the AI Playground (open to Stanford students, faculty, and staff as a pilot).
Familiarize yourself with these resources that explain aspects of Stanford's approach to data security and privacy, which can also help guide your approach to using GenAI.
Learn how to use AI tools and models more confidently while following best practices for data security and privacy.
Each one of us is responsible for taking precautionary measures to protect any data we use, access, or share at Stanford. Learn why and how to protect sensitive data.
Stanford provides helpful details about risk classifications that you can use to guide your practices, including examples and approved services.
As you try out these use cases, also consider our high-level prompt guidance.