
Responsible AI at Stanford

Enabling innovation through AI best practices

Generative artificial intelligence (AI) is built using algorithms that can generate text, images, videos, audio, and 3D models in response to prompts. Popular examples of generative AI include ChatGPT and Google Gemini.

This type of AI can have both positive and negative impacts.

With this guide, learn how to use AI tools and models more confidently while keeping Stanford's data safe.


Safety measures

Let's take a look at several key ways we can increase privacy and security when using third-party AI platforms and tools.

Be aware

A critical first step in using AI is building your awareness of how the platform you are using works.

Importantly, any data put into third-party AI systems is transmitted to and stored on external third-party servers over which Stanford has no direct control. This introduces the risk that the data could be compromised or even lost.

You can build awareness of how large language models work with resources from the New York Times (free for Stanford students, faculty, and staff through Stanford Libraries).

Be careful

In general, err on the side of caution when it comes to what you input into a generative AI platform.

Consider disabling options related to saved history to prevent your information from being logged or tracked for model-training purposes.

Avoid inputting any sensitive data, such as Moderate or High Risk Data, whether using a personal or Stanford account, with any third-party AI platform or tool that is not covered by a Stanford Business Associate Agreement. Review Stanford-approved services by data risk classification.

Have more questions? Start a discussion with the Information Security Office (ISO).

Be transparent

If your final product is significantly influenced by an AI platform, consider informing people how you used AI and citing it appropriately.

Transparency will help your audiences understand more about the role of AI in your work and build trust in your integrity.

Risk factors

Let's now look at the main risk factors when working with generative AI platforms.

Compromising sensitive data

Typically, you need to provide some information, even if just a prompt, to get a result from generative AI.

Yet these inputs are often used to train the model and might inform future responses for others.

When creating prompts: Avoid inputting any sensitive data, such as Moderate Risk or High Risk Data.*

*This includes home addresses, passport numbers, personal health information, passwords, financial data, and intellectual property, as well as Controlled Unclassified Information (CUI), International Traffic in Arms Regulations (ITAR) data, proprietary source code, and any information that would be protected under a non-disclosure agreement (NDA).
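For illustration only, here is a minimal sketch (hypothetical, not a Stanford-provided tool) of a local pre-filter that catches a few obvious identifiers before a prompt ever leaves your machine. The patterns are deliberately simplistic and will not catch all sensitive data:

```python
import re

# Hypothetical, deliberately simple patterns; real sensitive-data detection
# requires far more than a few regular expressions.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "us_phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace matches with labeled placeholders before a prompt is sent."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

print(redact("Reach me at jdoe@example.com or 650-555-0100."))
# -> Reach me at [REDACTED email] or [REDACTED us_phone].
```

Even with a filter like this in place, the safest practice remains simply not putting sensitive data into the prompt in the first place.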

Inaccurate results

Information provided by AI can be incorrect, and AI can produce “hallucinations.”

An AI hallucination is false or incorrect information presented by AI in a very convincing manner. AI platforms typically will not express any skepticism about the false information, nor will they fact-check it or provide citations.

Additionally, if prompted for citations and references, AI platforms have been known to generate inaccurate (but convincing) source information. 
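Because fabricated references can look convincing, verify them yourself. As a minimal sketch, assuming Python and network access (this is not an official workflow), the doi.org handle API can confirm whether a cited DOI is even registered, though it cannot confirm that the DOI matches the title and authors the AI attached to it:

```python
import json
import urllib.request

def doi_exists(doi: str) -> bool:
    """Ask the doi.org handle API whether a DOI is registered.

    A responseCode of 1 means the handle exists. This checks only that
    the DOI is real, not that it matches the cited title or authors.
    """
    url = f"https://doi.org/api/handles/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp).get("responseCode") == 1
    except OSError:  # covers HTTP errors (e.g., 404 for unknown DOIs) and network failures
        return False

print(doi_exists("10.1038/nature14539"))       # a real DOI -> True
print(doi_exists("10.9999/not-a-real-doi"))    # -> False
```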

In short: We can't trust results from generative AI completely.

These cautions also apply to using AI to generate code, which can be badly constructed or insecure, can introduce backdoors, and can even risk intellectual property infringement.
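As a hedged illustration, a classic pattern rather than output from any specific tool: AI assistants have been observed building SQL queries by string concatenation, which invites injection. The parameterized version below is the safer idiom; the users table is hypothetical.

```python
import sqlite3

# Risky pattern sometimes produced by code assistants: building SQL by
# string concatenation, which allows SQL injection.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = "SELECT * FROM users WHERE name = '" + username + "'"
    # Input such as "' OR '1'='1" would return every row in the table.
    return conn.execute(query).fetchall()

# Safer idiom: a parameterized query lets the driver handle escaping.
def find_user_safe(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```

Review any AI-generated code with the same scrutiny you would apply to code from an unknown contributor.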


Explore best practices

Discover security and privacy considerations for working with generative AI today.

The information provided for these areas is not complete or exhaustive.

Data Privacy & Usage

- Avoid inputting data into generative AI about others that you wouldn’t want them to input about you.
- Avoid inputting any sensitive data, such as Moderate or High Risk Data, whether using a personal or Stanford account, with any third-party AI platform or tool that is not covered by a Stanford Business Associate Agreement. Review Stanford-approved services by data risk classification.
- If inputting Low Risk Data, think about whether you want it to be public.
- Opt out of sharing data for AI iterative learning wherever possible.
- If generative AI is to be used to interact with users, obtain their informed consent. Users must be informed about how their data is being used and have the option to opt out or delete their data.

Emerging Technology

- To keep meetings secure and private, avoid potentially risky third-party bots and integrations. Third-party tools may be able to scrape your calendar for information, transcribe or record meetings without your knowledge, save meetings in unknown places, and join meetings even when you’re not present.

Recommended Best Practices

- For content creation: If use of generative AI is permitted at all, always transparently cite its use.
- Always refer to the specific policies and statements of discipline-relevant journals, publishers, and professional groups.

Promoting Discourse

- Discuss opportunities for AI to contribute positively to your goals.
- Have conversations about the ethical issues and limitations related to AI use and development.


Tools being explored

View a list of generative AI tools being evaluated by University IT (UIT) for potential implementation in various contexts, according to the needs of the Stanford community.


Resources and help

When reviewing the resources and policies shown on this page, keep in mind that the legality and ethics of how AI is developed and used are still evolving.

For example, the growth of the AI industry has sparked an increase in “data scraping”: copying and using huge amounts of information from the internet to train new AI models. While this practice is common, its legality and potential consequences have not yet been clearly settled.

Other concerns related to generative AI are similarly under debate and scrutiny.

More resources and communities at Stanford

Stanford University community members are part of the groundbreaking research and work around generative AI.

You can explore more perspectives and ongoing efforts at these sites:

You can also build knowledge about AI with this newsletter series from the New York Times (available free for Stanford students, staff, and faculty through Stanford Libraries):

Help

For privacy questions:

For IT security questions:

Considering a generative AI third-party tool?

To report an incident (such as a data breach or system compromise):

Report an incident

Education and awareness

Grow your awareness, build an education plan, and keep track of AI-related developments worldwide with these resources:
