What is GenAI?
Generative Artificial Intelligence, also known as GenAI, is a type of AI that can generate original text, visual, and audio content by learning patterns from samples of human-generated data.
Explore overall GenAI frequently asked questions.
What is an LLM?
LLM stands for Large Language Model, a type of Generative AI that specializes in text-based content and natural language. By enhancing the interface between the person requesting content and the generative AI tool creating it, LLMs can help generate more specific text, images, and custom code in response to prompts.
Explore overall GenAI frequently asked questions.
What is a token? What is tokenization?
Large Language Models (LLMs) break down your prompts, as well as the responses they generate, into smaller chunks known as tokens. This tokenization makes the data more manageable for the LLM and assists it in processing your data.
There are many methods of tokenization, and the process can vary between models. Some models may break your prompts down into individual words, subwords, or even single characters. This can change how your data is interpreted by the model and is one of the many factors that can lead to different answers to the same prompt.
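As a purely illustrative sketch (real LLM tokenizers use more sophisticated schemes such as byte-pair encoding), here is how word-level and character-level tokenization of the same prompt produce very different token counts:

```python
# Illustrative only: real LLM tokenizers (e.g., BPE subword tokenizers)
# are more sophisticated than these simple splits.

def word_tokens(text):
    """Word-level tokenization: split on whitespace, one token per word."""
    return text.split()

def char_tokens(text):
    """Character-level tokenization: one token per character."""
    return list(text)

prompt = "Tokenization varies"
print(word_tokens(prompt))        # ['Tokenization', 'varies'] -> 2 tokens
print(len(char_tokens(prompt)))   # 19 single-character tokens
```

The same 19-character prompt costs 2 tokens under one scheme and 19 under another, which is why token budgets differ so much between models.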
How large of a file can the AI Playground understand? What are the context limits of LLMs?
You are likely used to referring to data in terms of file size, such as bytes, megabytes (MB), or gigabytes (GB). However, the limits for LLMs are generally measured in the number of tokens needed rather than file size. This limit is generally referred to as the "context limit," and it varies from model to model.
Tokens and context limits do not translate directly to file size, partly because the method of tokenization varies across models. A very rough rule of thumb is that every four English characters translate to roughly one token, meaning each token averages around 4 bytes for typical English text. Most models have a context limit between 32,000 and 200,000 tokens. Even a model with a 1,000,000-token context limit would fail to process the entirety of a 5 MB text file.
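The rule of thumb above can be checked with quick arithmetic. This sketch uses the rough 4-bytes-per-token estimate from the paragraph (actual counts depend on each model's tokenizer):

```python
# Rough estimate only: actual token counts depend on each model's tokenizer.

def estimate_tokens(file_size_bytes, bytes_per_token=4):
    """Approximate token count for typical English text (~4 bytes/token)."""
    return file_size_bytes // bytes_per_token

five_mb = 5 * 1024 * 1024            # 5,242,880 bytes
print(estimate_tokens(five_mb))      # 1,310,720 estimated tokens
# That already exceeds a 1,000,000-token context limit, so the whole
# file cannot fit in a single request.
```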
What is the estimated energy cost of using GenAI?
Each GenAI model and tool uses energy differently. While the AI landscape continues to evolve, research and reporting currently indicate these energy costs:

For sources and additional reading, visit:
Where can I learn more about the university's guidance on the use of GenAI tools?
Several university teams have been hard at work creating a wealth of information on the appropriate use of GenAI and guidance on how to use AI responsibly. To make these easier to find for the Stanford community, the UIT team has compiled a list of university-specific resources on AI. You can view the complete list at GenAI Topics and Services List.
Is there a list of Stanford approved AI tools?
The Responsible AI at Stanford webpage hosts the GenAI Tool Evaluation Matrix. This matrix contains a list of AI tools, what they do, their availability, and their status at the university. Please refer to that resource for more information about what tools are approved for use with Stanford data.
Am I allowed to use DeepSeek and is it safe?
A secure, local version of DeepSeek is available within the AI Playground. As a reminder, high-risk data in attachments or prompts is not approved for use within any AI Playground model.
Outside of the AI Playground, please refrain from using models like DeepSeek-R1, hosted by the non-US company DeepSeek, for any Stanford business. This includes connecting to DeepSeek APIs over the internet or using the DeepSeek mobile application to process confidential data, such as Protected Health Information (PHI) or Personally Identifiable Information (PII). Currently, there is no enforceable contract in place between Stanford and DeepSeek that meets the risk management standards for HIPAA compliance and cybersecurity safeguards. As a result, using non-US DeepSeek models poses unacceptable risks to data security and regulatory compliance.
What are the benefits of the Stanford AI Playground?
The AI Playground is designed to let you learn by doing. With this environment, you can practice optimizing work-related tasks (not using high-risk data), such as: content generation, administrative tasks, coding and debugging, analyzing data, and more.
The Playground is protected by Stanford's single sign-on system, and all information shared with the AI Playground stays within the Stanford environment.
Can the AI Playground generate images, charts, graphs, etc. like ChatGPT and Anthropic's Claude?
Yes. The AI Playground is capable of many advanced features like image generation, creating charts and graphs based on provided data, as well as writing and rendering programs on screen. Below are a few screenshots of these features in action.
Image generation:

Creating a chart from a provided data set:

Building a dashboard from example and attached data:

Is there a roadmap? What is coming next with the AI Playground?
The team has many exciting new features planned for the future. Some of these new features include:
- Direct API access to models
- Saved prompt sharing
- Custom AI agents
- Slide deck generation
- AI assistants for the Admin Guide, FinGate, ServiceNow, SalesForce, and more
Due to the speed at which GenAI tools are developing, it can be difficult to maintain a list of models on the roadmap. Below is a list of some models we are reviewing for a possible future release within the AI Playground:
- Scholar AI Research Assistant
- Perplexity
- Midjourney
- Sora
How do I request an API key for direct access to the models in the AI Playground?
Direct API access to the models is expected to be released on April 7, 2025. We will update this FAQ item with links to the service page and the API key request form once available.
How often is the AI Playground updated? How often are new models released?
The UIT team wants to make sure the Playground is a safe and useful space for the Stanford community. As a result, the AI Playground is updated regularly. This includes feature updates, new models, platform updates, security enhancements, and performance improvements.
The release of new AI models is largely driven by feedback from the Stanford community. For more information on our plans for the future, check the FAQ entry above titled, "Is there a roadmap? What is coming next with the AI Playground?"
For information about past updates, please review the AI Playground Release Notes page. That page contains a list of every major update to the AI Playground since its release in the summer of 2024.
Does the AI Playground have conversation sharing or team collaboration capabilities?
You are able to share your conversations with other Stanford users. The AI Playground team is also working on a feature which will allow you to share your saved prompts with other Stanford users.
More advanced collaboration capabilities are not available within the AI Playground at this time.
Is the Stanford AI Playground a custom software or a product?
The AI Playground is built on the open-source LibreChat platform, with a flexible infrastructure behind the scenes that gives the UIT team room to change and grow the Playground over time.
Who can access the AI Playground?
As of October 1, all Stanford faculty, staff, students, postdocs, and visiting scholars are able to access the AI Playground. The Playground works with both full and base SUNet IDs.
Please note: the pilot duration could change or conclude based on feedback received and what we learn during this pilot phase.
To access the AI Playground, a SUNet ID must be a member of one of the following groups:
stanford:faculty
→ stanford:faculty-affiliate
→ stanford:faculty-emeritus
→ stanford:faculty-onleave
→ stanford:faculty-otherteaching
stanford:staff
→ stanford:staff-academic
→ stanford:staff-emeritus
→ stanford:staff-onleave
→ stanford:staff-otherteaching
stanford:staff-affiliate
stanford:staff-casual
stanford:staff-retired
stanford:staff-temporary
stanford:staffcasual
stanford:stafftemp
stanford:student
→ stanford:student-onleave
stanford:student-ndo
stanford:student-postdoc
stanford:affiliate:visitscholarvs
stanford:affiliate:visitscholarvt
Where can I learn even more about the Stanford AI Playground in particular?
The UIT Tech Training team offers an interactive and beginner-friendly course titled "AI Playground 101". This two-hour class introduces key GenAI concepts, effective prompt engineering, and responsible AI use, and explores the AI Playground through instructor-led hands-on activities. The instructor for the class, Joshua Barnett, helped lead the development of the AI Playground and many other UIT initiatives on AI.
Learn more about and sign up for this course at: Stanford AI Playground 101 Class Overview
Are there any limitations on the devices, OS, or browsers I can use to access the AI Playground?
There are no device- or OS-specific limitations for accessing the AI Playground. Any device capable of running an up-to-date, modern browser can use the AI Playground.
Why does the AI Playground sometimes provide different responses to the same prompts?
LLMs use random sampling when generating content. As a result, even the same prompt can lead to different outputs. You can try adjusting the temperature in the configuration options for the selected LLM; lowering it reduces the amount of "randomness" and "creativity" in the model's responses. The model selected also makes a difference: each model has different strengths and weaknesses, which can lead to different results. We encourage you to test out various models to find the ones that work best for your specific use cases.
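As a sketch of why temperature matters (not the Playground's actual implementation), models turn raw scores for candidate next tokens into probabilities before sampling. Dividing the scores by a lower temperature sharpens that distribution, so sampling becomes nearly deterministic:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores into sampling probabilities.

    Lower temperature sharpens the distribution (less random);
    higher temperature flattens it (more random).
    """
    scaled = [score / temperature for score in logits]
    m = max(scaled)                               # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                          # hypothetical scores for 3 tokens
print(softmax_with_temperature(logits, 1.0))      # probabilities spread across tokens
print(softmax_with_temperature(logits, 0.1))      # nearly all mass on the top token
```

At temperature 1.0 the second-best token still gets meaningful probability, which is why repeated runs of the same prompt can diverge; at temperature 0.1 the top token dominates and outputs become far more repeatable.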
Is the AI Playground always accurate? Why is information provided by the AI Playground sometimes incorrect?
The current nature of Generative AI is such that these tools can make mistakes. Always verify the information given to you by the AI Playground or any other AI tool. These tools will sometimes generate incorrect information and relay it in a way that implies the answer is correct. Double-check any information generated by AI before sharing it or taking action.
What should I do if I encounter a message stating "Error connecting to server" when using the AI Playground?
If you encounter an error message that reads, "Error connecting to server, try refreshing the page," this means that your request may be using too many tokens. Try adjusting your prompt to break your request into smaller chunks instead of processing the entire request at once. If you are uploading an attachment, try removing some pages or columns/rows from the file. File attachments have a size limit of 512 MB per file.
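The "break your request into smaller chunks" advice above can be sketched with a hypothetical helper (the chunk size here is arbitrary; choose one that keeps each request comfortably under the model's limits):

```python
# Hypothetical helper: split a long prompt into smaller pieces so each
# can be sent as its own request instead of one oversized request.

def chunk_text(text, max_chars=2000):
    """Break text into pieces of at most max_chars characters."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

long_prompt = "x" * 5000
chunks = chunk_text(long_prompt)
print(len(chunks))                 # 3 chunks: 2000 + 2000 + 1000 characters
```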
What file types are supported by the AI Playground?
The AI Playground works best with the following file types:
PDF files
→ .pdf files (normal, unsecured files only)
Comma Separated Values files
→ .csv files
Microsoft Excel files
→ .xls files
→ .xlsx files
Microsoft Word files
→ .doc files
→ .docx files
Image files
→ .png files
→ .jpg files
I uploaded a file but the AI Playground says it can't find it. What should I do?
Your upload may be timing out. If this happens, wait ten minutes, refresh the page, and try again. If the problem continues, please open a support ticket with the error received and links to the conversations in which the errors occurred.
Why do the models sometimes have difficulty formatting text when responding to my prompt?
Issues with poor formatting in responses can occur for several reasons: context size, inference settings, bad training data, lack of specific instructions, and input constraints. You may also notice that some models, like Meta's Llama 3.1, are more prone to this issue than others. If you encounter poorly formatted responses, you can try to:
1) reword your prompt, asking the model to format its response to make the output more readable.
2) switch to a different model and try your prompt again.
Can I generate images and text to use commercially?
Currently, the AI Playground is not intended to create works for commercial use. Though it isn’t directly prohibited, UIT strongly recommends that you cite whenever you use content generated by any AI tool, especially when shared publicly. This includes text, pictures, code, video, audio, etc. This applies to materials which are wholly generated or extensively altered by AI Playground or any other AI tool.
If you do decide that you want to use something generated by the AI Playground commercially, UIT recommends that you review and follow each particular model's published policies when sharing content generated by the AI Playground. Remember that you are responsible for the content you generate.
We suggest these resources for further learning and guidance:
Can you provide some examples of how to cite when content is generated by AI?
UIT recommends citing materials which were generated or extensively rewritten by any AI technology. Below are just some examples of what this could look like. Your citations are not required to look or be worded exactly like the examples below.
Example 1: AI generated content in documents

Example 2: AI generated content in presentations

Example 3: AI generated content in code repositories

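As an additional illustration for the code-repository case (this is a hedged sketch of one possible wording, not official UIT citation language; the model name and date are placeholders you would fill in yourself):

```python
# Portions of this module were generated with the assistance of the
# Stanford AI Playground (model and date noted by the author) and were
# reviewed and edited by a human before use.

def example_function():
    """Placeholder showing where AI-assisted code might carry a citation."""
    return True
```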
What are the copyright implications when using the AI Playground?
The relationship between GenAI and copyright law is complex, relatively new, and evolving. In reality, many popular LLMs and GenAI tools have been trained on copyrighted materials, which are then hard to disentangle from the results they produce.
UIT recommends that you follow each particular model's published policies when sharing content generated by the AI Playground. UIT also recommends that you cite publicly shared materials which were generated or extensively rewritten by the AI Playground. Remember that you are responsible for the content you generate.
We suggest these resources for further learning and guidance:
Why am I no longer able to share conversations with anonymous viewers?
In order to meet university security and privacy guidelines, anonymous viewers can no longer access shared conversations. Moving forward, only authenticated users will be able to access shared conversations.
Can I utilize an API to directly access the models available within the Playground?
Not at this time, but this is on our roadmap. We anticipate being able to offer this service beginning in the first quarter of 2025.
What is the Information Release page that appears when I first log in? What happens with this information?
The Information Release page is part of the university's authentication system and is used solely for logging into the platform. It is a check added to the university's authentication process for some applications when signing in for the first time. The shared information is limited to what is displayed on screen below (i.e., name, affiliation, email, user ID, and the workgroup allowing access), is only shared at the time of logging in, and does not leave university systems. You can select among three options for how frequently you are prompted to approve the release: every time you log into this application, each time something changes in one of the data fields listed, or this time only. The information is only shared between the user directory and the platform running the AI Playground; both systems are maintained by Stanford University IT, keeping the data within the university's environment.
Can anyone else at Stanford access the content I share with or generate via the AI Playground?
No. Information shared with the AI Playground is restricted to your account and not accessible by other people using the AI Playground.
As with other university services, a small number of people within University IT (UIT) are able to access information shared with the AI Playground, but only do so if required. See the next FAQ for more information on those circumstances.
Does UIT review the content entered into or generated by the AI Playground?
No. The entire UIT team wants to make sure the AI Playground is a trusted space for the campus community. While we are building out more robust reporting capabilities, the focus of those reports is on usage trends (such as active users, top users, and total number of conversations) and not specific conversations. In rare circumstances, as a result of investigations, subpoenas, or lawsuits, the university may be required to review data stored in university systems or provide it to third parties. You can learn more about the appropriate use of Stanford compute systems and these situations in the Stanford Admin Guide.
What are the cautions against entering high-risk data into the AI Playground?
The AI Playground is not currently approved for high-risk data, PHI, or PII data. The UIT team is currently working with the Information Security Office (ISO) and the University Privacy Office (UPO) on a full review of the platform and all available models.
Does anyone outside of Stanford have access to the information I share with the AI Playground?
No. Where possible, UIT is working with vendors to ensure that the information you upload will not be retained by the vendor or alter the LLM in any way. Microsoft has committed to refrain from retaining any data shared with the OpenAI GPT models. Google has committed to refrain from retaining any data shared with the Gemini, Anthropic, and Meta models, except for data saved for abuse monitoring purposes.