The Stanford AI Playground is a user-friendly platform, built on open-source technologies, that allows you to safely try various AI models from vendors like OpenAI, Google, and Anthropic in one place. The AI Playground is managed by University IT (UIT) as a pilot for Stanford faculty, staff, students, postdocs, and visiting scholars.
AI Playground: An Introduction
Watch this short video to discover how exploring AI tools and technologies can benefit you, and to find out how the AI Playground works overall, with short demos of how to use prompts and understand replies.
Playground safety
Do not use high-risk data in your attachments or prompts.
And remember, while LLMs are advanced tools, they are not flawless and may produce errors or hallucinations. Use caution before trusting or using results verbatim.
Follow the steps to log in with Single Sign-On (SSO).*
* You might be taken to an Information Release settings page, especially if it's your first time visiting.
Select your consent duration preference.
Click Accept to keep going. (All data shown is kept within Stanford systems.)
Note: The Information Release is part of the university's authentication system and is used solely for logging into the platform. This information will not leave university systems and is only shared at the time of logging in. Learn more in the FAQ section under "Data privacy and security in the AI Playground."
Beneath your prompts and the responses generated by the LLMs, you'll find several more advanced features:
Read aloud - Reads the message aloud via a computer-generated voice.
Edit - Allows you to edit your prompts as well as the responses of the various models.
Save & Submit - Saves your edit and resubmits the information to regenerate the AI's response.
Save - Saves your edit without regenerating the response.
Cancel - Closes the edit window without saving changes.
Copy to clipboard - Copies the content of the selected message to your clipboard to be pasted into another window or program.
Regenerate - Prompts the model to create a new response without any additional context.
Fork - Creates a new conversation that starts from the specific message selected. This can be useful for refocusing the conversation, creating separate branching scenarios, preserving context, and more.
You can select your preferred models at the top of the page. You can also adjust and switch models in the middle of a conversation.
For example, you can start a conversation with an OpenAI model using the prompt "Write an article about topic A," then switch to the DALL-E-3 plugin to request an image to go with the article, then switch to Anthropic and request a list of headline options to go with the article.
OpenAI ChatGPT models
Choose between these available versions for OpenAI:
gpt-4o - Best for complex reasoning, image and PDF analysis, as well as advanced coding; strengths include deep comprehension.
gpt-3.5-turbo - Best for content creation, basic coding queries; strengths include speed and general use.
Google models
Choose between these available versions for Google:
gemini-1.5-pro - Best for advanced reasoning tasks, creative writing, detailed coding, and in-depth research. Great for understanding idioms and nuanced text from other languages. Strengths include high context limits, reasoning, and translation.
gemini-1.5-flash - Best for faster performance and quicker responses where speed is paramount. It trades off some of the Pro version's depth and complexity for increased efficiency. It's better for quick queries and snappy back-and-forths.
Plugins options
Plugin options include:
DALL-E 3 (Not available to students or Visiting Scholars at this time.) - Turns your natural text prompts into AI-generated images.
Google Imagen 3 (Not available to students or Visiting Scholars at this time.) - Generates exceptional photorealistic images based on text descriptions.
Google - The Google plugin provides an AI-assisted Google web search. You can use it paired with the GPT models, and even in conjunction with other plugins like the DALL-E image generation plugin.
Web Scraper - This plugin will read the content of live webpages provided in your prompt to help answer questions about the content within the single page specified. This plugin will not crawl entire websites.
Wolfram - Provides computational intelligence for solving complex mathematical equations.
Azure Assistants options
The Azure Assistants feature leverages OpenAI models made available through Microsoft Azure. The only available assistant right now is:
Data/Code Analyst - Uses OpenAI’s code interpreter to better process files with diverse data and formatting, or to generate files with data and images of graphs. This allows the Azure Assistant to solve challenging code problems and provide more robust data analytics capabilities.
Anthropic models
Choose between these available versions for Anthropic:
claude-3-5-sonnet - Best for analyzing or creating large bodies of text and for code analysis; strengths include high context limits.
claude-3-haiku - Best for quick instruction-based tasks with existing data; strengths include speed and high context limits.
Meta model
The only available model for Meta right now is:
Llama-3.1 - Best for adapting to different styles and tones and analyzing multilingual texts; strengths include flexibility in responding to various types of input.
LLM configuration options
You can also use the configuration options button to the right of the selected model to customize your settings.
Before entering your prompt, you can choose to modify the selected LLM's settings. This is optional, and most people can leverage the default options for the best experience.
Note that the settings might vary slightly for each model type, but many models will have the configuration options below (illustrated in the sketch after this list):
Max context tokens - Defines maximum input size. (1000 tokens is about 750 words)
Max output tokens - Defines maximum output size. (1000 tokens is about 750 words)
Temperature - Controls the "creativity" or randomness of the content being generated.
Top P - An alternative to temperature; restricts the model's word choices to the most probable options, narrowing or widening the range of possible responses.
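These settings correspond to the standard sampling parameters found in most LLM APIs. The Playground does not offer direct API access yet (see the FAQ), so the following is purely an illustrative sketch using the OpenAI Python client; the model name, prompt, and values shown are example assumptions, not Playground defaults.

```python
# Illustration only: the AI Playground has no direct API access yet, but its
# configuration options map onto the standard parameters shown below.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=[{"role": "user", "content": "Write a haiku about autumn."}],
    max_tokens=256,   # "Max output tokens": caps the length of the reply
    temperature=0.7,  # higher values = more random/creative output
    top_p=0.9,        # nucleus sampling: draw only from the smallest set of
                      # tokens whose combined probability reaches 90%
)
print(response.choices[0].message.content)
```

There is no direct analogue of "Max context tokens" in this call; it corresponds to trimming your conversation history so the input fits the model's context window. A common rule of thumb is to adjust either temperature or Top P, but not both at once.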
You have the ability to share conversations you have had with the LLMs. Your name and any messages you add to the conversation after creating the link stay private.
Share a link to a conversation:
In the left-side panel, to the right of the conversation's title, click the three dots for the More menu.
Click Share.
Click the Create link button to generate a shareable link.
Note: Be careful when using this feature. While you must log in to view the link, the conversation will become accessible to any authenticated Stanford users with the link.
Remove a link to a shared conversation:
Click your name in the bottom-left corner.
Click Settings.
In the new pop-up window, click Data controls.
Next to Shared links, click on the Manage button.
This will display all the chats with shared links and the date each was shared, and give you the option to Delete each link.
Note: Deleting a shared link is a permanent action and cannot be undone. Resharing the conversation would include any new information input or generated in the conversation since the original link was generated.
Explore settings to customize options that impact your entire AI Playground experience.
To access the settings menu:
Click your name in the bottom-left corner.
Click Settings.
General settings:
Theme - Allows you to change between Light and Dark mode.
Auto-Scroll to latest message on chat open - When enabled, this will automatically move your view to the last message in the conversation.
Hide right-most side panel - When enabled, this will remove the pop-up side panel menu.
Archived chats - Allows you to unarchive conversations or delete them from the system entirely.
Messages settings:
Press Enter to send message - When enabled, pressing the Enter key will send your message.
Save drafts locally - When enabled, text and attachments you enter in the chat will be saved locally as drafts. Drafts are deleted once the message is sent.
Default fork option - Defines what information is visible when forking conversations.
Use the default fork option - When enabled, applies the default fork option defined above to every conversation fork.
Data controls:
Import conversations from a JSON file - Allows you to import conversations exported from other GPT chat applications.
Shared links - Allows you to view and delete all shared conversations under your account.
Clear all chats - Deletes all conversations from the left-side panel. (Does not delete archived conversations.)
Account settings:
Profile picture - Allows you to upload a profile picture for yourself, which is shown in your conversations with the AI models. (Image must be under 2MB.)
Display username in messages - When enabled, your name is shown next to your prompts in your conversations. When disabled, prompts you send will be labeled as "You" in conversations.
Here, you'll find details for using several recently-released features.
Organize conversations (Bookmarks)
Use the Bookmark tool to organize your conversations by topic so you can find them easily in the future.
Step-by-step:
When viewing a conversation, click the bookmark icon in the top bar.
Click New Bookmark.
Fill in the Bookmark details and click Save.
To apply the bookmark to the current conversation (likely desired), check the box to “Add to current conversation.”
Now you can use the Bookmarks selector in the left-side panel to view only conversations with the applied bookmarks.
Manage, rename, and delete bookmarks in the right-side panel under the Bookmarks section.
View a short demo of applying an existing Bookmark and using it to filter conversations:
Compare models side-by-side
To compare the result of two models simultaneously for the same prompt, use the model compare feature.
Step-by-step:
Select one model for your comparison in the top menu drop-down options.
Click the plus icon in the top menu options.
You will notice the selected model is indicated in the prompt field.
Select the second model for your comparison in the top menu.
Type and enter your prompt into the prompt field.
You will see both results side-by-side.
Once you click out of the conversation, you will be able to see both responses by tabbing back and forth using the numbered tool below the conversation.
View a short demo of these steps:
Generate items using code with previews (Artifacts)
Explore the ability to generate code and corresponding prototypes using React, HTML5, three.js, WebGL, and more.
Step-by-step (for how to turn it on):
Click your name in the lower left corner.
Open Settings.
Click Beta features.
Click to switch on the Toggle Artifacts UI option.
Note: Artifacts works with each of the major models, but tends to work best with Anthropic and Azure OpenAI models. The artifacts feature does not work in conjunction with plugins at this time.
Play the following short video for prompt ideas and to understand more about how it works.
Feedback
Do you have questions, suggestions, or thoughts to share about the AI Playground? Reach out and let us know what's on your mind.
Generative Artificial Intelligence, also known as GenAI, is a type of AI that can generate original text, visual, and audio content by learning patterns from provided samples of human-generated data. Explore overall GenAI frequently asked questions.
What is an LLM?
LLM stands for Large Language Model, a type of Generative AI which specializes in text-based content and natural language. By enhancing the interface between the person requesting content and the generative AI tool creating it, LLMs can help generate more specific text, images, and custom code in response to prompts. Explore overall GenAI frequently asked questions.
What is a token? What is tokenization?
Large Language Models (LLMs) will break down your prompts, as well as the responses they generate, into smaller chunks known as tokens. This tokenization makes the data more manageable for the LLMs and assists them in processing your data. There are many methods of tokenization, and this process can vary between models. Some models may break your prompts down into individual words, subwords, or even single characters. This can change how your data is interpreted by the LLMs and is one of the many factors which can lead to receiving different answers to the same prompt.
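To make tokenization concrete, here is a minimal sketch using OpenAI's open-source tiktoken library. This is one tokenizer among many; the models in the Playground each use their own, so token counts will differ between vendors.

```python
# Minimal tokenization sketch using OpenAI's tiktoken library.
# Other vendors' models tokenize differently, so counts are approximate.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by many OpenAI models

text = "Tokenization breaks prompts into manageable chunks."
tokens = enc.encode(text)

print(tokens)                             # the integer IDs the model actually sees
print([enc.decode([t]) for t in tokens])  # the text fragment behind each token
print(f"{len(text.split())} words became {len(tokens)} tokens")
```

Running this shows that common words often map to a single token while rarer words split into several pieces, which is why 1,000 tokens works out to roughly 750 English words.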
What is the estimated energy cost of using GenAI?
Each GenAI model and tool uses energy differently. While the AI landscape continues to evolve, research and reporting currently indicate these energy costs:
Where can I learn more about the university's guidance on the use of GenAI tools?
Several university teams have been hard at work creating a plethora of information on the appropriate use of GenAI and guidance on how to use AI responsibly. For more information, please visit:
What are the benefits of the Stanford AI Playground?
The AI Playground is designed to let you learn by doing. With this environment, you can practice optimizing work-related tasks (not using high-risk data), such as content generation, administrative tasks, coding and debugging, analyzing data, and more.
What is coming next with the AI Playground?
Many exciting features are planned for the future, including robust usage reporting capabilities, custom agents, direct API access, AI assistants for ServiceNow, Salesforce, FinGate, and more.
Is the Stanford AI Playground a custom software or a product?
The AI Playground is built on open-source LibreChat software with a flexible infrastructure that allows room for the playground to change and grow over time.
Who can access the AI Playground?
As of October 1, all Stanford faculty, staff, students, postdocs, and visiting scholars are able to access the AI Playground. The Playground works with both full and base SUNet IDs. Please note: based on feedback received during this pilot phase, the pilot duration could change, or the pilot could conclude, based on what we learn.
To access the AI Playground, a SUNet ID must be a member of one of the following groups: stanford:faculty, stanford:faculty-affiliate, stanford:faculty-emeritus, stanford:faculty-onleave, stanford:faculty-otherteaching.
How often are the AI Playground LLMs and plugins getting updated?
This depends on the vendors who create the models. In the meantime, the UIT team is hard at work updating the middle layer between the website and the LLMs to provide new features and more tools for interacting with the AI models.
Where can I learn even more about the Stanford AI Playground in particular?
The UIT Tech Training team offers an interactive and beginner-friendly course titled "AI Playground 101". This two-hour class introduces key GenAI concepts, effective prompt engineering, and responsible AI use, and explores the AI Playground through instructor-led hands-on activities. The instructor for the class, Joshua Barnett, helped lead the development of the AI Playground and many other UIT initiatives on AI.
Why does the AI Playground sometimes provide different responses to the same prompts?
LLMs rely on random sampling during content generation. As a result, even when using the same prompt, this sampling can lead to different outputs. You may try adjusting the temperature in the configuration options for the selected LLM. This will help reduce the amount of "randomness" and "creativity" in the model's responses. The models selected also make a difference. Each model has different strengths and weaknesses, which can lead to different results. We encourage you to test out various models to find the ones that work best for your specific use cases.
What should I do if I encounter a message stating "Error connecting to server" when using the AI Playground?
If you encounter an error message that reads, "Error connecting to server, try refreshing the page," this means that your request may be using too many tokens. Try adjusting your prompt to break your request into smaller chunks instead of processing the entire request at once. If you are uploading an attachment, try removing some pages or columns/rows from the file. File attachments have a size limit of 512 MB per file.
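If you regularly work with long documents, one workaround is to split the text into token-bounded chunks before submitting each one as its own prompt. A minimal sketch, assuming the tiktoken library for counting; the 2,000-token budget and the long_report.txt filename are arbitrary examples, not documented Playground limits.

```python
# Rough helper for splitting long text into smaller, prompt-sized chunks.
# The 2,000-token budget is an arbitrary example, not a documented limit.
import tiktoken

def chunk_text(text: str, max_tokens: int = 2000) -> list[str]:
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    return [
        enc.decode(tokens[i : i + max_tokens])
        for i in range(0, len(tokens), max_tokens)
    ]

# "long_report.txt" is a hypothetical file; paste each chunk into its own prompt.
for i, chunk in enumerate(chunk_text(open("long_report.txt").read()), start=1):
    print(f"--- chunk {i}: {len(chunk)} characters ---")
```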
Why do the models sometimes have difficulty formatting text when responding to my prompt?
Issues with poor formatting in responses can occur for several reasons: context size, inference settings, bad training data, lack of specific instructions, and input constraints. You may also notice that some models, like Meta's Llama 3.1, are more prone to this issue than other models. If you encounter poorly formatted responses, you can try to: 1) reword your prompt, asking the model to format its response to make the output more readable, or 2) switch to a different model and try your prompt again.
Can I generate images and text to use commercially?
Currently, the AI Playground is not intended to create works for commercial use. Though it isn’t directly prohibited, UIT strongly recommends that you cite whenever you use content generated by any AI tool, especially when shared publicly. This includes text, pictures, code, video, audio, etc. This applies to materials which are wholly generated or extensively altered by AI Playground or any other AI tool.
If you do decide that you want to use something generated by the AI Playground commercially, UIT recommends that you review and follow each particular model's published policies when sharing content generated by the AI Playground. Remember that you are responsible for the content you generate. We suggest these resources for further learning and guidance:
What are the copyright implications when using the AI Playground?
The relationship between GenAI and copyright law is complex, relatively new, and evolving. In reality, many popular LLMs and GenAI tools have been trained on copyrighted materials, which are then hard to disentangle from the results they produce.
UIT recommends that you follow each particular model's published policies when sharing content generated by the AI Playground. UIT also recommends that you cite publicly shared material which was generated or extensively rewritten by the AI Playground. Remember that you are responsible for the content you generate. We suggest these resources for further learning and guidance:
Why am I no longer able to share conversations with anonymous viewers?
In order to meet university security and privacy guidelines, the ability for anonymous viewers to access shared conversations was removed. Moving forward, only authenticated users will be able to access shared conversations.
Can I utilize an API to directly access the models available within the Playground?
Not at this time, but this is on our roadmap. We anticipate being able to offer this service beginning in the first quarter of 2025.
Data privacy and security in the AI Playground
What is the Information Release page that appears when I first log in? What happens with this information?
The Information Release is part of a new check added to the university's authentication process for some applications when signing in for the first time. The information is used solely for logging into the platform; it will not leave university systems and is only shared at the time of logging in. The shared information is limited to what is displayed on screen (i.e., name, affiliation, email, user ID, and the workgroup allowing access). You can select among three options for how frequently you are prompted to approve that this information be shared: every time you log into this application, each time something changes in one of the data fields listed, or this time only. This information is only shared between the user directory and the platform running the AI Playground. Both systems are maintained by Stanford University IT, keeping the data within the university's environment.
Can anyone else at Stanford access the content I share with or generate via the AI Playground?
No. Information shared with the AI Playground is restricted to your account and not accessible by other people using the AI Playground. As with other university services, a small number of people within University IT (UIT) are able to access information shared with the AI Playground, but only do so if required. See the next FAQ for more information on those circumstances.
Does UIT review the content entered into or generated by the AI Playground?
No. The entire UIT team wants to make sure the AI Playground is a trusted space for the entire campus community. While we are building out more robust reporting capabilities, the focus of those reports is on usage trends (such as active users, top users, total number of conversations, etc.) and not specific conversations. In rare circumstances, as a result of investigations, subpoenas, or lawsuits, the university may be required to review data stored in university systems or provide it to third parties. You can learn more about the appropriate use of Stanford compute systems and these situations in the Stanford Admin Guide.
What are the cautions against entering high-risk data into the AI Playground?
The AI Playground is not currently approved for high-risk data, PHI, or PII. The UIT team is currently working with the Information Security Office (ISO) and the University Privacy Office (UPO) on a full review of the platform and all available models. Sharing high-risk data with LLMs does not currently meet university standards and could result in that data being used by vendors in training future models.
Does anyone outside of Stanford have access to the information I share with the AI Playground?
No. Where possible, UIT is working with vendors to ensure that the information you upload will not be retained by the vendor or alter the LLM in any way. Microsoft has committed to refrain from retaining any data shared with the OpenAI GPT models. Google has committed to refrain from retaining any data shared with the Gemini, Anthropic, and Meta models, except for data saved for abuse monitoring purposes.