Responsible Agentic AI
AI agents can plan, take multi-step actions, use tools, and operate with varying degrees of autonomy. They can send emails, edit files, run code, query systems, submit tickets, or interact with people and services.
Agentic AI can accelerate work. However, it can also take actions you didn't intend. This puts data, systems, and people at risk.

This guide helps Stanford community members use agentic AI more confidently while keeping Stanford’s data, systems, and reputation safe.

Safety measures
Let's take a look at several ways we can increase privacy and security when using agentic AI.
Before using an agent, always identify what data sources it can read, what actions it can take, where logs and outputs are stored, and who can access them.
Be aware
A critical first step: understand what your agent can access and what it can do.
- Agents may connect to external services (e.g., plugins) where Stanford has limited control. Data may be transmitted, stored, or used for model improvement depending on vendor terms.
- Agents often inherit your permissions. If your account can access a Drive folder, email list, or research system, the agent may be able to access these as well.
- Agents can take actions at scale (bulk edits, mass emails, large data pulls), increasing the potential impact of mistakes.
- Multi-step action is a possibility. Even well-configured agents can misunderstand goals, choose the wrong tool, or execute unintended steps.
Be careful
Limit autonomy and limit access to data.
Use least privilege
- Give the agent the minimum access needed (scoped folders, limited mailboxes, least-privileged API keys).
- Choose read-only access unless write access is truly necessary.
- Use separate service accounts for agents where feasible - avoid connecting agents directly to personal/admin accounts.
Use "human-in-the-loop" controls
- Require confirmation before external actions (sending messages, changing permissions, publishing content, running deletions, deploying code, making purchases).
- Enable "dry-run" modes when available (validate in a test environment).
- Start with small batches before scaling.
Protect sensitive data
- Avoid providing Moderate or High Risk Data to agentic AI tools that are not covered by a Stanford Business Associates Agreement (or other Stanford-approved agreement/assessment).
- Avoid placing sensitive data in prompts, agent memory, tool descriptions, or long-term agent "notes."
Reduce persistence
- Disable or limit memory, conversation history, and training/feedback options where possible.
- Understand what gets logged (prompts, tool calls, retrieved documents, outputs).
Constrain tool use
- Restrict which tools/plugins/connectors an agent can use.
- Block high-risk tools (web browsing with posting ability, email sending, calendar modification, financial tools) unless needed and approved.
Have questions or want to review a proposed use case? Start a discussion with ISO.
Be transparent
If your deliverable, decision, or process is significantly influenced by an agent:
- Disclose the role of agentic AI and what it did (e.g., drafted content, summarized sources, executed workflow steps, generated code).
- Document key settings (model/tool, autonomy level, data sources used, human review steps).
- Cite appropriately when agent output materially shaped the work.
Transparency helps build trust and supports reproducibility and accountability.

Risk factors
Let's now look at the main risk factors when working with generative AI platforms.
Compromising sensitive data (now amplified)
Agents can unintentionally expose data.
Data exposure can happen through these paths, or others:
- Tool calls that send content to external vendors
- Over-broad retrieval (pulling entire folders instead of a single file)
- Misconfigured sharing/permissions changes
- Auto-generated emails/messages that include confidential details
- Logs, memory, or telemetry retained by third parties
Avoid inputting sensitive data.*
*Sensitive data includes: home addresses, passport numbers, Social Security numbers; personal health information (PHI); passwords, API keys, access tokens, MFA codes; financial account data; student education records (FERPA); proprietary source code and intellectual property; CUI, ITAR, export-controlled data; NDA-protected information.
Unauthorized or unintended actions
Because agents can act, failures are not just “bad text”—they can become:
- Accidental deletion/overwrite of files
- Publishing drafts publicly
- Sending messages to the wrong audience
- Creating or modifying records in HR/finance/research systems
- Changing access controls (sharing a restricted folder broadly)
- Running commands that alter systems or incur costs
Inaccurate results and “confident action”
Agents can hallucinate and then act on hallucinations, such as:
- Citing non-existent policies and sending guidance to others
- Filing incorrect tickets or submitting wrong forms
- Implementing insecure code changes
- Misreading a spreadsheet and updating records incorrectly
All agent outputs and actions require verification proportional to impact.
Security vulnerabilities in agent workflows
Agentic patterns can introduce new attack surfaces:
- Prompt injection (malicious text in web pages/docs instructs the agent to exfiltrate data or take harmful actions)
- Tool hijacking (agent chooses an unsafe tool or endpoint)
- Data poisoning (agent learns from untrusted content)
- Credential leakage (tokens in logs, prompts, or generated code)
- Supply chain risk (plugins/connectors with weak security)

Tools being explored
University IT may evaluate agentic AI tools for Stanford use in various contexts (productivity, research, development, support). Consult Stanford’s available evaluation materials and service listings.

Resources and help
When reviewing resources and policies, keep in mind agentic AI increases both privacy/security and operational risk. The legal/ethical landscape continues to evolve.
Relevant policies and guidelines at Stanford
Remember to review how Stanford policies apply to agentic AI use, including (as applicable):
- Office of the Provost: Guiding Principles for AI at Stanford
- AI Guidelines for Marketing and Communications
- Minimum Security Standards at Stanford
- Minimum Privacy Standards at Stanford
- Health Insurance Portability and Accountability Act (HIPAA)
- European Union's General Data Protection Regulation (GDPR)
- Federal Family Educational Rights and Privacy Act (FERPA)
- Research Policy Handbook and HRPP
Help
We want to hear from you! Give us feedback on the Responsible Agentic AI page content.
For privacy questions:
For IT security questions:
Considering an agentic AI tool or automation?
To report an incident (such as a data breach or system compromise):
