Know Your Risks: Agentic AI Best Practices
Know Your Agentic AI Risk
Stanford is evolving its AI risk framework to meet the challenges of agentic AI. While data sensitivity remains vital, our new risk tiers now account for an AI’s ability to take actions, influence decisions, and operate autonomously.
These tiers remain aligned with Stanford’s Data Risk Classification while adding agentic dimensions: an AI system’s actions and tool access, its influence on decisions, and its level of external exposure and autonomy.
We are also establishing an Unacceptable risk tier so that AI systems posing critical risks are clearly identified and prohibited from deployment.
All AI systems must be classified before deployment, and their classification must be updated as capabilities change.
| Stanford Data Risk Level | Description |
|---|---|
| Low | AI systems that serve internal informational purposes only: they handle no sensitive data, initiate no actions or tool calls (connections to external systems, APIs, or code), play no role in regulated decisions, and have no external-facing exposure; their effects are easily reversible. Primarily enhance efficiency or user experience. |
| Moderate | AI systems that access internal business data to conduct structured analysis and inform internal decision-making; tool calls occur in a limited capacity within defined workflows. |
| High | AI systems that handle personal or sensitive data; access confidential or regulated information; execute tool calls that affect internal systems; inform financial, legal, HR, or operational decisions; engage directly with clients; or operate with a degree of autonomy across enterprise systems. |
| Unacceptable (NEW for AI) | AI systems that pose a clear threat to safety, fundamental rights, or democratic processes or values. These systems are prohibited from deployment at the university. |

Explore best practices
The guidance below is not exhaustive.
Data privacy & usage
- Avoid inputting data about others that you wouldn't want them to input about you.
- Avoid using agentic AI with Moderate/High Risk Data on third-party tools not covered by Stanford-approved agreements.
- If inputting Low Risk Data, consider whether you're comfortable with it becoming public.
- Opt out of data sharing for training/iterative learning when possible.
- If an agent interacts with users, obtain informed consent and provide an opt-out/delete path.
Autonomy & approvals
- Choose the lowest autonomy that meets the need.
- Require human approval for: external communications, permission changes, publishing, spending, deletions, deployments, and system-of-record updates.
- Use rate limits, batching, and rollback plans (a minimal approval-and-rate-limit sketch follows this list).
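
A minimal sketch of the approval gate and rate limit described above. The action names, limits, and function signatures are hypothetical placeholders, not a mandated interface:

```python
import time
from collections import deque

# Hypothetical list of high-impact actions; adapt to your agent's actual tool set.
ACTIONS_REQUIRING_APPROVAL = {
    "send_external_email", "change_permissions", "publish",
    "spend", "delete", "deploy", "update_system_of_record",
}

class RateLimiter:
    """Allow at most max_calls actions per window_s seconds."""
    def __init__(self, max_calls: int, window_s: float):
        self.max_calls, self.window_s = max_calls, window_s
        self.calls = deque()

    def allow(self) -> bool:
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] > self.window_s:
            self.calls.popleft()
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False

def execute_action(action: str, human_approved: bool, limiter: RateLimiter) -> str:
    """Gate high-impact actions behind explicit human approval and a rate limit."""
    if action in ACTIONS_REQUIRING_APPROVAL and not human_approved:
        return f"BLOCKED: '{action}' requires human approval"
    if not limiter.allow():
        return f"DEFERRED: rate limit reached, retry '{action}' later"
    # ... dispatch to the real tool here ...
    return f"EXECUTED: {action}"

limiter = RateLimiter(max_calls=10, window_s=60)
print(execute_action("send_external_email", human_approved=False, limiter=limiter))
print(execute_action("summarize_document", human_approved=False, limiter=limiter))
```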
Identity, access, and secrets
- Use least privilege and scoped connectors.
- Prefer short-lived tokens; store secrets in approved secret managers, not prompts or config files (see the sketch after this list).
- Separate agent identities from personal/admin identities where feasible.
- Monitor and rotate credentials.
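
One way the token guidance might look in practice. This sketch assumes secrets are injected into the environment by an approved secret manager at deploy time; `AGENT_API_TOKEN` is a hypothetical variable name:

```python
import os
import time

def get_agent_token() -> str:
    """Read the secret from the environment rather than hard-coding it
    in prompts, code, or config files."""
    token = os.environ.get("AGENT_API_TOKEN")
    if not token:
        raise RuntimeError("AGENT_API_TOKEN not set; fetch it from your secret manager")
    return token

class ShortLivedToken:
    """Wrap a token with an expiry so callers are forced to refresh it."""
    def __init__(self, value: str, ttl_s: float = 900):
        self.value = value
        self.expires_at = time.monotonic() + ttl_s

    def get(self) -> str:
        if time.monotonic() >= self.expires_at:
            raise RuntimeError("token expired; request a fresh one")
        return self.value
```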
Secure tool use and retrieval
- Restrict tools to an allow-list; disable unnecessary plugins/connectors.
- Treat retrieved documents/web content as untrusted input; defend against prompt injection.
- Use retrieval filters (folder scoping, label-based access, record-level permissions).
- Keep audit logs of tool calls and actions (a combined sketch follows this list).
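
A minimal sketch combining a tool allow-list, untrusted-content marking, and an append-only audit log. The tool names and log path are illustrative assumptions:

```python
import json
import time

ALLOWED_TOOLS = {"search_docs", "summarize"}  # hypothetical tool names

def audit(event: dict) -> None:
    # Append-only JSON-lines audit log of every tool call attempt.
    with open("agent_audit.log", "a") as f:
        f.write(json.dumps({"ts": time.time(), **event}) + "\n")

def call_tool(name: str, args: dict, tools: dict) -> str:
    """Dispatch a tool call only if it is on the allow-list; log either way."""
    if name not in ALLOWED_TOOLS:
        audit({"tool": name, "args": args, "result": "denied"})
        raise PermissionError(f"tool '{name}' is not on the allow-list")
    result = tools[name](**args)
    audit({"tool": name, "args": args, "result": "ok"})
    return result

def wrap_untrusted(text: str) -> str:
    """Mark retrieved content as data, not instructions, before it reaches the
    model. Delimiting alone does not stop prompt injection; pair it with
    output review and restricted tool access."""
    return f"<untrusted_content>\n{text}\n</untrusted_content>"

print(call_tool("summarize", {"text": "quarterly notes"},
                {"summarize": lambda text: text[:10]}))
```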
Code and technical workflows
- Never allow an agent to deploy to production without review.
- Run agent-generated code through standard security checks (SAST, dependency scanning, secret scanning).
- Watch for license/IP issues and provenance concerns.
- Use sandbox environments for execution.
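
The sandboxing point above might start as simply as running generated code in an isolated subprocess with a hard timeout; a subprocess is only a first layer, and real isolation calls for a container or VM with no network access. A minimal sketch:

```python
import os
import subprocess
import sys
import tempfile

def run_generated_code(code: str, timeout_s: int = 10) -> subprocess.CompletedProcess:
    """Execute agent-generated code in a separate process with a hard timeout."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        return subprocess.run(
            [sys.executable, "-I", path],  # -I: isolated mode, ignores env and user site-packages
            capture_output=True, text=True, timeout=timeout_s,
        )
    finally:
        os.unlink(path)

print(run_generated_code("print(2 + 2)").stdout)
```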
Transparency & recordkeeping
- Document agent configuration, permissions, data sources, and review steps.
- Keep changelogs of actions taken (what changed, when, by which agent identity); see the sketch after this list.
- Follow discipline- and publisher-specific guidance on AI use and attribution.
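
A minimal changelog sketch capturing what changed, when, and by which agent identity; the field names and file path are illustrative assumptions:

```python
import json
import time

def record_change(agent_id: str, action: str, before: str, after: str,
                  path: str = "agent_changelog.jsonl") -> None:
    """Append a what/when/who record for every state-changing agent action."""
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "agent_identity": agent_id,
        "action": action,
        "before": before,
        "after": after,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

record_change("agent-svc-01", "update_record", before="status=draft", after="status=review")
```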
Recommended operating model (for teams)
- Classify the use case by data sensitivity and action criticality.
- Start with a pilot using Low Risk data, limited scope, and human approvals.
- Define: owner, reviewer, escalation path, incident response steps.
- Establish acceptance tests (e.g., “no external emails without approval,” “no permission changes,” “no writing outside folder X”); example tests are sketched after this list.
- Review periodically as tools/models change.
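
The acceptance tests above could be encoded as ordinary unit tests against the agent’s configuration. This sketch assumes a hypothetical `AgentConfig` object; run with pytest:

```python
from dataclasses import dataclass, field

@dataclass
class AgentConfig:
    # Hypothetical permission flags; map these to your agent platform's settings.
    can_email_externally: bool = False
    can_change_permissions: bool = False
    writable_paths: list = field(default_factory=lambda: ["/shared/folder-x"])

def test_no_external_email_without_approval():
    assert AgentConfig().can_email_externally is False

def test_no_permission_changes():
    assert AgentConfig().can_change_permissions is False

def test_writes_limited_to_approved_folder():
    assert AgentConfig().writable_paths == ["/shared/folder-x"]
```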
Guardrails by phase
| Phase | Guardrails |
|---|---|
| Data Privacy & Usage | Avoid inputting data into generative AI about others that you wouldn’t want them to input about you. |
| Data Privacy & Usage | Avoid inputting any sensitive data, such as Moderate or High Risk Data, whether using a personal or Stanford account, into a third-party AI platform or tool that is not covered by a Stanford Business Associate Agreement. Review Stanford-approved services by data risk classification. |
| Preparation | Inventory agents and map data flows |
| Identity | Assign unique, verifiable non-human identities (NHIs) |
| Behavior | Log all actions with attribution and baselines. Enable explainability for decisions. |
| Data Governance | Prevent prompt injections; track data lineage. |
| Segmentation | Apply micro-segmentation. Isolate in a non-production sandbox first. |
| Incident Response | Implement kill switches and enable session revocation (a minimal sketch follows the table). Develop an Incident Response plan using the Department Incident Response Tool Kit. |
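
The kill switch and session revocation named in the last row might start as a shared stop flag checked before every agent step. A minimal sketch, with a hypothetical agent loop:

```python
import threading

class KillSwitch:
    """A shared flag checked before every agent step; tripping it halts all sessions."""
    def __init__(self):
        self._stop = threading.Event()

    def trip(self) -> None:
        self._stop.set()

    def active(self) -> bool:
        return self._stop.is_set()

def agent_loop(steps, kill: KillSwitch):
    for step in steps:
        if kill.active():
            print("kill switch tripped; halting agent")
            return
        step()

kill = KillSwitch()
agent_loop([lambda: print("step 1"), lambda: print("step 2")], kill)
```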
