USC Assistant Professor building methods, benchmarks, and open-source tools for AI auditing.
Homepage · Research · Open Source · FORTIS Lab · Contact
Important
I am launching FORTIS Labs, a new venture focused on decision integrity for AI agents, drawing on a decade of anomaly-detection research and open-source work. Introductions from investors and design partners are welcome at hello@fortislabs.ai.
Note
Assistant Professor at USC Computer Science, PI of FORTIS Lab (USC academic group). Research on AI auditing: methods, benchmarks, and open-source tools for inspecting AI systems. Lead developer of PyOD (9.8k★, 42M+ downloads), the canonical Python anomaly-detection library, adopted by OpenAI, Apache Beam, PostHog, MLflow, and Genentech. ~12k Google Scholar citations across all work.
AI systems are deployed faster than they can be verified. Foundation models and autonomous agents now make consequential decisions, execute code, and interact with external services, often without systematic inspection of what they do or why. My research builds the methods, benchmarks, and open-source tools for AI auditing.
Methodologically, this work extends my prior research on anomaly and outlier detection (the basis of the PyOD ecosystem) from data-distribution settings to foundation-model behavior and autonomous-agent decision traces, where unsafe, anomalous, or out-of-policy actions must be detected and reconstructed before deployment.
Three connected directions:
- 🔍 Auditing and Assurance: methods, benchmarks, and tools to inspect and evaluate AI systems.
- 🛡️ Safety and Security: failure modes, attack surfaces, runtime guardrails.
- 🌐 Science and Society: AI for climate, healthcare, and computational social systems where accountability is not optional.
Featured projects (see the full list on the homepage):
| Project | What It Does |
|---|---|
| agent-style | 21 writing rules for AI agents, loaded at generation time. (432★) |
| anywhere-agents | One config for Claude Code and Codex across every project and session. (171★) |
| PyOD | Agentic anomaly detection: 60+ detectors, 42M+ downloads. (9.8k★) |
Tip
External adoption of PyOD. Named by OpenAI as expected operational tooling, shipped as a first-class ModelHandler in Apache Beam (Apache Software Foundation), running the live-traffic alerting subsystem in PostHog, the canonical anomaly-detection flavor in MLflow community-flavor docs, and embedded in Genentech (Roche) drug-discovery validators. 5,493 public repositories and 139 packages depend on PyOD (May 2026 snapshot). DoD CDAO lists PyOD and TrustLLM; ESA OPS-SAT uses PyOD; NIST AI 100-2e2025 and the FLI AI Safety Index cite TrustLLM.
Other Notable Projects
- PyGOD (1.5k★): graph outlier detection, sister project to PyOD.
- AD-AGENT (99★): LLM-driven multi-agent anomaly detection platform.
- ADBench (1k★): NeurIPS 2022 official anomaly detection benchmark.
- Anomaly-Detection-Resources (9.3k★): curated resource hub for anomaly detection.
- CS-Paper-Checklist (1.6k★): practical sanity checklist for CS paper writing.
- TrustLLM (625★, collaborator): LLM trustworthiness benchmark cited by NIST AI 100-2e2025, FLI AI Safety Index, U.S. Senate HSGAC, DoD CDAO.
- agent-config: personal working repo and canonical source for anywhere-agents.
I lead the FORTIS Lab at USC, working on AI auditing, anomaly detection, and trustworthy AI systems. Current roster: 4 PhD students plus master's and undergraduate researchers.
- 🌐 Homepage · Google Scholar · LinkedIn
- ✉️ yue.z [AT] usc.edu