
Yue Zhao (赵越)

USC Assistant Professor building methods, benchmarks, and open-source tools for AI auditing.


Homepage  ·  Research  ·  Open Source  ·  FORTIS Lab  ·  Contact

Important

FORTIS Labs is a new venture focused on decision integrity for AI agents, drawing on a decade of anomaly-detection research and open-source work. Introductions from investors and design partners are welcome at hello@fortislabs.ai.

Note

Assistant Professor at USC Computer Science, PI of FORTIS Lab (USC academic group). Research on AI auditing: methods, benchmarks, and open-source tools for inspecting AI systems. Lead developer of PyOD (9.8k★, 42M+ downloads), the canonical Python anomaly-detection library, adopted or cited by OpenAI, Apache Beam, PostHog, MLflow, and Genentech. ~12k Google Scholar citations across all work.


Research

AI systems are deployed faster than they can be verified. Foundation models and autonomous agents now make consequential decisions, execute code, and interact with external services, often without systematic inspection of what they do or why. My research builds the methods, benchmarks, and open-source tools for AI auditing.

Methodologically, this work extends my prior research on anomaly and outlier detection (the basis of the PyOD ecosystem) from data-distribution settings to foundation-model behavior and autonomous-agent decision traces, where unsafe, anomalous, or out-of-policy actions must be detected and reconstructed before deployment.

Three connected directions:

  • 🔍 Auditing and Assurance: methods, benchmarks, and tools to inspect and evaluate AI systems.
  • 🛡️ Safety and Security: failure modes, attack surfaces, runtime guardrails.
  • 🌐 Science and Society: AI for climate, healthcare, and computational social systems where accountability is not optional.

Open Source

Featured projects (see the full list on the homepage):

| Project | What It Does |
| --- | --- |
| agent-style | 21 writing rules for AI agents, loaded at generation time. (432★) |
| anywhere-agents | One config for Claude Code and Codex across every project and session. (171★) |
| PyOD | Agentic anomaly detection: 60+ detectors, 42M+ downloads. (9.8k★) |
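To make the table concrete: the core idea behind distance-based detectors such as PyOD's KNN is to score each point by how far it sits from its neighbors. The sketch below is a minimal NumPy illustration of that scoring idea, not PyOD's actual API; the function name and parameters are my own for illustration.

```python
import numpy as np

def knn_outlier_scores(X, k=5):
    """Score each row of X by its distance to its k-th nearest neighbor.

    Larger scores mean more isolated points -- the core idea behind
    distance-based anomaly detectors such as PyOD's KNN.
    """
    # Pairwise Euclidean distances (n x n).
    diffs = X[:, None, :] - X[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    np.fill_diagonal(dists, np.inf)  # ignore each point's distance to itself
    # k-th nearest-neighbor distance per point, used as the outlier score.
    return np.sort(dists, axis=1)[:, k - 1]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))          # dense cluster of inliers
X = np.vstack([X, [[8.0, 8.0]]])       # one obvious outlier, row index 100
scores = knn_outlier_scores(X, k=5)
print(int(np.argmax(scores)))          # the outlier gets the largest score
```

PyOD's real detectors wrap this pattern in a scikit-learn-style fit/score interface and add many more algorithms, but the ranking-by-isolation intuition is the same.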

Tip

External adoption of PyOD:

  • Named by OpenAI as expected operational tooling.
  • Shipped as a first-class ModelHandler in Apache Beam (Apache Software Foundation).
  • Runs the live-traffic alerting subsystem in PostHog.
  • The canonical anomaly-detection flavor in MLflow community-flavor docs.
  • Embedded in Genentech (Roche) drug-discovery validators.
  • 5,493 public repositories and 139 packages depend on PyOD (May 2026 snapshot).
  • DoD CDAO lists PyOD and TrustLLM; ESA OPS-SAT uses PyOD; NIST AI 100-2e2025 and the FLI AI Safety Index cite TrustLLM.

Other Notable Projects
  • PyGOD (1.5k★): graph outlier detection, sister project to PyOD.
  • AD-AGENT (99★): LLM-driven multi-agent anomaly detection platform.
  • ADBench (1k★): NeurIPS 2022 official anomaly detection benchmark.
  • Anomaly-Detection-Resources (9.3k★): curated resource hub for anomaly detection.
  • CS-Paper-Checklist (1.6k★): practical sanity checklist for CS paper writing.
  • TrustLLM (625★, collaborator): LLM trustworthiness benchmark cited by NIST AI 100-2e2025, FLI AI Safety Index, U.S. Senate HSGAC, DoD CDAO.
  • agent-config: personal working repo and canonical source for anywhere-agents.

FORTIS Lab

I lead the FORTIS Lab at USC, working on AI auditing, anomaly detection, and trustworthy AI systems. Current roster: 4 PhD students plus master's and undergraduate researchers.


Contact

Pinned

  1. pyod

    A Python library for anomaly detection across tabular, time series, graph, text, and image data. 60+ detectors, benchmark-backed ADEngine orchestration, and an agentic workflow for AI agents.

    Python · 9.8k★ · 1.5k forks

  2. agent-style

    21 writing rules for AI coding and writing agents. Drop-in for Claude Code, Codex, Copilot, Cursor, and Aider, so their output reads like a tech pro's.

    Python · 432★ · 23 forks

  3. anywhere-agents

    One config to rule all your AI agents: portable (every project, every session), effective (curated writing, routing, skills), and safer (destructive-command guard).

    Python · 172★ · 19 forks

  4. HeadyZhang/agent-audit

    Static security scanner for LLM agents: prompt injection, MCP config auditing, taint analysis. 49 rules mapped to OWASP Agentic Top 10 (2026). Works with LangChain, CrewAI, AutoGen.

    Python · 169★ · 18 forks

  5. Justin0504/Aegis

    Runtime policy enforcement for AI agents. Cryptographic audit trail, human-in-the-loop approvals, kill switch. Zero code changes.

    TypeScript · 353★ · 35 forks

  6. anomaly-detection-resources

    Anomaly-detection books, papers, videos, and toolboxes. Last updated in late 2025 with LLM and VLM works.

    Python · 9.3k★ · 1.8k forks