Higher Ed AI Policy Monitor
Work in progress

AI Governance in US Higher Ed

A monitoring dataset of how American colleges and universities govern AI on campus — covering {{ fmt(data.kpi.total_monitored) }} institutions across {{ data.kpi.states_covered }} states, with {{ fmt(data.kpi.classified_count) }} policies analyzed for stance, framing, and audience.

{{ fmt(data.kpi.total_monitored) }}
Institutions Monitored
{{ data.kpi.pct_with_policy }}%
Verified AI Policy
{{ fmt(data.kpi.verified_policy) }} / {{ fmt(data.kpi.total_monitored) }}
{{ data.kpi.states_covered }}
States Covered
{{ fmt(data.kpi.classified_count) }}
Policies Classified
{{ data.kpi.dominant_stance }}
Dominant Stance
01 · Coverage

AI Policy Coverage by State

Rule-based

Share of institutions in each state with a verified AI policy hub. Darker greens mark states with stronger overall coverage; gray indicates states not yet in the dataset.

0-14% 15-29% 30-44% 45-59% 60-69% 70%+ No data
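The choropleth buckets above can be expressed as a small lookup. A minimal sketch, assuming the legend's bucket edges; the function name and the "no-data" sentinel are illustrative, not part of the dataset's actual code:

```javascript
// Map a state's coverage percentage to the legend bucket used by the
// choropleth above. Edges mirror the legend; a null/undefined value
// means the state is not yet in the dataset.
function coverageBucket(pct) {
  if (pct === null || pct === undefined) return "no-data";
  if (pct >= 70) return "70%+";
  if (pct >= 60) return "60-69%";
  if (pct >= 45) return "45-59%";
  if (pct >= 30) return "30-44%";
  if (pct >= 15) return "15-29%";
  return "0-14%";
}
```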

Coverage Gap by State

Rule-based

Percentage of institutions in each state with a verified AI policy, sorted alphabetically. Hover over any bar for raw counts.

Verified policy No policy found
02 · Classification

Stance Distribution

LLM-classified

How {{ fmt(data.stance_distribution.stance_total) }} classified policies position themselves on AI — from outright encouragement to prohibition. Conditional dominates: most institutions allow AI but with explicit guardrails.

Stance Count %
{{ s.label }} {{ s.count }} {{ (s.count / data.stance_distribution.stance_total * 100).toFixed(1) }}%

Stance by Audience

LLM-classified

Prohibitive policies are overwhelmingly student-facing. Encouraged policies tend to address a balanced audience. Researcher-specific governance remains nearly absent.

{{ a }}
{{ data.stance_audience_matrix.data.stances[i] }} {{ val === 0 ? '—' : val }}
≤30% 31-60% >60%

Who Owns the AI Policy

LLM-classified

Which campus office "owns" the policy shapes its framing. Public universities lead with teaching and learning; private nonprofits and community colleges frame AI primarily as an academic integrity issue. Ethics-led framing is rare across every institution type.

03 · Tools

Which AI Tools Do Policies Name?

Rule-based

Ranked by the number of institutions that mention each tool at least once across crawled policy snapshots. ChatGPT dominates both breadth (721 institutions) and depth (12,264 raw mentions).

How Are Tools Referenced?

Rule-based

For each tool, we classify the surrounding 300-character window of every mention as approved (licensed, supported), governed (must, required), mentioned (no governance context), or prohibited (banned). Most tools are merely named — not actively governed.

Tool {{ c }}
{{ row.tool }} {{ row[c] === 0 ? '—' : row[c] }}
Low Medium High
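The window classification above can be sketched as an ordered keyword scan. This is an illustrative approximation, not the production rule set: the keyword lists are assumptions, and the precedence (prohibited over approved over governed) is a design choice to keep each mention single-labeled:

```javascript
// Rule-based classifier for the 300-character window around a tool
// mention. First matching rule wins; keyword lists are illustrative.
const RULES = [
  ["prohibited", /\b(banned|prohibit(?:ed|s)?|not permitted)\b/i],
  ["approved",   /\b(licensed|supported|approved)\b/i],
  ["governed",   /\b(must|required|shall)\b/i],
];

function classifyWindow(text) {
  for (const [label, pattern] of RULES) {
    if (pattern.test(text)) return label;
  }
  return "mentioned"; // tool named without governance context
}
```

Ordering the rules by severity means a window saying "ChatGPT is licensed but banned in exams" resolves to prohibited rather than approved.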
04 · Adoption

Policy Adoption by Type

Rule-based

Public universities lead at 62.3% coverage. For-profits trail at 3.8% — and with 2,469 institutions, they represent by far the largest ungoverned segment.

{{ data.type_breakdown.data[key].coverage_rate }}%
{{ data.type_breakdown.data[key].label }}
{{ data.type_breakdown.data[key].with_hub }} / {{ data.type_breakdown.data[key].total }}

Who Do Policies Cover?

LLM-classified

Enterprise-wide policies govern all campus members — staff, faculty, students. Most institutions still limit scope to the academic community, leaving operational and HR contexts ungoverned.

05 · Operational Quality

How Deep Do AI Policies Go?

Rule-based

Percentage of policies that mention each operational domain. Nearly all cover academics (~90%) and staff (~85%), but research governance, IT procurement, and formal training remain consistent blind spots — even at well-resourced public universities.

Policy Quality Signals

Rule-based

Three operational prerequisites that indicate whether a policy is ready for real-world use. Across the board, most policies are silent on these basics.

Data Risk Awareness

Mentions data classification, PII, FERPA, HIPAA, or sensitive data handling.

Human Oversight Required

Requires human review or verification of AI output before use in decisions.

Formal AI Training

Mentions professional development, AI literacy programs, or certification courses.

06 · Maturity

Policy Maturity Index

Rule-based

Institutions scored into four tiers based on the depth and breadth of their AI governance. Only 3.6% reach comprehensive; 63.6% have no verified policy at all. The long tail is the story.

{{ fmt(data.maturity_distribution.data.tiers.comprehensive) }}
Comprehensive
{{ fmt(data.maturity_distribution.data.tiers.developing) }}
Developing
{{ fmt(data.maturity_distribution.data.tiers.minimal) }}
Minimal
{{ fmt(data.maturity_distribution.data.tiers.none) }}
None

Scoring (max 8): dedicated AI page (+3), formal policy (+2), >1,000 words (+1), names tools (+1), balanced audience (+1).
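The rubric above is additive and can be sketched directly. The weights come from the scoring note; the field names on the policy record are illustrative assumptions, and the tier cutoffs are intentionally left out since the text does not state them:

```javascript
// Maturity score per the stated rubric. Field names are hypothetical;
// the weights (+3/+2/+1/+1/+1) are from the scoring note above.
function maturityScore(p) {
  let score = 0;
  if (p.hasDedicatedPage)  score += 3; // dedicated AI page
  if (p.hasFormalPolicy)   score += 2; // formal policy document
  if (p.wordCount > 1000)  score += 1; // substantive length
  if (p.namesTools)        score += 1; // names specific tools
  if (p.balancedAudience)  score += 1; // addresses a balanced audience
  return score; // 0-8
}
```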

07 · Freshness

Are Policies Keeping Up?

Rule-based

Staleness signals flag policies that may reference outdated tools or framing — Bard rebranded to Gemini in early 2024; GPT-3 was superseded by GPT-4. Over half of classified policies show at least one signal.

{{ data.staleness.data.pct_stale }}%
Show Staleness Signals
{{ fmt(data.staleness.data.total_stale) }}
Institutions Flagged
{{ fmt(data.staleness.data.total_institutions) }}
Total Classified

When Were These Policies First Created?

Rule-based

Year each institution first adopted its AI policy, extracted from explicit lifecycle language ("approved", "effective", "adopted", "issued", "published") in the policy text. Only {{ data.policy_timeline.data.coverage_pct }}% of classified institutions publish a dateable adoption marker — the absence itself is a governance signal.

{{ fmt(data.policy_timeline.data.total_dated) }}
With Adoption Date
{{ fmt(data.policy_timeline.data.unknown) }}
No Date Found
{{ data.policy_timeline.data.coverage_pct }}%
Coverage
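Date extraction from lifecycle language can be sketched as a two-step regex: find one of the keywords listed above, then take the nearest four-digit year before the sentence ends. The proximity window and the plausibility range are assumptions for illustration:

```javascript
// Pull an adoption year from lifecycle language such as
// "approved on March 3, 2023" or "Effective Fall 2024".
// Returns null when no dateable marker is found.
function adoptionYear(text) {
  const m = text.match(
    /\b(approved|effective|adopted|issued|published)\b[^.]{0,80}?\b(19|20)\d{2}\b/i
  );
  if (!m) return null;
  const year = parseInt(m[0].match(/\b(19|20)\d{2}\b/)[0], 10);
  // Plausibility guard (assumed range), in case the regex grabs a stray number.
  return year >= 1990 && year <= 2030 ? year : null;
}
```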
08 · Methodology

How This Dataset Was Built

Every US higher education institution in the federal IPEDS directory was monitored for AI-related policy content. We searched each institution's public website for an AI policy page, verified what we found through a follow-up review, and saved a snapshot for analysis. Each verified policy was then read and categorized along three dimensions to make the dataset comparable across thousands of institutions.

01 · Identify

Started from the federal IPEDS institutional directory — every accredited US college and university — to ensure no segment of higher education was missed.

02 · Discover & Verify

Searched each institution's website for AI-related policy content, then verified candidates through a second-pass review. Only confirmed policy pages count toward "verified" coverage.

03 · Classify

Each verified policy was archived and categorized along three dimensions: stance (how restrictive), framing (which office owns it), and audience (who it addresses).
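The three stages above can be sketched as one pass over the IPEDS directory. The helper functions (`discover`, `verify`, `classify`) are injected and entirely hypothetical; only the control flow mirrors the methodology:

```javascript
// Minimal pipeline sketch: Identify -> Discover & Verify -> Classify.
// Helpers are injected so the stages stay swappable and testable.
function buildDataset(institutions, { discover, verify, classify }) {
  return institutions.map((inst) => {        // 01 · Identify: IPEDS rows
    const candidate = discover(inst);        // 02 · Discover a policy page
    const isVerified = Boolean(candidate && verify(candidate));
    return {
      institution: inst.name,
      verified: isVerified,                  //     only confirmed pages count
      classification: isVerified ? classify(candidate) : null, // 03 · Classify
    };
  });
}
```

Keeping classification gated on verification matches the methodology: unverified candidates never reach the stance/framing/audience dimensions.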