AI Policy Coverage by State
Rule-based: Share of institutions in each state with a verified AI policy hub. Darker greens mark states with stronger overall coverage; gray indicates states not yet in the dataset.
Coverage Gap by State
Rule-based: Percentage of institutions in each state with a verified AI policy, sorted alphabetically. Hover over any bar for raw counts.
Stance Distribution
LLM-classified: How {{ fmt(data.stance_distribution.stance_total) }} classified policies position themselves on AI, from outright encouragement to prohibition. Conditional dominates: most institutions allow AI, but with explicit guardrails.
| Stance | Count | % |
|---|---|---|
| {{ s.label }} | {{ s.count }} | {{ (s.count / data.stance_distribution.stance_total * 100).toFixed(1) }}% |
Stance by Audience
LLM-classified: Prohibitive policies are overwhelmingly student-facing. Encouraged policies tend to address a balanced audience. Researcher-specific governance remains nearly absent.
| Stance | {{ a }} |
|---|---|
| {{ data.stance_audience_matrix.data.stances[i] }} | {{ val === 0 ? '—' : val }} |
Who Owns the AI Policy?
LLM-classified: Which campus office "owns" the policy. Public universities lead with a teaching-and-learning framing; private nonprofits and community colleges frame AI primarily as an academic integrity issue. Ethics-led framing is rare across every institution type.
Which AI Tools Do Policies Name?
Rule-based: Ranked by the number of institutions that mention each tool at least once across crawled policy snapshots. ChatGPT dominates in both breadth (721 institutions) and depth (12,264 raw mentions).
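The breadth/depth distinction above (distinct institutions vs. raw mentions) can be sketched with two counters over hypothetical extracted `(institution, tool)` pairs; the pair shape and sample data here are illustrative, not the dataset's actual schema:

```python
from collections import Counter

# Hypothetical mention records extracted from policy snapshots:
# one (institution_id, tool) pair per raw mention.
mentions = [
    ("univ-1", "ChatGPT"), ("univ-1", "ChatGPT"),  # same school, two mentions
    ("univ-2", "ChatGPT"), ("univ-2", "Copilot"),
]

# Depth: every raw mention counts.
raw = Counter(tool for _, tool in mentions)

# Breadth: deduplicate pairs so each institution counts a tool once.
breadth = Counter(tool for _, tool in set(mentions))

print(breadth["ChatGPT"], raw["ChatGPT"])  # 2 3
```

Ranking by `breadth` rewards tools that appear across many campuses rather than many times on one campus, which matches the chart's sort order.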
How Are Tools Referenced?
Rule-based: For each tool, we classify the surrounding 300-character window of every mention as approved (licensed, supported), governed (must, required), mentioned (no governance context), or prohibited (banned). Most tools are merely named, not actively governed.
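A minimal sketch of this window classifier, assuming simple keyword heuristics in priority order; the cue words come from the parenthetical examples above, but the dataset's full keyword lists are not published, so these patterns are illustrative:

```python
import re

# Illustrative cue words per category, checked in priority order so that an
# explicit ban outranks a weaker governance signal in the same window.
CATEGORY_PATTERNS = [
    ("prohibited", re.compile(r"\b(banned|prohibit\w*|not permitted|forbidden)\b", re.I)),
    ("approved",   re.compile(r"\b(licensed|supported|approved)\b", re.I)),
    ("governed",   re.compile(r"\b(must|required|shall)\b", re.I)),
]

def classify_mention(window: str) -> str:
    """Classify a ~300-character context window around one tool mention."""
    for label, pattern in CATEGORY_PATTERNS:
        if pattern.search(window):
            return label
    return "mentioned"  # tool is named but carries no governance context

print(classify_mention("Copilot is licensed and supported by campus IT."))  # approved
print(classify_mention("Students must disclose any use of ChatGPT."))      # governed
print(classify_mention("Tools such as Claude have emerged recently."))     # mentioned
```

The fallback to "mentioned" is what produces the headline finding: any window with no governance cue at all lands in that bucket.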
| Tool | {{ c }} |
|---|---|
| {{ row.tool }} | {{ row[c] === 0 ? '—' : row[c] }} |
Policy Adoption by Type
Rule-based: Public universities lead at 62.3% coverage. For-profits trail at 3.8%, and with 2,469 institutions they represent by far the largest ungoverned segment.
Who Do Policies Cover?
LLM-classified: Enterprise-wide policies govern all campus members: staff, faculty, and students. Most institutions still limit scope to the academic community, leaving operational and HR contexts ungoverned.
How Deep Do AI Policies Go?
Rule-based: Percentage of policies that mention each operational domain. Nearly all cover academics (~90%) and staff (~85%), but research governance, IT procurement, and formal training remain consistent blind spots, even at well-resourced public universities.
Policy Quality Signals
Rule-based: Three operational prerequisites that indicate whether a policy is ready for real-world use. Across the board, most policies are silent on these basics.
Data Risk Awareness
Mentions data classification, PII, FERPA, HIPAA, or sensitive data handling.
Human Oversight Required
Requires human review or verification of AI output before use in decisions.
Formal AI Training
Mentions professional development, AI literacy programs, or certification courses.
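The three signals above are binary keyword checks over policy text. A minimal sketch, assuming illustrative patterns built from the cue phrases listed in each signal's description (the dataset's actual rule lists are not published):

```python
import re

# Illustrative patterns per quality signal, derived from the descriptions
# above; the real detector's keyword lists are assumptions here.
SIGNALS = {
    "data_risk":       re.compile(r"\b(data classification|PII|FERPA|HIPAA|sensitive data)\b", re.I),
    "human_oversight": re.compile(r"\bhuman (review|oversight|verification)\b", re.I),
    "formal_training": re.compile(r"\b(professional development|AI literacy|certification course)\b", re.I),
}

def quality_signals(policy_text: str) -> dict:
    """Return a True/False flag for each of the three quality signals."""
    return {name: bool(pat.search(policy_text)) for name, pat in SIGNALS.items()}

flags = quality_signals("Staff must complete AI literacy training and protect FERPA data.")
# {'data_risk': True, 'human_oversight': False, 'formal_training': True}
```

Because each signal is an independent flag, a policy can score on all three, any subset, or none, which is how "silent on these basics" is measured.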
Policy Maturity Index
Rule-based: Institutions are scored into four tiers based on the depth and breadth of their AI governance. Only 3.6% reach the comprehensive tier; 63.6% have no verified policy at all. The long tail is the story.
Scoring: dedicated AI page (+3), formal policy (+2), >1,000 words (+1), names tools (+1), balanced audience (+1).
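The published rubric (maximum 8 points) translates directly into a scoring function. The rubric weights below come from the text; the field names and the tier cut points are assumptions for illustration, since the dataset does not state where the four tiers break:

```python
def maturity_score(policy: dict) -> int:
    """Apply the published rubric; maximum possible score is 8."""
    score = 0
    score += 3 if policy.get("dedicated_ai_page") else 0      # dedicated AI page (+3)
    score += 2 if policy.get("formal_policy") else 0          # formal policy (+2)
    score += 1 if policy.get("word_count", 0) > 1000 else 0   # >1,000 words (+1)
    score += 1 if policy.get("names_tools") else 0            # names tools (+1)
    score += 1 if policy.get("balanced_audience") else 0      # balanced audience (+1)
    return score

def tier(score: int) -> str:
    """Map a score to a tier. Cut points are hypothetical, not the dataset's."""
    if score >= 7:
        return "comprehensive"
    if score >= 4:
        return "developing"
    if score >= 1:
        return "nominal"
    return "none"
```

An institution with every rubric item present scores 8 and lands in the top tier under these (assumed) cut points; an institution with no verified policy scores 0.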
Are Policies Keeping Up?
Rule-based: Staleness signals flag policies that may reference outdated tools or framing: Bard was rebranded to Gemini in early 2024, and GPT-3 was superseded by GPT-4. Over half of classified policies show at least one signal.
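A staleness check of this kind reduces to scanning for outdated names. This sketch carries only the two markers named in the text; a real detector would use a longer, maintained list, and the word-boundary patterns are an assumption (note `\bGPT-3\b` will also match "GPT-3.5"):

```python
import re

# The two staleness markers named in the text; any fuller list is out of scope.
STALE_MARKERS = {
    r"\bBard\b":  "Bard was rebranded to Gemini in early 2024",
    r"\bGPT-3\b": "GPT-3 was superseded by GPT-4",
}

def staleness_signals(policy_text: str) -> list:
    """Return the reason string for each stale reference found in the text."""
    return [reason for pattern, reason in STALE_MARKERS.items()
            if re.search(pattern, policy_text)]

print(staleness_signals("We recommend Google Bard for brainstorming."))
# ['Bard was rebranded to Gemini in early 2024']
```

"Over half show at least one signal" then corresponds to `len(staleness_signals(text)) >= 1` across the classified corpus.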
When Were These Policies First Created?
Rule-based: Year each institution first adopted its AI policy, extracted from explicit lifecycle language ("approved", "effective", "adopted", "issued", "published") in the policy text. Only {{ data.policy_timeline.data.coverage_pct }}% of classified institutions publish a dateable adoption marker; the absence itself is a governance signal.
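Extracting a year from lifecycle language can be sketched as a regex that looks for a four-digit year near one of the listed keywords. The keywords come from the text above; the 40-character proximity window and the 2010s/2020s year range are assumptions for illustration:

```python
import re

# Lifecycle keywords listed in the description above.
LIFECYCLE = r"(approved|effective|adopted|issued|published)"

# A year within ~40 characters after a lifecycle keyword. The window size
# and year range (2010-2029) are illustrative assumptions.
PATTERN = re.compile(LIFECYCLE + r"\b.{0,40}?\b(20[12]\d)\b", re.I | re.S)

def adoption_year(policy_text: str):
    """Return the first dateable adoption year, or None if no marker exists."""
    match = PATTERN.search(policy_text)
    return int(match.group(2)) if match else None

print(adoption_year("This policy was approved by the Faculty Senate on May 2, 2023."))  # 2023
```

Returning `None` when no marker is found is what makes the absence measurable: the coverage percentage above is simply the share of classified institutions where this lookup succeeds.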
How This Dataset Was Built
Every US higher education institution in the federal IPEDS directory was monitored for AI-related policy content. We searched each institution's public website for an AI policy page, verified what we found through a follow-up review, and saved a snapshot for analysis. Each verified policy was then read and categorized along three dimensions to make the dataset comparable across thousands of institutions.
Started from the federal IPEDS institutional directory — every accredited US college and university — to ensure no segment of higher education was missed.
Searched each institution's website for AI-related policy content, then verified candidates through a second-pass review. Only confirmed policy pages count toward "verified" coverage.
Each verified policy was archived and categorized along three dimensions: stance (how restrictive), framing (which office owns it), and audience (who it addresses).
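The three-step pipeline above can be sketched per institution. The function hooks (`find_policy`, `verify`, `classify`) and the record shape are hypothetical scaffolding, not the project's actual code; only the step order and the three classification dimensions come from the text:

```python
from dataclasses import dataclass

@dataclass
class Classification:
    stance: str    # how restrictive the policy is
    framing: str   # which campus office owns it
    audience: str  # who the policy addresses

def build_record(institution, find_policy, verify, classify):
    """Run one IPEDS institution through the search -> verify -> classify steps.

    The three callables are hypothetical hooks standing in for the crawler,
    the second-pass review, and the LLM/rule-based classifier.
    """
    candidate = find_policy(institution)          # search the public website
    if candidate is None or not verify(candidate):  # second-pass review
        return {"institution": institution, "verified": False}
    return {"institution": institution, "verified": True,
            "classification": classify(candidate)}
```

Only records with `verified: True` count toward coverage figures; everything else contributes to the "no verified policy" segment.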