AI in school ERP — five real uses we shipped (and what we won't)
Every week another vendor ships an "AI-powered" ERP. Most of it is a thin wrapper around ChatGPT bolted onto a feature nobody asked for. We've spent the last six months figuring out which AI features actually save schools time — and we shipped five.
What earns its place
1. AI Quiz Generator
A teacher uploads a lesson PDF, says "generate 10 questions". The system produces a mix of MCQ, true/false, short-answer, and descriptive questions. Teacher reviews each one — edit, keep, or discard — then publishes.
This saves about 20 minutes per quiz. Across a school with 50 teachers writing one quiz per week, that's roughly 17 hours a week (50 × 20 minutes). The trust-gate review is non-negotiable: AI suggests, teacher decides.
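The review step above can be sketched as a small state machine: every AI-suggested question starts pending, and publishing is blocked until a teacher has acted on each one. This is a minimal illustration; the class and field names (`QuizDraft`, `Status`, `publish`) are hypothetical, not our actual API.

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    PENDING = "pending"        # AI-suggested, not yet reviewed
    KEPT = "kept"              # teacher approved as-is
    EDITED = "edited"          # teacher approved with changes
    DISCARDED = "discarded"    # teacher rejected

@dataclass
class Question:
    text: str
    kind: str                  # "mcq" | "true_false" | "short" | "descriptive"
    status: Status = Status.PENDING

@dataclass
class QuizDraft:
    questions: list[Question] = field(default_factory=list)

    def publish(self) -> list[Question]:
        # Trust gate: nothing publishes while any suggestion is unreviewed.
        if any(q.status is Status.PENDING for q in self.questions):
            raise ValueError("every AI-suggested question must be reviewed before publishing")
        return [q for q in self.questions if q.status is not Status.DISCARDED]
```

The point of the `ValueError` is that "teacher decides" is enforced in code, not left to UI convention: a draft with unreviewed questions simply cannot reach students.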
2. AI Answer Evaluator
For descriptive answers, the system suggests a mark based on the rubric. Teacher clicks Apply or Dismiss. Full audit trail so you can answer the parent who asks "why did my child get 7 out of 10".
This is where AI actually replaces grading drudgery — but the teacher remains responsible for the final mark. Always.
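The apply-or-dismiss decision plus audit trail can be sketched like this. It's an illustrative simplification with hypothetical names (`MarkAudit`, `record_decision`); the key property is that a dismissal forces the teacher to supply their own mark, and every decision leaves a record you can show a parent.

```python
import datetime
from dataclasses import dataclass

@dataclass(frozen=True)
class MarkAudit:
    answer_id: str
    suggested_mark: float   # what the model proposed
    rubric_notes: str       # the rubric-based rationale shown to the teacher
    teacher: str
    action: str             # "apply" or "dismiss"
    final_mark: float       # what actually went on record
    at: str                 # UTC timestamp

def record_decision(answer_id, suggested, notes, teacher, action, override=None):
    # The teacher is the source of truth: "apply" takes the suggestion,
    # "dismiss" requires the teacher's own mark instead.
    if action == "apply":
        final = suggested
    elif action == "dismiss":
        if override is None:
            raise ValueError("dismiss requires the teacher's own mark")
        final = override
    else:
        raise ValueError(f"unknown action: {action}")
    stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
    return MarkAudit(answer_id, suggested, notes, teacher, action, final, stamp)
```

Answering "why did my child get 7 out of 10" is then a lookup: the audit row holds the suggested mark, the rubric rationale, and which teacher confirmed or overrode it.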
3. AI Form Filler
One photo of an Aadhaar / PAN / Passport / Marksheet / Birth Certificate, and the admission form pre-fills. Admission staff verify and save.
Before: 18 minutes per admission form. After: 90 seconds. We've measured this at three schools.
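The "staff verify and save" step is the same trust gate in a different costume: extracted values pre-fill the form but start unverified, and saving is blocked until a human has confirmed each one. A minimal sketch with hypothetical function names:

```python
def prefill_form(extracted):
    """extracted: field name -> value pulled from the document photo.
    Every pre-filled value starts unverified; staff must confirm each one."""
    return {name: {"value": value, "verified": False} for name, value in extracted.items()}

def verify(form, name):
    # Staff confirm one field at a time against the original document.
    form[name]["verified"] = True

def save(form):
    # Fail closed: an unverified field blocks the save entirely.
    unverified = [name for name, cell in form.items() if not cell["verified"]]
    if unverified:
        raise ValueError(f"unverified fields: {unverified}")
    return {name: cell["value"] for name, cell in form.items()}
```

The 90-second figure comes from this shape: staff spend their time confirming values, not typing them.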
4. AI Lesson Recommender
For students who've taken at least 5 quizzes, the system identifies their weakest topics and recommends 3 specific lessons to review. Click-to-load — never auto-fetched, so the cost is transparent.
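The recommender's core logic fits in a dozen lines: require enough quiz history, rank topics by average score, and map the weakest ones to lessons. This is a sketch of the idea, not our production query; the function name and data shapes are illustrative.

```python
from collections import defaultdict

def recommend_lessons(quiz_results, lessons_by_topic, min_quizzes=5, k=3):
    """quiz_results: list of (topic, score_pct) per attempted quiz.
    lessons_by_topic: topic -> lesson id to suggest for review."""
    if len(quiz_results) < min_quizzes:
        return []  # not enough signal yet: recommend nothing rather than guess
    scores = defaultdict(list)
    for topic, score in quiz_results:
        scores[topic].append(score)
    # Rank topics weakest-first by average score.
    ranked = sorted(scores, key=lambda t: sum(scores[t]) / len(scores[t]))
    recs = [lessons_by_topic[t] for t in ranked if t in lessons_by_topic]
    return recs[:k]
```

The empty-list return for thin history is deliberate: below the quiz threshold the averages are noise, so the honest answer is "no recommendation yet".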
5. AI Announcement Drafter
Type a one-line brief — "remind parents about tomorrow's holiday" — AI drafts a circular, teacher edits, then it goes through the regular Communication engine. Same templates, same consent gates, same audit log.
What we said no to
"AI parent chatbot"
A bot that answers parents' questions about their child. Too risky. The questions parents ask are about their kid — the bot needs perfect data isolation, perfect retrieval, perfect refusal patterns. The cost of one wrong answer (telling parent A about parent B's child) outweighs the support-team time saved.
"AI counsellor for student behavioural issues"
Detecting students in distress from their writing. Two reasons we said no: the false-positive rate is genuinely high, and the consequences of a false positive (flagging a student wrongly) are severe. This needs a human in the loop from start to finish, not a model.
"AI fee-collection assistant"
Predicting which parents will be late on fees and auto-messaging them. We considered it. The problem: any "prediction" of payment behaviour skirts the edge of credit-scoring, which schools shouldn't be doing.
"AI exam paper generator from past papers"
Generating original exam papers by scraping past board papers. Copyright concerns + the AI would just regurgitate the patterns the board has already moved on from. Quiz generation from your own lessons is fine; auto-generation of high-stakes exams is not.
The four guardrails
Every AI feature we ship has these four:
- Per-feature kill switch. Master EnableAiTutor toggle plus a per-feature row. Fail-CLOSED: if a toggle is missing or unreadable, the feature is off.
- Daily + monthly quota. Pre-flight check before every call. On deny, we don't make the HTTP call.
- Full audit log. Every call: who, when, feature, status, tokens in/out, model used, cost.
- Trust-gate review. AI suggests, human decides. No AI feature commits an action without human approval.
That's the lens we run new feature ideas through. If it fails any of those four, we don't ship.
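Three of the four guardrails meet in one pre-flight function. The sketch below shows the shape, assuming hypothetical names (`preflight`, `call_model`, a `cfg` dict standing in for the EnableAiTutor toggle and per-feature rows): missing config counts as off, quota is checked before any HTTP call, and even denied attempts land in the audit log.

```python
import datetime

class AiGateDenied(Exception):
    pass

def preflight(cfg, usage, feature):
    """Run every guardrail check before any model HTTP call is made."""
    # 1. Kill switch: master toggle AND per-feature row must both be on.
    #    Missing keys count as off, so the gate fails CLOSED.
    if not cfg.get("enable_ai_tutor", False):
        raise AiGateDenied("master toggle off")
    if not cfg.get("features", {}).get(feature, False):
        raise AiGateDenied(f"feature '{feature}' disabled")
    # 2. Quota: deny up front, never after the fact.
    if usage["daily"] >= cfg["daily_quota"]:
        raise AiGateDenied("daily quota exhausted")
    if usage["monthly"] >= cfg["monthly_quota"]:
        raise AiGateDenied("monthly quota exhausted")

def call_model(cfg, usage, audit_log, feature, user, send):
    now = lambda: datetime.datetime.now(datetime.timezone.utc).isoformat()
    try:
        preflight(cfg, usage, feature)
    except AiGateDenied as deny:
        # 3. Audit log records denied attempts too: who, when, feature, status.
        audit_log.append({"user": user, "feature": feature,
                          "status": f"denied: {deny}", "at": now()})
        raise
    result = send()  # the ONLY place an HTTP call could happen
    usage["daily"] += 1
    usage["monthly"] += 1
    audit_log.append({"user": user, "feature": feature, "status": "ok",
                      "tokens_in": result.get("tokens_in"),
                      "tokens_out": result.get("tokens_out"), "at": now()})
    return result
```

The fourth guardrail, trust-gate review, lives downstream of this function: whatever `send` returns is still a suggestion until a human approves it.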
The takeaway: AI in school ERP is real — but only in the spots where the cost of being slightly wrong is low and the time saved is meaningful. The rest is marketing. Request a demo if you want to see all five running on a sandbox tenant.