Ankor

// EdTech / Assessments

AI proctoring that catches cheats without punishing honest candidates

Human-like proctoring, engineered for real exam conditions

// Client: XAM
// Timeline: Multi-phase product build
// Services: AI Product Engineering, LLM Integration, AI Agent Development

// Outcome

80+ orgs
Platform footprint

Schools, colleges, hiring teams and test-prep networks running assessments on XAM

// Challenge

The problem, in plain language.

Online assessments live or die on trust. XAM needed proctoring that could flag genuine cheating in real time across thousands of concurrent sessions — without the false-positive tax that makes honest candidates feel surveilled. The team also wanted AI to compress the time educators spent authoring and grading, not just watching.

Approach

We started by separating two problems the team had been solving as one: detecting cheating, and earning the trust of the humans who act on those detections. A model that screams “suspicious” ten times per exam is worse than no model at all — it trains reviewers to ignore it. So we framed proctoring as an evidence pipeline, not a verdict engine. Every signal — gaze drift, off-screen audio, multi-face presence, window focus loss — is scored, timestamped, and bundled into a reviewable timeline, with an LLM summarising the session in language a non-technical invigilator can act on in under thirty seconds.
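The evidence-pipeline framing can be sketched in a few lines. This is an illustrative model only, not XAM's implementation: every name, threshold, and score here is a hypothetical stand-in for how scored, timestamped signals might accumulate on a reviewable timeline instead of triggering verdicts.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: each proctoring signal becomes a scored, timestamped
# event on a session timeline. Reviewers get ranked, explained events rather
# than raw alerts or verdicts.

@dataclass
class Event:
    signal: str   # e.g. "gaze_drift", "multi_face", "focus_loss"
    score: float  # confidence in [0, 1]; evidence strength, not a verdict
    at: float     # session-relative timestamp in seconds

@dataclass
class SessionTimeline:
    events: list[Event] = field(default_factory=list)

    def add(self, signal: str, score: float, at: float) -> None:
        self.events.append(Event(signal, score, at))

    def ranked(self, threshold: float = 0.5) -> list[Event]:
        # Low-score events stay logged for context; only events above the
        # threshold surface, strongest evidence first.
        flagged = [e for e in self.events if e.score >= threshold]
        return sorted(flagged, key=lambda e: e.score, reverse=True)

timeline = SessionTimeline()
timeline.add("gaze_drift", 0.31, at=120.4)  # below threshold: logged, not flagged
timeline.add("multi_face", 0.92, at=305.0)  # strong evidence: surfaces first
timeline.add("focus_loss", 0.64, at=410.7)

for e in timeline.ranked():
    print(f"{e.at:>7.1f}s  {e.signal:<12}  {e.score:.2f}")
```

In this shape, an LLM summary is just another consumer of the timeline: it reads the ranked events and narrates them for the invigilator, while the underlying evidence stays inspectable.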

Solution

We shipped four interlocking surfaces under the XAM product umbrella:

SENSE: real-time proctoring, combining computer-vision engagement sensing with voice and speech evaluation over WebRTC.
ZAi Gen: AI-generated question sets sourced from documents, URLs, or knowledge bases, with built-in plagiarism and authenticity checks.
FLOW: adaptive difficulty that recalibrates per candidate mid-test.
INSiGHT: post-exam analytics that surface skill gaps and suggest study paths.

The stack runs lean: Python and PyTorch for the proctoring models, TensorFlow.js for the bits that execute client-side to cut latency and bandwidth, and a Firebase-backed Node layer that keeps session orchestration cheap even when thousands of exams run concurrently. LLM calls are tightly scoped to summarisation and question generation, never to grading decisions, so the audit trail stays human-reviewable end to end.
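The "LLM never grades" constraint is simple to enforce at the dispatch layer. A minimal sketch, assuming a thin allow-list gate in front of the model provider; the task names and functions here are hypothetical, not XAM's API:

```python
# Hypothetical sketch: LLM usage is gated by an explicit allow-list, so
# summarisation and question generation can call the model while grading
# stays deterministic, auditable code.

ALLOWED_LLM_TASKS = {"summarise_session", "generate_questions"}

def call_llm(task: str, payload: dict) -> str:
    if task not in ALLOWED_LLM_TASKS:
        raise PermissionError(f"LLM not permitted for task: {task}")
    # A real implementation would call the model provider here.
    return f"[{task}] stub response"

def grade(answers: list[str], answer_key: list[str]) -> int:
    # Grading is plain code: same inputs, same score, every time.
    return sum(1 for a, k in zip(answers, answer_key) if a == k)
```

The design choice this encodes: a reviewer can always reproduce a grade by hand, and any LLM output in the audit trail is explanation, never decision.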

What changed

XAM now serves more than eighty organisations across K-12, higher education, hiring, and competitive test prep, with uptime above 99.17% and a free tier that lets small institutions adopt without procurement friction. More importantly, the proctoring queue finally feels tractable: reviewers see ranked, explained events instead of raw video dumps, and honest candidates finish their exams without being asked to prove their innocence. The AI authoring tools have quietly become the headline feature for educators — what started as a proctoring play turned into a full assessment operating system.

// Gallery

Inside the build.

XAM — gallery 1
XAM — gallery 2
XAM — gallery 3
XAM — gallery 4
XAM — gallery 5

// Client voice

“The proctoring model earned the trust of our reviewers because it explains its flags. Candidates stopped feeling watched and our panels stopped drowning in review queues.”

Head of Product, EdTech platform

// Ready to ship?

Ready to ship something like this?

Short call. No deck. We will tell you honestly whether we are the right team for your problem.