
End-to-End Architecture

GrayPass verifies identity from how users interact — reaction times, keystroke cadence, and optional gaze — without storing raw biometrics in plaintext.

1) Client Runtime

  • Signals: Reaction Time (RT), Keystroke Intervals (KS), optional gaze with calibration & blink-to-click.
  • Integrity: focus checks, input duration, coarse entropy; one-time nonce must be echoed.
  • Privacy: defaults to engineered features; opt-in encrypted raw series for training.

2) Feature Engineering (15-D)

  • RT & KS: mean, std, median, p10, p90, count (each)
  • Gaze: sdX, sdY, meanVel (zeros if disabled)
  • Quantization: deterministic rounding for stability across sessions
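
As a sketch, the 15-D vector above might be assembled as follows; the quantization step size, the gaze velocity formula, and the function names are illustrative assumptions, not the production definitions:

```python
import numpy as np

def summarize(series_ms: np.ndarray) -> list:
    """Six summary stats for one timing series (RT or KS)."""
    if series_ms.size == 0:
        return [0.0] * 6
    return [float(np.mean(series_ms)), float(np.std(series_ms)),
            float(np.median(series_ms)), float(np.percentile(series_ms, 10)),
            float(np.percentile(series_ms, 90)), float(series_ms.size)]

def feature_vector(rt_ms, ks_ms, gaze_xy=None, step=1.0):
    """15-D vector: 6 RT stats + 6 KS stats + 3 gaze stats, with
    deterministic rounding to a fixed grid so repeat sessions agree."""
    gaze = [0.0, 0.0, 0.0]  # zeros when gaze is disabled
    if gaze_xy is not None:
        pts = np.asarray(gaze_xy, dtype=float)
        vel = np.linalg.norm(np.diff(pts, axis=0), axis=1)  # per-sample speed
        gaze = [float(np.std(pts[:, 0])), float(np.std(pts[:, 1])),
                float(np.mean(vel))]
    raw = (summarize(np.asarray(rt_ms, float))
           + summarize(np.asarray(ks_ms, float)) + gaze)
    return [round(x / step) * step for x in raw]  # deterministic quantization
```

A coarser `step` makes the vector more stable across sessions at the cost of discrimination; the value used in production is a tuning decision.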

3) Enrolment

  • Compute salted brainprint (one-way hash) from the quantized vector + per-user salt.
  • Encrypt the unsalted 15-D vector at rest.
  • Record telemetry (session, device, screen, language, focus, duration, SDK, baseline confidence).
  • Optionally store encrypted raw series (consent) for model improvement.
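
The salted brainprint can be sketched with stdlib primitives: an HMAC over a canonical encoding of the quantized vector. The JSON encoding and 16-byte salt here are illustrative choices, not the production scheme:

```python
import hashlib, hmac, json, os

def brainprint(quantized_vec, salt: bytes) -> str:
    """One-way salted hash of the quantized 15-D vector; the canonical
    JSON encoding keeps the digest stable across sessions."""
    payload = json.dumps(quantized_vec, separators=(",", ":")).encode()
    return hmac.new(salt, payload, hashlib.sha256).hexdigest()

# Enrolment: draw a per-user salt; store the salt and hash, not the vector.
salt = os.urandom(16)
enrolled = brainprint([310.0] * 15, salt)

# Fast-lane auth: recompute on the fresh vector, compare in constant time.
assert hmac.compare_digest(enrolled, brainprint([310.0] * 15, salt))
```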

4) Authentication (Multi-Lane)

  1. Fast Hash: recompute salted hash; exact match → accept.
  2. Fuzzy Timing: decrypt the enrolment vector; compute the mean absolute ms-difference; map it through a logistic to a confidence; pass if the difference is under tolerance.
  3. Embedding Similarity: 15-D → 64-D L2 embedding; cosine similarity → distance → isotonic regression → calibrated probability.

Decision: accept on hash; else require fuzzy pass and probability ≥ threshold (tuned for FAR/FRR & EER).
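
A minimal sketch of this decision logic, assuming similarity-style inputs; the tolerance, probability threshold, and logistic parameters below are illustrative placeholders for the FAR/FRR-tuned production values:

```python
import hmac, math

def fuzzy_confidence(enrolled, query, k=0.15, midpoint=25.0):
    """Lane 2: mean absolute ms-difference mapped through a logistic so
    smaller drift gives higher confidence (k and midpoint are illustrative)."""
    mad = sum(abs(a - b) for a, b in zip(enrolled, query)) / len(enrolled)
    return mad, 1.0 / (1.0 + math.exp(k * (mad - midpoint)))

def decide(stored_hash, query_hash, enrolled_vec, query_vec,
           embed_prob, tol_ms=40.0, p_min=0.85):
    """Accept on exact hash; otherwise require BOTH the fuzzy lane to pass
    and the calibrated embedding probability to clear the threshold."""
    if hmac.compare_digest(stored_hash, query_hash):
        return True, "hash"
    mad, conf = fuzzy_confidence(enrolled_vec, query_vec)
    if mad <= tol_ms and embed_prob >= p_min:
        return True, "fuzzy+embedding"
    return False, "reject"
```

The hash lane short-circuits before any decryption, which is what makes the common case fast.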

5) Security & Privacy

  • Encryption at rest for vectors & (opt-in) raw series.
  • Salted one-way brainprints; per-request nonces & rate limits.
  • Admin-guarded model lifecycle & exports; pseudonymized analysis packs.

6) Telemetry

  • Events: session-level info, including confidence, for reliability & abuse defense.
  • Metrics: counters for aggregates & drift.

7) Model Operations

  • Build pairs by user; train a contrastive model on cosine distance for the 64-D embedding.
  • Fit isotonic regression distance→probability; evaluate with DET/EER; retune thresholds.
  • Hot-swap deploy (upload + reload) without downtime.
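
The distance-to-probability calibration can be sketched with a minimal pool-adjacent-violators fit; this is a stdlib-only illustration, and a production system would more likely use a library implementation such as scikit-learn's `IsotonicRegression`:

```python
import bisect

def fit_isotonic_decreasing(distances, labels):
    """PAV fit of a non-increasing map from distance to P(genuine):
    run increasing PAV on the label sequence reversed by distance."""
    order = sorted(zip(distances, labels))
    xs = [x for x, _ in order]
    vals = [float(y) for _, y in order][::-1]
    blocks = []  # [mean, weight] pairs; merge while monotonicity is violated
    for v in vals:
        blocks.append([v, 1.0])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2 = blocks.pop()
            m, w = blocks[-1]
            blocks[-1] = [(m * w + v2 * w2) / (w + w2), w + w2]
    fitted = [b[0] for b in blocks for _ in range(int(b[1]))][::-1]
    return xs, fitted  # non-increasing probabilities over ascending distances

def calibrated_prob(distance, xs, fitted):
    """Stepwise lookup at the largest breakpoint <= distance."""
    i = bisect.bisect_right(xs, distance) - 1
    return fitted[max(0, min(i, len(fitted) - 1))]
```

Labels here are 1 for genuine pairs and 0 for impostor pairs; the fit guarantees that larger embedding distances never map to higher probabilities.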

8) Why It Works

Combining a deterministic, privacy-preserving salted brainprint with a calibrated learned embedding yields fast, tolerant, and discriminative authentication — while keeping users in control of their data.

Beyond the core loop

What keeps GrayPass ahead

Baseline telemetry, replay-hardened schedules, and cancelable templates sit beside the runtime to make the architecture feel like a living system—not just a static diagram.

Baseline Control Stack

Deterministic evaluation you can defend

GrayPass keeps a continuous, encrypted replay loop that produces DET/EER curves, latency histograms, and per-session decision notes. Every anonymized record carries timing, distance, probability, and human-readable reason codes so we can audit the system without revealing private inputs.

  • Privileged operators can request curated evaluation snapshots for browsers, cohorts, or modes.
  • Client instrumentation reports runtime, input pace, and focus quality for both enrolment and auth attempts.
  • A lab dashboard publishes baseline curves for desktop Chrome & Safari so investors can see the operating point in seconds.
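
For reference, the EER those curves report can be computed from genuine and impostor score sets with a simple threshold sweep. This sketch assumes similarity-style scores (higher means more likely genuine):

```python
def eer(genuine_scores, impostor_scores):
    """Threshold sweep: EER is where the false-accept rate (impostors at or
    above the threshold) meets the false-reject rate (genuines below it)."""
    best_gap, best_rate = float("inf"), 1.0
    for t in sorted(set(genuine_scores) | set(impostor_scores)):
        far = sum(s >= t for s in impostor_scores) / len(impostor_scores)
        frr = sum(s < t for s in genuine_scores) / len(genuine_scores)
        if abs(far - frr) < best_gap:
            best_gap, best_rate = abs(far - frr), (far + frr) / 2
    return best_rate
```
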

Replay-Hard Challenge Response

Server-driven schedules beat macros & RMM

Before a sensitive attempt, GrayPass issues a signed schedule—think shuffled Stroop cues with jittered delays and cursor targets. The client renders exactly what the server dictates, streams fine-grained timestamps plus tab visibility, and the server scores adherence, variance, and micro-kinematics. If something smells scripted, we attach forensic reason codes to the denial.

  • Challenges expire quickly and must be executed in-order; no challenge, no auth.
  • Adherence scores sit beside the behavioural vector to separate humans from playback bots or remote takeover tools.
  • Our internal PAD harness replays automation and RMM scenarios daily to prove we still exceed a 95% catch rate at FAR ≤0.5%.
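
A signed-schedule exchange might look like the following sketch. The cue names, jitter range, and TTL are illustrative, and a real deployment would use managed signing keys rather than an in-process secret:

```python
import hashlib, hmac, json, os, secrets, time

SERVER_KEY = os.urandom(32)  # illustrative; production uses managed keys

def issue_schedule(user_id: str, ttl_s: float = 30.0) -> dict:
    """Server side: a shuffled cue list with jittered delays, signed so the
    client cannot alter order, timing, or expiry."""
    cues = [{"cue": c, "delay_ms": 400 + secrets.randbelow(600)}
            for c in ("RED", "BLUE", "GREEN", "YELLOW")]
    secrets.SystemRandom().shuffle(cues)
    body = {"user": user_id, "nonce": secrets.token_hex(8),
            "expires": time.time() + ttl_s, "cues": cues}
    payload = json.dumps(body, sort_keys=True).encode()
    sig = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verify_schedule(sched: dict) -> bool:
    """Server side, on the response: check signature and expiry before
    scoring adherence; tampered or expired schedules never reach scoring."""
    payload = json.dumps(sched["body"], sort_keys=True).encode()
    expected = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sched["sig"], expected)
            and time.time() < sched["body"]["expires"])
```

Because the nonce, order, and delays are server-chosen per attempt, a recorded session or macro replay fails verification rather than scoring.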

Cancelable Templates

Rotate secrets without touching the raw behaviour

Feature vectors are projected through per-user randomness, snapped to a deterministic grid, and wrapped in helper data. We store only the helper and a template hash—so if anything leaks, a rotation just swaps the seed and template reference while the brainprint stays private.

  • Helper data sits in rotation-friendly records with explicit activation windows.
  • Matcher compares queries to the active helper rather than raw vectors, so database spills are useless.
  • Rotation drills take minutes: issue a new seed, update the helper, and keep FRR within ±0.2% of the baseline.
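
The projection-and-snap core of a cancelable template can be sketched as below. Helper-data generation and activation windows are omitted, and the projection dimension and grid size are illustrative assumptions:

```python
import hashlib
import numpy as np

def cancelable_template(vec, seed: int, dim: int = 32, grid: float = 0.5):
    """Seeded random projection + grid snap + hash: rotating the seed yields
    a fresh, unlinkable template from the same behaviour vector."""
    rng = np.random.default_rng(seed)            # per-user randomness
    proj = rng.standard_normal((dim, len(vec)))
    code = np.round(proj @ np.asarray(vec, float) / grid) * grid
    code = code + 0.0                            # normalize -0.0 for stable bytes
    return hashlib.sha256(code.tobytes()).hexdigest()

# Same seed and behaviour reproduce the template; a new seed rotates it.
t_active = cancelable_template([310.0] * 15, seed=42)
assert t_active == cancelable_template([310.0] * 15, seed=42)
assert t_active != cancelable_template([310.0] * 15, seed=43)
```

Only the seed reference and template hash need storage, so a leak is answered by discarding the seed and re-deriving, while the underlying behaviour never changes.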