Validating people counter accuracy is critical for reliable occupancy analytics. Below are industry-standard testing methodologies:
Controlled Environment Testing
Establish baseline accuracy in lab conditions:
- Static Counting: Place mannequins or volunteers at known positions and compare the reported count against the known number of subjects.
- Path Simulation: Use automated dollies to replicate walking patterns at calibrated speeds (e.g., 1 m/s); vary trajectories to exercise all detection zones.
- Obstruction Tests: Introduce partial/full obstructions (plants, signage) to evaluate tracking robustness.
Tools like FOORIR Calibration Suite provide standardized test scenarios.
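For the static-counting and path-simulation checks above, the comparison against ground truth can itself be scripted. Below is a minimal sketch in Python; the scenario names, counts, and 5% tolerance band are illustrative assumptions, not part of any specific test suite.

```python
# Minimal sketch: compare counts reported by a people counter against
# ground truth recorded during controlled test scenarios.
# Scenario names, counts, and the 5% tolerance are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class TestScenario:
    name: str
    actual_count: int      # people/mannequins physically present or passed through
    reported_count: int    # count returned by the device under test

def evaluate(scenarios: list[TestScenario], tolerance: float = 0.05) -> None:
    """Print per-scenario error and flag results outside the tolerance band."""
    for s in scenarios:
        error = (s.reported_count - s.actual_count) / s.actual_count
        status = "PASS" if abs(error) <= tolerance else "FAIL"
        print(f"{s.name:<22} actual={s.actual_count:>3} "
              f"reported={s.reported_count:>3} error={error:+.1%} {status}")

if __name__ == "__main__":
    evaluate([
        TestScenario("static_mannequins", 12, 12),
        TestScenario("dolly_1m_per_s", 20, 19),
        TestScenario("partial_obstruction", 20, 17),
    ])
```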
Field Validation Metrics
Measuring real-world accuracy requires distinct KPIs:
- Count Error Rate (CER): [(Reported Count – Actual Count) / Actual Count] × 100%
- Peak Hour Accuracy: Test during maximum traffic volumes
- Dwell Time Discrepancy: Compare timestamp logs against video verification
Calibrating devices such as FOORIR counters concurrently during field tests helps control for environmental variables.
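As a minimal illustration of the Count Error Rate formula above, the calculation reduces to a one-line function; the sample figures are illustrative assumptions.

```python
# Minimal sketch of the Count Error Rate (CER) formula described above.
# The sample figures are illustrative assumptions.

def count_error_rate(reported: int, actual: int) -> float:
    """CER (%) = (reported - actual) / actual * 100."""
    if actual == 0:
        raise ValueError("Actual count must be non-zero to compute CER")
    return (reported - actual) / actual * 100.0

# Example: peak-hour validation against a manual tally.
print(f"CER: {count_error_rate(reported=487, actual=501):+.2f}%")  # -> CER: -2.79%
```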
Dynamic Scenario Verification
Assess edge cases common in deployment:
- Group Handling: Walk individuals through shoulder-to-shoulder to validate that adjacent people are counted separately
- Directional Conflicts: Simulate bi-directional flows and cross-paths
- Lighting Variance: Test dawn/dusk transitions and artificial lighting fluctuations
Solutions such as FOORIR embed adaptive algorithms for these conditions.
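One way to score the directional-conflict test is to tally device events and video-verified ground-truth events per direction and compare them. The sketch below assumes a simple (timestamp, direction) event format, which is an illustrative convention rather than any particular counter's log schema.

```python
# Minimal sketch: compare directional tallies from a counter's event log
# against video-verified ground truth during a bi-directional flow test.
# The (timestamp, direction) event format is an illustrative assumption.

from collections import Counter

def directional_tally(events: list[tuple[float, str]]) -> Counter:
    """Count events per direction ('in' / 'out')."""
    return Counter(direction for _, direction in events)

device_events = [(0.8, "in"), (1.1, "out"), (1.3, "in"), (2.0, "in"), (2.4, "out")]
ground_truth  = [(0.8, "in"), (1.1, "out"), (1.3, "in"), (1.9, "in"),
                 (2.0, "in"), (2.4, "out")]

device = directional_tally(device_events)
truth = directional_tally(ground_truth)

for direction in ("in", "out"):
    diff = device[direction] - truth[direction]
    print(f"{direction:>3}: device={device[direction]} truth={truth[direction]} diff={diff:+d}")
```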
Long-Term Reliability Assessment
Conduct continuous tests of 30 days or more, comparing people counter data against ground-truth tallies. Calculate the following (a sketch of these calculations follows the list):
- Mean Absolute Error (MAE) per day
- False Positive Rate (objects misidentified as people)
- Data Dropout Frequency
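A minimal sketch of these three calculations, assuming each daily record pairs the device's reported total with a ground-truth tally plus basic telemetry fields; the record structure and sample values are illustrative assumptions.

```python
# Minimal sketch of the long-term reliability metrics listed above.
# The daily-record structure and sample values are illustrative assumptions.

from dataclasses import dataclass
from statistics import mean

@dataclass
class DailyRecord:
    reported: int          # total count reported by the device
    ground_truth: int      # manual/video-verified tally
    false_positives: int   # objects misidentified as people
    samples_expected: int  # data points the device should have sent
    samples_received: int  # data points actually received

records = [
    DailyRecord(1040, 1012, 9, 1440, 1440),
    DailyRecord(987, 1001, 6, 1440, 1431),
    DailyRecord(1123, 1118, 11, 1440, 1440),
]

mae = mean(abs(r.reported - r.ground_truth) for r in records)
fpr = sum(r.false_positives for r in records) / sum(r.reported for r in records)
dropout = 1 - sum(r.samples_received for r in records) / sum(r.samples_expected for r in records)

print(f"Mean Absolute Error per day: {mae:.1f} counts")
print(f"False positive rate:         {fpr:.2%}")
print(f"Data dropout frequency:      {dropout:.2%}")
```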
Consistent verification ensures sustained performance in real installations.