Okay, so I decided to build an AI thingy to count people coming into a store. A friend runs a small shop, and they were curious about foot traffic but didn’t want to shell out big bucks for those fancy commercial counters. I figured, how hard could it be? Let’s give it a shot.

Getting Started – The Plan

First off, I needed a plan. I thought, okay, I need eyes, a brain, and some way to connect them. The eyes would be a camera, pointed at the entrance. The brain would be some sort of small computer running the AI magic. And the connection? Well, wires or Wi-Fi.

I looked around online. Found tons of complex stuff, but I wanted something simple. Object detection seemed like the way to go – teach the computer to spot people. Then just count them as they cross a line near the door.

Gathering the Parts

I didn’t want to spend much. I already had an old Raspberry Pi 4 sitting in a drawer doing nothing. Perfect! That’ll be the brain. For the eyes, I just ordered a cheap USB webcam online. Nothing fancy, just needed a clear enough picture of the doorway.

  • Grabbed the Raspberry Pi 4.
  • Bought a basic USB webcam.
  • Found an SD card and installed the standard Raspberry Pi OS.
  • Needed a power supply for the Pi; the camera just plugs into a USB port, no extra power needed. Simple.

Setting Up the Hardware

This part was pretty straightforward. I mounted the webcam up high, looking down at the entrance area. Tried to get a view that covers the whole doorway. Plugged the webcam into the Pi’s USB port. Powered up the Pi and connected it to my network so I could work on it easily from my main computer.

I tested the camera first, just making sure I could get a video stream using basic Linux commands or a simple Python script with OpenCV. Yep, got a picture. Good enough.
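
If you want to do the same sanity check, a bare-bones version looks something like this (device index 0 is the usual default for a single USB webcam, but yours may differ):

```python
# Quick sanity check: can we open the webcam and grab a frame?
import cv2

cap = cv2.VideoCapture(0)  # 0 = first video device; adjust if you have more cameras
ok, frame = cap.read()
if ok:
    print(f"Got a frame: {frame.shape[1]}x{frame.shape[0]} pixels")
    cv2.imwrite("test_frame.jpg", frame)  # save it so you can eyeball the doorway view
else:
    print("Couldn't read from the camera - check the USB connection or device index")
cap.release()
```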

The AI Brain – Software Time

Now for the tricky bit – the AI. I’d heard about YOLO (You Only Look Once) for object detection. Seemed popular and there were versions that could run on a Pi, maybe a bit slowly, but worth a try. I decided to go with YOLOv5, specifically a smaller version like YOLOv5s, hoping the Pi could handle it.

Getting the software installed took some fiddling. Had to install Python (if not already there), then OpenCV (which handles video), and PyTorch (the framework YOLOv5 uses). Finding the right versions that worked together on the Pi took a bit of trial and error. Ran into a few dependency issues, spent some time searching forums, but eventually got everything installed. You know how it goes.

Writing the Counting Code

Okay, software installed, time to write the actual counting script in Python.

Step 1: Grab Video Frames. Used OpenCV to connect to the webcam and read video frames one by one.

Step 2: Detect People. For each frame, I fed it into the loaded YOLOv5 model. The model would return a list of detected objects, including their location (bounding boxes) and what they are (labels like ‘person’, ‘car’, ‘dog’). I filtered this list just to get the ‘person’ detections.
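
Roughly, steps 1 and 2 together look like the sketch below. I'm loading YOLOv5s through torch.hub here, which is one common way to do it; the confidence value is just a starting point, not a magic number.

```python
# Minimal frame-grab + person-detection loop (a sketch, not the full script)
import cv2
import torch

# Downloads the YOLOv5s weights on first run, so it needs internet access once
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
model.conf = 0.4  # confidence threshold - tweaked later during testing

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame[:, :, ::-1])   # YOLOv5 expects RGB, OpenCV gives BGR
    detections = results.xyxy[0]         # rows of [x1, y1, x2, y2, confidence, class]
    people = [d[:4].tolist() for d in detections
              if results.names[int(d[5])] == 'person']
    # 'people' now holds one bounding box per detected person in this frame
```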

Step 3: Draw a Line. I decided where the ‘crossing line’ should be on the camera view. Just drew a virtual horizontal line across the middle of the doorway image in my code.

Step 4: Track and Count. This was the core logic. Just detecting people isn’t enough; you need to know if they crossed the line. I did something simple (there’s a rough sketch of the logic after this list):

  • Calculated the center point of each detected person’s bounding box.
  • In the next frame, I’d try to match people based on how close their center points were. Very basic tracking.
  • I checked if a person’s center point moved from one side of the virtual line to the other (say, from top-to-bottom for entering).
  • To avoid counting someone multiple times if they linger near the line, I added a check. Once a person crossed and was counted, I marked them (using their rough track ID) so they wouldn’t be counted again immediately.
  • Kept a running total in a variable.
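
Here is that logic stripped down to a sketch. The numbers (line position, match distance) are placeholders to show the idea, not the values from my actual setup:

```python
# Very basic centre-point tracking and line-crossing count (sketch).
# 'people' is the current frame's list of person boxes (x1, y1, x2, y2).
import math

LINE_Y = 240        # row of the virtual counting line (placeholder for a 640x480 frame)
MATCH_DIST = 80     # max pixel distance to treat two detections as the same person

tracks = {}         # track_id -> {'centre': (x, y), 'counted': bool}
next_id = 0
entry_count = 0

def update_tracks(people):
    """Match this frame's detections to last frame's tracks and count line crossings."""
    global next_id, entry_count
    new_tracks = {}
    for (x1, y1, x2, y2) in people:
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        # naive nearest-neighbour match against last frame's centre points
        best_id, best_dist = None, MATCH_DIST
        for tid, t in tracks.items():
            d = math.dist((cx, cy), t['centre'])
            if d < best_dist:
                best_id, best_dist = tid, d
        if best_id is None:                      # nobody close enough: treat as a new person
            best_id, prev_cy, counted = next_id, cy, False
            next_id += 1
        else:
            prev_cy = tracks[best_id]['centre'][1]
            counted = tracks[best_id]['counted']
        # count once when the centre moves from above the line to below it (entering)
        if not counted and prev_cy < LINE_Y <= cy:
            entry_count += 1
            counted = True
        new_tracks[best_id] = {'centre': (cx, cy), 'counted': counted}
    tracks.clear()
    tracks.update(new_tracks)
```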

I also made the script draw the bounding boxes, the center points, and the line on the video feed it was processing. This helped a LOT with debugging, seeing what the AI was actually doing.
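
The overlay drawing itself is just a handful of OpenCV calls, something along these lines:

```python
# Draw the debug overlays: counting line, bounding boxes, centre points, running total
import cv2

def draw_debug(frame, people, line_y, count):
    h, w = frame.shape[:2]
    cv2.line(frame, (0, line_y), (w, line_y), (0, 0, 255), 2)      # the virtual line
    for (x1, y1, x2, y2) in people:
        cx, cy = int((x1 + x2) / 2), int((y1 + y2) / 2)
        cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
        cv2.circle(frame, (cx, cy), 4, (255, 0, 0), -1)            # centre point
    cv2.putText(frame, f"Count: {count}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)
    cv2.imshow("foot traffic", frame)
    cv2.waitKey(1)                                                 # lets imshow refresh
```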

Testing and Fixing

Fired it up. And… well, it kind of worked! It was detecting people, drawing boxes. The count was going up. But it wasn’t perfect.

Sometimes, it missed people walking fast. Sometimes, if two people walked close together, it saw them as one blob. Lighting changes also messed with detection sometimes. And the Raspberry Pi was definitely working hard; the video feed wasn’t super smooth.

So, I tweaked things. Adjusted the ‘confidence threshold’ for YOLO – basically, telling it to be more or less sure before calling something a person. Played around with the position of the virtual line. Added some simple rules, like ignoring detections that were too small or only appeared for a single frame.
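
Concretely, those tweaks were along these lines (the numbers here are illustrative, not my final settings):

```python
# Illustrative tweaks - the actual values came from trial and error
model.conf = 0.5            # YOLOv5 hub models expose the confidence threshold directly

MIN_BOX_AREA = 2000         # ignore tiny detections (in pixels squared), likely noise

def keep_detection(x1, y1, x2, y2):
    """Drop boxes that are too small to plausibly be a person near the door."""
    return (x2 - x1) * (y2 - y1) >= MIN_BOX_AREA
```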

I also realized tracking needed to be better. My simple center-point matching was okay but got confused easily. For this project, I decided “good enough” was okay for now. It wasn’t mission-critical, just needed a rough idea.

Making It Run

Once I was reasonably happy, I set the Python script to run automatically when the Raspberry Pi started up. Just used a simple systemd service file for that. I made the script save the count to a plain text file every minute or so. That way, even if the script crashed or the Pi restarted, the count wasn’t totally lost.
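
The saving bit is nothing fancy; something like this, called from the main loop (the path and interval are just example values, not anything special):

```python
# Periodically write the running total to a text file so a crash or reboot
# doesn't lose the whole day's count. Path and interval are example values.
import time

SAVE_PATH = "/home/pi/footfall_count.txt"   # example location
SAVE_INTERVAL = 60                          # seconds between writes
last_save = 0.0

def maybe_save_count(count):
    global last_save
    now = time.time()
    if now - last_save >= SAVE_INTERVAL:
        with open(SAVE_PATH, "w") as f:
            f.write(f"{time.strftime('%Y-%m-%d %H:%M:%S')} {count}\n")
        last_save = now
```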

Final Thoughts

So, there it is. My DIY AI foot traffic counter. It’s not as polished as the expensive commercial ones, for sure. The accuracy isn’t 100%, especially during busy times or tricky lighting. The tracking is basic. But hey, it cost next to nothing using hardware I mostly had already, and it gives my friend a ballpark figure of daily visitors.

It was a fun project. Learned a bit more about wrangling AI models on small devices, the joys of OpenCV, and the limitations of simple tracking. Maybe later I’ll try a better tracking algorithm or figure out how to distinguish between people entering and leaving. But for now, it does the job it was meant to do. Pretty cool what you can rig up yourself these days.