CCTV project using Viseron NVR, with license plate recognition and object detection powered by CodeProject.AI

executeid/viseron-CodeProject.AI


CodeProject.AI ALPR Acceleration with OpenVINO Intel iGPU for Viseron NVR

Overview

This project aims to accelerate the ALPR (Automatic License Plate Recognition) module of CodeProject.AI by leveraging an Intel integrated GPU (iGPU) via OpenVINO, instead of relying on the CPU on systems where no NVIDIA (CUDA) GPU is available.

Tested Environment:

  • Privileged Proxmox LXC container with Intel iGPU passthrough
  • 400GB disk allocated as NVR storage

Motivation

By default, CodeProject.AI uses the CPU for ALPR. Attempts to direct PaddlePaddle (the AI framework used by ALPR) to OpenVINO or GPU devices resulted in errors, as PaddlePaddle does not natively recognize "OpenVINO" as a device and lacks CUDA support for NVIDIA GPUs. This project overcomes these limitations, enabling efficient ALPR inference on Intel iGPUs.

Key Challenges

  • PaddlePaddle does not recognize "OpenVINO" as a valid device.
  • The installed PaddlePaddle build is CPU-only, with no CUDA support.
  • ALPR module is installed via the CodeProject.AI UI, not directly in the Docker image.
  • Need for persistent, automated code modifications after module installation.

Solution Summary

1. Custom Dockerfile

  • Installs Intel OpenCL runtime (intel-opencl-icd, intel-media-va-driver-non-free) and other OpenVINO dependencies on Ubuntu.
  • Installs CPU-only PaddlePaddle to avoid CUDA errors.
  • Installs the openvino Python package.
  • Adds the codeprojectai user to the video group for proper iGPU device permissions.
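Taken together, the Dockerfile additions above might look like the following sketch. The base image tag and the service user name are assumptions; adapt them to the actual CodeProject.AI image you build from.

```dockerfile
# Sketch only: base image tag and user name are assumptions.
FROM codeproject/ai-server:latest

USER root

# Intel OpenCL runtime and media driver so OpenVINO can reach the iGPU
RUN apt-get update && apt-get install -y --no-install-recommends \
        intel-opencl-icd \
        intel-media-va-driver-non-free \
    && rm -rf /var/lib/apt/lists/*

# CPU-only PaddlePaddle (avoids CUDA errors) plus the OpenVINO runtime
RUN pip install --no-cache-dir paddlepaddle openvino

# Give the service user permission to open /dev/dri device nodes
RUN usermod -aG video codeprojectai
```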

2. Docker Compose iGPU Passthrough

  • docker-compose.yaml is modified to pass through the entire /dev/dri directory from the host to the container, allowing the container to access the Intel iGPU.

3. ALPR_adapter.py Modification for OpenVINO

  • Comments out CUDA checks that prevent PaddlePaddle from using the GPU.
  • Forces self.can_use_GPU = True and self.opts.use_gpu = True.
  • Sets self.inference_library = "OpenVINO".
  • Changes paddle.set_device("gpu") to paddle.set_device("cpu") (OpenVINO will intercept and optimize inference workloads for the iGPU).
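Concretely, the patched section of ALPR_adapter.py might look like the excerpt below. This is illustrative only: surrounding code is elided, and the exact lines depend on the ALPR module version.

```python
# Excerpt from the patched ALPR_adapter.py (illustrative; context elided).

# Original CUDA capability check, commented out so the adapter does not
# fall back to CPU-only mode when no NVIDIA GPU is present:
# if not paddle.is_compiled_with_cuda():
#     self.can_use_GPU = False

# Report GPU availability and the inference library in use
self.can_use_GPU = True
self.opts.use_gpu = True
self.inference_library = "OpenVINO"

# PaddlePaddle itself stays on the CPU device; OpenVINO intercepts
# and offloads the inference workload to the Intel iGPU.
paddle.set_device("cpu")  # was: paddle.set_device("gpu")
```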

4. Robust Automation

  • A modified ALPR_adapter.py.modified is created with all necessary code changes.
  • A custom entrypoint.sh script is used as the Docker entrypoint:
    • Runs the original CodeProject.AI entrypoint in the background.
    • Waits for ALPR_adapter.py to appear in the volume after ALPR module installation.
    • Automatically copies ALPR_adapter.py.modified to overwrite the original file.
  • Ensures all changes are persistent and require no manual intervention.
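A minimal version of such an entrypoint script could look like this sketch. The two file paths and the original entrypoint path are assumptions; adjust them to where the ALPR module and the stock entrypoint actually live in your image and volume.

```shell
#!/bin/bash
# Sketch of entrypoint.sh; all three paths below are assumptions.
set -e

ALPR_FILE="/app/modules/ALPR/ALPR_adapter.py"
PATCHED_FILE="/patches/ALPR_adapter.py.modified"

# Run the original CodeProject.AI entrypoint in the background
/app/server/start.sh &
SERVER_PID=$!

# Wait for the ALPR module to be installed through the UI,
# then overwrite its adapter with the patched version.
(
    while [ ! -f "$ALPR_FILE" ]; do
        sleep 5
    done
    cp "$PATCHED_FILE" "$ALPR_FILE"
) &

# Keep the container alive as long as the server runs
wait "$SERVER_PID"
```

Because the copy runs every time the container starts, the patch survives container recreation as well as module reinstalls.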

Results

  • All PaddlePaddle/CUDA errors are resolved.
  • iGPU usage is verified: While GPU engine usage may show 0% in intel_gpu_top, spikes in "IMC Reads and Writes" during ALPR processing confirm active iGPU acceleration.
  • ALPR runs on iGPU: The main goal—persistent, automated ALPR acceleration on Intel iGPU via OpenVINO—is achieved.

YOLOv5 Module Status

  • YOLOv5 continues to run on the CPU: the YOLOv5 module has no native OpenVINO integration, and PyTorch models are not straightforward to run through OpenVINO.
  • For YOLOv5, you can either accept CPU performance or explore alternative object detection modules in the CodeProject.AI ecosystem that are optimized for OpenVINO.

Deployment Tutorial

Follow these steps to deploy this solution and enable ALPR acceleration on your Intel iGPU:

1. Build the Docker Image

```shell
docker build -t codeprojectai-openvino .
```

2. Configure Docker Compose

Ensure your docker-compose.yaml includes the following volume and device passthrough:

```yaml
services:
  codeprojectai:
    image: codeprojectai-openvino
    volumes:
      - /dev/dri:/dev/dri
      # ...other volumes as needed...
    # ...other settings...
```

3. Start the Service

```shell
docker-compose up -d
```

4. Install the ALPR Module

  • Access the CodeProject.AI UI in your browser.
  • Install the ALPR module via the UI.
  • The custom entrypoint will automatically patch ALPR_adapter.py for OpenVINO/iGPU support.

5. Verify iGPU Acceleration

  • Run ALPR inference (e.g., via the CodeProject.AI UI or API).
  • Monitor iGPU activity using intel_gpu_top.
  • Look for spikes in "IMC Reads and Writes" during ALPR processing to confirm iGPU usage.
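For example, you can check device passthrough and watch the iGPU while triggering an inference. The container name, image filename, API port, and endpoint below are assumptions based on CodeProject.AI defaults; verify them against your installation.

```shell
# Confirm the render device is visible inside the container
docker exec codeprojectai ls -l /dev/dri

# Watch engine and memory-controller (IMC) activity on the host
sudo intel_gpu_top

# In another terminal, trigger an ALPR inference via the API
curl -X POST -F "image=@plate.jpg" http://localhost:32168/v1/image/alpr
```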

Integration with Viseron (NVR)

This solution is compatible with Viseron, an open-source NVR (Network Video Recorder) that supports CodeProject.AI as an ALPR backend. You can configure Viseron to use your deployed CodeProject.AI instance for license plate recognition, enabling seamless video surveillance and automation workflows.
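As a starting point, a Viseron configuration pointing at this instance might look like the sketch below. The option names, host, port, and camera identifier are all assumptions; check the exact schema against the current Viseron documentation before use.

```yaml
# Illustrative Viseron config fragment; verify option names
# against the Viseron documentation.
codeprojectai:
  host: 192.168.1.10   # host running the CodeProject.AI container
  port: 32168          # CodeProject.AI default API port
  license_plate_recognition:
    cameras:
      driveway:        # hypothetical camera identifier
        fps: 1
```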

For more details, refer to the Viseron documentation on integrating with CodeProject.AI.

Contributing

Contributions are welcome! Please open issues or pull requests for improvements, bug fixes, or new features.

License

MIT License


Made with ❤️ by Execute — a semester 4 holiday side project.
