This project aims to accelerate the ALPR (Automatic License Plate Recognition) module of CodeProject.AI by leveraging your Intel Integrated GPU (iGPU) via OpenVINO, instead of relying on CPU or unavailable NVIDIA (CUDA) GPUs.
Tested Environment:
- Privileged Proxmox LXC container with Intel iGPU passthrough
- 400GB disk allocated as NVR storage
By default, CodeProject.AI uses the CPU for ALPR. Attempts to direct PaddlePaddle (the AI framework used by ALPR) to OpenVINO or GPU devices resulted in errors, as PaddlePaddle does not natively recognize "OpenVINO" as a device and lacks CUDA support for NVIDIA GPUs. This project overcomes these limitations, enabling efficient ALPR inference on Intel iGPUs.
Challenges:
- PaddlePaddle does not recognize "OpenVINO" as a valid device.
- The installed PaddlePaddle build is CPU-only, with no CUDA support.
- The ALPR module is installed via the CodeProject.AI UI, not directly in the Docker image.
- Code modifications must persist automatically after module installation.
Dockerfile:
- Installs the Intel OpenCL runtime (`intel-opencl-icd`, `intel-media-va-driver-non-free`) and other OpenVINO dependencies on Ubuntu.
- Installs CPU-only PaddlePaddle to avoid CUDA errors.
- Installs the `openvino` Python package.
- Adds the `codeprojectai` user to the `video` group for proper iGPU device permissions.
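The Dockerfile steps described above might look roughly like the fragment below. This is a sketch only: the base image tag, the lack of version pins, and the exact layer ordering are assumptions, not the project's actual Dockerfile.

```dockerfile
# Sketch of the described Dockerfile additions (base image and pins are assumptions).
FROM codeproject/ai-server:latest

# Intel OpenCL runtime and media driver, required for OpenVINO GPU inference
RUN apt-get update && apt-get install -y \
        intel-opencl-icd \
        intel-media-va-driver-non-free \
    && rm -rf /var/lib/apt/lists/*

# CPU-only PaddlePaddle (avoids CUDA errors) plus the OpenVINO Python package
RUN pip install --no-cache-dir paddlepaddle openvino

# Give the service user access to the /dev/dri render nodes
RUN usermod -aG video codeprojectai
```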
`docker-compose.yaml` is modified to pass through the entire `/dev/dri` directory from the host to the container, allowing the container to access the Intel iGPU.
Changes in `ALPR_adapter.py`:
- Comments out the CUDA checks that prevent PaddlePaddle from using the GPU.
- Forces `self.can_use_GPU = True` and `self.opts.use_gpu = True`.
- Sets `self.inference_library = "OpenVINO"`.
- Changes `paddle.set_device("gpu")` to `paddle.set_device("cpu")` (OpenVINO intercepts and optimizes inference workloads for the iGPU).
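The `set_device` change is the only edit whose exact before/after text is spelled out, so as an illustration it could also be applied with a one-line sed. The helper below is hypothetical; the project actually ships a complete pre-modified file instead.

```shell
# Hypothetical helper: rewrite paddle.set_device("gpu") to "cpu" in an adapter file.
redirect_paddle_to_cpu() {
    # $1: path to ALPR_adapter.py
    sed -i 's/paddle\.set_device("gpu")/paddle.set_device("cpu")/' "$1"
}
```

In this project the full `ALPR_adapter.py.modified` is copied over the original instead, which also covers the CUDA-check and GPU-flag changes that a single substitution cannot express.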
Automation:
- A modified `ALPR_adapter.py.modified` is created with all necessary code changes.
- A custom `entrypoint.sh` script is used as the Docker entrypoint:
  - Runs the original CodeProject.AI entrypoint in the background.
  - Waits for `ALPR_adapter.py` to appear in the volume after the ALPR module is installed.
  - Automatically copies `ALPR_adapter.py.modified` over the original file.
  - Ensures all changes are persistent and require no manual intervention.
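The wait-and-patch step of the entrypoint can be sketched as a small shell function. All paths in the comments are assumptions about the image layout, not the project's actual file locations.

```shell
#!/bin/sh
# Sketch of the patching step in entrypoint.sh; real paths depend on the image.

# Block until the ALPR adapter appears (i.e. the module was installed via the
# UI), then overwrite it with the OpenVINO-patched copy.
wait_and_patch() {
    target="$1"    # e.g. .../modules/ALPR/ALPR_adapter.py (assumed location)
    patched="$2"   # e.g. /ALPR_adapter.py.modified
    while [ ! -f "$target" ]; do
        sleep 5
    done
    cp "$patched" "$target"
}

# In the real entrypoint, the stock CodeProject.AI entrypoint runs first in the
# background, then the patch loop runs, then the script waits on the server:
#   /app/server/start.sh &   # assumed original entrypoint path
#   wait_and_patch /app/modules/ALPR/ALPR_adapter.py /ALPR_adapter.py.modified
#   wait
```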
Results:
- All PaddlePaddle/CUDA errors are resolved.
- iGPU usage is verified: while GPU engine usage may show 0% in `intel_gpu_top`, spikes in "IMC Reads and Writes" during ALPR processing confirm active iGPU acceleration.
- ALPR runs on the iGPU: the main goal of persistent, automated ALPR acceleration on the Intel iGPU via OpenVINO is achieved.
Limitations:
- YOLOv5 continues to run on the CPU, since there is no easy OpenVINO path for its PyTorch models and no native OpenVINO integration in the YOLOv5 module.
- For YOLOv5, you can either accept CPU performance or explore alternative object detection modules in the CodeProject.AI ecosystem that are optimized for OpenVINO.
Follow these steps to deploy this solution and enable ALPR acceleration on your Intel iGPU:
Build the image:

```shell
docker build -t codeprojectai-openvino .
```

Ensure your `docker-compose.yaml` includes the following volume and device passthrough:

```yaml
services:
  codeprojectai:
    image: codeprojectai-openvino
    volumes:
      - /dev/dri:/dev/dri
      # ...other volumes as needed...
    # ...other settings...
```

Start the stack:

```shell
docker-compose up -d
```

- Access the CodeProject.AI UI in your browser.
- Install the ALPR module via the UI.
- The custom entrypoint will automatically patch `ALPR_adapter.py` for OpenVINO/iGPU support.
- Run ALPR inference (e.g., via the CodeProject.AI UI or API).
- Monitor iGPU activity using `intel_gpu_top`.
- Look for spikes in "IMC Reads and Writes" during ALPR processing to confirm iGPU usage.
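As a quick sanity check before reaching for `intel_gpu_top`, you can verify that a DRM render node is visible inside the container at all. The helper below is a sketch; `renderD128` is the typical node name but may differ on your host.

```shell
# Succeeds if the given /dev/dri-style directory contains a DRM render node.
has_render_node() {
    ls "$1" 2>/dev/null | grep -q '^renderD'
}

# Typical use inside the container:
#   has_render_node /dev/dri && echo "iGPU visible" || echo "no render node"
```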
This solution is compatible with Viseron, an open-source NVR (Network Video Recorder) that supports CodeProject.AI as an ALPR backend. You can configure Viseron to use your deployed CodeProject.AI instance for license plate recognition, enabling seamless video surveillance and automation workflows.
For more details, refer to the Viseron documentation on integrating with CodeProject.AI.
Contributions are welcome! Please open issues or pull requests for improvements, bug fixes, or new features.
MIT License
Made with ❤️ by Execute — a semester 4 holiday side project.