Neuro Optix is an innovative robotic surveillance and safety system that combines advanced automation and artificial intelligence. Constructed from durable 3D-printed parts and acrylic sheets, the robotic car features four large wheels driven by DC motors for easy navigation across varied terrain. At its core, an OAK-D AI camera mounted on the car's top lid monitors distances between personnel and machinery, enhancing safety in environments like construction sites by measuring the distance between workers and heavy equipment such as excavators. The Kria platform handles real-time model inference and visual display, while servo motors enable precise camera movement for comprehensive coverage. Remote control is achieved through Streamlit + Ngrok or the Luxonis Hub, allowing operation via a web app from anywhere in the world. This connectivity enables versatile deployment across industries, offering real-time surveillance and safety monitoring that improves operational efficiency.
Traditionally, companies hire multiple safety officers at each construction site to manage worker safety. This approach has several shortcomings.
Additionally, there are persistent problems related to corruption and negligence. These challenges necessitate a robust and efficient monitoring system to ensure the safety and well-being of workers.
To address these challenges, we propose a multi-faceted approach.
Stationary cameras are installed around the site to provide continuous surveillance and monitor worker activities.
Safety helmet cameras are utilized to offer a first-person perspective, ensuring real-time monitoring of individual workers' safety and compliance with safety protocols.
Additionally, robots equipped with advanced computer vision technology are deployed to autonomously navigate the site, detect potential hazards, and provide dynamic surveillance, enhancing overall site safety and efficiency.
We use a robot because it can be controlled remotely. The AI portion can also be installed on IP or standard cameras, but this documentation focuses on deploying the AI on the robot using the Kria board.
Neuro Optix offers a superior solution to traditional safety measures, such as fixed cameras and safety helmet cameras, through its mobility and advanced AI technology. Unlike stationary cameras, Neuro Optix can navigate various terrains, providing dynamic, real-time surveillance and ensuring comprehensive environmental coverage. Safety helmet cameras depend on individual wearers and may miss critical blind spots. In contrast, Neuro Optix’s AI-powered system continuously monitors and analyzes the surroundings to proactively prevent collisions and enhance safety. By integrating cutting-edge computer vision and remote connectivity, Neuro Optix delivers more reliable and efficient safety monitoring, making it an ideal choice for diverse and high-risk environments.
In the initial design phase of NeuroOptix, the body of the robotic car is meticulously crafted using AutoCAD software. This phase involves creating detailed 3D models that define the structure and dimensions of the car.
In the fabrication process of NeuroOptix, components are first meticulously designed using AutoCAD software. These designs are then transferred to a 3D printer to create custom parts with precise dimensions, ensuring they fit perfectly into the robotic car's framework. Additionally, certain structural elements are cut from acrylic sheets to provide sturdy support and enhance the overall durability of the vehicle. This dual approach of 3D printing and acrylic sheet cutting allows for a tailored construction that balances flexibility, strength, and ease of assembly.
In the assembly of NeuroOptix, soldering is used to securely connect and insulate the wires of the DC motors, ensuring reliable electrical connections essential for the car's operational integrity.
After designing and printing all components using AutoCAD software and a 3D printer, along with cutting necessary parts from acrylic sheets, the stage is set for assembly. Each printed and acrylic component has been crafted with precision to ensure compatibility. The next step involves methodically assembling these parts, ensuring all connections are securely integrated.
The Kria KR260 is a high-performance, adaptive computing module designed for advanced embedded applications. Developed by AMD Xilinx, this versatile FPGA (Field-Programmable Gate Array) kit provides powerful processing capabilities and flexible interfacing options, making it ideal for a wide range of AI and machine learning tasks.
The KR260 provides the necessary interfaces to connect and control the OAK-D AI camera. It processes the depth sensing and object detection data captured by the camera, enabling real-time analysis and decision-making.
By leveraging the powerful FPGA architecture of the KR260, our system can handle more complex AI models and perform advanced computations. This ensures that the data from the OAK-D camera is processed quickly and accurately.
The OAK-D Lite is a powerful and compact AI vision system designed for advanced computer vision applications.
Drive the movement of the robotic car, providing propulsion across different terrains. Controlled by motor drivers, these motors ensure smooth and precise motion.
Control pan-tilt movements of the OAK-D AI camera and other articulated functions. This capability enhances NeuroOptix's surveillance capabilities, enabling it to dynamically adjust its field of view and monitor specific areas of interest.
Essential for controlling the speed, torque, direction, and efficiency of the DC motors. The L298N motor drivers interface between the Arduino's output signals and the motors, regulating power delivery to ensure optimal performance and reliability during operation.
The Arduino is used to manage and control various aspects of NeuroOptix. It interfaces with the motor drivers to regulate the DC motors for precise movement control and coordinates the servo motors that adjust the OAK-D AI camera's position.
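To make this control path concrete, below is a minimal, hypothetical sketch of how the Kria side could send drive and pan-tilt commands to the Arduino over a serial link. The port name, baud rate, and command characters are illustrative assumptions, not the project's exact protocol.

# Hypothetical serial command sender from the Kria to the Arduino.
# Port, baud rate, and command characters are illustrative assumptions.
import serial

ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # assumed port/baud

def drive(command):
    """Send a one-character drive command, e.g. 'F' forward, 'B' back,
    'L' left, 'R' right, 'S' stop (assumed protocol)."""
    ser.write(command.encode())

def set_camera_angles(pan_deg, tilt_deg):
    """Send pan/tilt servo angles (0-180) as a simple text command."""
    pan = max(0, min(180, int(pan_deg)))
    tilt = max(0, min(180, int(tilt_deg)))
    ser.write(f"P{pan}T{tilt}\n".encode())

drive("F")                 # move forward
set_camera_angles(90, 45)  # center the pan, tilt the camera down slightly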
Vitis AI is a development platform from AMD Xilinx that simplifies deploying deep learning models on FPGAs (Field-Programmable Gate Arrays). It allows developers, even those without extensive FPGA expertise (just like us), to harness the high performance and flexibility of FPGAs for AI applications. A key feature is its ability to convert standard deep learning models into the xmodel format for deployment on FPGA DPUs (Deep Learning Processing Units), streamlining the process and making FPGA technology more accessible for AI workloads.
OpenCV, a widely-used open-source computer vision library, is utilized for image processing tasks. It provides the tools necessary to process and analyze images captured by the Luxonis OAK-D Lite, enabling functionalities such as object detection and depth perception.
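As an illustration, the snippet below shows the kind of OpenCV post-processing this enables: drawing a detection box and its measured distance onto a frame. The box coordinates and distance value are placeholders, not outputs of the actual model.

# Illustrative OpenCV overlay: draw a detection box and a distance label.
# Box coordinates and the distance value are placeholder examples.
import cv2
import numpy as np

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a camera frame
x1, y1, x2, y2 = 100, 120, 220, 300              # example detection box
distance_m = 2.26                                # example distance from the depth map

cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.putText(frame, f"worker: {distance_m:.2f} m", (x1, y1 - 8),
            cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
cv2.imshow("NeuroOptix", frame)
cv2.waitKey(0)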
The DepthAI API is used to interface with the Luxonis OAK-D Lite. This API facilitates the execution of advanced computer vision tasks by leveraging the AI processing capabilities of the OAK-D, including real-time object detection and depth sensing.
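A minimal DepthAI pipeline for the OAK-D Lite might look like the following sketch, which streams RGB preview frames and a stereo depth map to the host. It is a simplified illustration, not the project's full pipeline.

# Minimal DepthAI sketch: RGB preview plus stereo depth from the OAK-D Lite.
import depthai as dai
import cv2

pipeline = dai.Pipeline()

# Color camera node streaming preview frames to the host
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(640, 480)
xout_rgb = pipeline.create(dai.node.XLinkOut)
xout_rgb.setStreamName("rgb")
cam.preview.link(xout_rgb.input)

# Stereo pair feeding the depth node
mono_l = pipeline.create(dai.node.MonoCamera)
mono_r = pipeline.create(dai.node.MonoCamera)
mono_l.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono_r.setBoardSocket(dai.CameraBoardSocket.RIGHT)
stereo = pipeline.create(dai.node.StereoDepth)
mono_l.out.link(stereo.left)
mono_r.out.link(stereo.right)
xout_depth = pipeline.create(dai.node.XLinkOut)
xout_depth.setStreamName("depth")
stereo.depth.link(xout_depth.input)

with dai.Device(pipeline) as device:
    q_rgb = device.getOutputQueue("rgb", maxSize=4, blocking=False)
    q_depth = device.getOutputQueue("depth", maxSize=4, blocking=False)
    while True:
        frame = q_rgb.get().getCvFrame()
        depth = q_depth.get().getFrame()  # uint16 depth map in millimeters
        cv2.imshow("rgb", frame)
        if cv2.waitKey(1) == ord("q"):
            break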
Streamlit is employed to control the robot's movements, while Ngrok is used to enable global access for remote control.
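A hypothetical skeleton of such a control page is shown below; the serial port and command characters are assumptions (matching the serial sketch earlier), not the project's exact code.

# Hypothetical Streamlit control panel forwarding commands to the Arduino.
# Serial port, baud rate, and command characters are illustrative assumptions.
import serial
import streamlit as st

@st.cache_resource
def get_serial():
    return serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # assumed port/baud

ser = get_serial()

st.title("NeuroOptix Remote Control")
col1, col2, col3 = st.columns(3)
if col1.button("Left"):
    ser.write(b"L")
if col2.button("Forward"):
    ser.write(b"F")
if col3.button("Right"):
    ser.write(b"R")
if st.button("Stop"):
    ser.write(b"S")

Running the app with "streamlit run app.py" serves it on port 8501 by default; an Ngrok tunnel to that port ("ngrok http 8501") then makes the controls reachable from anywhere.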
The Luxonis Hub is a central management tool that plays a crucial role in the project. It allows for the control of multiple Luxonis OAK-D Lite devices, managing live video feeds, deploying AI models, and handling device interactions over the network. This centralized control simplifies the management of complex tasks and ensures efficient operation of the AI cameras.
Luxonis Hub is also utilized for controlling the robot's movements, offering lower latency compared to Streamlit.
Data capture starts with the OAK-D camera, which feeds into the Kria module for processing. This processed data is then used for motor control and visualization, allowing remote monitoring by the Safety Officer to ensure safe operations globally.
Flow of data, starting with data capture by OAK-D, followed by processing in KRIA, and ending with control of motors and visualization.
OAK-D: This component captures image data, calculates disparity, and derives distance from it (a sketch of the distance computation follows this list).
KRIA: This component includes the following sub-components:
PYNQ: The software environment running on the Kria that processes the data received from the OAK-D and performs inference of the AI models.
Streamlit Hosting: It hosts a web-based interface for visualization and control.
Arduino: Responsible for controlling the motors based on commands received serially from the KRIA.
Safety Officer: This component represents a human user who can remotely access and control the system via a secure connection through Ngrok.
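The distance computation on the OAK-D follows the standard stereo relation distance = (focal length x baseline) / disparity. The sketch below illustrates it with example numbers; the focal length and baseline are placeholders, not the OAK-D Lite's actual calibration values.

# Illustrative stereo depth relation (example values, not real calibration).
focal_length_px = 451.0   # focal length in pixels (example)
baseline_m = 0.075        # stereo camera baseline in meters (example)
disparity_px = 15.0       # measured disparity for a pixel (example)

distance_m = focal_length_px * baseline_m / disparity_px
print(f"Estimated distance: {distance_m:.2f} m")  # ~2.26 m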
Microcontroller (Arduino): The central component is an Arduino board, which serves as the controller for the circuit. It processes inputs and sends commands to other components.
Power Supply: The circuit includes a power supply that provides the necessary voltage and current to the components; it is connected to the Arduino and the motor driver modules.
Motor Driver Modules: There are two motor driver modules connected to the Arduino. These driver modules are used to control the speed and direction of motors. The motor drivers act as intermediaries between the Arduino and the motors, allowing for higher current and voltage to be used than the Arduino alone can provide.
The Arduino is programmed to send control signals to the motor driver modules based on user input or sensor data. This allows it to control the motors' operation, such as starting, stopping, and changing speed or direction.
This setup controls the robotic vehicle: the main board processes data and the Arduino directs the motors to steer and move the vehicle.
To set up the Kria KR260, follow these detailed instructions to ensure a smooth installation and update process.
Visit the official documentation for the Kria KR260 to familiarize yourself with the setup process and requirements.
Updating the firmware is crucial to avoid potential issues. Open the terminal and run the following command to download the firmware image:
wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=1wACTcpbwLPOH9UUuURk5qcnIYeEverSB' -O k26_update3.BIN
After downloading the firmware, run the following commands:
sudo xmutil bootfw_update -i <path-to-FW.BIN file>   # flash the new firmware image
sudo xmutil bootfw_status                            # confirm the update is staged
sudo shutdown -r now                                 # reboot into the new firmware
sudo xmutil bootfw_update -v                         # mark the new image as valid
Make sure to replace <path-to-FW.BIN file> with the actual path to the downloaded firmware file (k26_update3.BIN).
To install the Wi-Fi driver for the W11MI Tenda adapter, ensure that you are connected to the internet via Ethernet. If there are any connectivity issues, follow the DNS setup instructions.
sudo nano /etc/resolv.conf
Add the following line:
nameserver 8.8.8.8
Save and close the file. You should now have internet access.
sudo apt-get install build-essential git dkms linux-headers-$(uname -r)
git clone https://github.com/McMCCRU/rtl8188gu.git
cd rtl8188gu
make
sudo make install
sudo apt install --reinstall linux-firmware
sudo reboot
After rebooting, you should see the Wi-Fi option appear in your settings. You can now connect to your Wi-Fi network.
To perform AI model inference on the Kria KR260, you need to install PYNQ. Follow these instructions to properly set up PYNQ on your device.
Clone the necessary repository from GitHub by running the following command in your terminal:
git clone https://github.com/amd/Kria-RoboticsAI.git
Install the dos2unix utility, which will help convert Windows-style line endings to Unix-style:
sudo apt install dos2unix
Navigate to the scripts folder within the cloned repository and convert all shell scripts to Unix format. This ensures compatibility and avoids execution issues.
cd /home/ubuntu/Kria-RoboticsAI/files/scripts
for file in $(find . -name "*.sh"); do
    echo ${file}
    dos2unix ${file}
done
To install PYNQ, run the following commands:
sudo su
cd /home/ubuntu/Kria-RoboticsAI
cp files/scripts/install_update_kr260_to_vitisai35.sh /home/ubuntu
cd /home/ubuntu
source ./install_update_kr260_to_vitisai35.sh
reboot
The installation takes roughly 10-15 minutes, depending on your internet connection.
Note: Make sure your internet connection is stable; otherwise the installation may fail.
Now that PYNQ is installed, set up the environment by running the following commands:
sudo su
source /etc/profile.d/pynq_venv.sh
cd $PYNQ_JUPYTER_NOTEBOOKS
pynq get-notebooks pynq-dpu -p
Vitis AI is used to optimize the model for the DPU; an unoptimized model cannot be deployed on the DPU. Since we are using YOLOv5, we first need to convert the trained model from the .pt format to the .xmodel format.
Vitis AI requires Ubuntu; we are using Ubuntu 20.04 LTS. Install Vitis AI 3.5 by following its installation guide, then clone the following repo.
Since we want to optimize YOLOv5 models, navigate into the YOLOv5 folder, i.e. "Quantizing-Compiling-Yolov5-Hackster-Tutorial". Make sure you have the test data for the model you trained on.
Now activate the Vitis AI environment and run the following commands, adjusting the paths to match your setup.
python3 quant.py -w <weights.pt> -d <test-data-dir> -q calib
python3 quant.py -w yolo_m.pt -d Safety-Helmet-pro-3/test/ -q calib
python3 quant.py -w yolo_m.pt -d Safety-Helmet-pro-3/test/ -q test
vai_c_xir --xmodel build/quant_model/DetectMultiBackend_int.xmodel --arch /opt/vitis_ai/compiler/arch/DPUCZDX8G/KV260/arch.json --net_name yolov5_kv260 --output_dir ./KV260
After running these three commands, your compiled xmodel will be stored in the KV260 folder. You can now use this model in your PYNQ DPU code, as sketched below.
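A minimal sketch of loading the compiled model with pynq-dpu follows the structure of the pynq-dpu examples; the overlay name is the library's default, while the model file name and tensor handling are assumptions to be adapted to your model.

# Minimal pynq-dpu inference sketch; the model file name is an assumption.
import numpy as np
from pynq_dpu import DpuOverlay

overlay = DpuOverlay("dpu.bit")             # load the default DPU overlay
overlay.load_model("yolov5_kv260.xmodel")   # compiled model from the step above

dpu = overlay.runner
input_tensors = dpu.get_input_tensors()
output_tensors = dpu.get_output_tensors()

# Allocate I/O buffers matching the model's tensor shapes
input_data = [np.zeros(tuple(input_tensors[0].dims), dtype=np.float32)]
output_data = [np.zeros(tuple(output_tensors[0].dims), dtype=np.float32)]

# ... fill input_data[0] with a preprocessed camera frame ...
job_id = dpu.execute_async(input_data, output_data)
dpu.wait(job_id)
# output_data[0] now holds the raw YOLOv5 predictions for post-processing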
Neuro Optix utilizes advanced vision technology to distinguish between workers adhering to safety protocols and those not in compliance. This continuous monitoring guarantees consistent safety practices among all workers.
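For illustration, a compliance check over the model's detections might look like the following sketch; the class name is an assumption based on the safety-helmet dataset referenced above.

# Hypothetical PPE compliance check; the class name is an assumption.
def check_compliance(detections, threshold=0.5):
    """detections: list of (class_name, confidence, box) tuples."""
    alerts = []
    for cls, conf, box in detections:
        if cls == "no-helmet" and conf > threshold:
            alerts.append((box, "Worker without a helmet detected"))
    return alerts

alerts = check_compliance([("no-helmet", 0.82, (100, 120, 220, 300))])
print(alerts)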
OpenCV facilitates advanced image processing and computer vision techniques, enabling NeuroOptix to accurately detect and analyze objects in its surroundings. This capability enhances the project's overall functionality in tasks requiring precise visual monitoring.
We foresee several advancements for our Remote-Controlled Worker Monitoring System to further elevate its functionality:
We plan to integrate a drone with the car, making the car capable of launching the drone on site remotely so that the safety officer can monitor work at height. The feed from the drone camera will also be inferenced on the Kria KR260 board to perform PPE detection.
Incorporating drones into the system will enable the monitoring of workers operating at elevated levels. This will provide a holistic view of the construction site, enhancing worker safety through comprehensive surveillance.
Drones will also be utilized for quality inspections from above, ensuring construction standards are met and maintained. This aerial perspective will help in identifying and rectifying issues that might not be visible from the ground.
The robot will be capable of providing training and informing workers about safety protocols by analysing the worksite. For instance, if an activity involves working at height, the robot will instruct workers to use fall protection restraints. This functionality is powered by a GenAI model, deployed on the cloud.
The robot will utilize its sensors to monitor environmental conditions (e.g., temperature, wind speed). In adherence to ILO guidelines, it will notify workers to take breaks during peak sunlight hours to prevent heat-related illnesses such as heat exhaustion or heat stroke.
The robot will autonomously perform certain tasks, such as guiding the crane operator and patrolling the site using 3D LiDAR.