New object detection algorithm for solar cleaning robots
A Chinese research team at Tarim University has developed a lightweight object detection and pose recognition solution for solar panel cleaning robots.

Scientists at Tarim University of China have proposed a way to address the challenging problem of pose recognition for photovoltaic panel cleaning robots.
Their solution is based on a low-power version of the You Only Look Once (YOLOv8) model for object detection and computer vision tasks. Other versions of YOLO have been investigated for solar applications, such as defect detection and panel inspection.
The effective use of PV cleaning robots requires not only precise object detection and pose recognition but also low power consumption, according to the researchers. In this regard, several machine vision challenges must be addressed: panels have diverse tilt angles and orientations, there is imaging interference from ambient light, dust, and dirt, and there is partial occlusion caused by other panels.
The team proposed a lightweight panel pose recognition model based on You Only Look Once (YOLO) version 8 nano (YOLOv8n) object detection algorithm. It said that this version represents the “most lightweight variant” within the YOLOv8 machine vision and object detection family as it prioritizes efficiency and real-time processing to enable the use of low-power hardware.
The work is detailed in “YOLOv8n‑PP: a lightweight pose recognition algorithm for photovoltaic array cleaning robot,” published in Journal of Real-Time Image Processing.
The researchers used a “diverse and comprehensive dataset” for photovoltaic panel poses to ensure that their method demonstrates “strong generalization performance” for a variety of environments. The dataset, named P-Pose, consisted of PV pose images collected from the photovoltaic power plant of Jingke Technology in Alar City, China.
They integrated YOLOv8n with Mobile-ViT machine vision technology to create YOLOv8n-Photovoltaic-Pose (YOLOv8n-PP). The scientists said that Mobile-ViT is a lighter version of the self-attention-based vision transformer (ViT) designed for mobile applications. ViT was reportedly developed as a transformer-based alternative to convolutional neural networks, with the aim of achieving faster inference speed.
“This integration helps mitigate the effects of varying target poses from the robot’s mobile perspective,” said the researchers. Additionally, they used a bounding box regression loss, known as MPDIoU, to enhance the precision and accuracy of PV panel recognition.
“Combining YOLOv8n, Mobile-ViT, and MPDIoU loss, we propose a method called YOLOv8n-Photovoltaic-Pose (YOLOv8n-PP), which leverages the strengths of these components to achieve accurate and efficient pose recognition for photovoltaic cleaning applications,” they said.
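The article does not reproduce the team's loss implementation. As a rough illustration of the general MPDIoU idea, the sketch below subtracts normalized corner-distance penalties from the ordinary intersection-over-union score; the corner-format boxes and image size used here are hypothetical, not values from the paper.

```python
def iou(box_a, box_b):
    """Standard intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def mpdiou_loss(pred, gt, img_w, img_h):
    """MPDIoU-style loss: the IoU term is reduced by the squared distances
    between the two boxes' top-left and bottom-right corners, normalized
    by the squared image diagonal; the loss is 1 minus that score."""
    d1_sq = (pred[0] - gt[0]) ** 2 + (pred[1] - gt[1]) ** 2  # top-left corners
    d2_sq = (pred[2] - gt[2]) ** 2 + (pred[3] - gt[3]) ** 2  # bottom-right corners
    diag_sq = img_w ** 2 + img_h ** 2
    mpdiou = iou(pred, gt) - d1_sq / diag_sq - d2_sq / diag_sq
    return 1.0 - mpdiou

# Perfectly aligned boxes give a loss of zero; misaligned boxes are
# penalized even when they do not overlap at all.
perfect = mpdiou_loss((10, 10, 50, 50), (10, 10, 50, 50), 640, 640)
disjoint = mpdiou_loss((0, 0, 10, 10), (20, 20, 30, 30), 640, 640)
```

Because the corner-distance terms stay informative when the boxes do not overlap, this family of losses gives a useful gradient in cases where plain IoU is flat at zero.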
For training and validation, they used a 64-bit Windows 10 computer with an Intel Xeon(R) Silver 4210R CPU and an NVIDIA GeForce RTX 3060Ti GPU. Python 3.8 was the programming language, with the PyTorch 2.0.0 deep learning framework used for network training.
The team conducted a detailed analysis comparing its YOLOv8n-PP method to several other YOLO versions, finding that its proposed solution achieved “the best results” across various evaluation metrics. “Notably, the precision and recall of our approach are 3.45% and 5.78% higher, respectively, compared to the baseline YOLOv8n model,” it said.
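For context, the precision and recall metrics cited above are derived from detection counts in the standard way; the counts in this minimal sketch are made-up illustrations, not figures from the paper.

```python
def precision_recall(true_positives, false_positives, false_negatives):
    """Precision = TP / (TP + FP): how many reported detections are correct.
    Recall = TP / (TP + FN): how many actual panels were detected."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Hypothetical counts: 90 correct detections, 10 spurious, 20 missed.
p, r = precision_recall(90, 10, 20)  # p = 0.90, r ≈ 0.818
```

A gain in both metrics at once, as reported for YOLOv8n-PP, means the model both flags fewer spurious panels and misses fewer real ones.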
The method showed improvements in both precision and recall, providing an “effective solution for PV pose recognition” with YOLOv8n-PP not only improving detection accuracy but also enhancing stability.
Room for improvement was noted in its ability to deal with extreme occlusion and highly reflective environments.
Future research entails deploying YOLOv8n-PP in a PV cleaning robot, field testing, and incorporating additional types of sensors, such as infrared imaging, to further improve the model’s detection performance.