In recent years, the digitization and 3D modeling of rural environments have become essential for environmental management. The ability to build detailed and precise 3D representations of rural environments facilitates the analysis and evaluation of the state of natural and cultural heritage, the simulation of natural phenomena, biodiversity conservation, and the promotion of sustainable ecosystems.
In this context, the capture of high-resolution, multiscale aerial images emerges as a promising solution. Entities such as the National Aerial Orthophotography Plan (PNOA), the European Earth Observation Open Science Data Hub (EEOSDA), and the Copernicus program represent valuable data sources. These images, captured from aerial or satellite platforms, offer spatial and temporal resolutions that cover large land areas, providing detailed and up-to-date data on natural environments. PNOA provides high-resolution aerial images at the national level. EEOSDA provides open access to Earth observation data at the European level, including climate information and atmospheric measurements. The Copernicus program, using a constellation of satellites, provides satellite information, including optical and radar images.
Image segmentation is one of the most important techniques in computer vision; its goal is to differentiate specific entities within an image. The range of available techniques has grown steadily over time, from automatic methods based on classical image processing to neural networks capable of learning and providing more precise and efficient solutions.
To carry out the vegetation identification, we have implemented a computer vision algorithm based on detection, filtering, and contour extraction in images. The algorithm focuses on identifying each olive crop shape in the image and extracting its centroid at the pixel level.
The figure above displays the main steps of the implemented algorithm. It begins by applying a Gaussian filter with a 5×5 kernel to the PNOA image (Fig a). This step reduces the noise inherent in the image, which improves the quality of detection. Subsequently, the image is converted to the HSV (Hue, Saturation, Value) color space (Fig b). In HSV, vegetation tends to appear as a range of green hues with similar intensity and brightness levels, making it easier to distinguish from other elements.
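A minimal sketch of this preprocessing step using OpenCV is shown below; the file name `pnoa_tile.tif` and the use of `cv2` are illustrative assumptions rather than the exact implementation.

```python
import cv2

# Load a PNOA tile (file name is illustrative).
image = cv2.imread("pnoa_tile.tif")

# Gaussian filter with a 5x5 kernel to reduce sensor noise before detection.
blurred = cv2.GaussianBlur(image, (5, 5), 0)

# Convert to the HSV color space, where vegetation greens share similar
# hue, saturation, and value ranges and are easier to isolate.
hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
```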
The next step is to apply image thresholding to obtain a binary image, as in Fig c, on which morphological dilation and erosion are applied. These operations facilitate the extraction of contours related to vegetation in the image. Subsequently, the binary image is overlaid on the HSV input image to identify and remove the shadows cast by vegetation (Fig d). This process improves the accuracy of vegetation detection by reducing the impact of shadows (see the figure below for how our method reduces this impact), resulting in a clearer and more accurate representation of vegetation zones (see Fig e). Finally, an additional color space conversion is required to differentiate vegetation from terrain. This involves converting the image to the LAB color space, as Fig f shows. In the LAB color space, each pixel is characterized by three components: luminance (L) provides information on the brightness levels of vegetation, while the chromaticity components (A and B) capture the variations in the shades of green present in vegetation. Leveraging these components, our method robustly separates vegetation from terrain. The result is a binary image in which vegetation is distinguished from every other element in the image.
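Continuing from the previous snippet, the sketch below illustrates the thresholding, morphological operations, shadow removal, and LAB-based separation; all threshold values are illustrative assumptions, not the values used in the original implementation.

```python
import numpy as np

# Threshold the HSV image on an illustrative green range to obtain a binary mask.
lower_green = np.array([30, 40, 40])
upper_green = np.array([90, 255, 255])
mask = cv2.inRange(hsv, lower_green, upper_green)

# Dilation followed by erosion consolidates vegetation blobs and removes speckle.
kernel = np.ones((5, 5), np.uint8)
mask = cv2.erode(cv2.dilate(mask, kernel, iterations=1), kernel, iterations=1)

# Remove dark, low-value pixels (cast shadows) that survived the green threshold.
shadows = cv2.inRange(hsv, np.array([0, 0, 0]), np.array([180, 255, 60]))
mask = cv2.bitwise_and(mask, cv2.bitwise_not(shadows))

# Convert to LAB; the A channel separates green (low values) from red/brown
# terrain, so keeping pixels below the midpoint isolates vegetation.
lab = cv2.cvtColor(blurred, cv2.COLOR_BGR2LAB)
a_channel = cv2.split(lab)[1]
_, green_side = cv2.threshold(a_channel, 127, 255, cv2.THRESH_BINARY_INV)
vegetation = cv2.bitwise_and(mask, green_side)
```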
The final critical step in the vegetation identification process involves applying the Canny algorithm to extract contours from the binary image generated above (Fig g). This step extracts image contours by identifying sharp changes in the gradient intensity at each pixel, as shown in panel b of the figure below. Once the contours have been extracted, a contour closure algorithm must be applied. This operation fills and closes the previously identified contours by connecting neighboring contour segments (panel c of the figure below). In addition to contour closure, a filtering process is carried out to select only the contours corresponding to vegetation (panel d of the figure below). This filtering applies selection criteria based on contour area and shape, so contours not related to vegetation, such as those representing buildings, roads, or other landscape elements, are discarded.
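The contour extraction and filtering stage can be sketched as follows, continuing from the mask built above; the Canny thresholds, area bounds, and circularity cutoff are illustrative assumptions.

```python
# Canny edge detection on the binary vegetation mask.
edges = cv2.Canny(vegetation, 50, 150)

# Morphological closing connects neighboring edge segments so each crop
# becomes a single closed contour.
close_kernel = np.ones((7, 7), np.uint8)
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, close_kernel)

# Extract external contours and keep only those whose area and roundness
# are plausible for an olive tree canopy (bounds are illustrative).
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
tree_contours = []
for c in contours:
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    if perimeter == 0:
        continue
    circularity = 4 * np.pi * area / perimeter ** 2
    if 20 < area < 2000 and circularity > 0.4:
        tree_contours.append(c)
```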
As a result, our algorithm obtains the central point (centroid) of each contour. These centroids are crucial for accurately identifying and characterizing the location and shape of the vegetation in the image. Besides providing information on the spatial distribution of the vegetation, the centroids are used to calculate additional metrics, such as vegetation density and distribution in the study area.
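A sketch of this final step using image moments is shown below; the ground sampling distance used for the density metric is an illustrative assumption, not a value from the original work.

```python
# Centroid of each retained contour via image moments.
centroids = []
for c in tree_contours:
    m = cv2.moments(c)
    if m["m00"] == 0:
        continue
    centroids.append((int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])))

# Example of a derived metric: detected trees per hectare, assuming a known
# ground sampling distance (metres per pixel) for the orthophoto.
gsd = 0.25  # illustrative value
height, width = vegetation.shape[:2]
area_ha = (height * gsd) * (width * gsd) / 10_000
density_per_ha = len(centroids) / area_ha
```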
The code repository can be found at this link.