Mapping the spatial variability of Botrytis bunch rot risk in vineyards using UAV multispectral imagery.
Vélez, S.; Ariza-Sentís, M.; and Valente, J.
European Journal of Agronomy, 142: 126691. 1 2023.
Paper
doi
link
bibtex
@article{
title = {Mapping the spatial variability of Botrytis bunch rot risk in vineyards using UAV multispectral imagery},
type = {article},
year = {2023},
pages = {126691},
volume = {142},
month = {1},
publisher = {Elsevier},
day = {1},
id = {1e73afe1-1f7b-321d-88b5-ed8bec648865},
created = {2022-12-12T06:49:27.644Z},
accessed = {2022-12-12},
file_attached = {true},
profile_id = {235249c2-3ed4-314a-b309-b1ea0330f5d9},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-23T18:53:04.946Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {4787d485-df78-4119-ad84-be82dd804b00,6b565182-74c4-44fd-98cc-10618152e2ae,4dd63b9a-2e78-4e16-b45d-b692d8e0c5c3,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
bibtype = {article},
author = {Vélez, Sergio and Ariza-Sentís, Mar and Valente, João},
doi = {10.1016/J.EJA.2022.126691},
journal = {European Journal of Agronomy},
keywords = {velez2023mappingspatialvariability}
}
The Fast Detection of Crop Disease Leaves Based on Single-Channel Gravitational Kernel Density Clustering.
Ren, Y.; Li, Q.; and Liu, Z.
Applied Sciences, 13(2): 1172. 1 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {The Fast Detection of Crop Disease Leaves Based on Single-Channel Gravitational Kernel Density Clustering},
type = {article},
year = {2023},
keywords = {clustering algorithm,color space,gravitational kernel density,image segmentation},
pages = {1172},
volume = {13},
websites = {https://www.mdpi.com/2076-3417/13/2/1172/htm,https://www.mdpi.com/2076-3417/13/2/1172},
month = {1},
publisher = {Multidisciplinary Digital Publishing Institute},
day = {15},
id = {47e6da0f-2ff4-3745-9e6d-cc2196fc1818},
created = {2023-01-31T07:25:59.471Z},
accessed = {2023-01-31},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-01-31T07:26:03.318Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {Plant diseases and pests may seriously affect the yield of crops and even threaten the survival of human beings. The characteristics of plant diseases and insect pests are mainly reflected in the occurrence of lesions on crop leaves. Machine vision disease detection is of great significance for the early detection and prevention of plant diseases and insect pests. A fast detection method for lesions based on a single-channel gravitational kernel density clustering algorithm was designed to examine the complexity and ambiguity of diseased leaf images. Firstly, a polynomial was used to fit the R-channel feature histogram curve of a diseased leaf image in the RGB color space, and then the peak point and peak area of the fitted feature histogram curve were determined according to the derivative attribute. Secondly, the cluster numbers and the initial cluster center of the diseased leaf images were determined according to the peak area and peak point. Thirdly, according to the clustering center of the preliminarily determined diseased leaf images, the single-channel gravity kernel density clustering algorithm in this paper was used to achieve the rapid segmentation of the diseased leaf lesions. Finally, the experimental results showed that our method could segment the lesions quickly and accurately.},
bibtype = {article},
author = {Ren, Yifeng and Li, Qingyan and Liu, Zhe},
doi = {10.3390/APP13021172},
journal = {Applied Sciences},
number = {2}
}
Plant diseases and pests may seriously affect the yield of crops and even threaten the survival of human beings. The characteristics of plant diseases and insect pests are mainly reflected in the occurrence of lesions on crop leaves. Machine vision disease detection is of great significance for the early detection and prevention of plant diseases and insect pests. A fast detection method for lesions based on a single-channel gravitational kernel density clustering algorithm was designed to examine the complexity and ambiguity of diseased leaf images. Firstly, a polynomial was used to fit the R-channel feature histogram curve of a diseased leaf image in the RGB color space, and then the peak point and peak area of the fitted feature histogram curve were determined according to the derivative attribute. Secondly, the cluster numbers and the initial cluster center of the diseased leaf images were determined according to the peak area and peak point. Thirdly, according to the clustering center of the preliminarily determined diseased leaf images, the single-channel gravity kernel density clustering algorithm in this paper was used to achieve the rapid segmentation of the diseased leaf lesions. Finally, the experimental results showed that our method could segment the lesions quickly and accurately.
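The pipeline this abstract describes (polynomial fit to the R-channel histogram, derivative-based peak finding, peak-seeded clustering) is easy to sketch. The following is a minimal illustration under stated assumptions: the polynomial degree is arbitrary, and plain 1-D k-means stands in for the authors' gravitational kernel density step, which is not reproduced here.

```python
import numpy as np

def peak_seeded_lesion_segmentation(r_channel, degree=9, iters=20):
    """Sketch of peak-seeded clustering on the R channel of an RGB leaf image.

    The histogram is smoothed with a polynomial fit; peaks (sign changes of
    the fitted curve's derivative) give the cluster count and initial
    centers. Plain 1-D k-means stands in here for the paper's gravitational
    kernel density clustering step.
    """
    hist, _ = np.histogram(r_channel, bins=256, range=(0, 256))
    x = np.arange(256)
    coeffs = np.polyfit(x, hist, degree)                 # smooth the histogram
    deriv = np.polyval(np.polyder(coeffs), x)
    peaks = x[1:][(deriv[:-1] > 0) & (deriv[1:] <= 0)]   # + to - sign change
    centers = peaks.astype(float) if len(peaks) else np.array([128.0])
    pixels = r_channel.astype(float).ravel()
    for _ in range(iters):                               # 1-D k-means refinement
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean()
    return labels.reshape(r_channel.shape), centers
```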
Field-based robotic leaf angle detection and characterization of maize plants using stereo vision and deep convolutional neural networks.
Xiang, L.; Gai, J.; Bao, Y.; Yu, J.; Schnable, P., S.; and Tang, L.
Journal of Field Robotics. 2 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Field-based robotic leaf angle detection and characterization of maize plants using stereo vision and deep convolutional neural networks},
type = {article},
year = {2023},
keywords = {convolutional neural network,field-based plant phenotyping,keypoint detection,leaf angle,stereo vision},
websites = {https://onlinelibrary.wiley.com/doi/full/10.1002/rob.22166,https://onlinelibrary.wiley.com/doi/abs/10.1002/rob.22166,https://onlinelibrary.wiley.com/doi/10.1002/rob.22166},
month = {2},
publisher = {John Wiley & Sons, Ltd},
day = {27},
id = {87c98ef2-9ed6-3a4f-be2d-e127d3f97548},
created = {2023-03-09T08:51:03.766Z},
accessed = {2023-03-09},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-03-09T08:52:59.386Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {238906f4-3e7c-4ebb-b571-5e94ea26a909,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {Maize (Zea mays L.) is one of the three major cereal crops in the world. Leaf angle is an important architectural trait of crops due to its substantial role in light interception by the canopy and hence photosynthetic efficiency. Traditionally, leaf angle has been measured using a protractor, a process that is both slow and laborious. Efficiently measuring leaf angle under field conditions via imaging is challenging due to leaf density in the canopy and the resulting occlusions. However, advances in imaging technologies and machine learning have provided new tools for image acquisition and analysis that could be used to characterize leaf angle using three-dimensional (3D) models of field-grown plants. In this study, PhenoBot 3.0, a robotic vehicle designed to traverse between pairs of agronomically spaced rows of crops, was equipped with multiple tiers of PhenoStereo cameras to capture side-view images of maize plants in the field. PhenoStereo is a customized stereo camera module with integrated strobe lighting for high-speed stereoscopic image acquisition under variable outdoor lighting conditions. An automated image processing pipeline (AngleNet) was developed to measure leaf angles of nonoccluded leaves. In this pipeline, a novel representation form of leaf angle as a triplet of keypoints was proposed. The pipeline employs convolutional neural networks to detect each leaf angle in two-dimensional images and 3D modeling approaches to extract quantitative data from reconstructed models. Satisfactory accuracies in terms of correlation coefficient (r) and mean absolute error (MAE) were achieved for leaf angle (r > 0.87, MAE < 5°) and internode heights (r > 0.99, MAE < 3.5 cm). Our study demonstrates the feasibility of using stereo vision to investigate the distribution of leaf angles in maize under field conditions. The proposed system is an efficient alternative to traditional leaf angle phenotyping and thus could accelerate breeding for improved plant architecture.},
bibtype = {article},
author = {Xiang, Lirong and Gai, Jingyao and Bao, Yin and Yu, Jianming and Schnable, Patrick S. and Tang, Lie},
doi = {10.1002/ROB.22166},
journal = {Journal of Field Robotics}
}
Maize (Zea mays L.) is one of the three major cereal crops in the world. Leaf angle is an important architectural trait of crops due to its substantial role in light interception by the canopy and hence photosynthetic efficiency. Traditionally, leaf angle has been measured using a protractor, a process that is both slow and laborious. Efficiently measuring leaf angle under field conditions via imaging is challenging due to leaf density in the canopy and the resulting occlusions. However, advances in imaging technologies and machine learning have provided new tools for image acquisition and analysis that could be used to characterize leaf angle using three-dimensional (3D) models of field-grown plants. In this study, PhenoBot 3.0, a robotic vehicle designed to traverse between pairs of agronomically spaced rows of crops, was equipped with multiple tiers of PhenoStereo cameras to capture side-view images of maize plants in the field. PhenoStereo is a customized stereo camera module with integrated strobe lighting for high-speed stereoscopic image acquisition under variable outdoor lighting conditions. An automated image processing pipeline (AngleNet) was developed to measure leaf angles of nonoccluded leaves. In this pipeline, a novel representation form of leaf angle as a triplet of keypoints was proposed. The pipeline employs convolutional neural networks to detect each leaf angle in two-dimensional images and 3D modeling approaches to extract quantitative data from reconstructed models. Satisfactory accuracies in terms of correlation coefficient (r) and mean absolute error (MAE) were achieved for leaf angle (r > 0.87, MAE < 5°) and internode heights (r > 0.99, MAE < 3.5 cm). Our study demonstrates the feasibility of using stereo vision to investigate the distribution of leaf angles in maize under field conditions. The proposed system is an efficient alternative to traditional leaf angle phenotyping and thus could accelerate breeding for improved plant architecture.
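The triplet-of-keypoints representation mentioned above admits a simple geometric reading: the leaf angle is the angle at the junction between the stalk direction and the leaf midrib direction. The sketch below computes that angle for 2-D or 3-D points; the triplet ordering (junction, point on stalk, point on leaf midrib) is an illustrative assumption, not taken from AngleNet itself.

```python
import numpy as np

def leaf_angle(junction, stalk_pt, leaf_pt):
    """Angle (degrees) at the junction between stalk and leaf directions.

    Assumes the keypoint triplet is ordered (junction, point on stalk,
    point on leaf midrib); this ordering is an illustrative assumption.
    Works for 2-D pixel coordinates or 3-D reconstructed points alike.
    """
    v1 = np.asarray(stalk_pt, dtype=float) - np.asarray(junction, dtype=float)
    v2 = np.asarray(leaf_pt, dtype=float) - np.asarray(junction, dtype=float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

# Example: a leaf inclined 45 degrees from a vertical stalk.
print(leaf_angle((0, 0, 0), (0, 0, 1), (1, 0, 1)))  # -> 45.0
```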
Towards Computer-Vision Based Vineyard Navigation for Quadruped Robots.
Milburn, L.; Gamba, J.; and Semini, C.
. 1 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Towards Computer-Vision Based Vineyard Navigation for Quadruped Robots},
type = {article},
year = {2023},
keywords = {Computer-Vision,Index Terms-Agricultural Robotics,Quadruped Control,Vine-yard Navigation},
websites = {https://arxiv.org/abs/2301.00887v1},
month = {1},
day = {2},
id = {66219f83-86f9-3016-a39b-3c88a9521a8f},
created = {2023-03-22T08:21:14.884Z},
accessed = {2023-03-22},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-03-23T06:17:47.904Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {4c7c81ce-f24b-44ae-bc2a-bf60600a3a24,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {There is a dramatic shortage of skilled labor for modern vineyards. The Vinum project is developing a mobile robotic solution to autonomously navigate through vineyards for winter grapevine pruning. This necessitates an autonomous navigation stack for the robot pruning a vineyard. The Vinum project is using the quadruped robot HyQReal. This paper introduces an architecture for a quadruped robot to autonomously move through a vineyard by identifying and approaching grapevines for pruning. The higher level control is a state machine switching between searching for destination positions, autonomously navigating towards those locations, and stopping for the robot to complete a task. The destination points are determined by identifying grapevine trunks using instance segmentation from a Mask Region-Based Convolutional Neural Network (Mask-RCNN). These detections are sent through a filter to avoid redundancy and remove noisy detections. The combination of these features is the basis for the proposed architecture.},
bibtype = {article},
author = {Milburn, Lee and Gamba, Juan and Semini, Claudio},
doi = {10.48550/arXiv.2301.00887}
}
There is a dramatic shortage of skilled labor for modern vineyards. The Vinum project is developing a mobile robotic solution to autonomously navigate through vineyards for winter grapevine pruning. This necessitates an autonomous navigation stack for the robot pruning a vineyard. The Vinum project is using the quadruped robot HyQReal. This paper introduces an architecture for a quadruped robot to autonomously move through a vineyard by identifying and approaching grapevines for pruning. The higher level control is a state machine switching between searching for destination positions, autonomously navigating towards those locations, and stopping for the robot to complete a task. The destination points are determined by identifying grapevine trunks using instance segmentation from a Mask Region-Based Convolutional Neural Network (Mask-RCNN). These detections are sent through a filter to avoid redundancy and remove noisy detections. The combination of these features is the basis for the proposed architecture.
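The three-behavior control loop described in this abstract (search, navigate, stop for the task) can be summarized as a small state machine. Below is a hypothetical sketch; the state names and the interface to the filtered Mask-RCNN detections are placeholder assumptions, not the Vinum project's code.

```python
from enum import Enum, auto

class State(Enum):
    SEARCH = auto()     # look for grapevine trunk detections
    NAVIGATE = auto()   # move toward the chosen destination
    TASK = auto()       # stop and let the robot complete the pruning task

def control_step(state, detections, at_destination):
    """One tick of a hypothetical vineyard-navigation state machine.

    `detections` is assumed to be trunk detections that already passed the
    redundancy/noise filter; `at_destination` is a boolean flag from the
    navigation layer. Returns (next_state, chosen_destination_or_None).
    """
    if state is State.SEARCH:
        return (State.NAVIGATE, detections[0]) if detections else (State.SEARCH, None)
    if state is State.NAVIGATE:
        return (State.TASK, None) if at_destination else (State.NAVIGATE, None)
    # State.TASK: pruning finished, resume searching for the next vine.
    return (State.SEARCH, None)
```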
Designing a Proximal Sensing Camera Acquisition System for Vineyard Applications: Results and Feedback on 8 Years of Experiments.
Rançon, F.; Keresztes, B.; Deshayes, A.; Tardif, M.; Abdelghafour, F.; Fontaine, G.; Da Costa, J., P.; and Germain, C.
Sensors, 23(2): 847. 1 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Designing a Proximal Sensing Camera Acquisition System for Vineyard Applications: Results and Feedback on 8 Years of Experiments},
type = {article},
year = {2023},
keywords = {rancon2023designingproximalsensing},
pages = {847},
volume = {23},
websites = {https://www.mdpi.com/1424-8220/23/2/847/htm,https://www.mdpi.com/1424-8220/23/2/847},
month = {1},
publisher = {Multidisciplinary Digital Publishing Institute},
day = {11},
id = {eabbddde-c9a9-37da-ac49-4f47f2e73a56},
created = {2023-03-23T06:21:22.030Z},
accessed = {2023-03-23},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-12-05T11:30:22.566Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {1619600c-2adf-4216-9e4c-d260d584753e,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {The potential of image proximal sensing for agricultural applications has been a prolific scientific subject in the recent literature. Its main appeal lies in the sensing of precise information about plant status, which is either harder or impossible to extract from lower-resolution downward-looking image sensors such as satellite or drone imagery. Yet, many theoretical and practical problems arise when dealing with proximal sensing, especially on perennial crops such as vineyards. Indeed, vineyards exhibit challenging physical obstacles and many degrees of variability in their layout. In this paper, we present the design of a mobile camera suited to vineyards and harsh experimental conditions, as well as the results and assessments of 8 years’ worth of studies using that camera. These projects ranged from in-field yield estimation (berry counting) to disease detection, providing new insights on typical viticulture problems that could also be generalized to orchard crops. Different recommendations are then provided using small case studies, such as the difficulties related to framing plots with different structures or the mounting of the sensor on a moving vehicle. While results stress the obvious importance and strong benefits of a thorough experimental design, they also indicate some inescapable pitfalls, illustrating the need for more robust image analysis algorithms and better databases. We believe sharing that experience with the scientific community can only benefit the future development of these innovative approaches.},
bibtype = {article},
author = {Rançon, Florian and Keresztes, Barna and Deshayes, Aymeric and Tardif, Malo and Abdelghafour, Florent and Fontaine, Gael and Da Costa, Jean Pierre and Germain, Christian},
doi = {10.3390/S23020847},
journal = {Sensors},
number = {2}
}
The potential of image proximal sensing for agricultural applications has been a prolific scientific subject in the recent literature. Its main appeal lies in the sensing of precise information about plant status, which is either harder or impossible to extract from lower-resolution downward-looking image sensors such as satellite or drone imagery. Yet, many theoretical and practical problems arise when dealing with proximal sensing, especially on perennial crops such as vineyards. Indeed, vineyards exhibit challenging physical obstacles and many degrees of variability in their layout. In this paper, we present the design of a mobile camera suited to vineyards and harsh experimental conditions, as well as the results and assessments of 8 years’ worth of studies using that camera. These projects ranged from in-field yield estimation (berry counting) to disease detection, providing new insights on typical viticulture problems that could also be generalized to orchard crops. Different recommendations are then provided using small case studies, such as the difficulties related to framing plots with different structures or the mounting of the sensor on a moving vehicle. While results stress the obvious importance and strong benefits of a thorough experimental design, they also indicate some inescapable pitfalls, illustrating the need for more robust image analysis algorithms and better databases. We believe sharing that experience with the scientific community can only benefit the future development of these innovative approaches.
DualSeg: Fusing transformer and CNN structure for image segmentation in complex vineyard environment.
Wang, J.; Zhang, Z.; Luo, L.; Wei, H.; Wang, W.; Chen, M.; and Luo, S.
Computers and Electronics in Agriculture, 206: 107682. 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {DualSeg: Fusing transformer and CNN structure for image segmentation in complex vineyard environment},
type = {article},
year = {2023},
keywords = {CNN,DualSeg,FFM,Grape peduncle,Semantic segmentation,Transformer},
pages = {107682},
volume = {206},
websites = {https://doi.org/10.1016/j.compag.2023.107682},
publisher = {Elsevier B.V.},
id = {c4dd463c-82de-3c58-8e9c-74690bb11043},
created = {2023-03-23T06:24:06.347Z},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-03-23T06:24:16.472Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
folder_uuids = {35f1a258-89fa-488b-8684-355f11ee4e9b,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {Semantic segmentation is classification at the pixel level and is particularly effective in complex scenes. Most popular semantic segmentation models are based on Convolutional Neural Network (CNN) and Transformer structures: convolution operations are good at extracting local features, while attention modules excel at capturing global ones. In this paper, we propose a parallel network structure, termed DualSeg, which leverages the advantages of CNNs at local processing and Transformers at global interaction. A feature fusion module (FFM) is designed to fuse parameters between modules in the two branches so that local and global features are retained to the maximum extent. In vineyards, grape clusters and grape peduncles are often heavily obscured by leaves, branches and neighbouring clusters, making them difficult to distinguish accurately, so this scene is used to verify the segmentation performance of the model. In the experiments, the DualSeg model and mainstream segmentation models were compared on this scene, and the results showed that DualSeg performed best of all models. Specifically, it achieved an IoU of 72.1% for grape peduncle segmentation, more than 3.9% higher than the other models at 80 K iterations, and its mIoU of 83.7% was the highest among the compared models. The demonstrated performance indicates that harvesting robots could use the DualSeg model.},
bibtype = {article},
author = {Wang, Jinhai and Zhang, Zongyin and Luo, Lufeng and Wei, Huiling and Wang, Wei and Chen, Mingyou and Luo, Shaoming},
doi = {10.1016/j.compag.2023.107682},
journal = {Computers and Electronics in Agriculture},
}
Semantic segmentation is classification at the pixel level and is particularly effective in complex scenes. Most popular semantic segmentation models are based on Convolutional Neural Network (CNN) and Transformer structures: convolution operations are good at extracting local features, while attention modules excel at capturing global ones. In this paper, we propose a parallel network structure, termed DualSeg, which leverages the advantages of CNNs at local processing and Transformers at global interaction. A feature fusion module (FFM) is designed to fuse parameters between modules in the two branches so that local and global features are retained to the maximum extent. In vineyards, grape clusters and grape peduncles are often heavily obscured by leaves, branches and neighbouring clusters, making them difficult to distinguish accurately, so this scene is used to verify the segmentation performance of the model. In the experiments, the DualSeg model and mainstream segmentation models were compared on this scene, and the results showed that DualSeg performed best of all models. Specifically, it achieved an IoU of 72.1% for grape peduncle segmentation, more than 3.9% higher than the other models at 80 K iterations, and its mIoU of 83.7% was the highest among the compared models. The demonstrated performance indicates that harvesting robots could use the DualSeg model.
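For reference, the IoU and mIoU figures quoted above follow the standard definitions over integer label maps; a minimal implementation (the generic metric, not the paper's evaluation code) is:

```python
import numpy as np

def iou_per_class(pred, target, num_classes):
    """Per-class intersection-over-union for integer label maps."""
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious[c] = inter / union
    return ious

def mean_iou(pred, target, num_classes):
    """mIoU: average IoU over classes that occur in prediction or target."""
    return float(np.nanmean(iou_per_class(pred, target, num_classes)))
```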
Machine-Learning Methods for the Identification of Key Predictors of Site-Specific Vineyard Yield and Vine Size.
Taylor, J., A.; Bates, T., R.; Jakubowski, R.; and Jones, H.
American Journal of Enology and Viticulture, ajev.2022.22050. 1 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Machine-Learning Methods for the Identification of Key Predictors of Site-Specific Vineyard Yield and Vine Size},
type = {article},
year = {2023},
pages = {ajev.2022.22050},
websites = {https://www.ajevonline.org/content/early/2023/01/11/ajev.2022.22050,https://www.ajevonline.org/content/early/2023/01/11/ajev.2022.22050.abstract},
month = {1},
publisher = {American Society for Enology and Viticulture},
day = {17},
id = {1f625568-08df-3a18-baa3-201381871406},
created = {2023-03-23T07:14:15.118Z},
accessed = {2023-03-23},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-03-23T09:28:40.666Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {35f1a258-89fa-488b-8684-355f11ee4e9b,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {Background and goals Lake Erie Concord growers have access to high-resolution spatial soil and production data but lack protocols and information on the optimum time to collect these data. This study intends to provide clearer information regarding the type and timing of sensor information to support in-season management.
Methods and key findings A three-year study in a 2.6 ha vineyard collected yield, pruning mass, canopy vigor and soil data, including yield and pruning mass from the previous year, at 321 sites. Stepwise linear regression and random forest regression approaches were used to model site-specific yield and pruning mass using spatial historical production data, multi-temporal in-season canopy vigor and soil data. The more complex yield elaboration process was best modelled with non-linear random forest regression while the simpler development of pruning mass was best modelled by linear regression.
Conclusions and significance Canopy vigor in the weeks preceding bloom was the most important predictor of the current season’s yield and should be used to generate stratified sampling designs for crop estimation at 30 days after bloom. In contrast, pruning mass was not well predicted by canopy vigor, even late-season canopy vigor, which is widely advocated for pruning mass estimation in viticulture. The previous year’s pruning mass was the dominant predictor of pruning mass in the current season. To model pruning mass going forward, the best approach is to start measuring it. Further work is still needed to develop robust, local site-specific yield and pruning mass models for operational decision-making in Concord vineyards.},
bibtype = {article},
author = {Taylor, James A. and Bates, Terence R. and Jakubowski, Rhiann and Jones, Hazaël},
doi = {10.5344/AJEV.2022.22050},
journal = {American Journal of Enology and Viticulture}
}
Background and goals Lake Erie Concord growers have access to high-resolution spatial soil and production data but lack protocols and information on the optimum time to collect these data. This study intends to provide clearer information regarding the type and timing of sensor information to support in-season management.
Methods and key findings A three-year study in a 2.6 ha vineyard collected yield, pruning mass, canopy vigor and soil data, including yield and pruning mass from the previous year, at 321 sites. Stepwise linear regression and random forest regression approaches were used to model site-specific yield and pruning mass using spatial historical production data, multi-temporal in-season canopy vigor and soil data. The more complex yield elaboration process was best modelled with non-linear random forest regression while the simpler development of pruning mass was best modelled by linear regression.
Conclusions and significance Canopy vigor in the weeks preceding bloom was the most important predictor of the current season’s yield and should be used to generate stratified sampling designs for crop estimation at 30 days after bloom. In contrast, pruning mass was not well predicted by canopy vigor, even late-season canopy vigor, which is widely advocated for pruning mass estimation in viticulture. The previous year’s pruning mass was the dominant predictor of pruning mass in the current season. To model pruning mass going forward, the best approach is to start measuring it. Further work is still needed to develop robust, local site-specific yield and pruning mass models for operational decision-making in Concord vineyards.
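As a generic illustration of the modelling setup described above (random forest regression on site-level predictors, with cross-validated skill and feature importances), a sketch with synthetic placeholder features could look like this; the columns stand in for canopy vigor, soil, and prior-season variables and are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical site-by-feature matrix: pre-bloom canopy vigor, soil ECa,
# previous-season yield and previous-season pruning mass at 321 sites.
rng = np.random.default_rng(0)
X = rng.normal(size=(321, 4))
y = 2.0 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.3, size=321)

model = RandomForestRegressor(n_estimators=500, random_state=0)
r2_scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", r2_scores.mean())

model.fit(X, y)
print("feature importances:", model.feature_importances_)
```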
Leaf area index estimation of pergola-trained vineyards in arid regions using classical and deep learning methods based on UAV-based RGB images.
Ilniyaz, O.; Du, Q.; Shen, H.; He, W.; Feng, L.; Azadi, H.; Kurban, A.; and Chen, X.
Computers and Electronics in Agriculture, 207(February): 107723. 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Leaf area index estimation of pergola-trained vineyards in arid regions using classical and deep learning methods based on UAV-based RGB images},
type = {article},
year = {2023},
keywords = {CNN,Data augmentation,Leaf area index,Machine learning,Spectral features,Textural features,UAV},
pages = {107723},
volume = {207},
websites = {https://doi.org/10.1016/j.compag.2023.107723},
publisher = {Elsevier B.V.},
id = {273167f1-1ed5-37ca-b3c1-5273b3c459ff},
created = {2023-03-23T09:28:39.759Z},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-03-23T09:28:49.839Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
folder_uuids = {4787d485-df78-4119-ad84-be82dd804b00,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {Timely and accurate mapping of leaf area index (LAI) in vineyards plays an important role in management choices in precision agricultural practices. However, little work has been done to extract the LAI of pergola-trained vineyards using higher spatial resolution remote sensing data. The main objective of this study was to evaluate the ability of unmanned aerial vehicle (UAV) imagery to estimate the LAI of pergola-trained vineyards using shallow and deep machine learning (ML) methods. Field trials were conducted across different growth seasons in 2021, collecting 465 LAI samples. Firstly, this study trained five classical shallow ML models and an ensemble learning model using different spectral and textural indices calculated from UAV imagery, and identified the most correlated or useful features for LAI estimation in different growth stages. Then, because the classical ML approaches require the arduous computation of multiple indices and feature selection procedures, a ResNet-based convolutional neural network (CNN) model was constructed that can be fed directly with cropped images. Furthermore, this study introduced a new image data augmentation method applicable to regression problems. Results indicated that the textural indices performed better than the spectral indices, that combining them can improve estimation results, and that the ensemble learning method performed best among the classical ML models. By choosing the optimal input image size, the CNN model we constructed estimated the LAI most effectively without manual feature extraction and selection. The proposed image data augmentation method generates new training images with new labels by mosaicking the original ones, and the CNN model showed improved performance with this method compared to using only the original images or augmentation by rotation and flipping. This data augmentation method can be applied to other regression models for extracting crop growth parameters from remote sensing data, and we conclude that UAV imagery and deep learning methods are promising for LAI estimation of pergola-trained vineyards.},
bibtype = {article},
author = {Ilniyaz, Osman and Du, Qingyun and Shen, Huanfeng and He, Wenwen and Feng, Luwei and Azadi, Hossein and Kurban, Alishir and Chen, Xi},
doi = {10.1016/j.compag.2023.107723},
journal = {Computers and Electronics in Agriculture},
number = {February}
}
Timely and accurate mapping of leaf area index (LAI) in vineyards plays an important role in management choices in precision agricultural practices. However, little work has been done to extract the LAI of pergola-trained vineyards using higher spatial resolution remote sensing data. The main objective of this study was to evaluate the ability of unmanned aerial vehicle (UAV) imagery to estimate the LAI of pergola-trained vineyards using shallow and deep machine learning (ML) methods. Field trials were conducted across different growth seasons in 2021, collecting 465 LAI samples. Firstly, this study trained five classical shallow ML models and an ensemble learning model using different spectral and textural indices calculated from UAV imagery, and identified the most correlated or useful features for LAI estimation in different growth stages. Then, because the classical ML approaches require the arduous computation of multiple indices and feature selection procedures, a ResNet-based convolutional neural network (CNN) model was constructed that can be fed directly with cropped images. Furthermore, this study introduced a new image data augmentation method applicable to regression problems. Results indicated that the textural indices performed better than the spectral indices, that combining them can improve estimation results, and that the ensemble learning method performed best among the classical ML models. By choosing the optimal input image size, the CNN model we constructed estimated the LAI most effectively without manual feature extraction and selection. The proposed image data augmentation method generates new training images with new labels by mosaicking the original ones, and the CNN model showed improved performance with this method compared to using only the original images or augmentation by rotation and flipping. This data augmentation method can be applied to other regression models for extracting crop growth parameters from remote sensing data, and we conclude that UAV imagery and deep learning methods are promising for LAI estimation of pergola-trained vineyards.
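The mosaicking augmentation for regression described above can be sketched as follows: tile several labelled crops into one new image and derive the new label from the constituent labels. Averaging the four source labels is an illustrative assumption; the paper's exact label-combination rule is not reproduced here.

```python
import numpy as np

def mosaic_augment(images, labels, rng):
    """Build one 2x2 mosaic image and a combined regression label.

    `images` is a list of equally sized HxWxC arrays with scalar `labels`.
    The new label is taken as the mean of the four source labels, which is
    an illustrative assumption rather than the paper's exact rule.
    """
    idx = rng.choice(len(images), size=4, replace=False)
    tiles = [images[i] for i in idx]
    top = np.concatenate(tiles[:2], axis=1)      # two tiles side by side
    bottom = np.concatenate(tiles[2:], axis=1)
    mosaic = np.concatenate([top, bottom], axis=0)
    label = float(np.mean([labels[i] for i in idx]))
    return mosaic, label
```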
Using deep learning for pruning region detection and plant organ segmentation in dormant spur-pruned grapevines.
Guadagna, P.; Fernandes, M.; Chen, F.; Santamaria, A.; Teng, T.; Frioni, T.; Caldwell, D., G.; Poni, S.; Semini, C.; and Gatti, M.
Precision Agriculture, 1-23. 3 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Using deep learning for pruning region detection and plant organ segmentation in dormant spur-pruned grapevines},
type = {article},
year = {2023},
keywords = {Agriculture,Atmospheric Sciences,Chemistry and Earth Sciences,Computer Science,Physics,Remote Sensing/Photogrammetry,Soil Science & Conservation,Statistics for Engineering},
pages = {1-23},
websites = {https://link.springer.com/article/10.1007/s11119-023-10006-y},
month = {3},
publisher = {Springer},
day = {22},
id = {09fb60d8-e775-3eb7-94eb-d244ea79398c},
created = {2023-03-28T05:58:44.167Z},
accessed = {2023-03-28},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-03-29T11:22:31.558Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {35f1a258-89fa-488b-8684-355f11ee4e9b,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {Even though mechanization has dramatically decreased labor requirements, vineyard management costs are still affected by selective operations such as winter pruning. Robotic solutions are becoming more common in agriculture; however, few studies have focused on grapevines. This work aims at fine-tuning and testing two different deep neural networks for: (i) detecting pruning regions (PRs), and (ii) performing organ segmentation of spur-pruned dormant grapevines. The Faster R-CNN network was fine-tuned using 1215 RGB images collected in different vineyards and annotated through bounding boxes. The network was tested on 232 RGB images, PRs were categorized by wood type (W), orientation (Or) and visibility (V), and performance metrics were calculated. PR detection was dramatically affected by visibility. The highest detection rate was associated with visible intermediate complex spurs in Merlot (0.97), while the most represented coplanar simple spurs allowed a 74% detection rate. The Mask R-CNN network was trained for grapevine organ (GO) segmentation using 119 RGB images annotated by distinguishing 5 classes (cordon, arm, spur, cane and node). The network was tested on 60 RGB images of light pruned (LP), shoot-thinned (ST) and unthinned control (C) grapevines. Nodes were the best segmented GOs (0.88) and general recall was higher for ST (0.85) compared to C (0.80), confirming the role of canopy management in improving the performance of hi-tech solutions based on artificial intelligence. The two fine-tuned and tested networks are part of a larger control framework that is under development for autonomous winter pruning of grapevines.},
bibtype = {article},
author = {Guadagna, P and Fernandes, M and Chen, F and Santamaria, A and Teng, T and Frioni, T and Caldwell, D G and Poni, S and Semini, C and Gatti, M},
doi = {10.1007/S11119-023-10006-Y},
journal = {Precision Agriculture}
}
Even though mechanization has dramatically decreased labor requirements, vineyard management costs are still affected by selective operations such as winter pruning. Robotic solutions are becoming more common in agriculture; however, few studies have focused on grapevines. This work aims at fine-tuning and testing two different deep neural networks for: (i) detecting pruning regions (PRs), and (ii) performing organ segmentation of spur-pruned dormant grapevines. The Faster R-CNN network was fine-tuned using 1215 RGB images collected in different vineyards and annotated through bounding boxes. The network was tested on 232 RGB images, PRs were categorized by wood type (W), orientation (Or) and visibility (V), and performance metrics were calculated. PR detection was dramatically affected by visibility. The highest detection rate was associated with visible intermediate complex spurs in Merlot (0.97), while the most represented coplanar simple spurs allowed a 74% detection rate. The Mask R-CNN network was trained for grapevine organ (GO) segmentation using 119 RGB images annotated by distinguishing 5 classes (cordon, arm, spur, cane and node). The network was tested on 60 RGB images of light pruned (LP), shoot-thinned (ST) and unthinned control (C) grapevines. Nodes were the best segmented GOs (0.88) and general recall was higher for ST (0.85) compared to C (0.80), confirming the role of canopy management in improving the performance of hi-tech solutions based on artificial intelligence. The two fine-tuned and tested networks are part of a larger control framework that is under development for autonomous winter pruning of grapevines.
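Fine-tuning an off-the-shelf detector for pruning-region detection, as the abstract describes for Faster R-CNN, typically follows the standard torchvision recipe of swapping the box-predictor head for the new class count. This is the generic pattern, not the authors' released code:

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def pruning_region_detector(num_classes):
    """Faster R-CNN with a box head resized for pruning-region classes.

    `num_classes` must include the background class (e.g. PR categories + 1).
    Standard torchvision fine-tuning pattern, not the paper's own code.
    """
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model
```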
Early yield prediction in different grapevine varieties using computer vision and machine learning.
Palacios, F.; Diago, M., P.; Melo-Pinto, P.; and Tardaguila, J.
Precision Agriculture, 24(2): 407-435. 4 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Early yield prediction in different grapevine varieties using computer vision and machine learning},
type = {article},
year = {2023},
keywords = {Digital viticulture,Non-invasive sensing technologies,SegNet architecture,Yield estimation},
pages = {407-435},
volume = {24},
websites = {https://link.springer.com/article/10.1007/s11119-022-09950-y},
month = {4},
publisher = {Springer},
day = {1},
id = {52f774ad-e2fc-3b85-ae57-bdbce8ce989e},
created = {2023-04-25T05:48:43.854Z},
accessed = {2023-04-25},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-04-27T05:58:46.405Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {35f1a258-89fa-488b-8684-355f11ee4e9b,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {Yield assessment is a highly relevant task for the wine industry. The goal of this work was to develop a new algorithm for early yield prediction in different grapevine varieties using computer vision and machine learning. Vines from six grapevine (Vitis vinifera L.) varieties were photographed using a mobile platform in a commercial vineyard at pea-size berry stage. A SegNet architecture was employed to detect the visible berries and canopy features. All features were used to train support vector regression (SVR) models for predicting number of actual berries and yield. Regarding the berries’ detection step, a F1-score average of 0.72 and coefficients of determination (R2) above 0.92 were achieved for all varieties between the number of estimated and the number of actual visible berries. The method yielded average values for root mean squared error (RMSE) of 195 berries, normalized RMSE (NRMSE) of 23.83% and R2 of 0.79 between the number of estimated and the number of actual berries per vine using the leave-one-out cross validation method. In terms of yield forecast, the correlation between the actual yield and its estimated value yielded R2 between 0.54 and 0.87 among different varieties and NRMSE between 16.47% and 39.17% while the global model (including all varieties) had a R2 equal to 0.83 and NRMSE of 29.77%. The number of actual berries and yield per vine can be predicted up to 60 days prior to harvest in several grapevine varieties using the new algorithm.},
bibtype = {article},
author = {Palacios, Fernando and Diago, Maria P. and Melo-Pinto, Pedro and Tardaguila, Javier},
doi = {10.1007/S11119-022-09950-Y},
journal = {Precision Agriculture},
number = {2}
}
Yield assessment is a highly relevant task for the wine industry. The goal of this work was to develop a new algorithm for early yield prediction in different grapevine varieties using computer vision and machine learning. Vines from six grapevine (Vitis vinifera L.) varieties were photographed using a mobile platform in a commercial vineyard at pea-size berry stage. A SegNet architecture was employed to detect the visible berries and canopy features. All features were used to train support vector regression (SVR) models for predicting number of actual berries and yield. Regarding the berries’ detection step, a F1-score average of 0.72 and coefficients of determination (R2) above 0.92 were achieved for all varieties between the number of estimated and the number of actual visible berries. The method yielded average values for root mean squared error (RMSE) of 195 berries, normalized RMSE (NRMSE) of 23.83% and R2 of 0.79 between the number of estimated and the number of actual berries per vine using the leave-one-out cross validation method. In terms of yield forecast, the correlation between the actual yield and its estimated value yielded R2 between 0.54 and 0.87 among different varieties and NRMSE between 16.47% and 39.17% while the global model (including all varieties) had a R2 equal to 0.83 and NRMSE of 29.77%. The number of actual berries and yield per vine can be predicted up to 60 days prior to harvest in several grapevine varieties using the new algorithm.
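The evaluation protocol quoted above (SVR on image-derived features, leave-one-out cross-validation, RMSE/NRMSE/R²) can be reproduced generically. The feature matrix and target below are synthetic placeholders, and normalizing RMSE by the target range is an assumption; the paper may normalize differently.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import mean_squared_error, r2_score

# Placeholder data: rows = vines, columns = berry/canopy features from
# the segmentation step; y = actual berries per vine.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 5))
y = 800 + 120 * X[:, 0] + rng.normal(scale=40, size=60)

pred = cross_val_predict(SVR(kernel="rbf", C=100.0), X, y, cv=LeaveOneOut())
rmse = mean_squared_error(y, pred) ** 0.5
print("RMSE:", rmse)
print("NRMSE (%):", 100 * rmse / (y.max() - y.min()))
print("R2:", r2_score(y, pred))
```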
GrapesNet: Indian RGB & RGB-D vineyard image datasets for deep learning applications.
Barbole, D., K.; and Jadhav, P., M.
Data in Brief, 48: 109100. 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {GrapesNet: Indian RGB & RGB-D vineyard image datasets for deep learning applications},
type = {article},
year = {2023},
keywords = {Artificial intelligence,Deep learning,Grape bunch detection etc,Grape bunch segmentation,Vineyard dataset},
pages = {109100},
volume = {48},
websites = {https://doi.org/10.1016/j.dib.2023.109100},
publisher = {Elsevier Inc.},
id = {976568e4-4810-31b2-ae89-98f389feb052},
created = {2023-04-27T05:58:46.129Z},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-09-18T10:51:03.364Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
folder_uuids = {e09e8a73-297c-40aa-8415-81ec386a90b0,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {In most countries, grapes are considered a cash crop, and substantial research is under way on automated grape harvesting systems. Fast and reliable grape bunch detection is a prime need for the deep learning based automated systems that handle object detection and object segmentation tasks, but very few datasets of grape bunches in vineyards are currently available, which restricts research in this area. Indian vineyard structures are more complex than those in other countries, which makes real-time work harder. To overcome these problems and provide vineyard data suited to Indian scenarios, this paper proposes four different datasets of grape bunches in vineyards. All GrapesNet datasets were acquired under natural environmental conditions, and together they include 11,000+ images of grape bunches. Data needed for grape cluster weight prediction, such as the height, width and actual weight of the cluster present in each image, are also provided with the dataset. The proposed datasets can be used for core tasks of future-generation automated vineyard harvesting technologies, such as grape bunch detection, grape bunch segmentation and grape bunch weight estimation.},
bibtype = {article},
author = {Barbole, Dhanashree K. and Jadhav, Parul M.},
doi = {10.1016/j.dib.2023.109100},
journal = {Data in Brief}
}
In most countries, grapes are considered a cash crop, and substantial research is under way on automated grape harvesting systems. Fast and reliable grape bunch detection is a prime need for the deep learning based automated systems that handle object detection and object segmentation tasks, but very few datasets of grape bunches in vineyards are currently available, which restricts research in this area. Indian vineyard structures are more complex than those in other countries, which makes real-time work harder. To overcome these problems and provide vineyard data suited to Indian scenarios, this paper proposes four different datasets of grape bunches in vineyards. All GrapesNet datasets were acquired under natural environmental conditions, and together they include 11,000+ images of grape bunches. Data needed for grape cluster weight prediction, such as the height, width and actual weight of the cluster present in each image, are also provided with the dataset. The proposed datasets can be used for core tasks of future-generation automated vineyard harvesting technologies, such as grape bunch detection, grape bunch segmentation and grape bunch weight estimation.
Dataset on unmanned aerial vehicle multispectral images acquired over a vineyard affected by Botrytis cinerea in northern Spain.
Vélez, S.; Ariza-Sentís, M.; and Valente, J.
Data in Brief, 46. 2 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Dataset on unmanned aerial vehicle multispectral images acquired over a vineyard affected by Botrytis cinerea in northern Spain},
type = {article},
year = {2023},
keywords = {velez2023datasetuav},
volume = {46},
websites = {https://pubmed.ncbi.nlm.nih.gov/36660442/},
month = {2},
publisher = {Elsevier},
day = {1},
id = {745d3673-1aff-3790-a316-54c1d3ddb869},
created = {2023-05-12T09:35:50.539Z},
accessed = {2023-05-12},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-23T18:53:05.154Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {e09e8a73-297c-40aa-8415-81ec386a90b0,5a010301-acb6-4642-a6b2-8afaee1b741c,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {Remote sensing makes it possible to gather data rapidly, precisely, accurately, and non-destructively, allowing it to assess grapevines accurately in near real-time. In addition, multispectral cameras capture information in different bands, which can be combined to generate vegetation indices useful in precision agriculture. This dataset contains 16,504 multispectral images from a 1.06 ha vineyard affected by Botrytis cinerea, in the north of Spain. The photos were taken throughout four UAV flights at 30 m height with varying camera angles on 16 September 2021, the same date as the grape harvest. The first flight took place with the camera tilted at 0° (nadir angle), the second flight at 30°, the third flight at 45°, and the fourth flight was also performed at 0° but was scheduled in the afternoon to capture the shadows of the plants projected on the ground. This dataset was created to support researchers interested in disease detection and, in general, UAV remote sensing in vineyards and other woody crops. Moreover, it allows digital photogrammetry and 3D reconstruction in the context of precision agriculture, enabling the study of the effect of different tilt angles on the 3D reconstruction of the vineyard and the generation of orthomosaics.},
bibtype = {article},
author = {Vélez, Sergio and Ariza-Sentís, Mar and Valente, João},
doi = {10.1016/J.DIB.2022.108876},
journal = {Data in Brief}
}
Remote sensing makes it possible to gather data rapidly, precisely, accurately, and non-destructively, allowing it to assess grapevines accurately in near real-time. In addition, multispectral cameras capture information in different bands, which can be combined to generate vegetation indices useful in precision agriculture. This dataset contains 16,504 multispectral images from a 1.06 ha vineyard affected by Botrytis cinerea, in the north of Spain. The photos were taken throughout four UAV flights at 30 m height with varying camera angles on 16 September 2021, the same date as the grape harvest. The first flight took place with the camera tilted at 0° (nadir angle), the second flight at 30°, the third flight at 45°, and the fourth flight was also performed at 0° but was scheduled in the afternoon to capture the shadows of the plants projected on the ground. This dataset was created to support researchers interested in disease detection and, in general, UAV remote sensing in vineyards and other woody crops. Moreover, it allows digital photogrammetry and 3D reconstruction in the context of precision agriculture, enabling the study of the effect of different tilt angles on the 3D reconstruction of the vineyard and the generation of orthomosaics.
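As the abstract notes, the bands of such multispectral imagery are typically combined into vegetation indices; the canonical example is NDVI, shown below with its standard definition. This is generic per-pixel band math, independent of this particular dataset.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    `nir` and `red` are co-registered reflectance bands as numpy arrays;
    `eps` avoids division by zero over dark pixels. Values fall in [-1, 1],
    with dense green vegetation typically well above 0.
    """
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)
```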
Fields2Cover: An Open-Source Coverage Path Planning Library for Unmanned Agricultural Vehicles.
Mier, G.; Valente, J.; and De Bruin, S.
IEEE Robotics and Automation Letters, 8(4): 2166-2172. 2023.
doi
link
bibtex
abstract
@article{
title = {Fields2Cover: An Open-Source Coverage Path Planning Library for Unmanned Agricultural Vehicles},
type = {article},
year = {2023},
keywords = {Agricultural automation,field robots,software architecture for robotic and automation},
pages = {2166-2172},
volume = {8},
publisher = {IEEE},
id = {d88d6fbd-a5ca-3b6b-84d8-933affd4d847},
created = {2023-05-17T08:05:02.769Z},
file_attached = {false},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-05-17T08:05:02.769Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
private_publication = {false},
abstract = {This letter describes Fields2Cover, a novel open source library for coverage path planning (CPP) for agricultural vehicles. While there are several CPP solutions nowadays, there have been limited efforts to unify them into an open source library and provide benchmarking tools to compare their performance. Fields2Cover provides a framework for planning coverage paths, developing novel techniques, and benchmarking state-of-the-art algorithms. The library features a modular and extensible architecture that supports various vehicles and can be used for a variety of applications, including farms. Its core modules are: a headland generator, a swath generator, a route planner and a path planner. An interface to the Robot Operating System (ROS) is also supplied as an add-on. In this letter, the functionalities of the library for planning a coverage path in agriculture are demonstrated using 8 state-of-the-art methods and 7 objective functions in simulation and field experiments.},
bibtype = {article},
author = {Mier, Gonzalo and Valente, Joao and De Bruin, Sytze},
doi = {10.1109/LRA.2023.3248439},
journal = {IEEE Robotics and Automation Letters},
number = {4}
}
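For readers unfamiliar with coverage path planning, the sketch below illustrates the basic swath-generation idea that libraries such as Fields2Cover formalise: back-and-forth (boustrophedon) passes spaced by the implement width. This is a toy illustration for a rectangular, obstacle-free field, not the Fields2Cover API.

```python
from typing import List, Tuple

def boustrophedon_swaths(field_width: float, field_length: float,
                         op_width: float) -> List[List[Tuple[float, float]]]:
    """Generate back-and-forth (boustrophedon) swaths over a rectangular field.

    Each swath is a two-point segment along the centreline of one implement
    pass; successive swaths alternate direction so the vehicle turns at the
    headlands.
    """
    swaths = []
    n = int(field_width // op_width)
    for i in range(n):
        x = (i + 0.5) * op_width          # centreline of the i-th pass
        if i % 2 == 0:                    # even passes go "up" the field
            swaths.append([(x, 0.0), (x, field_length)])
        else:                             # odd passes come back "down"
            swaths.append([(x, field_length), (x, 0.0)])
    return swaths

for s in boustrophedon_swaths(field_width=20.0, field_length=100.0, op_width=5.0):
    print(s)
```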
An expertized grapevine disease image database including five grape varieties focused on Flavescence dorée and its confounding diseases, biotic and abiotic stresses.
Tardif, M.; Amri, A.; Deshayes, A.; Greven, M.; Keresztes, B.; Fontaine, G.; Sicaud, L.; Paulhac, L.; Bentejac, S.; and Da Costa, J., P.
Data in Brief, 48: 109230. 6 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {An expertized grapevine disease image database including five grape varieties focused on Flavescence dorée and its confounding diseases, biotic and abiotic stresses},
type = {article},
year = {2023},
keywords = {tardif2023expertizedgrapevinedisease},
pages = {109230},
volume = {48},
month = {6},
publisher = {Elsevier},
day = {1},
id = {a41d6550-5ce6-34ee-af7c-efdf2fc326d0},
created = {2023-09-15T11:03:55.635Z},
accessed = {2023-09-15},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-23T18:42:33.862Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {e09e8a73-297c-40aa-8415-81ec386a90b0},
private_publication = {false},
abstract = {The grapevine is vulnerable to diseases, deficiencies, and pests, leading to significant yield losses. Current disease controls involve monitoring and spraying phytosanitary products at the vineyard block scale. However, automatic detection of disease symptoms could reduce the use of these products and treat diseases before they spread. Flavescence dorée (FD), a highly infectious disease that causes significant yield losses, is only diagnosed by identifying symptoms on three grapevine organs: leaf, shoot, and bunch. Its diagnosis is carried out by scouting experts, since many other diseases and stresses, either biotic or abiotic, produce similar symptoms (though not all at the same time). These experts need a decision support tool to improve their scouting efficiency. To address this, a dataset of 1483 RGB images of grapevines affected by various diseases and stresses, including FD, was acquired by proximal sensing. The images were taken in the field at a distance of 1-2 meters to capture entire grapevines, and an industrial flash ensured constant luminance regardless of the environmental circumstances. Images of 5 grape varieties (Cabernet sauvignon, Cabernet franc, Merlot, Ugni blanc and Sauvignon blanc) were acquired during 2 years (2020 and 2021). Two types of annotations were made: expert diagnosis at the grapevine scale in the field and symptom annotations at the leaf, shoot, and bunch levels on a computer. On 744 images, the leaves were annotated and divided into three classes: ‘FD symptomatic leaves’, ‘Esca symptomatic leaves’, and ‘Confounding leaves’. Symptomatic bunches and shoots were, in addition to leaves, annotated on 110 images using bounding boxes and broken lines, respectively. Additionally, 128 segmentation masks were created to allow the detection of the symptomatic shoots and bunches by segmentation algorithms and to compare the results with those of the detection algorithms.},
bibtype = {article},
author = {Tardif, Malo and Amri, Ahmed and Deshayes, Aymeric and Greven, Marc and Keresztes, Barna and Fontaine, Gaël and Sicaud, Laetitia and Paulhac, Laetitia and Bentejac, Sophie and Da Costa, Jean Pierre},
doi = {10.1016/J.DIB.2023.109230},
journal = {Data in Brief}
}
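A typical first step with such a dataset is tallying instances per class. The paper's abstract does not specify the on-disk annotation format, so the sketch below assumes a hypothetical COCO-style JSON purely for illustration.

```python
import json
from collections import Counter

def class_histogram(annotation_file: str) -> Counter:
    """Count annotations per class in a COCO-style file (format assumed)."""
    with open(annotation_file) as f:
        coco = json.load(f)
    id_to_name = {c["id"]: c["name"] for c in coco["categories"]}
    return Counter(id_to_name[a["category_id"]] for a in coco["annotations"])

# Hypothetical file name; the dataset's real layout may differ.
# print(class_histogram("grapevine_leaf_annotations.json"))
```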
AI-based Maize and Weeds Detection on the Edge with CornWeed Dataset.
Iqbal, N.; Manss, C.; Scholz, C.; König, D.; Igelbrink, M.; and Ruckelshausen, A.
. 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {AI-based Maize and Weeds Detection on the Edge with CornWeed Dataset},
type = {article},
year = {2023},
keywords = {plant detection,agriculture,data acquisition,deep learning,maize data,vision transformer},
id = {5e053b2f-e7dd-3afc-9d43-b08a200f3278},
created = {2023-09-20T08:23:11.234Z},
accessed = {2023-09-20},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-08T10:14:50.375Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {Artificial intelligence (AI) is increasingly used in agricultural applications. Yet, the lack of wireless-fidelity (Wi-Fi) connections on agricultural fields makes AI cloud services unavailable. Consequently, AI models have to be processed directly on the edge. In this paper, we evaluate state-of-the-art detection algorithms for their use in agriculture, in particular plant detection. Thus, this paper presents the CornWeed data set, which has been recorded on farm machines, showing labelled maize crops and weeds for plant detection. The paper provides accuracies for the state-of-the-art detection algorithms on the CornWeed data set, as well as frames per second (FPS) metrics for the considered networks on multiple edge devices. Moreover, for the FPS analysis, the detection algorithms are converted to open neural network exchange (ONNX) and TensorRT engine files as they could be used as future standards for model exchange.},
bibtype = {article},
author = {Iqbal, Naeem and Manss, Christoph and Scholz, Christian and König, Daniel and Igelbrink, Matthias and Ruckelshausen, Arno},
doi = {10.5281/zenodo.7961764}
}
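The ONNX conversion step mentioned above follows the standard PyTorch export workflow; a minimal sketch is given below, with a small off-the-shelf CNN standing in for the detection network and the 640 x 640 input size chosen arbitrarily.

```python
import torch
import torchvision

# A small CNN stands in for the detector here; any torch.nn.Module with a
# fixed input shape exports through the same call.
model = torchvision.models.resnet18(weights=None).eval()

dummy = torch.randn(1, 3, 640, 640)  # batch, channels, height, width
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["images"], output_names=["outputs"],
                  opset_version=17)
# The resulting .onnx file can then be compiled into a TensorRT engine
# (e.g. with the `trtexec` command-line tool) for edge deployment.
```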
Automatic diagnosis of a multi-symptom grapevine disease by decision trees and Graph Neural Networks.
Tardif, M.; Keresztes, B.; Deshayes, A.; Martin, D.; Greven, M.; and {Da Costa}, J.
Precision agriculture '23, 1011–1017. 2023.
Paper
link
bibtex
abstract
@article{
title = {Automatic diagnosis of a multi-symptom grapevine disease by decision trees and Graph Neural Networks},
type = {article},
year = {2023},
keywords = {tardif2023automaticdiagnosismultigraph},
pages = {1011–1017},
id = {4522f247-e38d-3c57-9146-584cf54f2aad},
created = {2023-09-21T11:07:48.768Z},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-23T20:25:06.407Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
folder_uuids = {6b565182-74c4-44fd-98cc-10618152e2ae,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {The grapevine disease called "Flavescence dorée" (FD) is actively monitored in Europe as it decreases yield and kills grapevines while being highly contagious. In this study, three methods for its automatic diagnosis from images acquired by proximal sensing (RGB camera) are proposed, compared and discussed. Method A uses a Convolutional Neural Network (CNN) classifier applied on raw images. The two other methods both process in two steps: (i) individual symptom detection using a CNN-based box detector and a deep segmentation algorithm, (ii) symptom-based diagnosis based either on a Random Forest classifier (method B) or on a Graph Neural Network (method C). A 6-fold cross-validation was performed on 787 images of vines suffering from FD or from other biotic or abiotic stress factors. Methods B and C reached almost equally good results and outperformed method A: they achieved respectively, in (precision, recall), (0.69, 0.81), (0.87, 0.88) and (0.88, 0.88).},
bibtype = {article},
author = {Tardif, Malo and Keresztes, Barna and Deshayes, Aymeric and Martin, D. and Greven, Marc and Da Costa, Jean-Pierre},
journal = {Precision agriculture '23}
}
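Method B's second stage, a Random Forest over detected-symptom statistics, can be sketched with scikit-learn. The four count features and the tiny training set below are invented for illustration; the paper's actual feature vector is not described in this abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative per-image features: counts of detected FD leaves, Esca leaves,
# confounding leaves, and FD-symptomatic shoots/bunches.
X = np.array([
    [12, 0, 1, 3],   # many FD leaves plus symptomatic shoots -> FD
    [0, 7, 2, 0],    # Esca-dominated symptoms                -> not FD
    [1, 0, 5, 0],    # mostly confounding leaves              -> not FD
    [9, 1, 0, 2],
])
y = np.array([1, 0, 0, 1])   # 1 = FD diagnosis at vine scale

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[10, 0, 1, 2]]))   # -> [1]
```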
Automatic diagnosis of a multi-symptom grape vine disease using computer vision.
Tardif, M.; Amri, A.; Keresztes, B.; Deshayes, A.; Martin, D.; Greven, M.; and {Da Costa}, J.
Acta Horticulturae, 1360: 53-60. 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Automatic diagnosis of a multi-symptom grape vine disease using computer vision},
type = {article},
year = {2023},
keywords = {tardif2023automaticdiagnosismulti},
pages = {53-60},
volume = {1360},
id = {f3ba706c-2c98-3ba5-a4c7-8ffef672315d},
created = {2023-09-21T11:07:48.771Z},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-23T18:42:35.262Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
folder_uuids = {6b565182-74c4-44fd-98cc-10618152e2ae,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {“Flavescence dorée” (FD) is a highly transmissible disease very closely monitored in Europe as it reduces vine productivity and causes vine death. Currently, this disease is controlled by a two-pronged approach: spraying insecticide on a regular basis to kill the vector, and having experts survey each row in a vineyard. Unfortunately, these experts are not able to carry out such a task every year on every vineyard and need an aid for planning their surveys. In this study, we propose and evaluate an original automatic method for the detection of FD, based on computer vision and artificial intelligence applied to images acquired by proximal sensing. A two-step approach is used, mimicking experts’ scouting in the vine rows: i) the three known isolated symptoms are detected, ii) isolated detections are combined to make a diagnosis at vine scale. To achieve this, a detection deep neural network is used to detect and classify non-healthy leaves into three classes – ‘FD symptomatic leaf’, ‘Esca leaf’ and ‘Confounding leaf’ – while a segmentation network retrieves FD symptomatic shoots and bunches. Finally, the association of the detected symptoms is performed by a Random Forest classifier allowing a diagnosis at the image scale. The experimental evaluation is conducted on images collected on 14 blocks planted with 5 grape cultivars, allowing the study of the impact of acquisition conditions and variability of symptom expressions among grape cultivars.},
bibtype = {article},
author = {Tardif, Malo and Amri, Ahmed and Keresztes, Barna and Deshayes, Aymeric and Martin, Damian and Greven, Marc and Da Costa, Jean-Pierre},
doi = {10.17660/ActaHortic.2023.1360.7},
journal = {Acta Horticulturae}
}
The Use of Computer Vision to Improve the Affinity of Rootstock-Graft Combinations and Identify Diseases of Grape Seedlings.
Rudenko, M.; Plugatar, Y.; Korzin, V.; Kazak, A.; Gallini, N.; and Gorbunova, N.
Inventions, 8(4): 92. 7 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {The Use of Computer Vision to Improve the Affinity of Rootstock-Graft Combinations and Identify Diseases of Grape Seedlings},
type = {article},
year = {2023},
keywords = {artificial intelligence,computer vision,environmental engineering,graft combinations’ affinity,grape diseases,grape seedlings,neural networks,rootstock,viticulture},
pages = {92},
volume = {8},
websites = {https://www.mdpi.com/2411-5134/8/4/92/htm,https://www.mdpi.com/2411-5134/8/4/92},
month = {7},
publisher = {Multidisciplinary Digital Publishing Institute},
day = {19},
id = {1235f0f2-9d34-3030-b753-a1416e1ce7ab},
created = {2023-10-26T09:20:20.156Z},
accessed = {2023-10-26},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:23.972Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {This study explores the application of computer vision for enhancing the selection of rootstock-graft combinations and detecting diseases in grape seedlings. Computer vision has various applications in viticulture, but publications and research have not reported the use of computer vision in rootstock-graft selection, which defines the novelty of this research. This paper presents elements of the technology for applying computer vision to rootstock-graft combinations and includes an analysis of grape seedling cuttings. This analysis allows for a more accurate determination of the compatibility between rootstock and graft, as well as the detection of potential seedling diseases. The utilization of computer vision to automate the grafting process of grape cuttings offers significant benefits in terms of increased efficiency, improved quality, and reduced costs. This technology can replace manual labor and ensure economic efficiency and reliability, among other advantages. It also facilitates monitoring the development of seedlings to determine the appropriate planting time. Image processing algorithms play a vital role in automatically determining seedling characteristics such as trunk diameter and the presence of any damage. Furthermore, computer vision can aid in the identification of diseases and defects in seedlings, which is crucial for assessing their overall quality. The automation of these processes offers several advantages, including increased efficiency, improved quality, and reduced costs through the reduction of manual labor and waste. To fulfill these objectives, a unique robotic assembly line is planned for the grafting of grape cuttings. This line will be equipped with two conveyor belts, a delta robot, and a computer vision system. The use of computer vision in automating the grafting process for grape cuttings offers significant benefits in terms of efficiency, quality improvement, and cost reduction. By incorporating image processing algorithms and advanced robotics, this technology has the potential to revolutionize the viticulture industry. By training a computer vision system to analyze data on rootstock and graft grape varieties, it is possible to reduce the number of defects by half. The implementation of a semi-automated computer vision system can improve crossbreeding efficiency by 90%. Reducing the time spent on pairing selection is also a significant advantage: manual selection takes between 1 and 2 min, the semi-automated system reduces this to 30 s, and the prospect of further automation reducing the time to 10–15 s will significantly increase the productivity and efficiency of the process. In addition to the aforementioned benefits, the integration of computer vision technology in grape grafting processes brings several other advantages. One notable advantage is the increased accuracy and precision in pairing selection. Computer vision algorithms can analyze a wide range of factors, including size, shape, color, and structural characteristics, to make more informed decisions when matching rootstock and graft varieties. This can lead to better compatibility and improved overall grafting success rates.},
bibtype = {article},
author = {Rudenko, Marina and Plugatar, Yurij and Korzin, Vadim and Kazak, Anatoliy and Gallini, Nadezhda and Gorbunova, Natalia},
doi = {10.3390/INVENTIONS8040092},
journal = {Inventions},
number = {4}
}
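The abstract does not describe how trunk diameter is measured; one plausible classical baseline is to segment the cutting from the background and take the median per-row width of the binary mask, as sketched below.

```python
import numpy as np

def trunk_diameter_px(mask: np.ndarray) -> float:
    """Median per-row width (in pixels) of a binary trunk/cutting mask.

    `mask` is a 2-D boolean array where True marks trunk pixels; converting
    pixels to millimetres would require a known camera calibration.
    """
    widths = mask.sum(axis=1)            # foreground pixels in each row
    widths = widths[widths > 0]          # ignore rows without trunk
    return float(np.median(widths))

# Synthetic vertical "trunk" 8 px wide in a 32x32 image.
mask = np.zeros((32, 32), dtype=bool)
mask[:, 12:20] = True
print(trunk_diameter_px(mask))           # -> 8.0
```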
Towards smart pruning: ViNet, a deep-learning approach for grapevine structure estimation.
Gentilhomme, T.; Villamizar, M.; Corre, J.; and Odobez, J., M.
Computers and Electronics in Agriculture, 207: 107736. 4 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Towards smart pruning: ViNet, a deep-learning approach for grapevine structure estimation},
type = {article},
year = {2023},
keywords = {Convolutional network,Deep learning,Grapevine pruning,Plant skeleton,Precision viticulture,Vineyard},
pages = {107736},
volume = {207},
month = {4},
publisher = {Elsevier},
day = {1},
id = {cb56e55e-71e9-3e0b-b8ac-4284e305a41c},
created = {2023-10-26T10:44:38.867Z},
accessed = {2023-10-26},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:24.310Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {Image and video tools for analyzing crop scenes and plants are essential for applying precision agriculture to crop maintenance, harvesting, or pruning. In this paper, we are interested in vine pruning, a task that requires a precise understanding of the vine structure with branch type identification, orientations, and node locations. However, estimating such a structure is highly challenging, given the large variety in grapevine appearances, lighting conditions, viewpoint, the interweaving of branches, occlusions, and the level of details needed. To address these challenges, we propose ViNet: a deep-learning approach for estimating the structure of grapevine, which comprises two main steps: The first one detects nodes and identifies the branch types of the plant, as well as the spatial relation between them, whilst the second one uses the extracted nodes and branches to build a graph, out of which the structure of the grapevine is inferred. In doing so, we make four main contributions: (i) we put forward for the first time a method for automatic segmentation and extraction of the grapevine structure from images; (ii) we propose a novel approach leveraging the powerful stacked hourglass network to infer node location, branch types and the spatial relationships between them; (iii) we propose a novel shortest path weighted graph optimization step to extract connections between nodes and infer the structure, allowing to address the problem of having an unknown number of branches in the tree; (iv) we publicly release a dataset of more than 1500 grapevine images fully annotated with the structure information. Extensive experiments on this dataset demonstrate the efficiency of our approach at predicting the structure of a grapevine, achieving a precision and recall for node prediction of 95% and 90%, respectively, as well as ablation studies validating our design choices.},
bibtype = {article},
author = {Gentilhomme, Theophile and Villamizar, Michael and Corre, Jerome and Odobez, Jean Marc},
doi = {10.1016/J.COMPAG.2023.107736},
journal = {Computers and Electronics in Agriculture}
}
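The shortest-path step in ViNet's second stage can be illustrated with networkx: detected keypoints become graph nodes, edge weights encode a connection cost, and low-cost paths recover plausible branches. The nodes and weights below are invented for the sketch.

```python
import networkx as nx

# Nodes stand in for detected grapevine keypoints; edge weights stand in for
# a connection cost (e.g. derived from the network's affinity outputs).
G = nx.Graph()
G.add_weighted_edges_from([
    ("trunk", "cordon", 0.1),
    ("cordon", "node1", 0.2),
    ("cordon", "node2", 0.3),
    ("node1", "node2", 0.9),   # implausible direct link: high cost
    ("node2", "tip", 0.2),
])

path = nx.shortest_path(G, "trunk", "tip", weight="weight")
print(path)   # low-cost chain: trunk -> cordon -> node2 -> tip
```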
Vision-Based Monitoring of the Short-Term Dynamic Behaviour of Plants for Automated Phenotyping.
Wagner, N.; and Cielniak, G.
2023.
Paper
link
bibtex
abstract
@misc{
title = {Vision-Based Monitoring of the Short-Term Dynamic Behaviour of Plants for Automated Phenotyping},
type = {misc},
year = {2023},
pages = {624-633},
id = {804abb0a-db90-31a1-9a72-f23b72a1314a},
created = {2023-10-27T06:12:27.656Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:24.482Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {Modern computer vision technology plays an increasingly important role in agriculture. Automated monitoring of plants, for example, is an essential task in several applications, such as high-throughput phenotyping or plant health monitoring. Under external influences like wind, plants typically exhibit dynamic behaviours which reveal important characteristics of their structure and condition. These behaviours, however, are typically not considered by state-of-the-art automated phenotyping methods, which mostly observe static plant properties. In this paper, we propose an automated system for monitoring oscillatory plant movement from video sequences. We employ harmonic inversion for the purpose of efficiently and accurately estimating the eigenfrequency and damping parameters of individual plant parts. The achieved accuracy is compared against values obtained by performing the Discrete Fourier Transform (DFT), which we use as a baseline. We demonstrate the applicability of this approach on different plants and plant parts, like wheat ears, hanging vines, as well as stems and stalks, which exhibit a range of oscillatory motions. By utilising harmonic inversion, we are able to consistently obtain more accurate values for the eigenfrequencies compared to those obtained by DFT. We are furthermore able to directly estimate values for the damping coefficient, achieving a similar accuracy as via DFT-based methods, but without the additional computational effort required for the latter. With the approach presented in this paper, it is possible to obtain estimates of mechanical plant characteristics in an automated manner, enabling automated acquisition of novel traits for phenotyping.},
bibtype = {misc},
author = {Wagner, Nikolaus and Cielniak, Grzegorz}
}
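The DFT baseline the authors compare against is straightforward to reproduce on synthetic data: generate a damped oscillation, read the eigenfrequency off the FFT peak, and fit the log of the envelope for the damping rate. Harmonic inversion itself requires specialised code and is not sketched here.

```python
import numpy as np

fs, T = 200.0, 10.0                       # sample rate (Hz), duration (s)
t = np.arange(0, T, 1 / fs)
f0, gamma = 2.5, 0.4                      # true eigenfrequency, damping rate
x = np.exp(-gamma * t) * np.cos(2 * np.pi * f0 * t)

# Eigenfrequency estimate: location of the FFT magnitude peak.
spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
f_est = freqs[np.argmax(spec)]

# Damping estimate: linear fit of log |amplitude| at the oscillation extrema
# (local maxima of |x|), whose envelope decays as exp(-gamma * t).
idx = np.where((np.abs(x[1:-1]) > np.abs(x[:-2])) &
               (np.abs(x[1:-1]) > np.abs(x[2:])))[0] + 1
slope, _ = np.polyfit(t[idx], np.log(np.abs(x[idx])), 1)

print(f"f ~ {f_est:.2f} Hz, damping ~ {-slope:.2f} 1/s")  # ~2.5 Hz, ~0.4 1/s
```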
Grape yield estimation with a smartphone’s colour and depth cameras using machine learning and computer vision techniques.
Parr, B.; Legg, M.; and Alam, F.
Computers and Electronics in Agriculture, 213: 108174. 10 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Grape yield estimation with a smartphone’s colour and depth cameras using machine learning and computer vision techniques},
type = {article},
year = {2023},
keywords = {Berry detection,Depth camera,Grapes,RGB-D,YOLO,Yield estimation},
pages = {108174},
volume = {213},
month = {10},
publisher = {Elsevier},
day = {1},
id = {0eb92be6-6b9a-3104-85d0-45a943b53730},
created = {2023-10-27T06:13:29.083Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:24.634Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {A smartphone with both colour and time-of-flight depth cameras is used for automated grape yield estimation of Chardonnay grapes. A new technique is developed to automatically identify grape berries in the smartphone's depth maps. This utilises the distortion peaks in the depth map caused by diffused scattering of the light within each grape berry. This technique is then extended to allow unsupervised training of a YOLOv7 model for the detection of grape berries in the smartphone's colour images. A correlation coefficient (R²) of 0.946 was achieved when comparing the count of grape berries observed in RGB images to those accurately identified by YOLO. Additionally, an average precision score of 0.970 was attained. Two techniques are then presented to automatically estimate the size of the grape berries and generate 3D models of grape bunches using both colour and depth information.},
bibtype = {article},
author = {Parr, Baden and Legg, Mathew and Alam, Fakhrul},
doi = {10.1016/J.COMPAG.2023.108174},
journal = {Computers and Electronics in Agriculture}
}
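Finding berries as local peaks in a depth map resembles generic local-maximum detection; the SciPy sketch below is one way to phrase it, with the neighbourhood size and threshold invented for the example (the paper's actual peak criterion is not given in this abstract).

```python
import numpy as np
from scipy.ndimage import maximum_filter

def local_peaks(depth: np.ndarray, size: int = 5, min_height: float = 0.01):
    """Return (row, col) coordinates of local maxima in a 2-D array.

    A pixel counts as a peak when it equals the maximum of its size x size
    neighbourhood and exceeds min_height above the array median.
    """
    is_max = depth == maximum_filter(depth, size=size)
    strong = depth > np.median(depth) + min_height
    return np.argwhere(is_max & strong)

# Synthetic "depth distortion" map with two bumps standing in for berries.
d = np.zeros((40, 40))
yy, xx = np.mgrid[0:40, 0:40]
for cy, cx in [(10, 10), (28, 30)]:
    d += 0.05 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 8.0)
print(local_peaks(d))   # -> [[10 10] [28 30]]
```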
Proximal sensing for geometric characterization of vines: A review of the latest advances.
Moreno, H.; and Andújar, D.
Computers and Electronics in Agriculture, 210: 107901. 7 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Proximal sensing for geometric characterization of vines: A review of the latest advances},
type = {article},
year = {2023},
pages = {107901},
volume = {210},
month = {7},
publisher = {Elsevier},
day = {1},
id = {0e530cf4-c596-3685-a147-e26b52683013},
created = {2023-10-27T06:36:38.933Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-23T14:59:44.683Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {Several variables, including a rising human population, varying weather patterns in the context of ongoing climate change, and the rapid worldwide spread of epidemics, all contribute to boosting agricultural demand. To ensure food availability, quality, and safety while increasing yields and profitability, precision agriculture must progress swiftly. Precision viticulture aims to optimize vineyard management in this setting by reducing resource consumption and environmental impact while simultaneously enhancing the yield, product quality, and oenological potential of vineyards. This comprehensive review article offers an overview of the real-world and laboratory applications of optical and non-optical sensors in precision viticulture for 3D modelling. Hence, there is a pressing need to track the development of crops at a wide range of spatial and temporal scales, in a wide variety of environments, and for a wide range of objectives in a non-destructive manner. Due to the intrinsic spatial heterogeneity of vineyards, the adoption of precision viticulture necessitates crop monitoring using contactless and non-invasive sensors such as ultrasonic, LiDAR (Light Detection and Ranging), depth, or RGB cameras to prevent low accuracy and sparse sampling. This study aims to assist researchers in gaining a broad understanding of the sensing technologies for precision viticulture, the present problems, and the advancement of the state of the art. The study focuses on sensors used for Proximal Sensing to geometrically characterize vines using static or dynamic ground-based measurements through a wide range of mobile sensing platforms. The employed sensors, data extraction, and analysis procedures are described. Moreover, the present and future potential of Proximal Sensing and Remote Sensing in vineyards is discussed.},
bibtype = {article},
author = {Moreno, Hugo and Andújar, Dionisio},
doi = {10.1016/J.COMPAG.2023.107901},
journal = {Computers and Electronics in Agriculture},
keywords = {moreno2023proximalsensinggeometric}
}
An improved lightweight network based on deep learning for grape recognition in unstructured environments.
Liu, B.; Zhang, Y.; Wang, J.; Luo, L.; Lu, Q.; Wei, H.; and Zhu, W.
Information Processing in Agriculture. 2 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {An improved lightweight network based on deep learning for grape recognition in unstructured environments},
type = {article},
year = {2023},
keywords = {AlNet,Depthwise separable convolution,Grape recognition,ResBlock-M,YOLOX},
month = {2},
publisher = {Elsevier},
day = {20},
id = {2c84e3fc-fe5b-3a49-92cd-c31603b04189},
created = {2023-10-27T06:43:20.203Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:25.239Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {In unstructured environments, dense grape fruit growth and the presence of occlusion cause difficult recognition problems, which will seriously affect the performance of grape picking robots. To address these problems, this study improves the YOLOX-Tiny model and proposes a new grape detection model, YOLOX-RA, which can quickly and accurately identify densely growing and occluded grape bunches. The proposed YOLOX-RA model uses a 3 × 3 convolutional layer with a stride of 2 to replace the focus layer to reduce the computational burden. The CBS layer in the ResBlock_Body module of the second, third, and fourth layers of the backbone layer is removed, and the CSPLayer module is replaced by the ResBlock-M module to speed up the detection. An auxiliary network (AlNet) with the remaining network blocks was added after the ResBlock-M module to improve the detection accuracy. Two depthwise separable convolutions (DSC) are used in the neck module layer to replace the normal convolution to reduce the computational cost. We evaluated the detection performance of SSD, YOLOv4 SSD, YOLOv4-Tiny, YOLO-Grape, YOLOv5-X, YOLOX-Tiny, and YOLOX-RA on a grape test set. The results show that the YOLOX-RA model has the best detection performance, achieving 88.75 % mAP, a recognition speed of 84.88 FPS, and a model size of 17.53 MB. It can accurately detect densely grown and shaded grape bunches, which can effectively improve the performance of the grape picking robot.},
bibtype = {article},
author = {Liu, Bingpiao and Zhang, Yunzhi and Wang, Jinhai and Luo, Lufeng and Lu, Qinghua and Wei, Huiling and Zhu, Wenbo},
doi = {10.1016/J.INPA.2023.02.003},
journal = {Information Processing in Agriculture}
}
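The depthwise separable convolution used to cut computation is a standard building block; a minimal PyTorch version is sketched below. This is the generic block, not the authors' exact ResBlock-M design.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise convolution followed by a 1x1 pointwise convolution."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        # groups=in_ch makes each filter see only its own input channel.
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 64, 80, 80)
print(DepthwiseSeparableConv(64, 128, stride=2)(x).shape)  # [1, 128, 40, 40]
```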
Weakly and semi-supervised detection, segmentation and tracking of table grapes with limited and noisy data.
Ciarfuglia, T., A.; Motoi, I., M.; Saraceni, L.; Fawakherji, M.; Sanfeliu, A.; and Nardi, D.
Computers and Electronics in Agriculture, 205: 107624. 2 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Weakly and semi-supervised detection, segmentation and tracking of table grapes with limited and noisy data},
type = {article},
year = {2023},
keywords = {Computer vision,Deep learning,Fruit detection and segmentation,Self-supervised learning,Yield prediction},
pages = {107624},
volume = {205},
month = {2},
publisher = {Elsevier},
day = {1},
id = {06f46e4f-3763-3746-902c-5144a0b126b6},
created = {2023-10-27T06:44:30.963Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:24.774Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {Detection, segmentation and tracking of fruits and vegetables are three fundamental tasks for precision agriculture, enabling robotic harvesting and yield estimation applications. However, modern algorithms are data hungry and it is not always possible to gather enough data to apply the best performing supervised approaches. Since data collection is an expensive and cumbersome task, the enabling technologies for using computer vision in agriculture are often out of reach for small businesses. Following previous work in this context (Ciarfuglia et al., 2022), where we proposed an initial weakly supervised solution to reduce the data needed to get state-of-the-art detection and segmentation in precision agriculture applications, here we improve that system and explore the problem of tracking fruits in orchards. We present the case of vineyards of table grapes in southern Lazio (Italy) since grapes are a difficult fruit to segment due to occlusion, colour and general illumination conditions. We consider the case in which there is some initial labelled data that could work as source data (e.g. wine grape data), but it is considerably different from the target data (e.g. table grape data). To improve detection and segmentation on the target data, we propose to train the segmentation algorithm with a weak bounding box label, while for tracking we leverage 3D Structure from Motion algorithms to generate new labels from already labelled samples. Finally, the two systems are combined in a full semi-supervised approach. Comparisons with state-of-the-art supervised solutions show how our methods are able to train new models that achieve high performances with few labelled images and with very simple labelling.},
bibtype = {article},
author = {Ciarfuglia, Thomas A. and Motoi, Ionut M. and Saraceni, Leonardo and Fawakherji, Mulham and Sanfeliu, Alberto and Nardi, Daniele},
doi = {10.1016/J.COMPAG.2023.107624},
journal = {Computers and Electronics in Agriculture}
}
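The Structure-from-Motion label transfer rests on re-projecting triangulated 3D points into new views; the pinhole-projection sketch below shows the core operation, with camera intrinsics and pose invented for the example.

```python
import numpy as np

def project(points_w: np.ndarray, K: np.ndarray, R: np.ndarray,
            t: np.ndarray) -> np.ndarray:
    """Project Nx3 world points into pixel coordinates of a pinhole camera."""
    p_cam = R @ points_w.T + t.reshape(3, 1)   # world -> camera frame
    uvw = K @ p_cam                            # camera -> image plane
    return (uvw[:2] / uvw[2]).T                # perspective divide

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # intrinsics
R = np.eye(3)                                 # identity pose for the demo
t = np.zeros(3)

# A triangulated grape-bunch point 2 m in front of the camera re-projects
# near the image centre; its 2D label can be copied into this new frame.
print(project(np.array([[0.1, 0.0, 2.0]]), K, R, t))   # ~[[360, 240]]
```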
Real-time tracking and counting of grape clusters in the field based on channel pruning with YOLOv5s.
Shen, L.; Su, J.; He, R.; Song, L.; Huang, R.; Fang, Y.; Song, Y.; and Su, B.
Computers and Electronics in Agriculture, 206: 107662. 3 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Real-time tracking and counting of grape clusters in the field based on channel pruning with YOLOv5s},
type = {article},
year = {2023},
keywords = {Channel pruning,Grape,Multiple object tracking,Real-time counting,YOLOv5s},
pages = {107662},
volume = {206},
month = {3},
publisher = {Elsevier},
day = {1},
id = {28015a97-5be1-3629-8e36-31f19d0d5e27},
created = {2023-10-27T06:45:46.225Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:24.922Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {Accurate fruit counting helps the grape wine industry make better logistics decisions before harvest, and therefore produce higher quality wine. In view of the poor real-time performance of existing fruit tracking and counting methods, and the lack of effective counting methods for cluster-like fruits due to their huge shape variability, in this study an end-to-end lightweight counting pipeline is developed to automate the processing of video data for real-time tracking and counting of grape clusters in field conditions. First, based on a channel pruning algorithm, a more lightweight YOLOv5s cluster detection model is obtained, where the number of model parameters, model size and floating-point operations (FLOPs) are reduced by 79 %, 76 %, and 58 %, respectively, and the pruned model size is only 3.4 MB. Secondly, soft non-maximum suppression is introduced in the prediction stage to improve detection performance for clusters with overlapping grapes. Test results show that mAP reaches 82.3 % and average inference time is 6.1 ms per image, which effectively reduces model parameters and complexity while ensuring detection accuracy. Finally, online multiple object tracking of clusters is implemented by integrating the detection results and the SORT algorithm, where two counting modes are set by introducing counting lines. Test results on 8 videos indicated that the average counting accuracy of the proposed method reached 84.9 %, the correlation coefficient with manual counting reached 0.9905, and the speed of video processing reached up to 50.4 frames per second (FPS), meeting field real-time requirements. This study provides a timely technical reference for the development of orchard robots to achieve real-time automated yield estimation and accurate crop management decisions.},
bibtype = {article},
author = {Shen, Lei and Su, Jinya and He, Runtian and Song, Lijie and Huang, Rong and Fang, Yulin and Song, Yuyang and Su, Baofeng},
doi = {10.1016/J.COMPAG.2023.107662},
journal = {Computers and Electronics in Agriculture}
}
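The counting-line mechanism reduces to detecting when a tracked centroid crosses a fixed line between consecutive frames. The sketch below assumes a simple track format (track id mapped to a list of centroids), which is an illustration rather than the paper's data structure.

```python
def count_line_crossings(tracks: dict, line_x: float) -> int:
    """Count tracks whose centroid crosses a vertical line x = line_x.

    `tracks` maps a track id to its per-frame centroid positions
    [(x0, y0), (x1, y1), ...], as produced by a tracker such as SORT.
    """
    crossed = 0
    for positions in tracks.values():
        for (x_prev, _), (x_curr, _) in zip(positions, positions[1:]):
            if (x_prev - line_x) * (x_curr - line_x) < 0:   # sign change
                crossed += 1
                break                                        # one count per track
    return crossed

tracks = {
    1: [(100, 50), (140, 52), (180, 55)],   # crosses x = 150
    2: [(100, 90), (120, 91), (140, 92)],   # stays left of the line
}
print(count_line_crossings(tracks, line_x=150.0))   # -> 1
```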
Exploratory approach for automatic detection of vine rows in terrace vineyards.
Figueiredo, N.; Padua, L.; Cunha, A.; Sousa, J., J.; and Sousa, A.
Procedia Computer Science, 219: 139-144. 1 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Exploratory approach for automatic detection of vine rows in terrace vineyards},
type = {article},
year = {2023},
keywords = {Artificial Intelligence,Precision agriculture,Remote sensing,Terrace vineyards},
pages = {139-144},
volume = {219},
month = {1},
publisher = {Elsevier},
day = {1},
id = {957ecde5-2285-3250-bb4f-b4e50a33e778},
created = {2023-10-27T07:00:02.237Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-10-27T07:00:24.059Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {The Alto Douro Demarcated Region in Portugal is the oldest and most regulated wine-growing region in the world, formed by an ecosystem of unique value that allows the cultivation of vines on its characteristic terraced vineyards. The detection of vine rows in terrace vineyards constitutes an essential task for achieving important goals such as multi-temporal crop evaluation and production estimation. Despite the advances and research in this field, most studies are limited to flat vineyards with straight vine rows. In this study, an exploratory precision agriculture approach for the automatic detection of vine rows in terrace vineyards is presented, combining remote sensing techniques with artificial intelligence methods such as machine learning and deep learning. At the current stage, the preliminary results are encouraging for the detection of vine rows in straight and curved lines, considering the complexity of the terrain.},
bibtype = {article},
author = {Figueiredo, Nuno and Padua, Luis and Cunha, Antonio and Sousa, Joaquim J. and Sousa, Antonio},
doi = {10.1016/J.PROCS.2023.01.274},
journal = {Procedia Computer Science}
}
Computer-Vision Based Real Time Waypoint Generation for Autonomous Vineyard Navigation with Quadruped Robots.
Milburn, L.; Gamba, J.; Fernandes, M.; and Semini, C.
2023 IEEE International Conference on Autonomous Robot Systems and Competitions, ICARSC 2023, 239-244. 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Computer-Vision Based Real Time Waypoint Generation for Autonomous Vineyard Navigation with Quadruped Robots},
type = {article},
year = {2023},
keywords = {Agricultural Robotics,Autonomous Vineyard Navigation,Computer-Vision,Quadruped Control},
pages = {239-244},
publisher = {IEEE},
id = {50d3d502-c446-33df-8937-a08ca8fa1aaf},
created = {2023-10-27T07:07:26.026Z},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-10-27T07:07:33.908Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
folder_uuids = {bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {The VINUM project seeks to address the shortage of skilled labor in modern vineyards by introducing a cutting-edge mobile robotic solution. Leveraging the capabilities of the quadruped robot HyQReal, this system, equipped with an arm and vision sensors, offers autonomous navigation and winter pruning of grapevines, reducing the need for human intervention. At the heart of this approach lies an architecture that enables the robot to navigate vineyards easily, identify grapevines accurately, and approach them for pruning with precision. A state machine drives the process, switching between various stages to ensure seamless and efficient task completion. The system's performance was assessed through experimentation, focusing on waypoint precision and optimizing the robot's workspace for single-plant operations. Results indicate that the architecture is highly reliable, with a mean error of 21.5 cm and a standard deviation of 17.6 cm for HyQReal. However, improvements in grapevine detection accuracy are necessary for optimal performance. This work is based on a computer-vision-based navigation method for quadruped robots in vineyards, opening up new possibilities for selective task automation. The system's architecture works well in ideal weather conditions, generating and arriving at precise waypoints that maximize the attached robotic arm's workspace. This work is an extension of our short paper presented at the Italian Conference on Robotics and Intelligent Machines (I-RIM), 2022 [1].},
bibtype = {article},
author = {Milburn, Lee and Gamba, Juan and Fernandes, Miguel and Semini, Claudio},
doi = {10.1109/ICARSC58346.2023.10129563},
journal = {2023 IEEE International Conference on Autonomous Robot Systems and Competitions, ICARSC 2023}
}
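The stage-switching architecture mentioned above can be pictured with a small state-machine sketch; the state names and transition flags below are hypothetical illustrations for intuition, not the VINUM design:

from enum import Enum, auto

class Stage(Enum):
    NAVIGATE_ROW = auto()  # follow the row towards the next generated waypoint
    DETECT_VINE = auto()   # run the vision model to locate a grapevine
    APPROACH = auto()      # place the robot so the arm workspace covers the plant
    PRUNE = auto()         # execute the pruning routine

def next_stage(stage, at_waypoint, vine_found, in_workspace, pruned):
    # One tick of a hypothetical stage switcher; the booleans would come
    # from perception and odometry in a real system.
    if stage is Stage.NAVIGATE_ROW and at_waypoint:
        return Stage.DETECT_VINE
    if stage is Stage.DETECT_VINE and vine_found:
        return Stage.APPROACH
    if stage is Stage.APPROACH and in_workspace:
        return Stage.PRUNE
    if stage is Stage.PRUNE and pruned:
        return Stage.NAVIGATE_ROW
    return stage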
Object detection and tracking on UAV RGB videos for early extraction of grape phenotypic traits.
Ariza-Sentís, M.; Baja, H.; Vélez, S.; and Valente, J.
Computers and Electronics in Agriculture, 211: 108051. 8 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Object detection and tracking on UAV RGB videos for early extraction of grape phenotypic traits},
type = {article},
year = {2023},
keywords = {Instance segmentation,MOTS,PointTrack,Spatial Embeddings,UAV,Video,Viticulture,YOLACT},
pages = {108051},
volume = {211},
month = {8},
publisher = {Elsevier},
day = {1},
id = {4aea33b6-c42e-325f-ae80-42ef172375ec},
created = {2023-10-27T07:09:48.980Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:25.743Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {Grapevine phenotyping is the process of determining the physical properties (e.g., size, shape, and number) of grape bunches and berries. Grapevine phenotyping information provides valuable characteristics to monitor the sanitary status of the vine. Knowing the number and dimensions of bunches and berries at an early stage of development provides relevant information to winegrowers about the yield to be harvested. However, the process of counting and measuring is usually done manually, which is laborious and time-consuming. Previous studies have attempted to implement bunch detection on red bunches in vineyards with leaf removal, and surveys have been done using ground vehicles and handheld cameras. However, Unmanned Aerial Vehicles (UAVs) mounted with RGB cameras, along with computer vision techniques, offer a cheap, robust, and time-saving alternative. Therefore, multi-object tracking and segmentation (MOTS) is utilized in this study to determine the traits of individual white grape bunches and berries from RGB videos acquired by a UAV over a commercial vineyard with a high density of leaves. To achieve this goal, two datasets with labelled images and phenotyping measurements were created and made available in a public repository. The PointTrack algorithm was used for detecting and tracking the grape bunches, and two instance segmentation algorithms - YOLACT and Spatial Embeddings - were compared to find the most suitable approach to detect berries. It was found that detection performs adequately for cluster detection, with a MODSA of 93.85. For tracking, the results were not sufficient when trained with 679 frames. This study provides an automated pipeline for the extraction of several grape phenotyping traits described by the International Organization of Vine and Wine (OIV) descriptors. The selected OIV descriptors are the bunch length, width, and shape (codes 202, 203, and 208, respectively) and the berry length, width, and shape (codes 220, 221, and 223, respectively). Lastly, the comparison of the number of detected berries per bunch indicated that Spatial Embeddings assessed berry counting more accurately (79.5%) than YOLACT (44.6%).},
bibtype = {article},
author = {Ariza-Sentís, Mar and Baja, Hilmy and Vélez, Sergio and Valente, João},
doi = {10.1016/J.COMPAG.2023.108051},
journal = {Computers and Electronics in Agriculture}
}
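For intuition, length and width descriptors such as OIV 202 and 203 reduce to the extent of an instance mask once detection and segmentation are done. A toy sketch under the assumption of axis-aligned measurement in pixels (not the published pipeline, which works on tracked instances across video frames):

import numpy as np

def bunch_length_width(mask, mm_per_px=1.0):
    # Axis-aligned extent of a non-empty binary instance mask, scaled to mm.
    ys, xs = np.nonzero(mask)
    length = (ys.max() - ys.min() + 1) * mm_per_px  # vertical extent ~ OIV 202
    width = (xs.max() - xs.min() + 1) * mm_per_px   # horizontal extent ~ OIV 203
    return length, width, length / width            # ratio as a crude shape cue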
Applying Knowledge Distillation on Pre-Trained Model for Early Grapevine Detection.
Hollard, L.; and Mohimont, L.
, 149-156. 6 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Applying Knowledge Distillation on Pre-Trained Model for Early Grapevine Detection},
type = {article},
year = {2023},
keywords = {Deep Learning,Fine-tuning,Knowledge Distillation,Pseudo-labelling,Yield forecast},
pages = {149-156},
websites = {https://ebooks.iospress.nl/doi/10.3233/AISE230024},
month = {6},
publisher = {IOS Press},
day = {23},
id = {2d7b8c7b-5e5c-3b73-a0dc-43befad392bb},
created = {2023-10-27T07:11:58.041Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:25.911Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {The development of Artificial Intelligence has raised interesting opportunities for improved automation in smart agriculture. Smart viticulture is one of the domains that can benefit from computer-vision tasks for field sustainability. Computer-vision solutions present additional constraints, as the data used for good training convergence have to be complex enough to cover sufficient features of the desired inputs. In this paper, we present a study that improves grapevine detection for early grape detection and grape yield prediction, whose interest for Champagne and wine companies is undeniable. Earlier yield predictions allow a better market assessment and the organization of harvest work, and help decision-making about plant management. Our goal is to carry out estimations 5 to 6 weeks before the harvest. Furthermore, the grapevines' growing conditions and the large amount of data to process for yield estimation require an embedded device to acquire data and compute deep learning inference. Thus, the grape detection model has to be lightweight enough to run on an embedded device. The models were pre-trained on two different types of datasets and several layer depths of deep learning models to propose a pseudo-labelling teacher-student Knowledge Distillation. Overall, the proposed solutions improved the F1 score, precision, recall, mean average precision at 50, and mean average precision 50-95 by 7.56 %, 6.98 %, 8.279 %, 7.934 %, and 13.63 %, respectively, at the BBCH77 phenological stage.},
bibtype = {article},
author = {Hollard, Lilian and Mohimont, Lucas},
doi = {10.3233/AISE230024}
}
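Teacher-student distillation of the kind mentioned above is commonly implemented as a blend of a hard-label loss and a softened teacher/student divergence. A generic Hinton-style sketch in PyTorch (classification form, not the paper's pseudo-labelling detection setup; the temperature T and mixing weight alpha are illustrative):

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # KL divergence between temperature-softened distributions, scaled by T^2
    # so gradients keep a comparable magnitude, plus the usual cross-entropy.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard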
Toward Grapevine Digital Ampelometry Through Vision Deep Learning Models.
Magalhaes, S., C.; Castro, L.; Rodrigues, L.; Padilha, T., C.; De Carvalho, F.; Neves Dos Santos, F.; Pinho, T.; Moreira, G.; Cunha, J.; Cunha, M.; Silva, P.; and Moreira, A., P.
IEEE Sensors Journal, 23(9): 10132-10139. 5 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Toward Grapevine Digital Ampelometry Through Vision Deep Learning Models},
type = {article},
year = {2023},
keywords = {Artificial neural networks (ANN),computer vision,image processing,precision agriculture,vine species identification},
pages = {10132-10139},
volume = {23},
month = {5},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
day = {1},
id = {e7b7c3b3-ee18-3021-adad-3cfc3b22dcc4},
created = {2023-10-27T07:20:42.268Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:26.513Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {Several thousand grapevine varieties exist, with even more naming identifiers. Adequate specialized labor is not available for proper classification or identification of grapevines, making the value of commercial vines uncertain. Traditional methods, such as genetic analysis or ampelometry, are time-consuming, expensive, and often require expert skills that are even rarer. New vision-based systems benefit from advanced and innovative technology and can be used by nonexperts in ampelometry. To this end, deep learning (DL) and machine learning (ML) approaches have been successfully applied for classification purposes. This work extends the state of the art by applying digital ampelometry techniques to a larger set of grapevine varieties. We benchmarked MobileNet v2, ResNet-34, and VGG-11-BN DL classifiers to assess their ability for digital ampelography. In our experiment, all the models could identify the vines' varieties through the leaf with a weighted F1 score higher than 92%.},
bibtype = {article},
author = {Magalhaes, Sandro Costa and Castro, Luis and Rodrigues, Leandro and Padilha, Tiago Cerveira and De Carvalho, Frederico and Neves Dos Santos, Filipe and Pinho, Tatiana and Moreira, Germano and Cunha, Jorge and Cunha, Mario and Silva, Paulo and Moreira, Antonio Paulo},
doi = {10.1109/JSEN.2023.3261544},
journal = {IEEE Sensors Journal},
number = {9}
}
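The weighted F1 score reported above averages the per-class F1 by class support, which scikit-learn computes directly; a generic snippet with placeholder variety labels (the names below are hypothetical):

from sklearn.metrics import f1_score

y_true = ["Alvarinho", "Loureiro", "Alvarinho", "Touriga"]  # hypothetical labels
y_pred = ["Alvarinho", "Loureiro", "Loureiro", "Touriga"]
# 'weighted' averages per-class F1 scores by class support, i.e. the
# weighted F1 reported in the abstract above.
print(f1_score(y_true, y_pred, average="weighted"))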
Segmentation Methods Evaluation on Grapevine Leaf Diseases.
Molnár, S.; and Tamás, L.
FedCSIS, 1081-1085. 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Segmentation Methods Evaluation on Grapevine Leaf Diseases},
type = {article},
year = {2023},
pages = {1081-1085},
websites = {https://github.com/MrD1360/deep_segmentation_vineyards_navigation},
id = {f7691e2d-7d12-3b00-90a4-cc908e722e03},
created = {2023-10-27T07:22:31.109Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-23T18:12:05.960Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {6b565182-74c4-44fd-98cc-10618152e2ae,4dd63b9a-2e78-4e16-b45d-b692d8e0c5c3,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {The problem of vine disease detection (VDD) has been addressed in a number of research papers; however, a generic solution is not yet available for this task in the community. The region-of-interest segmentation and object detection tasks are often complementary. A similar situation is encountered in VDD applications, in which crop or leaf detection can also be done via instance segmentation techniques. The focus of this work is to validate the most suitable methods from the main literature on vine leaf segmentation and disease detection on a custom dataset containing leaves both from the laboratory environment and cropped from images in the field. We tested five promising methods: Otsu's thresholding, Mask R-CNN, MobileNet, SegNet, and Feature Pyramid Network variants. The results of the comparison are available in Table I, which summarizes the accuracy and runtime of the different methods.},
bibtype = {article},
author = {Molnár, Szilárd and Tamás, Levente},
doi = {10.15439/2023F7053},
journal = {FedCSIS},
keywords = {molnar2023segmentationmethodsevaluation}
}
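Of the methods compared above, Otsu's thresholding is the simplest to reproduce; a minimal OpenCV sketch (the file name and the plain grayscale conversion are illustrative assumptions):

import cv2

img = cv2.imread("leaf.jpg")  # hypothetical input photograph
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Otsu automatically selects the threshold t that minimises the within-class
# intensity variance; pixels above t go to 255, the rest to 0.
t, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("leaf_mask.png", mask)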
Early Yield Estimation in Viticulture Based on Grapevine Inflorescence Detection and Counting in Videos.
Khokher, M., R.; Liao, Q.; Smith, A., L.; Sun, C.; MacKenzie, D.; Thomas, M., R.; Wang, D.; and Edwards, E., J.
IEEE Access, 11: 37790-37808. 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Early Yield Estimation in Viticulture Based on Grapevine Inflorescence Detection and Counting in Videos},
type = {article},
year = {2023},
keywords = {Grapevine,computer vision,deep learning,early yield estimation,inflorescence detection,inflorescence tracking,viticulture},
pages = {37790-37808},
volume = {11},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
id = {86d161e9-cf7a-3466-b8a6-52c5496412ad},
created = {2023-10-27T07:26:01.046Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:26.212Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {In viticulture, yield estimation is a key activity, which is important throughout the wine industry value chain. The earlier an accurate yield estimate can be made, the greater its value, increasing management options for grape growers and commercial options for winemakers. For yield estimates based on in-field measurements at scale, the number of inflorescences emerging after bud-burst offers the earliest practical signal, allowing yield potential to be determined months before harvest. This paper presents an approach to automatically count the inflorescence number at the phenological stage E-L 12 using RGB video data and demonstrates its use for estimating yield. A dataset consisting of RGB videos was collected shortly after bud-burst from multiple vineyards, in conjunction with hand counts to produce a manual ground-truth for the inflorescence counting task. The video frames were annotated using bounding boxes around the inflorescences to produce a digital ground-truth. A deep learning architecture was developed to learn features from the video frames during training and detect the inflorescences at the later inference stage. The detection results were fed to a tracking pipeline built using computer vision and deep learning techniques to generate the number of inflorescences present in test videos. The visual and quantitative results are presented and evaluated for the inflorescence detection and counting tasks. The developed inflorescence detector achieves an average precision of 80.00%, a recall of 83.92%, and an F1-score of 80.48%, through five-fold cross-validation on the annotated dataset. For the test videos, the developed automatic inflorescence counting model reports an absolute error of 11.03 inflorescences per panel, a normalized mean absolute error of 10.80%, and an R2 of 0.86, when the predicted per-panel counts were compared to the corresponding manual ground-truth. Based on the counting results, we estimate an early yield that is within 4% to 11% error when compared to the actual yield after harvest. Based on these results and a separate analysis of the relationship between hand counts of inflorescences and harvest yields in three vineyards over three growing seasons, we conclude that computer vision and machine learning based methods have the potential to provide early yield estimation in viticulture with commercially viable accuracy.},
bibtype = {article},
author = {Khokher, Muhammad Rizwan and Liao, Qiyu and Smith, Adam L. and Sun, Changming and MacKenzie, Donald and Thomas, Mark R. and Wang, Dadong and Edwards, Everard J.},
doi = {10.1109/ACCESS.2023.3263238},
journal = {IEEE Access}
}
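The error figures quoted above (absolute error per panel, normalized MAE, and R2) can be computed from paired automatic and manual counts; a generic sketch, noting that the paper's exact normalisation may differ:

import numpy as np

def count_metrics(pred, truth):
    # MAE, MAE as a percentage of the mean manual count, and R^2.
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    mae = np.mean(np.abs(pred - truth))
    nmae = 100.0 * mae / np.mean(truth)
    ss_res = np.sum((truth - pred) ** 2)
    ss_tot = np.sum((truth - truth.mean()) ** 2)
    return mae, nmae, 1.0 - ss_res / ss_tot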
Comparison of deep learning methods for grapevine growth stage recognition.
Schieck, M.; Krajsic, P.; Loos, F.; Hussein, A.; Franczyk, B.; Kozierkiewicz, A.; and Pietranik, M.
Computers and Electronics in Agriculture, 211: 107944. 8 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Comparison of deep learning methods for grapevine growth stage recognition},
type = {article},
year = {2023},
keywords = {Computer vision,Deep learning,Grapes growth stages,Viticulture},
pages = {107944},
volume = {211},
month = {8},
publisher = {Elsevier},
day = {1},
id = {a8b05a7d-8e3a-3d41-9f3a-3b5c15032533},
created = {2023-10-27T07:27:26.204Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:26.358Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {Monitoring the phenological development stages of grapes represents a challenge in viticulture. It requires distinguishing the phenological growth stages of grapevines, while continuous technological developments, especially in computer vision, enable a detailed classification of economically relevant development stages of grapes. In the present work, we show that, based on a cascading computer vision approach, the development stages of grapes can be classified and distinguished at the micro level. In a comparative experiment (ResNet, DenseNet, InceptionV3), it was shown that a ResNet architecture provides the best classification results, with an average accuracy of 88.1%.},
bibtype = {article},
author = {Schieck, Martin and Krajsic, Philippe and Loos, Felix and Hussein, Abdulbaree and Franczyk, Bogdan and Kozierkiewicz, Adrianna and Pietranik, Marcin},
doi = {10.1016/J.COMPAG.2023.107944},
journal = {Computers and Electronics in Agriculture}
}
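Comparisons like the one above typically swap pretrained torchvision backbones behind a common training loop; a sketch in which the specific depths (ResNet-50, DenseNet-121, Inception v3) are assumptions, since the abstract names only the model families:

import torch.nn as nn
from torchvision import models

def build_classifier(name, n_classes):
    # Replace each backbone's final layer with an n_classes output head.
    if name == "resnet50":
        m = models.resnet50(weights="IMAGENET1K_V1")
        m.fc = nn.Linear(m.fc.in_features, n_classes)
    elif name == "densenet121":
        m = models.densenet121(weights="IMAGENET1K_V1")
        m.classifier = nn.Linear(m.classifier.in_features, n_classes)
    elif name == "inception_v3":
        m = models.inception_v3(weights="IMAGENET1K_V1")  # expects 299x299 inputs
        m.fc = nn.Linear(m.fc.in_features, n_classes)
    return m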
Machine-Learning Methods to Identify Key Predictors of Site-Specific Vineyard Yield and Vine Size.
Taylor, J., A.; Bates, T., R.; Jakubowski, R.; and Jones, H.
American Journal of Enology and Viticulture, 74(1): 740013. 1 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Machine-Learning Methods to Identify Key Predictors of Site-Specific Vineyard Yield and Vine Size},
type = {article},
year = {2023},
keywords = {Concord,proximal canopy sensing,random forests},
pages = {740013},
volume = {74},
websites = {https://www.ajevonline.org/content/74/1/0740013,https://www.ajevonline.org/content/74/1/0740013.abstract},
month = {1},
publisher = {American Journal of Enology and Viticulture},
day = {1},
id = {27268e6e-3611-3319-815d-6a42915c03c8},
created = {2023-10-27T07:30:00.849Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-10-27T07:30:06.542Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {Background and goals Lake Erie Concord growers have access to high-resolution spatial soil and production data, but lack protocols and information on the optimum time to collect these data. This study examines the type and timing of sensor information to support in-season management.
Methods and key findings A three-year study in a 2.6 ha vineyard collected yield, pruning mass, canopy vigor, and soil data, including yield and pruning mass from the previous year, at 321 sites. Stepwise linear regression and random forest regression approaches were used to model site-specific yield and pruning mass using historical spatial production data, multi-temporal in-season canopy vigor, and soil data. The more complex yield elaboration process was best modelled with non-linear random forest regression, while the simpler development of pruning mass was best modelled by linear regression.
Conclusions and significance Canopy vigor in the weeks preceding bloom was the most important predictor of the current season’s yield and should be used to generate stratified sampling designs for crop estimation at 30 days after bloom. In contrast, pruning mass was not well-predicted by canopy vigor; even late-season canopy vigor, which is widely advocated to estimate pruning mass in viticulture. The previous year’s pruning mass was the dominant predictor of pruning mass in the current season. To model pruning mass going forward, the best approach is to start measuring it. Further work is still needed to develop robust, local site-specific yield and pruning mass models for operational decision-making in Concord vineyards.},
bibtype = {article},
author = {Taylor, James A. and Bates, Terence R. and Jakubowski, Rhiann and Jones, Hazaël},
doi = {10.5344/AJEV.2022.22050},
journal = {American Journal of Enology and Viticulture},
number = {1}
}
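The random forest regression used above for the yield model can be sketched with scikit-learn; synthetic data stands in for the 321 sites, and the three covariate names are simplified placeholders for the paper's richer predictor set:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(321, 3))  # stand-ins: previous yield, pre-bloom vigour, soil
y = 0.5 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(scale=0.3, size=321)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
for name, imp in zip(["prev_yield", "prebloom_vigour", "soil"], rf.feature_importances_):
    print(name, round(imp, 3))  # importances identify the key predictors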
Assessment of vineyard vigour and yield spatio-temporal variability based on UAV high resolution multispectral images.
Ferro, M., V.; Catania, P.; Miccichè, D.; Pisciotta, A.; Vallone, M.; and Orlando, S.
Biosystems Engineering, 231: 36-56. 7 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Assessment of vineyard vigour and yield spatio-temporal variability based on UAV high resolution multispectral images},
type = {article},
year = {2023},
keywords = {Precision viticulture,Shoot pruning weight,Vegetation index},
pages = {36-56},
volume = {231},
month = {7},
publisher = {Academic Press},
day = {1},
id = {963e61b2-4599-34fa-a294-15fd9d6eae60},
created = {2023-10-27T07:33:21.907Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:26.806Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {Accurate, timely assessment of the vineyard at field scale is essential for successful grape yield and quality. Remote sensing can be an effective and useful monitoring tool, as data from sensors on board Unmanned Aerial Vehicles (UAVs) can measure vegetative and reproductive growth and thus directly or indirectly detect variability. From the images obtained by UAV, Vegetation Indices (VIs) can be calculated and compared with various agronomic characteristics of the vineyard. The objective of this study was to evaluate the multispectral response of the vineyard in three specific phenological phases and to analyse the spatial distribution of vegetative vigour. A multirotor UAV equipped with a camera featuring multispectral sensors was used. Four VIs, namely the Normalised Difference Vegetation Index (NDVI), Normalised Difference Red Edge (NDRE), Green Normalised Difference Vegetation Index (GNDVI), and Modified Soil Adjusted Vegetation Index (MSAVI), were calculated using the georeferenced orthomosaic UAV images. Computer vision techniques were used to segment these orthoimages to extract only the vegetation canopy pixels. A high level of agronomic variability within the vineyard was identified. Pearson's coefficient showed a significant correlation of the NDVI and NDRE indices with yield from the early phenological stages onwards (r = 0.80 and 0.72, respectively), and of GNDVI at grape ripening (r = 0.83). Shoot pruning weight (SPW) showed the highest correlation (r = 0.84) with NDVI during the pea-sized berry phenological stage. Simple linear regression techniques were evaluated using VIs as predictors of SPW, and accurate predictive results were obtained for NDVI and NDRE, with RMSE values of 0.18 and 0.24, respectively. Geostatistical analysis was applied to model the spatial variability of SPW, and thus vineyard vigour. Assessing spatial variability and appreciating the level of vigour enables improved vineyard management by increasing sustainability and production efficiency.},
bibtype = {article},
author = {Ferro, Massimo V. and Catania, Pietro and Miccichè, Daniele and Pisciotta, Antonino and Vallone, Mariangela and Orlando, Santo},
doi = {10.1016/J.BIOSYSTEMSENG.2023.06.001},
journal = {Biosystems Engineering}
}
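The four indices above are standard band ratios; for reference, NDVI and NDRE computed from per-pixel reflectance arrays (a generic sketch; band order and radiometric calibration depend on the sensor):

import numpy as np

def ndvi(nir, red):
    # Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + 1e-9)

def ndre(nir, red_edge):
    # Normalised Difference Red Edge: (NIR - RedEdge) / (NIR + RedEdge).
    nir, red_edge = nir.astype(float), red_edge.astype(float)
    return (nir - red_edge) / (nir + red_edge + 1e-9)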
Deep Learning for Post-Harvest Grape Diseases Detection.
Mohimont, L.
, 0. 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Deep Learning for Post-Harvest Grape Diseases Detection},
type = {article},
year = {2023},
keywords = {classification,deep learning,fruit grading,grape disease,segmentation},
volume = {0},
id = {5c818afd-6b85-37db-85a3-09b23a3d471c},
created = {2023-10-27T07:38:16.458Z},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-22T17:08:42.431Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
folder_uuids = {4dd63b9a-2e78-4e16-b45d-b692d8e0c5c3,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {Post-harvest fruit grading is a necessary step to avoid disease-related loss in quality. This is relevant in the context of the Champagne industry, where grapes cannot be manipulated by machines to avoid crushing. Our team has been developing a computer vision based solution to automate this process. In this paper, our main contribution is the use of a PSPNet segmentation model for real-time visible symptom detection with an IoU score of 58%. The associated classification score reaches 95%, improving on our previous work. We also study a MobileNet-V2 model's ability to discriminate between different grape diseases in ideal conditions.},
bibtype = {article},
author = {Mohimont, Lucas},
doi = {10.3233/aise230025}
}
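The IoU score cited above is the standard mask overlap measure; a minimal sketch for binary masks (not the authors' evaluation code):

import numpy as np

def mask_iou(pred, target):
    # Intersection over union of two boolean segmentation masks.
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    return np.logical_and(pred, target).sum() / union if union else 1.0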
Grape Vision: A CNN-Based System for Yield Component Analysis of Grape Clusters.
Dange, B., J.; Kumar Mishra, P.; Metre, K., V.; Gore, S.; Laxnamrao Kurkute, S.; Khodke, H., E.; and Gore, S.
International Journal of Intelligent Systems and Applications in Engineering (IJISAE), 2023(9s): 239-244. 2023.
Paper
Website
link
bibtex
abstract
@article{
title = {Grape Vision: A CNN-Based System for Yield Component Analysis of Grape Clusters},
type = {article},
year = {2023},
keywords = {CNN,Grape prediction,image processing,machine learning},
pages = {239-244},
volume = {2023},
websites = {https://orcid.org/0000-0003-1814-591,},
id = {245f990a-28b0-35d3-911c-59ec6aeb0006},
created = {2023-10-27T07:43:19.741Z},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:27.138Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
private_publication = {false},
abstract = {The agricultural industry is adopting advanced technologies and applications like yield prediction, precision agriculture, and automated harvesting to enhance production and quality. Machine learning (ML) and computer vision are increasingly used for fruit detection, segmentation, and counting. Specifically, the use of Convolutional Neural Networks (CNN) in grape yield prediction and quality assessment is gaining popularity due to its high accuracy and cost efficiency. Additionally, a new methodology based on image analysis has been developed for fast and inexpensive cluster yield component determination in the wine and table grape industry.},
bibtype = {article},
author = {Dange, B J and Kumar Mishra, Punit and Metre, Kalpana V and Gore, Santosh and Laxnamrao Kurkute, Sanjay and Khodke, H E and Gore, Sujata},
journal = {International Journal of Intelligent Systems and Applications in Engineering (IJISAE)},
number = {9s}
}
A Preliminary Method for Tracking In-Season Grapevine Cluster Closure Using Image Segmentation and Image Thresholding.
Trivedi, M.; Zhou, Y.; Moon, J., H.; Meyers, J.; Jiang, Y.; Lu, G.; and Heuvel, J., V.
. 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {A Preliminary Method for Tracking In-Season Grapevine Cluster Closure Using Image Segmentation and Image Thresholding},
type = {article},
year = {2023},
websites = {https://doi.org/10.1155/2023/3923839},
id = {04103f3c-63de-3134-8a21-dda81d680a7e},
created = {2023-10-27T07:46:25.965Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:27.794Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {Mapping and monitoring cluster morphology provides insights for disease risk assessment, quality control in wine production, and understanding environmental influences on cluster shape. During the progression of grapevine morphology, cluster closure (CC) (also called bunch closure) is the stage when berries touch one another. This study used mobile phone images to develop a direct quantification method for tracking CC in three grapevine cultivars (Riesling, Pinot gris, and Cabernet Franc). A total of 809 cluster images from fruit set to veraison were analyzed using two image segmentation methods: (i) a Pyramid Scene Parsing Network (PSPNet) to extract cluster boundaries and (ii) Otsu's image thresholding method to calculate % CC based on gaps between the berries. PSPNet produced high accuracy (mean accuracy = 0.98, mean intersection over union (mIoU) = 0.95) with mIoU > 0.90 for both cluster and noncluster classes. Otsu's thresholding method resulted in <2% falsely classified gap and berry pixels affecting the quantified % CC. The progression of CC was described using basic statistics (mean and standard deviation) and using a curve fit. The CC curve showed an asymptotic trend, with a higher rate of progression observed in the first three weeks, followed by a gradual approach towards an asymptote. We propose that the X value (in this example, the number of weeks past berry set) at which the CC progression curve reaches the asymptote be considered as the official phenological stage of CC. The developed method provides a continuous scale of CC throughout the season, potentially serving as a valuable open-source research tool for studying grapevine cluster phenology and factors affecting CC.},
bibtype = {article},
author = {Trivedi, Manushi and Zhou, Yuwei and Moon, Jonathan Hyun and Meyers, James and Jiang, Yu and Lu, Guoyu and Heuvel, Justine Vanden},
doi = {10.1155/2023/3923839}
}
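Once PSPNet has outlined the cluster and Otsu's thresholding has split berry from gap pixels, the % CC described above reduces to a pixel ratio; a schematic sketch, not the published code:

import numpy as np

def percent_closure(berry_mask, cluster_mask):
    # % CC = berry pixels as a share of all pixels inside the cluster boundary.
    inside = cluster_mask.astype(bool)
    berries = np.logical_and(berry_mask.astype(bool), inside)
    return 100.0 * berries.sum() / max(inside.sum(), 1)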
Evaluating Critical Disease Occurrence in Grapevine Leaves using CNN: Use-Case in Eastern Europe.
Oprea, C., C.; Dragulinescu, A., M., C.; Marcu, I., M.; and Pirnog, I.
2023 17th International Conference on Engineering of Modern Electric Systems, EMES 2023. 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Evaluating Critical Disease Occurrence in Grapevine Leaves using CNN: Use-Case in Eastern Europe},
type = {article},
year = {2023},
keywords = {oprea2023evaluatingcriticaldisease},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
id = {382c45fa-abf6-3100-9f8d-38a85ca33118},
created = {2023-10-27T07:47:55.667Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-23T18:12:06.161Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {6b565182-74c4-44fd-98cc-10618152e2ae,4dd63b9a-2e78-4e16-b45d-b692d8e0c5c3,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {Convolutional Neural Networks are Deep Learning algorithms for image classification tasks in the Computer Vision area. Their efficiency has previously been evaluated in medical, engineering, and construction applications. Under this category, VGG16 and Avert-CNN (a modified VGG16 version) can perform real-time identification of disease occurrence in agricultural plants with high accuracy and fast estimation time. Thus, this research addresses the identification of the health status of grapevine leaves using specialized classification algorithms on images taken from the PlantVillage dataset and images acquired in a vineyard in the South-East of Romania. The outcome of these classifications consists of predictions based on one of the 5 classes for which the convolutional network was trained, along with a prediction accuracy metric. The classes considered in this process correspond to the state of a healthy plant and a set of 4 distinct diseases that can affect the vine: Black rot, Esca, Leaf blight and Powdery mildew. Based on the achieved results, a novel convolutional neural network architecture is proposed to ensure reliable estimates of the disease's probability of occurrence. Its efficiency in reaching over 94% prediction accuracy is demonstrated compared to the classic VGG16, which reaches 90.21% accuracy, and it surpasses Random Forest and Support Vector Machine algorithms that achieve 72.87% and 85.82% accuracy, respectively.},
bibtype = {article},
author = {Oprea, Cristina Claudia and Dragulinescu, Ana Maria Claudia and Marcu, Ioana Manuela and Pirnog, Ionut},
doi = {10.1109/EMES58375.2023.10171678},
journal = {2023 17th International Conference on Engineering of Modern Electric Systems, EMES 2023}
}
Convolutional Neural Networks are Deep Learning algorithms for image classification tasks in the Computer Vision area. Their efficiency has previously been evaluated in medical, engineering, and construction applications. Under this category, VGG16 and Avert-CNN (a modified VGG16 version) can perform real-time identification of disease occurrence in agricultural plants with high accuracy and fast estimation time. Thus, this research addresses the identification of the health status of grapevine leaves using specialized classification algorithms on images taken from the PlantVillage dataset and images acquired in a vineyard in the South-East of Romania. The outcome of these classifications consists of predictions based on one of the 5 classes for which the convolutional network was trained, along with a prediction accuracy metric. The classes considered in this process correspond to the state of a healthy plant and a set of 4 distinct diseases that can affect the vine: Black rot, Esca, Leaf blight and Powdery mildew. Based on the achieved results, a novel convolutional neural network architecture is proposed to ensure reliable estimates of the disease's probability of occurrence. Its efficiency in reaching over 94% prediction accuracy is demonstrated compared to the classic VGG16, which reaches 90.21% accuracy, and it surpasses Random Forest and Support Vector Machine algorithms that achieve 72.87% and 85.82% accuracy, respectively.
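Avert-CNN itself is not public; the VGG16 baseline it modifies is the standard transfer-learning recipe, sketched here in PyTorch under the paper's 5-class scheme (healthy plus four diseases):

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained VGG16 with the last classifier layer replaced by a
# 5-way head (healthy, Black rot, Esca, Leaf blight, Powdery mildew).
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                  # freeze the convolutional backbone
model.classifier[6] = nn.Linear(4096, 5)     # new 5-class output layer

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()              # train on 224x224 leaf crops
```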
Benchmarking edge computing devices for grape bunches and trunks detection using accelerated object detection single shot multibox deep learning models.
Magalhães, S., C.; dos Santos, F., N.; Machado, P.; Moreira, A., P.; and Dias, J.
Engineering Applications of Artificial Intelligence, 117: 105604. 1 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Benchmarking edge computing devices for grape bunches and trunks detection using accelerated object detection single shot multibox deep learning models},
type = {article},
year = {2023},
keywords = {Embedded systems,Heterogeneous platforms,Object detection,RetinaNet resNet,SSD resNet},
pages = {105604},
volume = {117},
month = {1},
publisher = {Pergamon},
day = {1},
id = {7439a1c4-a827-3a2c-b671-870ad812b6b1},
created = {2023-10-27T07:49:54.635Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:28.084Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {Purpose: Visual perception enables robots to perceive the environment. Visual data is processed using computer vision algorithms that are usually time-expensive and require powerful devices to process the visual data in real-time, which is unfeasible for open-field robots with limited energy. This work benchmarks the performance of different heterogeneous platforms for object detection in real-time. This research benchmarks three architectures: embedded GPU—Graphical Processing Units (such as NVIDIA Jetson Nano 2GB and 4GB, and NVIDIA Jetson TX2), TPU—Tensor Processing Unit (such as Coral Dev Board TPU), and DPU—Deep Learning Processor Unit (such as in AMD/Xilinx ZCU104 Development Board, and AMD/Xilinx Kria KV260 Starter Kit). Methods: The authors used the RetinaNet ResNet-50 fine-tuned using the natural VineSet dataset. Afterwards, the trained model was converted and compiled for target-specific hardware formats to improve execution efficiency. Conclusions and Results: The platforms were assessed in terms of performance of the evaluation metrics and efficiency (time of inference). Graphical Processing Units (GPUs) were the slowest devices, running at 3FPS to 5FPS, and Field Programmable Gate Arrays (FPGAs) were the fastest devices, running at 14FPS to 25FPS. The efficiency of the Tensor Processing Unit (TPU) is irrelevant and similar to NVIDIA Jetson TX2. TPU and GPU are the most power-efficient, consuming about 5W. The performance differences, in the evaluation metrics, across devices are irrelevant and have an F1 of about 70% and mean Average Precision (mAP) of about 60%.},
bibtype = {article},
author = {Magalhães, Sandro Costa and dos Santos, Filipe Neves and Machado, Pedro and Moreira, António Paulo and Dias, Jorge},
doi = {10.1016/J.ENGAPPAI.2022.105604},
journal = {Engineering Applications of Artificial Intelligence}
}
Purpose: Visual perception enables robots to perceive the environment. Visual data is processed using computer vision algorithms that are usually time-expensive and require powerful devices to process the visual data in real-time, which is unfeasible for open-field robots with limited energy. This work benchmarks the performance of different heterogeneous platforms for object detection in real-time. This research benchmarks three architectures: embedded GPU—Graphical Processing Units (such as NVIDIA Jetson Nano 2GB and 4GB, and NVIDIA Jetson TX2), TPU—Tensor Processing Unit (such as Coral Dev Board TPU), and DPU—Deep Learning Processor Unit (such as in AMD/Xilinx ZCU104 Development Board, and AMD/Xilinx Kria KV260 Starter Kit). Methods: The authors used the RetinaNet ResNet-50 fine-tuned using the natural VineSet dataset. Afterwards, the trained model was converted and compiled for target-specific hardware formats to improve execution efficiency. Conclusions and Results: The platforms were assessed in terms of performance of the evaluation metrics and efficiency (time of inference). Graphical Processing Units (GPUs) were the slowest devices, running at 3FPS to 5FPS, and Field Programmable Gate Arrays (FPGAs) were the fastest devices, running at 14FPS to 25FPS. The efficiency of the Tensor Processing Unit (TPU) is irrelevant and similar to NVIDIA Jetson TX2. TPU and GPU are the most power-efficient, consuming about 5W. The performance differences, in the evaluation metrics, across devices are irrelevant and have an F1 of about 70% and mean Average Precision (mAP) of about 60%.
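FPS figures of this kind typically come from a warmed-up timing loop around the forward pass. A device-agnostic sketch, where the model and input size are placeholders rather than the paper's exact setup:

```python
import time
import torch

def benchmark_fps(model, shape=(1, 3, 640, 640), warmup=10, runs=100):
    """Rough frames-per-second estimate for one model on one device."""
    x = torch.randn(*shape)
    with torch.no_grad():
        for _ in range(warmup):              # let clocks and caches settle
            model(x)
        t0 = time.perf_counter()
        for _ in range(runs):
            model(x)                         # on GPUs, also synchronize here
        return runs / (time.perf_counter() - t0)
```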
Using deep learning for pruning region detection and plant organ segmentation in dormant spur-pruned grapevines.
Guadagna, P.; Fernandes, M.; Chen, F.; Santamaria, A.; Teng, T.; Frioni, T.; Caldwell, D., G.; Poni, S.; Semini, C.; and Gatti, M.
Precision Agriculture, 24(4): 1547-1569. 8 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Using deep learning for pruning region detection and plant organ segmentation in dormant spur-pruned grapevines},
type = {article},
year = {2023},
keywords = {Computer vision,Object detection,Robotics,Viticulture,Winter pruning},
pages = {1547-1569},
volume = {24},
websites = {https://link.springer.com/article/10.1007/s11119-023-10006-y},
month = {8},
publisher = {Springer},
day = {1},
id = {2f305d7b-5281-3ddd-902e-68d989073a65},
created = {2023-10-27T07:58:34.517Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:27.451Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {Even though mechanization has dramatically decreased labor requirements, vineyard management costs are still affected by selective operations such as winter pruning. Robotic solutions are becoming more common in agriculture, however, few studies have focused on grapevines. This work aims at fine-tuning and testing two different deep neural networks for: (i) detecting pruning regions (PRs), and (ii) performing organ segmentation of spur-pruned dormant grapevines. The Faster R-CNN network was fine-tuned using 1215 RGB images collected in different vineyards and annotated through bounding boxes. The network was tested on 232 RGB images, PRs were categorized by wood type (W), orientation (Or) and visibility (V), and performance metrics were calculated. PR detection was dramatically affected by visibility. Highest detection was associated with visible intermediate complex spurs in Merlot (0.97), while most represented coplanar simple spurs allowed a 74% detection rate. The Mask R-CNN network was trained for grapevine organs (GOs) segmentation by using 119 RGB images annotated by distinguishing 5 classes (cordon, arm, spur, cane and node). The network was tested on 60 RGB images of light pruned (LP), shoot-thinned (ST) and unthinned control (C) grapevines. Nodes were the best segmented GOs (0.88) and general recall was higher for ST (0.85) compared to C (0.80) confirming the role of canopy management in improving performances of hi-tech solutions based on artificial intelligence. The two fine-tuned and tested networks are part of a larger control framework that is under development for autonomous winter pruning of grapevines.},
bibtype = {article},
author = {Guadagna, P. and Fernandes, M. and Chen, F. and Santamaria, A. and Teng, T. and Frioni, T. and Caldwell, D. G. and Poni, S. and Semini, C. and Gatti, M.},
doi = {10.1007/S11119-023-10006-Y/TABLES/9},
journal = {Precision Agriculture},
number = {4}
}
Even though mechanization has dramatically decreased labor requirements, vineyard management costs are still affected by selective operations such as winter pruning. Robotic solutions are becoming more common in agriculture, however, few studies have focused on grapevines. This work aims at fine-tuning and testing two different deep neural networks for: (i) detecting pruning regions (PRs), and (ii) performing organ segmentation of spur-pruned dormant grapevines. The Faster R-CNN network was fine-tuned using 1215 RGB images collected in different vineyards and annotated through bounding boxes. The network was tested on 232 RGB images, PRs were categorized by wood type (W), orientation (Or) and visibility (V), and performance metrics were calculated. PR detection was dramatically affected by visibility. Highest detection was associated with visible intermediate complex spurs in Merlot (0.97), while most represented coplanar simple spurs allowed a 74% detection rate. The Mask R-CNN network was trained for grapevine organs (GOs) segmentation by using 119 RGB images annotated by distinguishing 5 classes (cordon, arm, spur, cane and node). The network was tested on 60 RGB images of light pruned (LP), shoot-thinned (ST) and unthinned control (C) grapevines. Nodes were the best segmented GOs (0.88) and general recall was higher for ST (0.85) compared to C (0.80) confirming the role of canopy management in improving performances of hi-tech solutions based on artificial intelligence. The two fine-tuned and tested networks are part of a larger control framework that is under development for autonomous winter pruning of grapevines.
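Fine-tuning Faster R-CNN for pruning-region boxes follows the standard torchvision head-swap idiom. A sketch in which the class schema is illustrative, not the paper's exact annotation scheme:

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 1 + 3   # background + illustrative PR categories (e.g. spur types)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Swap the pretrained box predictor for one sized to the new class list;
# the model then trains on RGB images with bounding-box annotations.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
```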
Automated Grapevine Inflorescence Counting in a Vineyard Using Deep Learning and Multi-object Tracking.
Rahim, U., F.; Utsumi, T.; Iwaki, Y.; and Mineno, H.
2023 15th International Conference on Computer and Automation Engineering, ICCAE 2023, 276-280. 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Automated Grapevine Inflorescence Counting in a Vineyard Using Deep Learning and Multi-object Tracking},
type = {article},
year = {2023},
keywords = {deep learning,high-throughput phenotyping,instance segmentation,multi-object tracking,precision viticulture},
pages = {276-280},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
id = {fa897f4e-6961-34c8-a524-6cbfa9e1e48e},
created = {2023-10-27T08:04:57.476Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:27.940Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {To adjust management practices and improve wine marketing strategies, accurate vineyard yield estimation early in the growing season is essential. Conventional methods for yield forecasting rely on phenotypic features' manual assessment, which is time- and labor-intensive and often destructive. We combined a deep object segmentation method, mask region-based convolutional neural network (Mask R-CNN), with two potential multi-object tracking algorithms, simple online and real-time tracking (SORT) and intersection-over-union (IOU) trackers to develop a complete visual system that can automatically detect and track individual inflorescences, enabling the assessment of the number of inflorescences per vineyard row from vineyard video footage. The performance of the two tracking algorithms was evaluated using our vineyard dataset, which is more challenging than conventional tracking benchmark datasets owing to environmental factors. Our evaluation dataset consists of videos of four vineyard rows, including 221 vines that were automatically acquired under unprepared field conditions. We tracked individual inflorescences across video image frames with a 92.1% multi-object tracking accuracy (MOTA) and an 89.6% identity F1 score (IDF1). This allowed us to estimate inflorescence count per vineyard row with a 0.91 coefficient of determination (R2) between the estimated count and manual-annotated ground truth count. The impact of leaf occlusions on inflorescence visibility was lessened by processing multiple successive image frames with minimal displacements to construct multiple camera views. This study demonstrates the use of deep learning and multi-object tracking in creating a low-cost (requiring only an RGB camera), high-throughput phenotyping system for precision viticulture.},
bibtype = {article},
author = {Rahim, Umme Fawzia and Utsumi, Tomoyoshi and Iwaki, Yohei and Mineno, Hiroshi},
doi = {10.1109/ICCAE56788.2023.10111243},
journal = {2023 15th International Conference on Computer and Automation Engineering, ICCAE 2023}
}
To adjust management practices and improve wine marketing strategies, accurate vineyard yield estimation early in the growing season is essential. Conventional methods for yield forecasting rely on phenotypic features' manual assessment, which is time- and labor-intensive and often destructive. We combined a deep object segmentation method, mask region-based convolutional neural network (Mask R-CNN), with two potential multi-object tracking algorithms, simple online and real-time tracking (SORT) and intersection-over-union (IOU) trackers to develop a complete visual system that can automatically detect and track individual inflorescences, enabling the assessment of the number of inflorescences per vineyard row from vineyard video footage. The performance of the two tracking algorithms was evaluated using our vineyard dataset, which is more challenging than conventional tracking benchmark datasets owing to environmental factors. Our evaluation dataset consists of videos of four vineyard rows, including 221 vines that were automatically acquired under unprepared field conditions. We tracked individual inflorescences across video image frames with a 92.1% multi-object tracking accuracy (MOTA) and an 89.6% identity F1 score (IDF1). This allowed us to estimate inflorescence count per vineyard row with a 0.91 coefficient of determination (R2) between the estimated count and manual-annotated ground truth count. The impact of leaf occlusions on inflorescence visibility was lessened by processing multiple successive image frames with minimal displacements to construct multiple camera views. This study demonstrates the use of deep learning and multi-object tracking in creating a low-cost (requiring only an RGB camera), high-throughput phenotyping system for precision viticulture.
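Of the two trackers compared, the IOU tracker is the simpler: detections are linked frame to frame by box overlap alone, and the number of distinct track IDs is the count. A bare-bones sketch (SORT adds a Kalman motion model on top):

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def count_tracks(frames, thr=0.5):
    """Greedy IOU tracking over per-frame detection lists.

    frames: list of per-frame lists of boxes. Returns the number of distinct
    tracks opened, i.e., the inflorescence count for the row.
    """
    tracks, next_id = {}, 0                  # active tracks: id -> last box
    for dets in frames:
        new = {}
        for d in dets:
            best = max(tracks, key=lambda t: iou(tracks[t], d), default=None)
            if best is not None and iou(tracks[best], d) >= thr:
                new[best] = d                # continue an existing track
                tracks.pop(best)             # one detection per track
            else:
                new[next_id] = d             # open a new track
                next_id += 1
        tracks = new                         # unmatched tracks terminate
    return next_id
```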
YOLO-based Multi-Modal Analysis of Vineyards using RGB-D Detections.
Clamens, T.; Rodriguez, J.; Delamare, M.; Lew-Yan-Voon, L.; Fauvet, E.; and Fofi, D.
, 7-9. 6 2023.
Paper
Website
link
bibtex
abstract
@article{
title = {YOLO-based Multi-Modal Analysis of Vineyards using RGB-D Detections},
type = {article},
year = {2023},
keywords = {RGB-D camera,RGB-D fusion,multi-modal dataset,multi-spectral camera,object detection,vineyard analysis,viticultural robotics},
pages = {7-9},
websites = {https://hal.science/hal-04218442,https://hal.science/hal-04218442/document},
month = {6},
day = {7},
id = {91725ef7-c1e0-3717-a2c8-2634de9c8d45},
created = {2023-10-27T08:07:32.715Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:27.627Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {Agricultural robotics is a rapidly growing research area due to the need for new practices that are more environmentally responsible. It involves a range of technologies including autonomous vehicles, drones and robotic arms. These systems can be equipped with sensors and cameras to gather data and perform tasks autonomously or with minimal human intervention. For robot navigation and manipulation, and plant monitoring and analysis, perception is of prime importance and is still a challenging task today. For instance, visual perception using color images only for disease detection in vineyards, such as Mildew in which the symptoms manifest as small spots on or beneath the leaves, is still a hard task that does not allow to achieve high detection accuracy. To extract more representative features to improve the detection accuracy, other modalities must be used in addition to the Red Green and Blue (RGB) information of color images. In this paper, we present first a multimodal acquisition system that we have developed. It is composed of a multi-spectral (MS) camera and an RGB-D camera that are mounted on a mobile robot for data acquisition in a vineyard. Next, we describe the multi-modal dataset that we have built based on the data acquired with our system in a commercial vineyard. Finally, we implemented an Early RGB and depth data fusion technique together with the YOLOv5m Deep Learning network to detect the main parts of the vine: leaves, branches, and grapes using our dataset. The results that we have obtained, compared to those obtained using RGB images only with the YOLOv5m architecture, demonstrate the benefits of adding multi data fusion techniques to the object detection pipeline. These results are encouraging and show that multi-sensor data fusion is a technique that is worth considering as it can be useful for improving grapevine disease recognition technologies.},
bibtype = {article},
author = {Clamens, T and Rodriguez, J and Delamare, M and Lew-Yan-Voon, L and Fauvet, E and Fofi, D}
}
Agricultural robotics is a rapidly growing research area due to the need for new practices that are more environmentally responsible. It involves a range of technologies including autonomous vehicles, drones and robotic arms. These systems can be equipped with sensors and cameras to gather data and perform tasks autonomously or with minimal human intervention. For robot navigation and manipulation, and plant monitoring and analysis, perception is of prime importance and is still a challenging task today. For instance, visual perception using color images only for disease detection in vineyards, such as Mildew in which the symptoms manifest as small spots on or beneath the leaves, is still a hard task that does not allow to achieve high detection accuracy. To extract more representative features to improve the detection accuracy, other modalities must be used in addition to the Red Green and Blue (RGB) information of color images. In this paper, we present first a multimodal acquisition system that we have developed. It is composed of a multi-spectral (MS) camera and an RGB-D camera that are mounted on a mobile robot for data acquisition in a vineyard. Next, we describe the multi-modal dataset that we have built based on the data acquired with our system in a commercial vineyard. Finally, we implemented an Early RGB and depth data fusion technique together with the YOLOv5m Deep Learning network to detect the main parts of the vine: leaves, branches, and grapes using our dataset. The results that we have obtained, compared to those obtained using RGB images only with the YOLOv5m architecture, demonstrate the benefits of adding multi data fusion techniques to the object detection pipeline. These results are encouraging and show that multi-sensor data fusion is a technique that is worth considering as it can be useful for improving grapevine disease recognition technologies.
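Early fusion in this sense concatenates the registered depth map as a fourth input channel before the detector. A sketch of the data side only; widening YOLOv5's first convolution from 3 to 4 input channels is the model-side change and is omitted here:

```python
import numpy as np

def early_fuse(rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Stack an aligned depth map onto an RGB image as a 4th channel.

    rgb   : HxWx3 uint8 image
    depth : HxW depth map already registered to the RGB frame
    """
    d = depth.astype(np.float32)
    d = (d - d.min()) / (d.max() - d.min() + 1e-6)       # normalise to [0, 1]
    return np.dstack([rgb, (255 * d).astype(np.uint8)])  # HxWx4 fused input
```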
Localization of Mobile Manipulator in Vineyards for Autonomous Task Execution.
Hrabar, I.; and Kovačić, Z.
Machines, 11(4): 414. 4 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Localization of Mobile Manipulator in Vineyards for Autonomous Task Execution},
type = {article},
year = {2023},
keywords = {autonomous mobile manipulator,localization,protective spraying,suckering,surrounding awareness,vineyards,viticulture},
pages = {414},
volume = {11},
websites = {https://www.mdpi.com/2075-1702/11/4/414/htm,https://www.mdpi.com/2075-1702/11/4/414},
month = {4},
publisher = {MDPI},
day = {1},
id = {1fd176f0-39db-3a32-945f-78a35fd7755f},
created = {2023-10-27T08:09:47.555Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-16T09:07:15.237Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {4c7c81ce-f24b-44ae-bc2a-bf60600a3a24,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {Although robotic systems have found their place in agriculture, there are still many challenges, especially in the area of localization in semi-structured environments. A robotic system has been developed and tested to perform various tasks in the steep vineyards of the Mediterranean region. In this paper, we describe a method for vine trunk localization, based solely on the visual recognition of vine trunks by neural networks fed by an RGB camera. Assuming that the height of the first wire in the vineyard is known, the proposed method is used to determine the location of vines in the immediate vicinity of the all-terrain mobile manipulator—ATMM-VIV—needed for spraying and bud suckering. The experiment was conducted in a slightly inclined vineyard to evaluate the proposed localization method.},
bibtype = {article},
author = {Hrabar, Ivan and Kovačić, Zdenko},
doi = {10.3390/MACHINES11040414/S1},
journal = {Machines},
number = {4}
}
Although robotic systems have found their place in agriculture, there are still many challenges, especially in the area of localization in semi-structured environments. A robotic system has been developed and tested to perform various tasks in the steep vineyards of the Mediterranean region. In this paper, we describe a method for vine trunk localization, based solely on the visual recognition of vine trunks by neural networks fed by an RGB camera. Assuming that the height of the first wire in the vineyard is known, the proposed method is used to determine the location of vines in the immediate vicinity of the all-terrain mobile manipulator—ATMM-VIV—needed for spraying and bud suckering. The experiment was conducted in a slightly inclined vineyard to evaluate the proposed localization method.
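The geometric core of the method is a single pinhole relation: with the first-wire height known, the pixel span from ground to wire at a detected trunk gives its distance. A sketch under that assumption (the trunk detection itself comes from the neural network):

```python
def trunk_distance(f_px: float, wire_height_m: float, span_px: float) -> float:
    """Pinhole depth estimate: distance = focal_px * real_height / pixel_height.

    f_px          : camera focal length in pixels
    wire_height_m : known height of the first trellis wire above the ground
    span_px       : pixel distance from the ground line to the wire at the trunk
    """
    return f_px * wire_height_m / span_px

# e.g. a 900 px focal length and a 0.8 m wire spanning 240 px -> vine ~3 m away
print(trunk_distance(900.0, 0.8, 240.0))   # 3.0
```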
Missing Plant Detection in Vineyards Using UAV Angled RGB Imagery Acquired in Dormant Period.
Di Gennaro, S., F.; Vannini, G., L.; Berton, A.; Dainelli, R.; Toscano, P.; and Matese, A.
Drones 2023, Vol. 7, Page 349, 7(6): 349. 5 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Missing Plant Detection in Vineyards Using UAV Angled RGB Imagery Acquired in Dormant Period},
type = {article},
year = {2023},
keywords = {UAV,missing plant detection,photogrammetry,point cloud,precision agriculture,vineyard},
pages = {349},
volume = {7},
websites = {https://www.mdpi.com/2504-446X/7/6/349/htm,https://www.mdpi.com/2504-446X/7/6/349},
month = {5},
publisher = {Multidisciplinary Digital Publishing Institute},
day = {26},
id = {48edb51a-87ab-3834-8241-2e0e44c28f1b},
created = {2023-10-27T08:14:28.558Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:27.291Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {Since 2010, more and more farmers have been using remote sensing data from unmanned aerial vehicles, which have a high spatial–temporal resolution, to determine the status of their crops and how their fields change. Imaging sensors, such as multispectral and RGB cameras, are the most widely used tool in vineyards to characterize the vegetative development of the canopy and detect the presence of missing vines along the rows. In this study, the authors propose different approaches to identify and locate each vine within a commercial vineyard using angled RGB images acquired during winter in the dormant period (without canopy leaves), thus minimizing any disturbance to the agronomic practices commonly conducted in the vegetative period. Using a combination of photogrammetric techniques and spatial analysis tools, a workflow was developed to extract each post and vine trunk from a dense point cloud and then assess the number and position of missing vines with high precision. In order to correctly identify the vines and missing vines, the performance of four methods was evaluated, and the best performing one achieved 95.10% precision and 92.72% overall accuracy. The results confirm that the methodology developed represents an effective support in the decision-making processes for the correct management of missing vines, which is essential for preserving a vineyard’s productive capacity and, more importantly, to ensure the farmer’s economic return.},
bibtype = {article},
author = {Di Gennaro, Salvatore Filippo and Vannini, Gian Luca and Berton, Andrea and Dainelli, Riccardo and Toscano, Piero and Matese, Alessandro},
doi = {10.3390/DRONES7060349},
journal = {Drones 2023, Vol. 7, Page 349},
number = {6}
}
Since 2010, more and more farmers have been using remote sensing data from unmanned aerial vehicles, which have a high spatial–temporal resolution, to determine the status of their crops and how their fields change. Imaging sensors, such as multispectral and RGB cameras, are the most widely used tool in vineyards to characterize the vegetative development of the canopy and detect the presence of missing vines along the rows. In this study, the authors propose different approaches to identify and locate each vine within a commercial vineyard using angled RGB images acquired during winter in the dormant period (without canopy leaves), thus minimizing any disturbance to the agronomic practices commonly conducted in the vegetative period. Using a combination of photogrammetric techniques and spatial analysis tools, a workflow was developed to extract each post and vine trunk from a dense point cloud and then assess the number and position of missing vines with high precision. In order to correctly identify the vines and missing vines, the performance of four methods was evaluated, and the best performing one achieved 95.10% precision and 92.72% overall accuracy. The results confirm that the methodology developed represents an effective support in the decision-making processes for the correct management of missing vines, which is essential for preserving a vineyard’s productive capacity and, more importantly, to ensure the farmer’s economic return.
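Once trunk positions are extracted from the point cloud, missing-plant detection reduces to a one-dimensional gap test along the row. A sketch assuming a known nominal vine spacing; the paper evaluates four methods, and this mirrors only the gap logic:

```python
import numpy as np

def missing_vines(trunk_pos_m, spacing_m):
    """Count plants missing between consecutive detected trunks in one row.

    trunk_pos_m : positions of detected trunks along the row axis (metres)
    spacing_m   : nominal plant spacing for the vineyard
    """
    pos = np.sort(np.asarray(trunk_pos_m, dtype=float))
    gaps = np.diff(pos)
    # A gap of ~2x spacing hides one missing vine, ~3x hides two, and so on.
    return int(np.maximum(np.rint(gaps / spacing_m) - 1, 0).sum())

print(missing_vines([0.0, 1.0, 3.0, 4.0, 7.0], spacing_m=1.0))   # 1 + 2 = 3
```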
Disease detection in Okra plant and Grape vein using image processing.
Kavitha, R.; Harini, S., S.; and Akshatha, K.
2nd International Conference on Advancements in Electrical, Electronics, Communication, Computing and Automation, ICAECA 2023. 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Disease detection in Okra plant and Grape vein using image processing},
type = {article},
year = {2023},
keywords = {Convolution Neural Network (CNN),Deep Learning,Grape vein disease,Internet of Things (IoT),Pesticide Prediction,Tensorflow,okra plant disease},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
id = {ca67c5a8-3430-35f3-b0ee-fb9b21937f77},
created = {2023-10-27T08:18:15.566Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-22T17:08:42.107Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {4dd63b9a-2e78-4e16-b45d-b692d8e0c5c3,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {The control of early-stage disease in plants is an essential factor in agriculture. Identifying plant disease at an early stage helps farmers reduce pesticide usage and avoid economic losses. This, in turn, helps promote high-quality yields. Convolutional neural networks (CNNs) are deep learning algorithms that are applied to high-resolution image recognition. This study uses a deep convolutional neural network to detect and classify plant diseases. A ResNet50 network has been applied to improve effectiveness. TensorFlow is used to implement the CNN and to accurately classify the diseases in grape and okra plant leaves. The dataset employed comprises 6 classes and includes 2500 images. Simulation results show that the developed model achieved an accuracy of 95.1% in training and 91.2% in validation tests.},
bibtype = {article},
author = {Kavitha, R. and Harini, S. Sree and Akshatha, K.},
doi = {10.1109/ICAECA56562.2023.10200150},
journal = {2nd International Conference on Advancements in Electrical, Electronics, Communication, Computing and Automation, ICAECA 2023}
}
The control of early-stage disease in plants is an essential factor in agriculture. Identifying plant disease at an early stage helps farmers reduce pesticide usage and avoid economic losses. This, in turn, helps promote high-quality yields. Convolutional neural networks (CNNs) are deep learning algorithms that are applied to high-resolution image recognition. This study uses a deep convolutional neural network to detect and classify plant diseases. A ResNet50 network has been applied to improve effectiveness. TensorFlow is used to implement the CNN and to accurately classify the diseases in grape and okra plant leaves. The dataset employed comprises 6 classes and includes 2500 images. Simulation results show that the developed model achieved an accuracy of 95.1% in training and 91.2% in validation tests.
Autonomous Navigation and Crop Row Detection in Vineyards Using Machine Vision with 2D Camera.
Mendez, E.; Piña Camacho, J.; Escobedo Cabello, J., A.; and Gómez-Espinosa, A.
Automation 2023, Vol. 4, Pages 309-326, 4(4): 309-326. 9 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Autonomous Navigation and Crop Row Detection in Vineyards Using Machine Vision with 2D Camera},
type = {article},
year = {2023},
keywords = {autonomous navigation,machine vision,vineyard navigation},
pages = {309-326},
volume = {4},
websites = {https://www.mdpi.com/2673-4052/4/4/18/htm,https://www.mdpi.com/2673-4052/4/4/18},
month = {9},
publisher = {Multidisciplinary Digital Publishing Institute},
day = {24},
id = {035c8c97-3afb-3397-93cc-1f474aba475b},
created = {2023-10-27T08:19:51.145Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-16T09:07:16.694Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {4c7c81ce-f24b-44ae-bc2a-bf60600a3a24,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {In order to improve agricultural productivity, autonomous navigation algorithms are being developed so that robots can navigate through agricultural environments to automate tasks that are currently performed by hand. This work uses machine vision techniques such as Otsu's method, blob detection, and pixel counting to detect the center of the row. Additionally, a commutable control is implemented to autonomously navigate a vineyard. Experimental trials were conducted in an actual vineyard to validate the algorithm. These trials show that the algorithm can successfully guide the robot through the row without any collisions. This algorithm offers a computationally efficient solution for vineyard row navigation, employing a 2D camera and Otsu's thresholding technique to ensure collision-free operation.},
bibtype = {article},
author = {Mendez, Enrico and Piña Camacho, Javier and Escobedo Cabello, Jesús Arturo and Gómez-Espinosa, Alfonso},
doi = {10.3390/AUTOMATION4040018},
journal = {Automation 2023, Vol. 4, Pages 309-326},
number = {4}
}
In order to improve agricultural productivity, autonomous navigation algorithms are being developed so that robots can navigate through agricultural environments to automate tasks that are currently performed by hand. This work uses machine vision techniques such as Otsu's method, blob detection, and pixel counting to detect the center of the row. Additionally, a commutable control is implemented to autonomously navigate a vineyard. Experimental trials were conducted in an actual vineyard to validate the algorithm. These trials show that the algorithm can successfully guide the robot through the row without any collisions. This algorithm offers a computationally efficient solution for vineyard row navigation, employing a 2D camera and Otsu's thresholding technique to ensure collision-free operation.
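The row-centre step can be approximated with Otsu segmentation plus per-column pixel counting. A sketch, not the authors' implementation; the channel choice and sign convention are assumptions:

```python
import cv2
import numpy as np

def row_center_offset(bgr: np.ndarray) -> float:
    """Pixel offset of the drivable inter-row gap from the image centre."""
    green = bgr[:, :, 1]                           # assume vegetation is bright
    _, veg = cv2.threshold(green, 0, 255,
                           cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    free = (veg == 0).sum(axis=0).astype(float)    # free pixels per column
    cols = np.arange(free.size)
    centre = (cols * free).sum() / (free.sum() + 1e-6)  # weighted column centroid
    return centre - free.size / 2.0                # steering error signal
```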
Intelligent Monitoring System to Assess Plant Development State Based on Computer Vision in Viticulture.
Rudenko, M.; Kazak, A.; Oleinikov, N.; Mayorova, A.; Dorofeeva, A.; Nekhaychuk, D.; and Shutova, O.
Computation 2023, Vol. 11, Page 171, 11(9): 171. 9 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Intelligent Monitoring System to Assess Plant Development State Based on Computer Vision in Viticulture},
type = {article},
year = {2023},
keywords = {rudenko2023intelligentmonitoringsystem},
pages = {171},
volume = {11},
websites = {https://www.mdpi.com/2079-3197/11/9/171/htm,https://www.mdpi.com/2079-3197/11/9/171},
month = {9},
publisher = {Multidisciplinary Digital Publishing Institute},
day = {3},
id = {98ce58d3-cdfa-3d5d-ab7e-826000c4ecba},
created = {2023-10-27T08:30:16.541Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-23T20:25:07.284Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {1619600c-2adf-4216-9e4c-d260d584753e,6b565182-74c4-44fd-98cc-10618152e2ae,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {Plant health plays an important role in influencing agricultural yields and poor plant health can lead to significant economic losses. Grapes are an important and widely cultivated plant, especially in the southern regions of Russia. Grapes are subject to a number of diseases that require timely diagnosis and treatment. Incorrect identification of diseases can lead to large crop losses. A neural network deep learning dataset of 4845 grape disease images was created. Eight categories of common grape diseases typical of the Black Sea region were studied: Mildew, Oidium, Anthracnose, Esca, Gray rot, Black rot, White rot, and bacterial cancer of grapes. In addition, a set of healthy plants was included. In this paper, a new selective search algorithm for monitoring the state of plant development based on computer vision in viticulture, based on YOLOv5, was considered. The most difficult part of object detection is object localization. As a result, the fast and accurate detection of grape health status was realized. The test results showed that the accuracy was 97.5%, with a model size of 14.85 MB. An analysis of existing publications and patents found using the search “Computer vision in viticulture” showed that this technology is original and promising. The developed software package implements the best approaches to the control system in viticulture using computer vision technologies. A mobile application was developed for practical use by the farmer. The developed software and hardware complex can be installed in any vehicle. Such a mobile system will allow for real-time monitoring of the state of the vineyards and will display it on a map. The novelty of this study lies in the integration of software and hardware. Decision support system software can be adapted to solve other similar problems. The software product commercialization plan is focused on the automation and robotization of agriculture, and will form the basis for adding the next set of similar software.},
bibtype = {article},
author = {Rudenko, Marina and Kazak, Anatoliy and Oleinikov, Nikolay and Mayorova, Angela and Dorofeeva, Anna and Nekhaychuk, Dmitry and Shutova, Olga},
doi = {10.3390/COMPUTATION11090171},
journal = {Computation 2023, Vol. 11, Page 171},
number = {9}
}
Plant health plays an important role in influencing agricultural yields and poor plant health can lead to significant economic losses. Grapes are an important and widely cultivated plant, especially in the southern regions of Russia. Grapes are subject to a number of diseases that require timely diagnosis and treatment. Incorrect identification of diseases can lead to large crop losses. A neural network deep learning dataset of 4845 grape disease images was created. Eight categories of common grape diseases typical of the Black Sea region were studied: Mildew, Oidium, Anthracnose, Esca, Gray rot, Black rot, White rot, and bacterial cancer of grapes. In addition, a set of healthy plants was included. In this paper, a new selective search algorithm for monitoring the state of plant development based on computer vision in viticulture, based on YOLOv5, was considered. The most difficult part of object detection is object localization. As a result, the fast and accurate detection of grape health status was realized. The test results showed that the accuracy was 97.5%, with a model size of 14.85 MB. An analysis of existing publications and patents found using the search “Computer vision in viticulture” showed that this technology is original and promising. The developed software package implements the best approaches to the control system in viticulture using computer vision technologies. A mobile application was developed for practical use by the farmer. The developed software and hardware complex can be installed in any vehicle. Such a mobile system will allow for real-time monitoring of the state of the vineyards and will display it on a map. The novelty of this study lies in the integration of software and hardware. Decision support system software can be adapted to solve other similar problems. The software product commercialization plan is focused on the automation and robotization of agriculture, and will form the basis for adding the next set of similar software.
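Deployment of such a detector is typically a YOLOv5 checkpoint behind the public torch.hub API. A sketch in which the weights file and image are placeholders, not the authors' released model:

```python
import torch

# Load a custom YOLOv5 checkpoint via the ultralytics/yolov5 hub entry point.
model = torch.hub.load("ultralytics/yolov5", "custom", path="grape_disease.pt")
model.conf = 0.25                              # confidence threshold

results = model("vine_row_frame.jpg")          # accepts paths, arrays or URLs
for *xyxy, conf, cls in results.xyxy[0].tolist():
    print(model.names[int(cls)], round(conf, 2), [round(v) for v in xyxy])
```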
Using a Camera System for the In-Situ Assessment of Cordon Dieback due to Grapevine Trunk Diseases.
Tang, J.; Yem, O.; Russell, F.; Stewart, C., A.; Lin, K.; Jayakody, H.; Ayres, M., R.; Sosnowski, M., R.; Whitty, M.; and Petrie, P., R.
. 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Using a Camera System for the In-Situ Assessment of Cordon Dieback due to Grapevine Trunk Diseases},
type = {article},
year = {2023},
websites = {https://doi.org/10.1155/2023/8634742},
id = {d1119c87-7b74-3c51-b65c-6c9c5ac89357},
created = {2023-10-27T08:32:17.044Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-23T18:42:35.062Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {6b565182-74c4-44fd-98cc-10618152e2ae,4dd63b9a-2e78-4e16-b45d-b692d8e0c5c3,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {Background and Aims. The assessment of grapevine trunk disease symptoms is a labour-intensive process that requires experience and is prone to bias. Methods that support the easy and accurate monitoring of trunk diseases will aid management decisions. Methods and Results. An algorithm was developed for the assessment of dieback symptoms due to trunk disease which is applied on a smartphone mounted on a vehicle driven through the vineyard. Vine images and corresponding expert ground truth assessments (of over 13,000 vines) were collected and correlated over two seasons in Shiraz vineyards in the Clare Valley, Barossa, and McLaren Vale, South Australia. This dataset was used to train and verify YOLOv5 models to estimate the percentage dieback of cordons due to trunk diseases. The performance of the models was evaluated on the metrics of highest confidence, highest dieback score, and average dieback score across multiple detections. Eighty-four percent of vines in a test set derived from an unseen vineyard were assigned a score by the model within 10% of the score given by experts in the vineyard. Conclusions. The computer vision algorithms were implemented within the phone, allowing real-time assessment and row-level mapping with nothing more than a high-end mobile phone. Significance of the Study. The algorithms form the basis of a system that will allow growers to scan their vineyards easily and regularly to monitor dieback due to grapevine trunk disease and will facilitate corrective interventions.},
bibtype = {article},
author = {Tang, Julie and Yem, Olivia and Russell, Finn and Stewart, Cameron A and Lin, Kangying and Jayakody, Hiranya and Ayres, Matthew R and Sosnowski, Mark R and Whitty, Mark and Petrie, Paul R},
doi = {10.1155/2023/8634742},
keywords = {tang2023usingcamerasystem}
}
Background and Aims. The assessment of grapevine trunk disease symptoms is a labour-intensive process that requires experience and is prone to bias. Methods that support the easy and accurate monitoring of trunk diseases will aid management decisions. Methods and Results. An algorithm was developed for the assessment of dieback symptoms due to trunk disease which is applied on a smartphone mounted on a vehicle driven through the vineyard. Vine images and corresponding expert ground truth assessments (of over 13,000 vines) were collected and correlated over two seasons in Shiraz vineyards in the Clare Valley, Barossa, and McLaren Vale, South Australia. This dataset was used to train and verify YOLOv5 models to estimate the percentage dieback of cordons due to trunk diseases. The performance of the models was evaluated on the metrics of highest confidence, highest dieback score, and average dieback score across multiple detections. Eighty-four percent of vines in a test set derived from an unseen vineyard were assigned a score by the model within 10% of the score given by experts in the vineyard. Conclusions. The computer vision algorithms were implemented within the phone, allowing real-time assessment and row-level mapping with nothing more than a high-end mobile phone. Significance of the Study. The algorithms form the basis of a system that will allow growers to scan their vineyards easily and regularly to monitor dieback due to grapevine trunk disease and will facilitate corrective interventions.
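The three aggregation rules compared in that evaluation (highest confidence, highest dieback score, average score across detections) are plain arithmetic over repeated per-vine detections. A sketch with illustrative names, not the authors' code:

```python
def aggregate_dieback(detections, rule="average"):
    """Combine repeated detections of one vine into a single dieback score.

    detections: list of (confidence, dieback_percent) for the same vine
    """
    if not detections:
        return None
    if rule == "highest_confidence":
        return max(detections)[1]                 # score of the surest detection
    if rule == "highest_score":
        return max(d for _, d in detections)      # worst-case dieback
    return sum(d for _, d in detections) / len(detections)  # average score

dets = [(0.9, 30.0), (0.7, 50.0), (0.8, 40.0)]
print(aggregate_dieback(dets, "highest_confidence"))  # 30.0
print(aggregate_dieback(dets, "average"))             # 40.0
```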
Phenotyping grapevine red blotch virus and grapevine leafroll-associated viruses before and after symptom expression through machine-learning analysis of hyperspectral images.
Sawyer, E.; Laroche-Pinel, E.; Flasco, M.; Cooper, M., L.; Corrales, B.; Fuchs, M.; and Brillante, L.
Frontiers in Plant Science, 14(March): 1-15. 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Phenotyping grapevine red blotch virus and grapevine leafroll-associated viruses before and after symptom expression through machine-learning analysis of hyperspectral images},
type = {article},
year = {2023},
keywords = {Vitis viniferaL,convolutional neural network,deep-learning,disease detection,phenomics,random forest,spectroscopy},
pages = {1-15},
volume = {14},
id = {1d39ab76-7caf-3e37-8a32-afd42d7697da},
created = {2023-10-27T08:35:48.176Z},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T11:26:17.964Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
private_publication = {false},
abstract = {Introduction: Grapevine leafroll-associated viruses (GLRaVs) and grapevine red blotch virus (GRBV) cause substantial economic losses and concern to North America’s grape and wine industries. Fast and accurate identification of these two groups of viruses is key to informing disease management strategies and limiting their spread by insect vectors in the vineyard. Hyperspectral imaging offers new opportunities for virus disease scouting. Methods: Here we used two machine learning methods, i.e., Random Forest (RF) and 3D-Convolutional Neural Network (CNN), to identify and distinguish leaves from red blotch-infected vines, leafroll-infected vines, and vines co-infected with both viruses using spatiospectral information in the visible domain (510-710nm). We captured hyperspectral images of about 500 leaves from 250 vines at two sampling times during the growing season (a pre-symptomatic stage at veraison and a symptomatic stage at mid-ripening). Concurrently, viral infections were determined in leaf petioles by polymerase chain reaction (PCR) based assays using virus-specific primers and by visual assessment of disease symptoms. Results: When binarily classifying infected vs. non-infected leaves, the CNN model reaches an overall maximum accuracy of 87% versus 82.8% for the RF model. Using the symptomatic dataset lowers the rate of false negatives. Based on a multiclass categorization of leaves, the CNN and RF models had a maximum accuracy of 77.7% and 76.9% (averaged across both healthy and infected leaf categories). Both CNN and RF outperformed visual assessment of symptoms by experts when using RGB segmented images. Interpretation of the RF data showed that the most important wavelengths were in the green, orange, and red subregions. Discussion: While differentiation between plants co-infected with GLRaVs and GRBV proved to be relatively challenging, both models showed promising accuracies across infection categories.},
bibtype = {article},
author = {Sawyer, Erica and Laroche-Pinel, Eve and Flasco, Madison and Cooper, Monica L. and Corrales, Benjamin and Fuchs, Marc and Brillante, Luca},
doi = {10.3389/fpls.2023.1117869},
journal = {Frontiers in Plant Science},
number = {March}
}
Introduction: Grapevine leafroll-associated viruses (GLRaVs) and grapevine red blotch virus (GRBV) cause substantial economic losses and concern to North America’s grape and wine industries. Fast and accurate identification of these two groups of viruses is key to informing disease management strategies and limiting their spread by insect vectors in the vineyard. Hyperspectral imaging offers new opportunities for virus disease scouting. Methods: Here we used two machine learning methods, i.e., Random Forest (RF) and 3D-Convolutional Neural Network (CNN), to identify and distinguish leaves from red blotch-infected vines, leafroll-infected vines, and vines co-infected with both viruses using spatiospectral information in the visible domain (510-710nm). We captured hyperspectral images of about 500 leaves from 250 vines at two sampling times during the growing season (a pre-symptomatic stage at veraison and a symptomatic stage at mid-ripening). Concurrently, viral infections were determined in leaf petioles by polymerase chain reaction (PCR) based assays using virus-specific primers and by visual assessment of disease symptoms. Results: When binarily classifying infected vs. non-infected leaves, the CNN model reaches an overall maximum accuracy of 87% versus 82.8% for the RF model. Using the symptomatic dataset lowers the rate of false negatives. Based on a multiclass categorization of leaves, the CNN and RF models had a maximum accuracy of 77.7% and 76.9% (averaged across both healthy and infected leaf categories). Both CNN and RF outperformed visual assessment of symptoms by experts when using RGB segmented images. Interpretation of the RF data showed that the most important wavelengths were in the green, orange, and red subregions. Discussion: While differentiation between plants co-infected with GLRaVs and GRBV proved to be relatively challenging, both models showed promising accuracies across infection categories.
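The random-forest arm of such a study comes down to per-leaf band vectors in the 510-710 nm window fed to a standard classifier, with feature importances pointing at informative subregions. A scikit-learn sketch on synthetic stand-in data; the 40-band layout and 4-class schema are assumptions for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in: one 510-710 nm reflectance vector per leaf, with
# labels 0-3 for healthy / GRBV / GLRaV / co-infected (illustrative schema).
rng = np.random.default_rng(0)
X = rng.random((500, 40))                        # 40 bands per leaf
y = rng.integers(0, 4, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))

# Band importances are what let the study point at green/orange/red subregions.
top = np.argsort(clf.feature_importances_)[::-1][:5]
print("most informative bands:", top)
```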
A Deep Learning Approach for Precision Viticulture, Assessing Grape Maturity via YOLOv7.
Badeka, E.; Karapatzak, E.; Karampatea, A.; Bouloumpasi, E.; Kalathas, I.; Lytridis, C.; Tziolas, E.; Tsakalidou, V., N.; and Kaburlasos, V., G.
Sensors 2023, Vol. 23, Page 8126, 23(19): 8126. 9 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {A Deep Learning Approach for Precision Viticulture, Assessing Grape Maturity via YOLOv7},
type = {article},
year = {2023},
keywords = {YOLO,grape maturity detection,maturity estimation,object detection},
pages = {8126},
volume = {23},
websites = {https://www.mdpi.com/1424-8220/23/19/8126/htm,https://www.mdpi.com/1424-8220/23/19/8126},
month = {9},
publisher = {Multidisciplinary Digital Publishing Institute},
day = {27},
id = {28bb8779-4f34-30ab-8d9e-bca67bcfe59e},
created = {2023-10-27T08:37:26.018Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:28.425Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {In the viticulture sector, robots are being employed more frequently to increase productivity and accuracy in operations such as vineyard mapping, pruning, and harvesting, especially in locations where human labor is in short supply or expensive. This paper presents the development of an algorithm for grape maturity estimation in the framework of vineyard management. An object detection algorithm is proposed based on You Only Look Once (YOLO) v7 and its extensions in order to detect grape maturity in a white variety of grape (Assyrtiko grape variety). The proposed algorithm was trained using images received over a period of six weeks from grapevines in Drama, Greece. Tests on high-quality images have demonstrated that the detection of five grape maturity stages is possible. Furthermore, the proposed approach has been compared against alternative object detection algorithms. The results showed that YOLO v7 outperforms other architectures both in precision and accuracy. This work paves the way for the development of an autonomous robot for grapevine management.},
bibtype = {article},
author = {Badeka, Eftichia and Karapatzak, Eleftherios and Karampatea, Aikaterini and Bouloumpasi, Elisavet and Kalathas, Ioannis and Lytridis, Chris and Tziolas, Emmanouil and Tsakalidou, Viktoria Nikoleta and Kaburlasos, Vassilis G},
doi = {10.3390/S23198126},
journal = {Sensors 2023, Vol. 23, Page 8126},
number = {19}
}
In the viticulture sector, robots are being employed more frequently to increase productivity and accuracy in operations such as vineyard mapping, pruning, and harvesting, especially in locations where human labor is in short supply or expensive. This paper presents the development of an algorithm for grape maturity estimation in the framework of vineyard management. An object detection algorithm is proposed based on You Only Look Once (YOLO) v7 and its extensions in order to detect grape maturity in a white variety of grape (Assyrtiko grape variety). The proposed algorithm was trained using images received over a period of six weeks from grapevines in Drama, Greece. Tests on high-quality images have demonstrated that the detection of five grape maturity stages is possible. Furthermore, the proposed approach has been compared against alternative object detection algorithms. The results showed that YOLO v7 outperforms other architectures both in precision and accuracy. This work paves the way for the development of an autonomous robot for grapevine management.
Deep Learning YOLO-Based Solution for Grape Bunch Detection and Assessment of Biophysical Lesions.
Pinheiro, I.; Moreira, G.; Queirós da Silva, D.; Magalhães, S.; Valente, A.; Moura Oliveira, P.; Cunha, M.; and Santos, F.
Agronomy 2023, Vol. 13, Page 1120, 13(4): 1120. 4 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Deep Learning YOLO-Based Solution for Grape Bunch Detection and Assessment of Biophysical Lesions},
type = {article},
year = {2023},
keywords = {computer vision,machine learning,object detection,precision agriculture,viticulture},
pages = {1120},
volume = {13},
websites = {https://www.mdpi.com/2073-4395/13/4/1120/htm,https://www.mdpi.com/2073-4395/13/4/1120},
month = {4},
publisher = {Multidisciplinary Digital Publishing Institute},
day = {14},
id = {6ec3008c-82eb-3a79-8111-de95ce928fbb},
created = {2023-10-27T08:38:57.257Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:28.246Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {The world wine sector is a multi-billion dollar industry with a wide range of economic activities. Therefore, it becomes crucial to monitor the grapevine because it allows a more accurate estimation of the yield and ensures a high-quality end product. The most common way of monitoring the grapevine is through the leaves (preventive way) since the leaves first manifest biophysical lesions. However, this does not exclude the possibility of biophysical lesions manifesting in the grape berries. Thus, this work presents three pre-trained YOLO models (YOLOv5x6, YOLOv7-E6E, and YOLOR-CSP-X) to detect and classify grape bunches as healthy or damaged by the number of berries with biophysical lesions. Two datasets were created and made publicly available with original images and manual annotations to identify the complexity between detection (bunches) and classification (healthy or damaged) tasks. The datasets use the same 10,010 images with different classes. The Grapevine Bunch Detection Dataset uses the Bunch class, and The Grapevine Bunch Condition Detection Dataset uses the OptimalBunch and DamagedBunch classes. Regarding the three models trained for grape bunches detection, they obtained promising results, highlighting YOLOv7 with 77% of mAP and 94% of the F1-score. In the case of the task of detection and identification of the state of grape bunches, the three models obtained similar results, with YOLOv5 achieving the best ones with an mAP of 72% and an F1-score of 92%.},
bibtype = {article},
author = {Pinheiro, Isabel and Moreira, Germano and Queirós da Silva, Daniel and Magalhães, Sandro and Valente, António and Moura Oliveira, Paulo and Cunha, Mário and Santos, Filipe},
doi = {10.3390/AGRONOMY13041120},
journal = {Agronomy},
number = {4}
}
The world wine sector is a multi-billion dollar industry with a wide range of economic activities. It is therefore crucial to monitor the grapevine, as monitoring allows a more accurate estimation of the yield and ensures a high-quality end product. The most common way of monitoring the grapevine is through the leaves (a preventive approach), since leaves are the first to manifest biophysical lesions. However, this does not exclude the possibility of biophysical lesions manifesting in the grape berries. Thus, this work presents three pre-trained YOLO models (YOLOv5x6, YOLOv7-E6E, and YOLOR-CSP-X) to detect and classify grape bunches as healthy or damaged by the number of berries with biophysical lesions. Two datasets were created and made publicly available, with original images and manual annotations, to identify the complexity between the detection (bunches) and classification (healthy or damaged) tasks. The datasets use the same 10,010 images with different classes: the Grapevine Bunch Detection Dataset uses the Bunch class, and the Grapevine Bunch Condition Detection Dataset uses the OptimalBunch and DamagedBunch classes. The three models trained for grape bunch detection obtained promising results, with YOLOv7 standing out at 77% mAP and a 94% F1-score. For the task of detecting and identifying the state of grape bunches, the three models obtained similar results, with YOLOv5 achieving the best ones at 72% mAP and a 92% F1-score.
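Because the two public datasets share the same 10,010 images and differ only in their class schemes, condition labels can be collapsed onto the single detection class with a small remapping. A sketch in YOLO annotation format; the class indices are assumptions, not taken from the published datasets.

# Assumed indices: OptimalBunch = 0, DamagedBunch = 1; both collapse to
# the detection dataset's single Bunch class (0).
CONDITION_TO_BUNCH = {0: 0, 1: 0}

def collapse_labels(annotation_lines):
    """Rewrite YOLO-format lines 'cls x y w h' onto the Bunch class."""
    out = []
    for line in annotation_lines:
        cls, *coords = line.split()
        out.append(" ".join([str(CONDITION_TO_BUNCH[int(cls)]), *coords]))
    return out

print(collapse_labels(["1 0.52 0.40 0.18 0.25"]))  # -> ['0 0.52 0.40 0.18 0.25']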
Evolutionary conditional GANs for supervised data augmentation: The case of assessing berry number per cluster in grapevine.
Gutiérrez, S.; and Tardaguila, J.
Applied Soft Computing, 147: 110805. 11 2023.
Paper
doi
link
bibtex
@article{
title = {Evolutionary conditional GANs for supervised data augmentation: The case of assessing berry number per cluster in grapevine},
type = {article},
year = {2023},
pages = {110805},
volume = {147},
month = {11},
publisher = {Elsevier},
day = {1},
id = {750acab6-e2fa-3983-b12e-1406e4d3895d},
created = {2023-10-27T08:48:09.217Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:28.834Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
bibtype = {article},
author = {Gutiérrez, Salvador and Tardaguila, Javier},
doi = {10.1016/J.ASOC.2023.110805},
journal = {Applied Soft Computing}
}
Surgical Fine-Tuning for Grape Bunch Segmentation under Visual Domain Shifts.
Chiatti, A.; Bertoglio, R.; Catalano, N.; Gatti, M.; and Matteucci, M.
Proceedings of the 11th European Conference on Mobile Robots, ECMR 2023. 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Surgical Fine-Tuning for Grape Bunch Segmentation under Visual Domain Shifts},
type = {article},
year = {2023},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
id = {eff2178b-fe2e-3774-8071-deeedd767f20},
created = {2023-10-27T08:58:59.204Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:28.639Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {Mobile robots will play a crucial role in the transition towards sustainable agriculture. To autonomously and effectively monitor the state of plants, robots ought to be equipped with visual perception capabilities that are robust to the rapid changes that characterise agricultural settings. In this paper, we focus on the challenging task of segmenting grape bunches from images collected by mobile robots in vineyards. In this context, we present the first study that applies surgical fine-tuning to instance segmentation tasks. We show how selectively tuning only specific model layers can support the adaptation of pre-trained Deep Learning models to newly-collected grape images that introduce visual domain shifts, while also substantially reducing the number of tuned parameters.},
bibtype = {article},
author = {Chiatti, Agnese and Bertoglio, Riccardo and Catalano, Nico and Gatti, Matteo and Matteucci, Matteo},
doi = {10.1109/ECMR59166.2023.10256348},
journal = {Proceedings of the 11th European Conference on Mobile Robots, ECMR 2023}
}
Mobile robots will play a crucial role in the transition towards sustainable agriculture. To autonomously and effectively monitor the state of plants, robots ought to be equipped with visual perception capabilities that are robust to the rapid changes that characterise agricultural settings. In this paper, we focus on the challenging task of segmenting grape bunches from images collected by mobile robots in vineyards. In this context, we present the first study that applies surgical fine-tuning to instance segmentation tasks. We show how selectively tuning only specific model layers can support the adaptation of pre-trained Deep Learning models to newly-collected grape images that introduce visual domain shifts, while also substantially reducing the number of tuned parameters.
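The core move of surgical fine-tuning, freezing a pre-trained network and unfreezing only selected layers, can be sketched in a few lines of PyTorch. This is a minimal illustration with an off-the-shelf Mask R-CNN; which block to unfreeze is an assumption, not the authors' reported configuration.

import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT")

# Freeze every parameter, then "surgically" unfreeze one backbone stage.
for p in model.parameters():
    p.requires_grad = False
for p in model.backbone.body.layer1.parameters():  # assumed choice of layer
    p.requires_grad = True

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=1e-3, momentum=0.9)
print(sum(p.numel() for p in trainable), "parameters will be tuned")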
Modelling wine grapevines for autonomous robotic cane pruning.
Williams, H.; Smith, D.; Shahabi, J.; Gee, T.; Nejati, M.; Mcguinness, B.; Black, K.; Tobias, J.; Jangali, R.; Lim, H.; Mcculloch, J.; Green, R.; O'connor, M.; Gounder, S.; Ndaka, A.; Burch, K.; Fourie, J.; Hsiao, J.; Werner, A.; Agnew, R.; Oliver, R.; and Macdonald, B., A.
Biosystems Engineering, 235: 31-49. 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Modelling wine grapevines for autonomous robotic cane pruning},
type = {article},
year = {2023},
keywords = {Horticulture,Robotics},
websites = {http://creativecommons.org/licenses/by/4.0/},
id = {62e53d0d-de91-34ab-b4f5-370b288a1028},
created = {2023-10-27T09:07:10.046Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:29.135Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {Aotearoa (New Zealand) has a strong and growing winegrape industry struggling to access workers to complete skilled, seasonal tasks such as pruning. Maintaining high-producing vines requires training agricultural workers who can make quality cane pruning decisions, which can be difficult when workers are not readily available. A novel vision system for an autonomous cane pruning robot is presented that can assess a vine and make quality pruning decisions like an expert. The vision system is designed to generate an accurate digital 3D model of a vine with skeletonised cane structures to estimate key pruning metrics for each cane. The presented approach has been extensively evaluated in a real-world vineyard, as a commercial platform would be expected to operate. The system is demonstrated to perform consistently at extracting dimensionally accurate digital models of the vines. Detailed evaluation of the digital models shows that 51.45% of the canes were modelled entirely, with a further 35.51% missing only a single internode connection. The quantified results demonstrate that the robotic platform can generate dimensionally accurate metrics of the canes for future decision-making and automation of pruning.},
bibtype = {article},
author = {Williams, Henry and Smith, David and Shahabi, Jalil and Gee, Trevor and Nejati, Mahla and Mcguinness, Ben and Black, Kale and Tobias, Jonathan and Jangali, Rahul and Lim, Hin and Mcculloch, Josh and Green, Richard and O'connor, Mira and Gounder, Sandhiya and Ndaka, Angella and Burch, Karly and Fourie, Jaco and Hsiao, Jeffrey and Werner, Armin and Agnew, Rob and Oliver, Richard and Macdonald, Bruce A},
doi = {10.1016/j.biosystemseng.2023.09.006},
journal = {Biosystems Engineering},
volume = {235},
pages = {31-49}
}
Aotearoa (New Zealand) has a strong and growing winegrape industry struggling to access workers to complete skilled, seasonal tasks such as pruning. Maintaining high-producing vines requires training agricultural workers who can make quality cane pruning decisions, which can be difficult when workers are not readily available. A novel vision system for an autonomous cane pruning robot is presented that can assess a vine and make quality pruning decisions like an expert. The vision system is designed to generate an accurate digital 3D model of a vine with skeletonised cane structures to estimate key pruning metrics for each cane. The presented approach has been extensively evaluated in a real-world vineyard, as a commercial platform would be expected to operate. The system is demonstrated to perform consistently at extracting dimensionally accurate digital models of the vines. Detailed evaluation of the digital models shows that 51.45% of the canes were modelled entirely, with a further 35.51% missing only a single internode connection. The quantified results demonstrate that the robotic platform can generate dimensionally accurate metrics of the canes for future decision-making and automation of pruning.
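As a toy analogue of the cane skeletonisation step (the paper builds full 3D vine models; this 2D sketch only illustrates the morphological idea on synthetic data):

import numpy as np
from skimage.morphology import skeletonize

mask = np.zeros((40, 40), dtype=bool)
mask[18:22, 5:35] = True        # a thick synthetic "cane" segment
skeleton = skeletonize(mask)    # reduce it to a one-pixel-wide centreline
print("mask pixels:", int(mask.sum()), "-> skeleton pixels:", int(skeleton.sum()))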
Correlation of the Grapevine (Vitis vinifera L.) Leaf Chlorophyll Concentration with RGB Color Indices.
Bodor-Pesti, P.; Taranyi, D.; Nyitrainé Sárdy, D., Á.; Le Phuong Nguyen, L.; and Baranyai, L.
Horticulturae 2023, Vol. 9, Page 899, 9(8): 899. 8 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Correlation of the Grapevine (Vitis vinifera L.) Leaf Chlorophyll Concentration with RGB Color Indices},
type = {article},
year = {2023},
keywords = {RGB,chlorophyll,precision viticulture,vegetation index},
pages = {899},
volume = {9},
websites = {https://www.mdpi.com/2311-7524/9/8/899/htm,https://www.mdpi.com/2311-7524/9/8/899},
month = {8},
publisher = {Multidisciplinary Digital Publishing Institute},
day = {7},
id = {a040356e-a472-3020-8304-27ec7fb84f2b},
created = {2023-10-27T09:25:16.721Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:29.844Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {Spectral investigation of the canopy is of increasing importance in precision viticulture for monitoring the effect of biotic and abiotic stress factors. In this study, RGB (red, green, blue color model)-based vegetation indices were evaluated to find a correlation with grapevine leaf chlorophyll concentration. ‘Hárslevelű’ (Vitis vinifera L.) leaf samples were obtained from a commercial vineyard and digitized. The chlorophyll concentration of the samples was determined with a portable chlorophyll meter. Image processing and color analyses were performed to determine the average RGB values of the digitized samples. From the RGB values, 31 vegetation indices were calculated and evaluated with a correlation test and multivariate regression. The Pearson correlation between the chlorophyll concentration and most of the indices was significant (p < 0.01), with some exceptions. Similar results were obtained with the Spearman correlation, which was highly significant (p < 0.01) for most of the indices. The highest Pearson correlation was obtained with the index PCA2 (Principal Component Analysis 2), while the Spearman correlation was highest for RMB (difference between red and blue) and GMB (difference between green and blue). The multivariate regression model also showed a high correlation with the pigmentation. We consider that our results will be applicable in the future for obtaining information about canopy physiological status monitored with on-the-go sensors.},
bibtype = {article},
author = {Bodor-Pesti, Péter and Taranyi, Dóra and Nyitrainé Sárdy, Diána Ágnes and Le Phuong Nguyen, Lien and Baranyai, László},
doi = {10.3390/HORTICULTURAE9080899},
journal = {Horticulturae},
number = {8}
}
Spectral investigation of the canopy is of increasing importance in precision viticulture for monitoring the effect of biotic and abiotic stress factors. In this study, RGB (red, green, blue color model)-based vegetation indices were evaluated to find a correlation with grapevine leaf chlorophyll concentration. ‘Hárslevelű’ (Vitis vinifera L.) leaf samples were obtained from a commercial vineyard and digitized. The chlorophyll concentration of the samples was determined with a portable chlorophyll meter. Image processing and color analyses were performed to determine the average RGB values of the digitized samples. From the RGB values, 31 vegetation indices were calculated and evaluated with a correlation test and multivariate regression. The Pearson correlation between the chlorophyll concentration and most of the indices was significant (p < 0.01), with some exceptions. Similar results were obtained with the Spearman correlation, which was highly significant (p < 0.01) for most of the indices. The highest Pearson correlation was obtained with the index PCA2 (Principal Component Analysis 2), while the Spearman correlation was highest for RMB (difference between red and blue) and GMB (difference between green and blue). The multivariate regression model also showed a high correlation with the pigmentation. We consider that our results will be applicable in the future for obtaining information about canopy physiological status monitored with on-the-go sensors.
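The two indices singled out above, RMB and GMB, are plain channel differences, so the correlation test is easy to reproduce. A sketch with hypothetical per-leaf channel means and chlorophyll readings; none of these numbers come from the paper.

import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical per-sample mean channel values and chlorophyll readings.
R = np.array([122.0, 98.0, 105.0, 140.0, 117.0])
G = np.array([141.0, 150.0, 133.0, 128.0, 145.0])
B = np.array([60.0, 72.0, 58.0, 75.0, 66.0])
chl = np.array([31.2, 36.5, 29.8, 24.1, 33.0])

RMB = R - B   # difference between red and blue
GMB = G - B   # difference between green and blue

for name, index in (("RMB", RMB), ("GMB", GMB)):
    r, _ = pearsonr(index, chl)
    rho, _ = spearmanr(index, chl)
    print(f"{name}: Pearson r = {r:.3f}, Spearman rho = {rho:.3f}")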
Estimating soil and grapevine water status using ground based hyperspectral imaging under diffused lighting conditions: Addressing the effect of lighting variability in vineyards.
Kang, C.; Diverres, G.; Achyut, P.; Karkee, M.; Zhang, Q.; and Keller, M.
Computers and Electronics in Agriculture, 212: 108175. 9 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Estimating soil and grapevine water status using ground based hyperspectral imaging under diffused lighting conditions: Addressing the effect of lighting variability in vineyards},
type = {article},
year = {2023},
keywords = {Deficit irrigation,Diffused lighting,Grapevine,Hyperspectral imaging,Partial least square,Vitis,Water stress},
pages = {108175},
volume = {212},
month = {9},
publisher = {Elsevier},
day = {1},
id = {22c8f836-8108-3b8b-88fc-b434f6bb8e49},
created = {2023-10-27T09:27:57.641Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:29.971Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {A timely and appropriate level of water deficit is desirable in wine grape production to optimize fruit quality for winemaking. Thus, it is crucial to find a robust and rapid method to assess grapevine water stress in real time. Hyperspectral imaging (HSI) has the potential to detect changes in leaf water status, but its robustness and accuracy are limited in field applications. This study focused on developing ground-based approaches for detecting soil and grapevine water status using HSI obtained in diffused lighting conditions. During the 2021 growing season, leaf water potential (ΨL) and stomatal conductance (gs) on selected leaves and volumetric soil moisture (θv) in the root zone were measured as water status indicators. Spectral data from diffused and direct sunlight conditions were obtained to construct models estimating plant and soil water status indicators. Partial least squares (PLS) regression models were individually developed to estimate ΨL, gs, and θv using spectra obtained from direct and diffused lighting conditions, respectively. The results indicated that the ΨL estimation model using spectral data from diffused lighting performed better than that obtained using direct sunlight, indicated by a higher R2 (0.89 vs. 0.82), a lower RMSE (0.12 vs. 0.15 MPa) and a lower MAE (0.10 vs. 0.11 MPa). The model developed for estimating θv using spectral data under diffused lighting achieved superior performance to the one in direct sunlight in terms of R2, RMSE and MAE (0.90 vs. 0.89, 1.56 vs. 1.59 %, and 1.26 vs. 1.29 %). These results demonstrated that spectral data obtained under diffused lighting can improve model performance by providing more uniform illumination. Ground-based HSI was capable of high-resolution sensing of grapevine water status by estimating ΨL and gs and mapping variability within canopies.},
bibtype = {article},
author = {Kang, Chenchen and Diverres, Geraldine and Achyut, Paudel and Karkee, Manoj and Zhang, Qin and Keller, Markus},
doi = {10.1016/J.COMPAG.2023.108175},
journal = {Computers and Electronics in Agriculture}
}
A timely and appropriate level of water deficit is desirable in wine grape production to optimize fruit quality for winemaking. Thus, it is crucial to find a robust and rapid method to assess grapevine water stress in real time. Hyperspectral imaging (HSI) has the potential to detect changes in leaf water status, but its robustness and accuracy are limited in field applications. This study focused on developing ground-based approaches for detecting soil and grapevine water status using HSI obtained in diffused lighting conditions. During the 2021 growing season, leaf water potential (ΨL) and stomatal conductance (gs) on selected leaves and volumetric soil moisture (θv) in the root zone were measured as water status indicators. Spectral data from diffused and direct sunlight conditions were obtained to construct models estimating plant and soil water status indicators. Partial least squares (PLS) regression models were individually developed to estimate ΨL, gs, and θv using spectra obtained from direct and diffused lighting conditions, respectively. The results indicated that the ΨL estimation model using spectral data from diffused lighting performed better than that obtained using direct sunlight, indicated by a higher R2 (0.89 vs. 0.82), a lower RMSE (0.12 vs. 0.15 MPa) and a lower MAE (0.10 vs. 0.11 MPa). The model developed for estimating θv using spectral data under diffused lighting achieved superior performance to the one in direct sunlight in terms of R2, RMSE and MAE (0.90 vs. 0.89, 1.56 vs. 1.59 %, and 1.26 vs. 1.29 %). These results demonstrated that spectral data obtained under diffused lighting can improve model performance by providing more uniform illumination. Ground-based HSI was capable of high-resolution sensing of grapevine water status by estimating ΨL and gs and mapping variability within canopies.
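The modelling step described above is a standard partial least squares regression from spectra to a scalar water-status indicator. A self-contained sketch on synthetic data; the sample count, band count, and component number are assumptions.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((120, 200))      # 120 canopy spectra x 200 bands (synthetic)
y = -0.5 - 1.2 * X[:, 50] + 0.05 * rng.standard_normal(120)  # synthetic leaf water potential (MPa)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=8).fit(X_tr, y_tr)
print("held-out R2:", round(pls.score(X_te, y_te), 2))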
Potential detection of Flavescence dorée in the vineyard using close-range hyperspectral imaging.
Barjaktarovic, M.; Santoni, M.; Faralli, M.; Bertamini, M.; and Bruzzone, L.
International Conference on Electrical, Computer, Communications and Mechatronics Engineering, ICECCME 2023. 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Potential detection of Flavescence dorée in the vineyard using close-range hyperspectral imaging},
type = {article},
year = {2023},
keywords = {barjaktarovic2023potentialdetectionflavescence},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
id = {b7c5c04a-6674-3559-80ac-2bc6da112ce7},
created = {2023-10-27T09:33:29.495Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-23T18:12:05.342Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {6b565182-74c4-44fd-98cc-10618152e2ae,4dd63b9a-2e78-4e16-b45d-b692d8e0c5c3,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {Grapevine is one of the most important crops cultivated across Europe. Climate factors and diseases constantly threaten its production. Recently, Flavescence dorée (FD), an incurable grapevine disease that obliges growers to uproot each infected plant, has spread widely across Europe. The symptoms of FD are expressed visually in late summer. The currently adopted procedure consists of scouting for infected plants by trained experts, which is time-consuming and not frequent enough. As stress development causes subtle spectral changes before any visible symptoms appear, hyperspectral and multispectral images were acquired during the summer of 2022 in two vineyards near Riva del Garda, Trentino, Italy. A classification accuracy between 90.2% and 96.9% in distinguishing between infected and healthy plants was obtained from the hyperspectral data. These findings justify further efforts to use an in-house developed, affordable multispectral camera, significantly reducing equipment cost and procedure complexity while mapping the relevant spectral channels.},
bibtype = {article},
author = {Barjaktarovic, Marko and Santoni, Massimo and Faralli, Michele and Bertamini, Massimo and Bruzzone, Lorenzo},
doi = {10.1109/ICECCME57830.2023.10252351},
journal = {International Conference on Electrical, Computer, Communications and Mechatronics Engineering, ICECCME 2023}
}
Grapevine is one of the most important crops cultivated across Europe. Climate factors and diseases constantly threaten its production. Recently, Flavescence dorée (FD), an incurable grapevine disease that obliges growers to uproot each infected plant, has spread widely across Europe. The symptoms of FD are expressed visually in late summer. The currently adopted procedure consists of scouting for infected plants by trained experts, which is time-consuming and not frequent enough. As stress development causes subtle spectral changes before any visible symptoms appear, hyperspectral and multispectral images were acquired during the summer of 2022 in two vineyards near Riva del Garda, Trentino, Italy. A classification accuracy between 90.2% and 96.9% in distinguishing between infected and healthy plants was obtained from the hyperspectral data. These findings justify further efforts to use an in-house developed, affordable multispectral camera, significantly reducing equipment cost and procedure complexity while mapping the relevant spectral channels.
Grapevine water status in a variably irrigated vineyard with NIR hyperspectral imaging from a UAV.
Vasquez, K.; Laroche-Pinel, E.; Partida, G.; and Brillante, L.
Precision agriculture '23,345–350. 2023.
Paper
link
bibtex
@article{
title = {Grapevine water status in a variably irrigated vineyard with NIR hyperspectral imaging from a UAV},
type = {article},
year = {2023},
pages = {345–350},
id = {54567139-8968-32e9-a191-74a8cb3ab39a},
created = {2023-10-27T09:37:09.872Z},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:29.517Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
bibtype = {article},
author = {Vasquez, K. and Laroche-Pinel, E. and Partida, G. and Brillante, L.},
journal = {Precision agriculture '23}
}
Instance Segmentation and Berry Counting of Table Grape before Thinning Based on AS-SwinT.
Du, W.; and Liu, P.
Plant Phenomics, 5. 8 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Instance Segmentation and Berry Counting of Table Grape before Thinning Based on AS-SwinT},
type = {article},
year = {2023},
volume = {5},
websites = {https://spj.science.org/doi/10.34133/plantphenomics.0085},
month = {8},
publisher = {AAAS},
day = {29},
id = {50bc8402-ec7f-3a68-9762-cc082c870c78},
created = {2023-10-27T09:40:15.912Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:30.128Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {Berry thinning is one of the most important tasks in the management of high-quality table grapes. Farmers often thin the berries per cluster to a standard number by counting. With an aging populati...},
bibtype = {article},
author = {Du, Wensheng and Liu, Ping},
doi = {10.34133/PLANTPHENOMICS.0085},
journal = {Plant Phenomics}
}
Berry thinning is one of the most important tasks in the management of high-quality table grapes. Farmers often thin the berries per cluster to a standard number by counting. With an aging populati...
Detecting Grapevine Virus Infections in Red and White Winegrape Canopies Using Proximal Hyperspectral Sensing.
Wang, Y., M.; Ostendorf, B.; and Pagay, V.
Sensors 2023, Vol. 23, Page 2851, 23(5): 2851. 3 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Detecting Grapevine Virus Infections in Red and White Winegrape Canopies Using Proximal Hyperspectral Sensing},
type = {article},
year = {2023},
keywords = {wang2023detectinggrapevinevirus},
pages = {2851},
volume = {23},
websites = {https://www.mdpi.com/1424-8220/23/5/2851/htm,https://www.mdpi.com/1424-8220/23/5/2851},
month = {3},
publisher = {Multidisciplinary Digital Publishing Institute},
day = {6},
id = {121653f9-08d3-350d-af1d-17314e8b4858},
created = {2023-10-27T09:47:18.224Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-23T18:53:05.578Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {6b565182-74c4-44fd-98cc-10618152e2ae,4dd63b9a-2e78-4e16-b45d-b692d8e0c5c3,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {Grapevine virus-associated diseases such as grapevine leafroll disease (GLD) affect grapevine health worldwide. Current diagnostic methods are either highly costly (laboratory-based diagnostics) or can be unreliable (visual assessments). Hyperspectral sensing technology can measure leaf reflectance spectra, which can be used for the non-destructive and rapid detection of plant diseases. The present study used proximal hyperspectral sensing to detect virus infection in Pinot Noir (red-berried winegrape cultivar) and Chardonnay (white-berried winegrape cultivar) grapevines. Spectral data were collected throughout the grape growing season at six timepoints per cultivar. Partial least squares-discriminant analysis (PLS-DA) was used to build a predictive model of the presence or absence of GLD. The temporal change of canopy spectral reflectance showed that the harvest timepoint gave the best prediction result. Prediction accuracies of 96% and 76% were achieved for Pinot Noir and Chardonnay, respectively. Our results provide valuable information on the optimal time for GLD detection. This hyperspectral method can also be deployed on mobile platforms, including ground-based vehicles and unmanned aerial vehicles (UAVs), for large-scale disease surveillance in vineyards.},
bibtype = {article},
author = {Wang, Yeniu Mickey and Ostendorf, Bertram and Pagay, Vinay},
doi = {10.3390/S23052851},
journal = {Sensors},
number = {5}
}
Grapevine virus-associated diseases such as grapevine leafroll disease (GLD) affect grapevine health worldwide. Current diagnostic methods are either highly costly (laboratory-based diagnostics) or can be unreliable (visual assessments). Hyperspectral sensing technology can measure leaf reflectance spectra, which can be used for the non-destructive and rapid detection of plant diseases. The present study used proximal hyperspectral sensing to detect virus infection in Pinot Noir (red-berried winegrape cultivar) and Chardonnay (white-berried winegrape cultivar) grapevines. Spectral data were collected throughout the grape growing season at six timepoints per cultivar. Partial least squares-discriminant analysis (PLS-DA) was used to build a predictive model of the presence or absence of GLD. The temporal change of canopy spectral reflectance showed that the harvest timepoint gave the best prediction result. Prediction accuracies of 96% and 76% were achieved for Pinot Noir and Chardonnay, respectively. Our results provide valuable information on the optimal time for GLD detection. This hyperspectral method can also be deployed on mobile platforms, including ground-based vehicles and unmanned aerial vehicles (UAVs), for large-scale disease surveillance in vineyards.
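PLS-DA, as used above for the presence/absence model, is commonly implemented as a PLS regression onto a class indicator followed by thresholding. A minimal sketch on synthetic spectra; all data and dimensions are assumed.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.random((80, 150))                  # synthetic canopy reflectance spectra
y = (rng.random(80) > 0.5).astype(float)   # 1 = GLD present, 0 = absent
X[y == 1, 30] += 0.3                       # inject a spectral class difference

pls_da = PLSRegression(n_components=5).fit(X, y)
pred = (pls_da.predict(X).ravel() > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())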
Detecting vineyard plants stress in situ using deep learning.
Cándido-Mireles, M.; Hernández-Gama, R.; and Salas, J.
Computers and Electronics in Agriculture, 210: 107837. 7 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Detecting vineyard plants stress in situ using deep learning},
type = {article},
year = {2023},
keywords = {Convolutional neural network,Grapevine health,Plant stress detection,Transfer learning},
pages = {107837},
volume = {210},
month = {7},
publisher = {Elsevier},
day = {1},
id = {5c94d87a-3539-3523-a19e-0dd768592a16},
created = {2023-10-27T09:49:01.966Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:30.315Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {Diseases and nutritional deficiencies have the potential to seriously impact the production yield and proper development of perennial species such as grapevine. The distinction between changes resulting from normal growth stages and plant alterations caused by biotic and abiotic stress is often drawn through visual inspection, where the observer's subjectivity could introduce human errors, despite the presence of experience and technical knowledge. This document presents an assessment of CNNs for detecting plant stress in grapevine RGB images captured in situ, under conditions that could include variations in light, shadows, insects, or the presence of scrubs. We evaluated five architectures for their ability to discriminate plants with stress symptoms in images captured through the annual grapevine cycle in field conditions. The best model exhibited a 97.2% accuracy, 0.996 ROC AUC, and 0.958 AP using the EfficientNetB3 architecture. Our methodology aims to support winegrowers in their decision-making by enhancing the information they collect through traditional visual inspection methods.},
bibtype = {article},
author = {Cándido-Mireles, Mayra and Hernández-Gama, Regina and Salas, Joaquín},
doi = {10.1016/J.COMPAG.2023.107837},
journal = {Computers and Electronics in Agriculture}
}
Diseases and nutritional deficiencies have the potential to seriously impact the production yield and proper development of perennial species such as grapevine. The distinction between changes resulting from normal growth stages and plant alterations caused by biotic and abiotic stress is often drawn through visual inspection, where the observer's subjectivity could introduce human errors, despite the presence of experience and technical knowledge. This document presents an assessment of CNNs for detecting plant stress in grapevine RGB images captured in situ, under conditions that could include variations in light, shadows, insects, or the presence of scrubs. We evaluated five architectures for their ability to discriminate plants with stress symptoms in images captured through the annual grapevine cycle in field conditions. The best model exhibited a 97.2% accuracy, 0.996 ROC AUC, and 0.958 AP using the EfficientNetB3 architecture. Our methodology aims to support winegrowers in their decision-making by enhancing the information they collect through traditional visual inspection methods.
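A transfer-learning setup in the spirit of the study can be sketched with torchvision's EfficientNet-B3. This is a stand-in, not the authors' code; the two-class head matches the stressed/healthy task, and freezing the feature extractor is one common option.

import torch
import torchvision

model = torchvision.models.efficientnet_b3(weights="DEFAULT")  # ImageNet init

# Replace the classifier head with a binary stressed/healthy output.
in_features = model.classifier[1].in_features
model.classifier[1] = torch.nn.Linear(in_features, 2)

# Optionally freeze the feature extractor and train only the new head.
for p in model.features.parameters():
    p.requires_grad = False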
Scalable Early Detection of Grapevine Viral Infection with Airborne Imaging Spectroscopy.
Romero Galvan, F.; Pavlick, R.; Trolley, G., R.; Aggarwal, S.; Sousa, D.; Starr, C.; Forrestel, E., J.; Bolton, S.; Alsina, M., d., M.; Dokoozlian, N.; and Gold, K., M.
Phytopathology. 9 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Scalable Early Detection of Grapevine Viral Infection with Airborne Imaging Spectroscopy},
type = {article},
year = {2023},
keywords = {AVIRIS next generation,early detection,grapevine leafroll-associated virus 3,imaging spectroscopy,scalable},
websites = {https://apsjournals.apsnet.org/doi/10.1094/PHYTO-01-23-0030-R},
month = {9},
publisher = {The American Phytopathological Society},
day = {20},
id = {01adfe0d-c7e5-3448-9fbc-b08e35e80d9f},
created = {2023-10-27T09:56:53.924Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-14T12:33:10.212Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {The U.S. wine and grape industry loses $3B annually due to viral diseases including grapevine leafroll-associated virus complex 3 (GLRaV-3). Current detection methods are labor-intensive and expens...},
bibtype = {article},
author = {Romero Galvan, Fernando and Pavlick, Ryan and Trolley, Graham Richard and Aggarwal, Somil and Sousa, Daniel and Starr, Charlie and Forrestel, Elisabeth Jane and Bolton, Stephanie and Alsina, Maria del Mar and Dokoozlian, Nicholaus and Gold, Kaitlin Morey},
doi = {10.1094/PHYTO-01-23-0030-R},
journal = {Phytopathology}
}
The U.S. wine and grape industry loses $3B annually due to viral diseases including grapevine leafroll-associated virus complex 3 (GLRaV-3). Current detection methods are labor-intensive and expens...
A Grape Dataset for Instance Segmentation and Maturity Estimation.
Blekos, A.; Chatzis, K.; Kotaidou, M.; Chatzis, T.; Solachidis, V.; Konstantinidis, D.; and Dimitropoulos, K.
Agronomy 2023, Vol. 13, Page 1995, 13(8): 1995. 7 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {A Grape Dataset for Instance Segmentation and Maturity Estimation},
type = {article},
year = {2023},
keywords = {blekos2023grapedatasetinstance},
pages = {1995},
volume = {13},
websites = {https://www.mdpi.com/2073-4395/13/8/1995/htm,https://www.mdpi.com/2073-4395/13/8/1995},
month = {7},
publisher = {Multidisciplinary Digital Publishing Institute},
day = {27},
id = {45b21996-8c40-3316-a650-6ddb2cde7e51},
created = {2023-10-27T10:02:43.010Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-21T15:32:09.826Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {e09e8a73-297c-40aa-8415-81ec386a90b0,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {Grape maturity estimation is vital in precision agriculture as it enables informed decision making for disease control, harvest timing, grape quality, and quantity assurance. Despite its importance, there are few large publicly available datasets that can be used to train accurate and robust grape segmentation and maturity estimation algorithms. To this end, this work proposes the CERTH grape dataset, a new sizeable dataset that is designed explicitly for evaluating deep learning algorithms in grape segmentation and maturity estimation. The proposed dataset is one of the largest currently available grape datasets in the literature, consisting of around 2500 images and almost 10k grape bunches, annotated with masks and maturity levels. The images in the dataset were captured under various illumination conditions and viewing angles and with significant occlusions between grape bunches and leaves, making it a valuable resource for the research community. Thorough experiments were conducted using a plethora of general object detection methods to provide a baseline for the future development of accurate and robust grape segmentation and maturity estimation algorithms that can significantly advance research in the field of viticulture.},
bibtype = {article},
author = {Blekos, Achilleas and Chatzis, Konstantinos and Kotaidou, Martha and Chatzis, Theocharis and Solachidis, Vassilios and Konstantinidis, Dimitrios and Dimitropoulos, Kosmas},
doi = {10.3390/AGRONOMY13081995},
journal = {Agronomy},
number = {8}
}
Grape maturity estimation is vital in precision agriculture as it enables informed decision making for disease control, harvest timing, grape quality, and quantity assurance. Despite its importance, there are few large publicly available datasets that can be used to train accurate and robust grape segmentation and maturity estimation algorithms. To this end, this work proposes the CERTH grape dataset, a new sizeable dataset that is designed explicitly for evaluating deep learning algorithms in grape segmentation and maturity estimation. The proposed dataset is one of the largest currently available grape datasets in the literature, consisting of around 2500 images and almost 10k grape bunches, annotated with masks and maturity levels. The images in the dataset were captured under various illumination conditions and viewing angles and with significant occlusions between grape bunches and leaves, making it a valuable resource for the research community. Thorough experiments were conducted using a plethora of general object detection methods to provide a baseline for the future development of accurate and robust grape segmentation and maturity estimation algorithms that can significantly advance research in the field of viticulture.
Data Acquisition for Testing Potential Detection of Flavescence Dorée with a Designed, Affordable Multispectral Camera.
Barjaktarović, M.; Santoni, M.; Faralli, M.; Bertamini, M.; and Bruzzone, L.
Telfor Journal, 15(1). 2023.
Paper
link
bibtex
abstract
@article{
title = {Data Acquisition for Testing Potential Detection of Flavescence Dorée with a Designed, Affordable Multispectral Camera},
type = {article},
year = {2023},
keywords = {barjaktarovic2023dataacquisitiontesting},
volume = {15},
id = {c0e4572b-9da8-3bed-b3e3-c3a6e5e1e43f},
created = {2023-10-27T10:12:24.014Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-23T18:12:05.541Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {6b565182-74c4-44fd-98cc-10618152e2ae,4dd63b9a-2e78-4e16-b45d-b692d8e0c5c3,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {There is constant pressure on agriculture to produce more food and other inputs for different industries, and precision agriculture is essential to meet these demands. Uptake of this modern technology is increasing rapidly among large and medium-sized farms, but small farms still struggle with its adoption due to the high initial costs. As a contribution to addressing this challenge, this paper presents data gathering for testing an in-house made, cost-effective multispectral camera to detect Flavescence dorée (FD). FD is a grapevine disease that, in the last few years, has become a major concern for grapevine producers across Europe. As a quarantine disease, its mandatory control procedures, such as uprooting infected plants and removing the entire vineyard if the infection exceeds 20%, lead to immense economic losses. It is therefore critical to detect each diseased plant promptly, thus reducing the spread of Flavescence dorée. Data from two vineyards near Riva del Garda, Trentino, Italy, were acquired in 2022 using multispectral and hyperspectral cameras. Initial findings showed that Flavescence dorée can be detected by applying linear discriminant analysis (LDA) to hyperspectral data, obtaining an accuracy of 96.6%. This result justifies future investigation of the use of multispectral images for Flavescence dorée detection.},
bibtype = {article},
author = {Barjaktarović, Marko and Santoni, Massimo and Faralli, Michele and Bertamini, Massimo and Bruzzone, Lorenzo},
journal = {Telfor Journal},
number = {1}
}
There is constant pressure on agriculture to produce more food and other inputs for different industries, and precision agriculture is essential to meet these demands. Uptake of this modern technology is increasing rapidly among large and medium-sized farms, but small farms still struggle with its adoption due to the high initial costs. As a contribution to addressing this challenge, this paper presents data gathering for testing an in-house made, cost-effective multispectral camera to detect Flavescence dorée (FD). FD is a grapevine disease that, in the last few years, has become a major concern for grapevine producers across Europe. As a quarantine disease, its mandatory control procedures, such as uprooting infected plants and removing the entire vineyard if the infection exceeds 20%, lead to immense economic losses. It is therefore critical to detect each diseased plant promptly, thus reducing the spread of Flavescence dorée. Data from two vineyards near Riva del Garda, Trentino, Italy, were acquired in 2022 using multispectral and hyperspectral cameras. Initial findings showed that Flavescence dorée can be detected by applying linear discriminant analysis (LDA) to hyperspectral data, obtaining an accuracy of 96.6%. This result justifies future investigation of the use of multispectral images for Flavescence dorée detection.
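The LDA step reported above is a textbook two-class discriminant analysis on per-plant spectra. A hedged sketch with synthetic data standing in for the hyperspectral measurements:

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.random((100, 60))        # 100 plants x 60 spectral bands (synthetic)
y = np.repeat([0, 1], 50)        # 0 = healthy, 1 = infected
X[y == 1, 20:25] += 0.25         # synthetic disease signature

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)
print("cross-validated accuracy:", scores.mean().round(3))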
NIR attribute selection for the development of vineyard water status predictive models.
Marañón, M.; Fernández-Novales, J.; Tardaguila, J.; Gutiérrez, S.; and Diago, M., P.
Biosystems Engineering, 229: 167-178. 5 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {NIR attribute selection for the development of vineyard water status predictive models},
type = {article},
year = {2023},
keywords = {Grapevine,Interval Partial Least Squares,Manual wavelength selection,Stem water potential,Variable Importance in Projection scores},
pages = {167-178},
volume = {229},
month = {5},
publisher = {Academic Press},
day = {1},
id = {8aa48acf-a33d-3a3e-9ffc-917abc258ec0},
created = {2023-10-27T10:16:58.488Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-06T09:35:30.474Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {Near-Infrared spectroscopy (NIR) returns full spectra in the region between 750 and 2500 nm. Although a full spectrum provides extremely informative data, this enormous amount of detail is sometimes redundant and brings no additional information. In this work, different attribute selection methods for the development of vineyard water status predictive models are presented. Spectra from grapevine leaves were collected on-the-go (from a moving vehicle) on nine dates during the 2015 season in a commercial vineyard using a NIR spectrometer (1200–2100 nm). Simultaneously, the stem water potential (Ψstem) was measured in the monitored vines. A manual selection based on Variable Importance in Projection (VIP) scores, used to choose either the spectrum intervals containing the most important wavelengths (interval selection) or the locally most important wavelengths in the spectrum (peak selection), as well as Interval Partial Least Squares (IPLS), were tested as attribute selection methods. The results obtained for the estimation of Ψstem using the whole spectrum (R2P = 0.84, RMSEP = 0.167 MPa) were comparable to those yielded by the three attribute selection methods: the interval selection method (R2P = 0.80, RMSEP = 0.186 MPa), the peak selection method (R2P = 0.77, RMSEP = 0.201 MPa) and IPLS (R2P ∼ 0.62–0.79, RMSEP ∼ 0.186–0.252 MPa). The greatest simplification was provided by two IPLS models with three wavelengths and bandwidths of 20 and 4 nm that yielded R2P ∼ 0.78 and RMSEP ∼ 0.190 MPa. These results corroborate the suitability of a highly reduced selection of NIR wavelengths for the prediction of grapevine water status, and its utility for developing simpler multispectral devices for vineyard water status estimation.},
bibtype = {article},
author = {Marañón, Miguel and Fernández-Novales, Juan and Tardaguila, Javier and Gutiérrez, Salvador and Diago, Maria P.},
doi = {10.1016/J.BIOSYSTEMSENG.2023.04.001},
journal = {Biosystems Engineering}
}
Near-Infrared spectroscopy (NIR) returns full spectra in the region between 750 and 2500 nm. Although a full spectrum provides extremely informative data, this enormous amount of detail is sometimes redundant and brings no additional information. In this work, different attribute selection methods for the development of vineyard water status predictive models are presented. Spectra from grapevine leaves were collected on-the-go (from a moving vehicle) on nine dates during the 2015 season in a commercial vineyard using a NIR spectrometer (1200–2100 nm). Simultaneously, the stem water potential (Ψstem) was measured in the monitored vines. A manual selection based on Variable Importance in Projection (VIP) scores, used to choose either the spectrum intervals containing the most important wavelengths (interval selection) or the locally most important wavelengths in the spectrum (peak selection), as well as Interval Partial Least Squares (IPLS), were tested as attribute selection methods. The results obtained for the estimation of Ψstem using the whole spectrum (R2P = 0.84, RMSEP = 0.167 MPa) were comparable to those yielded by the three attribute selection methods: the interval selection method (R2P = 0.80, RMSEP = 0.186 MPa), the peak selection method (R2P = 0.77, RMSEP = 0.201 MPa) and IPLS (R2P ∼ 0.62–0.79, RMSEP ∼ 0.186–0.252 MPa). The greatest simplification was provided by two IPLS models with three wavelengths and bandwidths of 20 and 4 nm that yielded R2P ∼ 0.78 and RMSEP ∼ 0.190 MPa. These results corroborate the suitability of a highly reduced selection of NIR wavelengths for the prediction of grapevine water status, and its utility for developing simpler multispectral devices for vineyard water status estimation.
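VIP scores, the basis of the manual selection above, can be computed directly from a fitted PLS model. The sketch below uses the standard VIP formula on synthetic data; it is the generic recipe, not the authors' implementation.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    """Variable Importance in Projection for a fitted single-response PLS."""
    T, W, Q = pls.x_scores_, pls.x_weights_, pls.y_loadings_
    p, a = W.shape
    ssy = np.sum(T ** 2, axis=0) * Q.ravel() ** 2   # y-variance explained per component
    w_norm2 = (W / np.linalg.norm(W, axis=0)) ** 2
    return np.sqrt(p * (w_norm2 @ ssy) / ssy.sum())

rng = np.random.default_rng(3)
X = rng.random((60, 120))                     # 60 spectra x 120 wavelengths (synthetic)
y = X[:, 10] + 0.1 * rng.standard_normal(60)  # synthetic stem water potential
pls = PLSRegression(n_components=4).fit(X, y)
print("top-5 wavelength indices:", np.argsort(vip_scores(pls))[::-1][:5])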
Evaluating the Potential of High-Resolution Visible Remote Sensing to Detect Shiraz Disease in Grapevines.
Wang, Y., M.; Ostendorf, B.; and Pagay, V.
Australian Journal of Grape and Wine Research, 2023: 1-9. 5 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Evaluating the Potential of High-Resolution Visible Remote Sensing to Detect Shiraz Disease in Grapevines},
type = {article},
year = {2023},
pages = {1-9},
volume = {2023},
month = {5},
publisher = {Hindawi Limited},
day = {5},
id = {07dbd7c8-6a3d-3fcd-b3a4-5d3a3d9089c3},
created = {2023-10-27T10:20:13.820Z},
accessed = {2023-10-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-23T18:53:05.776Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {6b565182-74c4-44fd-98cc-10618152e2ae,4dd63b9a-2e78-4e16-b45d-b692d8e0c5c3,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {Background and Aims. Shiraz disease (SD) is a viral disease associated with Grapevine virus A that causes significant yield loss in economically important grape cultivars in Australia such as Shiraz and Merlot. Current diagnostic methods are time-consuming and costly. This study evaluates an alternative methodology using visible remote sensing imagery to detect SD in Shiraz grapevines. Methods and Results. High-resolution visible remote sensing images were captured of Shiraz grapevines in two South Australian viticultural regions over two seasons. The projected leaf area (PLA) of individual grapevines was estimated from the images. Virus-infected vines had significantly lower PLA than healthy vines early in the season but smaller differences after veraison. The lower PLA was only observed in grapevines coinfected with grapevine leafroll-associated viruses (GLRaVs) and Grapevine virus A (GVA); Shiraz vines infected with either GLRaVs or GVA alone had similar PLA to healthy vines. Conclusions. High-resolution RGB remote sensing technology has the potential to rapidly estimate SD infection in Shiraz grapevines. Our observations of shoot devigouration only in coinfected vines call into question the etiology of SD. Further validation of the PLA technique incorporating different regions, seasons, cultivars, and combinations of viruses is needed to improve the robustness of the method. Significance of the Study. This preliminary study presents a new rapid and low-cost surveillance method to estimate SD infections in Shiraz vineyards, which could significantly lower the cost for growers who conduct on-ground SD visual assessments or lab-based tissue testing at the vineyard scale.},
bibtype = {article},
author = {Wang, Yeniu Mickey and Ostendorf, Bertram and Pagay, Vinay},
doi = {10.1155/2023/7376153},
journal = {Australian Journal of Grape and Wine Research},
keywords = {wang2023evaluatingpotentialhighresolution}
}
Background and Aims. Shiraz disease (SD) is a viral disease associated with Grapevine virus A that causes significant yield loss in economically important grape cultivars in Australia such as Shiraz and Merlot. Current diagnostic methods are time-consuming and costly. This study evaluates an alternative methodology using visible remote sensing imagery to detect SD in Shiraz grapevines. Methods and Results. High-resolution visible remote sensing images were captured of Shiraz grapevines in two South Australian viticultural regions over two seasons. The projected leaf area (PLA) of individual grapevines was estimated from the images. Virus-infected vines had significantly lower PLA than healthy vines early in the season but smaller differences after veraison. The lower PLA was only observed in grapevines coinfected with grapevine leafroll-associated viruses (GLRaVs) and Grapevine virus A (GVA); Shiraz vines infected with either GLRaVs or GVA alone had similar PLA to healthy vines. Conclusions. High-resolution RGB remote sensing technology has the potential to rapidly estimate SD infection in Shiraz grapevines. Our observations of shoot devigouration only in coinfected vines call into question the etiology of SD. Further validation of the PLA technique incorporating different regions, seasons, cultivars, and combinations of viruses is needed to improve the robustness of the method. Significance of the Study. This preliminary study presents a new rapid and low-cost surveillance method to estimate SD infections in Shiraz vineyards, which could significantly lower the cost for growers who conduct on-ground SD visual assessments or lab-based tissue testing at the vineyard scale.
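Projected leaf area from overhead RGB imagery reduces to counting canopy pixels and scaling by the ground sampling distance. A hypothetical sketch using an excess-green threshold; the authors' exact segmentation rule is not reproduced here.

import numpy as np

def projected_leaf_area(rgb, gsd_m=0.01, threshold=20.0):
    """Canopy area in m^2 from an RGB array via an excess-green mask."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    exg = 2.0 * g - r - b                        # excess-green index
    return float((exg > threshold).sum()) * gsd_m ** 2

img = np.zeros((100, 100, 3), dtype=np.uint8)
img[40:60, 20:80, 1] = 180                       # synthetic green canopy strip
print(projected_leaf_area(img), "m^2")           # 1200 px * (0.01 m)^2 = 0.12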
Unstructured road extraction and roadside fruit recognition in grape orchards based on a synchronous detection algorithm.
Zhou, X.; Zou, X.; Tang, W.; Yan, Z.; Meng, H.; and Luo, X.
Frontiers in Plant Science, 14(June): 1-22. 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Unstructured road extraction and roadside fruit recognition in grape orchards based on a synchronous detection algorithm},
type = {article},
year = {2023},
keywords = {deep learning,fruit harvesting robot,machine vision,non-structural environment,roadside fruits detection},
pages = {1-22},
volume = {14},
id = {24a7aecc-0904-39e3-99a2-a16ebb199121},
created = {2023-10-27T10:20:18.937Z},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-10-27T10:20:25.317Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
folder_uuids = {bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {Accurate road extraction and recognition of roadside fruit in complex orchard environments are essential prerequisites for robotic fruit picking and walking behavioral decisions. In this study, a novel algorithm was proposed for unstructured road extraction and roadside fruit synchronous recognition, with wine grapes and non-structural orchards as research objects. Initially, a preprocessing method tailored to field orchards was proposed to reduce the interference of adverse factors in the operating environment. The preprocessing method consisted of four parts: interception of regions of interest, bilateral filtering, logarithmic space transformation and image enhancement based on the MSRCR algorithm. Subsequently, the analysis of the enhanced image enabled the optimization of the gray factor, and a road region extraction method based on dual-space fusion was proposed, combining color channel enhancement and gray factor optimization. Furthermore, the YOLO model suitable for grape cluster recognition in the wild environment was selected, and its parameters were optimized to enhance the recognition performance of the model for randomly distributed grapes. Finally, a fusion recognition framework was established, wherein the road extraction result was taken as input, and the parameter-optimized YOLO model was utilized to identify roadside fruits, thus realizing synchronous road extraction and roadside fruit detection. Experimental results demonstrated that the proposed method based on this pretreatment could reduce the impact of interfering factors in complex orchard environments and enhance the quality of road extraction. Using the optimized YOLOv7 model, the precision, recall, mAP, and F1-score for roadside fruit cluster detection were 88.9%, 89.7%, 93.4%, and 89.3%, respectively, all of which were higher than those of the YOLOv5 model and were more suitable for roadside grape recognition. Compared to the identification results obtained by the grape detection algorithm alone, the proposed synchronous algorithm increased the number of fruit identifications by 23.84% and the detection speed by 14.33%. This research enhanced the perception ability of robots and provided solid support for behavioral decision systems.},
bibtype = {article},
author = {Zhou, Xinzhao and Zou, Xiangjun and Tang, Wei and Yan, Zhiwei and Meng, Hewei and Luo, Xiwen},
doi = {10.3389/fpls.2023.1103276},
journal = {Frontiers in Plant Science},
number = {June}
}
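For orientation, here is a minimal Python/OpenCV sketch of the kind of preprocessing chain the abstract lists (ROI interception, bilateral filtering, logarithmic transformation, Retinex-style enhancement). The ROI, filter settings, and the simplified multi-scale Retinex below are assumptions standing in for the authors' exact MSRCR step.

    import numpy as np
    import cv2

    def preprocess(img_bgr, roi=(0, 200, 640, 480), sigmas=(15, 80, 250)):
        x, y, w, h = roi                               # assumed region of interest
        img = img_bgr[y:y + h, x:x + w]
        img = cv2.bilateralFilter(img, 9, 75, 75)      # edge-preserving smoothing
        f = img.astype(np.float32) + 1.0               # +1 avoids log(0)
        logf = np.log(f)                               # logarithmic space transform
        msr = np.zeros_like(f)
        for s in sigmas:                               # simplified multi-scale Retinex
            blur = cv2.GaussianBlur(f, (0, 0), s)
            msr += logf - np.log(blur + 1.0)
        msr /= len(sigmas)
        out = cv2.normalize(msr, None, 0, 255, cv2.NORM_MINMAX)
        return out.astype(np.uint8)

    frame = cv2.imread("orchard_frame.png")            # hypothetical field image
    if frame is not None:
        enhanced = preprocess(frame)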
Using a Camera System for the In-Situ Assessment of Cordon Dieback due to Grapevine Trunk Diseases.
Tang, J.; Yem, O.; Russell, F.; Stewart, C., A.; Lin, K.; Jayakody, H.; Ayres, M., R.; Sosnowski, M., R.; Whitty, M.; and Petrie, P., R.
. 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Using a Camera System for the In-Situ Assessment of Cordon Dieback due to Grapevine Trunk Diseases},
type = {article},
year = {2023},
websites = {https://doi.org/10.1155/2023/8634742},
id = {4a68233e-bb72-34e9-a4be-7cc50d04653e},
created = {2023-11-06T10:55:17.408Z},
accessed = {2023-11-06},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-16T09:07:18.879Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {4dd63b9a-2e78-4e16-b45d-b692d8e0c5c3,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {Background and Aims. The assessment of grapevine trunk disease symptoms is a labour-intensive process that requires experience and is prone to bias. Methods that support the easy and accurate monitoring of trunk diseases will aid management decisions. Methods and Results. An algorithm was developed for the assessment of dieback symptoms due to trunk disease which is applied on a smartphone mounted on a vehicle driven through the vineyard. Vine images and corresponding expert ground truth assessments (of over 13,000 vines) were collected and correlated over two seasons in Shiraz vineyards in the Clare Valley, Barossa, and McLaren Vale, South Australia. This dataset was used to train and verify YOLOv5 models to estimate the percentage dieback of cordons due to trunk diseases. The performance of the models was evaluated on the metrics of highest confidence, highest dieback score, and average dieback score across multiple detections. Eighty-four percent of vines in a test set derived from an unseen vineyard were assigned a score by the model within 10% of the score given by experts in the vineyard. Conclusions. The computer vision algorithms were implemented within the phone, allowing real-time assessment and row-level mapping with nothing more than a high-end mobile phone. Significance of the Study. The algorithms form the basis of a system that will allow growers to scan their vineyards easily and regularly to monitor dieback due to grapevine trunk disease and will facilitate corrective interventions.},
bibtype = {article},
author = {Tang, Julie and Yem, Olivia and Russell, Finn and Stewart, Cameron A and Lin, Kangying and Jayakody, Hiranya and Ayres, Matthew R and Sosnowski, Mark R and Whitty, Mark and Petrie, Paul R},
doi = {10.1155/2023/8634742}
}
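The abstract names three ways of reducing several detections of the same vine to one dieback estimate: highest confidence, highest dieback score, and average dieback score. A small hedged Python sketch of that aggregation step follows; the Detection record is an assumed format, not the authors' data structure.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        confidence: float   # model confidence, 0..1
        dieback: float      # predicted cordon dieback, percent

    def aggregate(dets: list[Detection], how: str = "highest_confidence") -> float:
        """Reduce multiple detections of one vine to a single dieback score."""
        if not dets:
            raise ValueError("no detections for this vine")
        if how == "highest_confidence":
            return max(dets, key=lambda d: d.confidence).dieback
        if how == "highest_dieback":
            return max(d.dieback for d in dets)
        if how == "average_dieback":
            return sum(d.dieback for d in dets) / len(dets)
        raise ValueError(f"unknown strategy: {how}")

    vine = [Detection(0.91, 30.0), Detection(0.78, 45.0), Detection(0.85, 35.0)]
    print(aggregate(vine), aggregate(vine, "average_dieback"))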
Exploratory approach for automatic detection of vine rows in terrace vineyards.
Figueiredo, N.; Padua, L.; Cunha, A.; Sousa, J., J.; and Sousa, A.
Procedia Computer Science, 219: 139-144. 1 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Exploratory approach for automatic detection of vine rows in terrace vineyards},
type = {article},
year = {2023},
keywords = {Artificial Intelligence,Precision agriculture,Remote sensing,Terrace vineyards},
pages = {139-144},
volume = {219},
month = {1},
publisher = {Elsevier},
day = {1},
id = {07f3c769-1b1c-3f33-bf9d-870c047cca01},
created = {2023-11-06T11:17:05.714Z},
accessed = {2023-11-06},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-16T09:07:13.959Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {4c7c81ce-f24b-44ae-bc2a-bf60600a3a24,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {The Alto Douro Demarcated Region in Portugal is the oldest and most regulated wine-growing region in the world, formed by an ecosystem of unique value that allows the cultivation of vines on its characteristic terraced vineyards. The detection of vine rows in terrace vineyards constitutes an essential task for achieving important goals such as multi-temporal crop evaluation and production estimation. Despite the advances and research in this field, most studies are limited to flat vineyards with straight vine rows. In this study, an exploratory approach for the automatic detection of vine rows in terrace vineyards is presented, combining remote sensing techniques with artificial intelligence methods such as machine learning and deep learning. At the current stage, the preliminary results are encouraging for the detection of vine rows in straight and curved lines, considering the complexity of the terrain.},
bibtype = {article},
author = {Figueiredo, Nuno and Padua, Luis and Cunha, Antonio and Sousa, Joaquim J. and Sousa, Antonio},
doi = {10.1016/J.PROCS.2023.01.274},
journal = {Procedia Computer Science}
}
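The abstract does not detail the machine learning and deep learning models used, so purely as a classical point of reference, the sketch below finds candidate row lines with a vegetation mask and a probabilistic Hough transform; all threshold values are assumptions. Straight-line Hough fits are precisely what breaks down on curved terrace rows, which is the gap the paper targets.

    import numpy as np
    import cv2

    def detect_rows(img_bgr):
        """Return candidate vine-row segments as (x1, y1, x2, y2) tuples."""
        b, g, r = cv2.split(img_bgr.astype(np.float32))
        mask = ((2 * g - r - b) > 20).astype(np.uint8) * 255   # rough canopy mask
        edges = cv2.Canny(mask, 50, 150)
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                                minLineLength=100, maxLineGap=20)
        return [] if lines is None else [tuple(l[0]) for l in lines]

    img = cv2.imread("terrace.png")                # hypothetical orthophoto tile
    if img is not None:
        print(f"{len(detect_rows(img))} candidate row segments")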
MLGNet: Multi-Task Learning Network with Attention-Guided Mechanism for Segmenting Agricultural Fields.
Luo, W.; Zhang, C.; Li, Y.; and Yan, Y.
Remote Sensing, 15(16): 3934. 8 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {MLGNet: Multi-Task Learning Network with Attention-Guided Mechanism for Segmenting Agricultural Fields},
type = {article},
year = {2023},
keywords = {agricultural fields,edge detection,multi-task learning,remote sensing images,semantic segmentation},
pages = {3934},
volume = {15},
websites = {https://www.mdpi.com/2072-4292/15/16/3934/htm,https://www.mdpi.com/2072-4292/15/16/3934},
month = {8},
publisher = {Multidisciplinary Digital Publishing Institute},
day = {8},
id = {55df9797-5493-316c-945c-7358f59e50e7},
created = {2023-11-07T08:51:09.069Z},
accessed = {2023-11-07},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-07T08:51:15.288Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {bc66e353-ef41-46d4-8108-778d5481c126,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {The implementation of precise agricultural fields can drive the intelligent development of agricultural production, and high-resolution remote sensing images provide convenience for obtaining precise fields. With the advancement of spatial resolution, the complexity and heterogeneity of land features are accentuated, making it challenging for existing methods to obtain structurally complete fields, especially in regions with blurred edges. Therefore, a multi-task learning network with an attention-guided mechanism is introduced for segmenting agricultural fields. To be more specific, the attention-guided fusion module is used to learn complementary information layer by layer, while the multi-task learning scheme considers both the edge detection and semantic segmentation tasks. Based on this, we further segmented the merged fields using broken edges, following the theory of connectivity perception. Finally, we chose three cities in The Netherlands as study areas for experimentation, and evaluated the extracted field regions and edges separately. The results showed that (1) the proposed method achieved the highest accuracy in the three cities, with IoU of 91.27%, 93.05% and 89.76%, respectively, and (2) the Qua metrics of the processed edges demonstrated improvements of 6%, 6%, and 5%, respectively. This work successfully segmented potential fields with blurred edges, indicating its potential for precision agriculture development.},
bibtype = {article},
author = {Luo, Weiran and Zhang, Chengcai and Li, Ying and Yan, Yaning},
doi = {10.3390/RS15163934},
journal = {Remote Sensing},
number = {16}
}
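As a compact illustration of the shared-encoder, two-head design the abstract describes (semantic segmentation plus edge detection trained jointly), here is a minimal PyTorch sketch; the layer sizes and loss weights are placeholders, not MLGNet's actual architecture.

    import torch
    import torch.nn as nn

    class TwoHeadSegNet(nn.Module):
        """Shared encoder with separate segmentation and edge heads."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
            self.seg_head = nn.Conv2d(32, 1, 1)    # field-region logits
            self.edge_head = nn.Conv2d(32, 1, 1)   # field-edge logits

        def forward(self, x):
            f = self.encoder(x)
            return self.seg_head(f), self.edge_head(f)

    model = TwoHeadSegNet()
    x = torch.randn(2, 3, 64, 64)
    seg_t = torch.randint(0, 2, (2, 1, 64, 64)).float()
    edge_t = torch.randint(0, 2, (2, 1, 64, 64)).float()
    seg_p, edge_p = model(x)
    bce = nn.BCEWithLogitsLoss()
    loss = 1.0 * bce(seg_p, seg_t) + 0.5 * bce(edge_p, edge_t)  # assumed weights
    loss.backward()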
MTA-YOLACT: Multitask-aware network on fruit bunch identification for cherry tomato robotic harvesting.
Li, Y.; Feng, Q.; Liu, C.; Xiong, Z.; Sun, Y.; Xie, F.; Li, T.; and Zhao, C.
European Journal of Agronomy, 146: 126812. 5 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {MTA-YOLACT: Multitask-aware network on fruit bunch identification for cherry tomato robotic harvesting},
type = {article},
year = {2023},
keywords = {Decision tree,Multitask-aware network,Tomato harvesting robot,YOLACT},
pages = {126812},
volume = {146},
month = {5},
publisher = {Elsevier},
day = {1},
id = {47e9274a-975f-3d54-bb52-eb3366c38eae},
created = {2023-11-07T08:59:11.819Z},
accessed = {2023-11-07},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-08T10:14:50.066Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {bc66e353-ef41-46d4-8108-778d5481c126,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {Accurate and rapid perception of fruit bunch posture is necessary for the cherry tomato harvesting robot to successfully achieve the bunch's holding and separating. According to the postural relationship of the fruit bunch, bunch pedicel, and the plant's main-stem, the robotic end-effector's holding region and approach path could be determined, both of which are important for a successful picking operation. The main goal of this research was to propose a multitask-aware network (MTA-YOLACT) that simultaneously performs region detection on the fruit bunch and region segmentation on the pedicel and main-stem. The MTA-YOLACT, extended from the pre-trained YOLACT model, included two branch networks for detection and instance segmentation, which shared the same backbone network; a loss function with weighting coefficients for the two branches was adopted to balance the multi-task learning according to the tasks' homoscedastic uncertainty during model training. Furthermore, in order to cluster the fruit bunch, pedicel and main-stem belonging to the same bunch target, a classification and regression tree (CART) model was built based on the positional relationship of the regions in the MTA-YOLACT output. An image dataset of cherry tomato plants in a Chinese greenhouse was built to train and test the model. The results indicated a promising performance of the proposed network, with an F1-score of 95.4% on detecting fruit bunches and mean Average Precision of 38.7% and 51.9% on the instance segmentation of pedicel and main-stem, which was 1.1% and 3.5% higher than the original YOLACT. Beyond that, our approach performed real-time detection and instance segmentation at 13.3 frames per second (FPS). The whole bunch could be identified by the CART model with an average accuracy of 99.83% and a time cost of 9.53 ms. These results demonstrated that the research could be a viable support for the harvesting robot's vision unit development and the end-effector's motion planning in future research.},
bibtype = {article},
author = {Li, Yajun and Feng, Qingchun and Liu, Cheng and Xiong, Zicong and Sun, Yuhuan and Xie, Feng and Li, Tao and Zhao, Chunjiang},
doi = {10.1016/J.EJA.2023.126812},
journal = {European Journal of Agronomy}
}
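The abstract's balancing of the detection and segmentation losses by homoscedastic task uncertainty follows the general scheme of Kendall et al. (2018): each task loss is scaled by a learnable precision plus a regularizing log-variance term. The PyTorch sketch below shows that weighting in isolation; it illustrates the technique, not MTA-YOLACT's exact formulation.

    import torch
    import torch.nn as nn

    class UncertaintyWeightedLoss(nn.Module):
        """Sum of task losses weighted by learnable homoscedastic uncertainty."""
        def __init__(self, n_tasks: int = 2):
            super().__init__()
            self.log_vars = nn.Parameter(torch.zeros(n_tasks))  # log sigma_i^2

        def forward(self, losses):
            total = 0.0
            for i, loss in enumerate(losses):
                total = total + torch.exp(-self.log_vars[i]) * loss + self.log_vars[i]
            return total

    weighting = UncertaintyWeightedLoss(2)
    det_loss = torch.tensor(1.3, requires_grad=True)   # stand-in detection loss
    seg_loss = torch.tensor(0.7, requires_grad=True)   # stand-in segmentation loss
    total = weighting([det_loss, seg_loss])
    total.backward()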
An efficient multi-task convolutional neural network for dairy farm object detection and segmentation.
Tian, F.; Hu, G.; Yu, S.; Wang, R.; Song, Z.; Yan, Y.; Huang, H.; Wang, Q.; Wang, Z.; and Yu, Z.
Computers and Electronics in Agriculture, 211: 108000. 8 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {An efficient multi-task convolutional neural network for dairy farm object detection and segmentation},
type = {article},
year = {2023},
keywords = {Dairy farm,GCS-MUL,Multi-task learning,Target identification},
pages = {108000},
volume = {211},
month = {8},
publisher = {Elsevier},
day = {1},
id = {6409dd1d-f148-3b11-a669-e360e37d5a6f},
created = {2023-11-07T08:59:58.167Z},
accessed = {2023-11-07},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-07T09:00:08.895Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {bc66e353-ef41-46d4-8108-778d5481c126,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {Real-time and accurate detection of multiple types of targets and obstacles in dairy barns is a necessary function for autonomous pushing robots. To improve the efficiency of target recognition and to reduce the path extraction error of the pushing robot, on the basis of high-accuracy per-pixel perception with an embedded AI computer, a multi-task learning based dairy barn multi-type target recognition model, Ghost CBAM Segmentation-Multi-task (GCS-MUL), was proposed, which can recognize dairy cows, obstacles and road targets efficiently and in real time. Firstly, in order to enhance the ability to extract key features from the targets, the proposed model integrates the Convolutional Block Attention Module (CBAM) into a self-designed lightweight target feature extraction network, Ghost CBAM Network (GCNet), as the backbone of the whole model. Secondly, to improve the model's multi-scale feature fusion, Path Aggregation Network (PAN) and Feature Pyramid Network (FPN) structures with the GhostConv module were used in the neck network. Finally, for real-time semantic segmentation of multiple dairy farm targets, a Segmentation Head (Seg Head), composed of the Receptive Field Block (RFB), Pyramid Pooling Module (PPM) and Feature Fusion Module (FFM), was introduced. Experimental results showed that the mAP@0.5 (mean average precision, IoU = 0.5) for dairy farm targets reached 94.86%. Compared to the YOLOv5 model, the precision and recall were improved by 7.47% and 6.85%, respectively. In comparison to the YOLOv7 model, the precision was improved by 5.1%. Furthermore, when compared to the SSD model, the proposed model reduced the number of model parameters by 92.43%, and its average detection time was reduced by 84.37 ms, which is ideal for meeting real-time target recognition requirements. The average detection time of the model is 66.43 ms, making it more suitable for deployment in embedded devices. Compared with Ghost CBAM-Detection (GC-Detect) without the introduction of the Seg Head, the precision, recall and mAP@0.5 were improved by 4.49%, 4.92% and 6.58%, respectively. The research results provide accurate algorithms for real-time and efficient identification of dairy farm targets by pushing robots, and more effective road and environmental scene segmentation methods for autonomous walking.},
bibtype = {article},
author = {Tian, Fuyang and Hu, Guozheng and Yu, Sufang and Wang, Ruixue and Song, Zhanhua and Yan, Yinfa and Huang, Hailing and Wang, Qing and Wang, Zhonghua and Yu, Zhenwei},
doi = {10.1016/J.COMPAG.2023.108000},
journal = {Computers and Electronics in Agriculture}
}
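Several of the named modules (GhostConv, GCNet) build on GhostNet's cheap-feature-map idea: produce half the output channels with an ordinary convolution and the rest with an inexpensive depthwise operation on those channels. A minimal PyTorch sketch follows; the channel split and kernel sizes are assumptions, not the paper's configuration.

    import torch
    import torch.nn as nn

    class GhostConv(nn.Module):
        """Ghost convolution: primary conv + cheap depthwise 'ghost' features."""
        def __init__(self, c_in, c_out, k=3):
            super().__init__()
            c_primary = c_out // 2
            self.primary = nn.Sequential(
                nn.Conv2d(c_in, c_primary, k, padding=k // 2, bias=False),
                nn.BatchNorm2d(c_primary), nn.ReLU())
            self.cheap = nn.Sequential(          # depthwise op generates ghosts
                nn.Conv2d(c_primary, c_out - c_primary, k, padding=k // 2,
                          groups=c_primary, bias=False),
                nn.BatchNorm2d(c_out - c_primary), nn.ReLU())

        def forward(self, x):
            y = self.primary(x)
            return torch.cat([y, self.cheap(y)], dim=1)

    print(GhostConv(3, 32)(torch.randn(1, 3, 64, 64)).shape)  # (1, 32, 64, 64)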
Grape Cold Hardiness Prediction via Multi-Task Learning.
Saxena, A.; Pesantez-Cabrera, P.; Ballapragada, R.; Lam, K., H.; Keller, M.; and Fern, A.
Proceedings of the AAAI Conference on Artificial Intelligence, 37(13): 15717-15723. 9 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Grape Cold Hardiness Prediction via Multi-Task Learning},
type = {article},
year = {2023},
keywords = {Agriculture,Grapevines,Recurrent Neural Networks,Time Series Modelling,Transfer Learning},
pages = {15717-15723},
volume = {37},
websites = {https://ojs.aaai.org/index.php/AAAI/article/view/26865},
month = {9},
publisher = {AAAI Press},
day = {6},
id = {9cfaa664-03f2-37ff-83be-15ddd02b6942},
created = {2023-11-07T11:13:55.515Z},
accessed = {2023-11-07},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-07T11:13:59.645Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {bc66e353-ef41-46d4-8108-778d5481c126,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {Cold temperatures during fall and spring have the potential to cause frost damage to grapevines and other fruit plants, which can significantly decrease harvest yields. To help prevent these losses, farmers deploy expensive frost mitigation measures, such as sprinklers, heaters, and wind machines, when they judge that damage may occur. This judgment, however, is challenging because the cold hardiness of plants changes throughout the dormancy period and it is difficult to directly measure. This has led scientists to develop cold hardiness prediction models that can be tuned to different grape cultivars based on laborious field measurement data. In this paper, we study whether deep-learning models can improve cold hardiness prediction for grapes based on data that has been collected over a 30-year time period. A key challenge is that the amount of data per cultivar is highly variable, with some cultivars having only a small amount. For this purpose, we investigate the use of multi-task learning to leverage data across cultivars in order to improve prediction performance for individual cultivars. We evaluate a number of multi-task learning approaches and show that the highest performing approach is able to significantly improve over learning for single cultivars and outperforms the current state-of-the-art scientific model for most cultivars.},
bibtype = {article},
author = {Saxena, Aseem and Pesantez-Cabrera, Paola and Ballapragada, Rohan and Lam, Kin Ho and Keller, Markus and Fern, Alan},
doi = {10.1609/AAAI.V37I13.26865},
journal = {Proceedings of the AAAI Conference on Artificial Intelligence},
number = {13}
}
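One straightforward way to realize the multi-task setup the abstract evaluates is a recurrent backbone shared across cultivars with a small per-cultivar output head, so that data-rich cultivars improve the shared representation used by data-poor ones. The PyTorch sketch below assumes daily weather features and a GRU; the feature count, hidden size, and cultivar names are illustrative only.

    import torch
    import torch.nn as nn

    class MultiCultivarRNN(nn.Module):
        """Shared GRU over weather sequences, one regression head per cultivar."""
        def __init__(self, n_features=6, hidden=64,
                     cultivars=("Riesling", "Merlot")):
            super().__init__()
            self.rnn = nn.GRU(n_features, hidden, batch_first=True)   # shared
            self.heads = nn.ModuleDict(
                {c: nn.Linear(hidden, 1) for c in cultivars})         # per-task

        def forward(self, weather, cultivar):
            _, h = self.rnn(weather)            # weather: (batch, days, features)
            return self.heads[cultivar](h[-1])  # predicted hardiness (e.g. LT50)

    model = MultiCultivarRNN()
    x = torch.randn(4, 120, 6)                  # 120 days of 6 weather features
    print(model(x, "Riesling").shape)           # torch.Size([4, 1])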
Multi-task Transfer Learning Facilitated by Segmentation and Denoising for Anomaly Detection of Rail Fasteners.
Kim, B.; Jeon, Y.; Kang, J., W.; and Gwak, J.
Journal of Electrical Engineering and Technology, 18(3): 2383-2394. 5 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Multi-task Transfer Learning Facilitated by Segmentation and Denoising for Anomaly Detection of Rail Fasteners},
type = {article},
year = {2023},
keywords = {Anomaly detection,Denoising,Multi-task transfer learning,Railway fastener,Segmentation},
pages = {2383-2394},
volume = {18},
websites = {https://link.springer.com/article/10.1007/s42835-022-01347-1},
month = {5},
publisher = {Korean Institute of Electrical Engineers},
day = {1},
id = {87040c64-d3ce-3623-b338-3a646954e781},
created = {2023-11-08T10:16:08.570Z},
accessed = {2023-11-08},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-14T10:14:35.248Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {bc66e353-ef41-46d4-8108-778d5481c126,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {The rail fastener is the main component of rail tracks that contributes to safe train travel by fixing the rail to railroad ties. Anomalies such as absence or damage should be checked for regularly, as they can lead to large-scale accidents. Image-based maintenance systems have been proposed as an efficient management approach, but it is still difficult to detect anomalies due to occlusion by obstacles or noise in the image. Therefore, we propose a deep anomaly detection system for rail fasteners using multi-task transfer learning. First, the U-Net model is trained for an auxiliary task consisting of segmentation and denoising. Second, in the transfer learning process, a machine learning or deep learning-based classifier detects anomalies using the feature map obtained from the trained U-Net encoder. The proposed model is rigorously evaluated with our collected data. The experimental results show that the deep learning-based classifier detected anomalies with an accuracy of about 97.57%, and multi-task transfer learning contributes to the model focusing on the fastener region in the image. This suggests the potential of image-based automatic defect detection systems for many industrial applications.},
bibtype = {article},
author = {Kim, Beomjun and Jeon, Younghoon and Kang, Jeong Won and Gwak, Jeonghwan},
doi = {10.1007/S42835-022-01347-1},
journal = {Journal of Electrical Engineering and Technology},
number = {3}
}
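The transfer step described (reuse the encoder of a U-Net trained on segmentation and denoising as a fixed feature extractor for a downstream anomaly classifier) can be sketched as follows in Python. The tiny stub encoder and synthetic images below stand in for the trained U-Net and the authors' fastener data.

    import numpy as np
    import torch
    import torch.nn as nn
    from sklearn.linear_model import LogisticRegression

    encoder = nn.Sequential(                  # stand-in for a trained U-Net encoder
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))

    def embed(images: torch.Tensor) -> np.ndarray:
        """Pool encoder feature maps into one vector per image."""
        with torch.no_grad():
            return encoder(images).flatten(1).numpy()

    imgs = torch.randn(40, 1, 64, 64)         # synthetic stand-in images
    labels = np.repeat([0, 1], 20)            # 0 = normal, 1 = anomalous
    clf = LogisticRegression(max_iter=1000).fit(embed(imgs), labels)
    print(clf.predict(embed(imgs[:4])))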
Plant-water relations.
Dodd, I., C.; Hirons, A., D.; and Puértolas, J.
Encyclopedia of Soils in the Environment, 516-526. 2023.
doi
link
bibtex
@article{
title = {Plant-water relations},
type = {article},
year = {2023},
pages = {516-526},
publisher = {Elsevier},
id = {3eeac158-12ce-3096-b064-ef8b70a5ad0b},
created = {2023-11-13T12:58:15.151Z},
accessed = {2023-11-13},
file_attached = {false},
profile_id = {c3c41a69-4b45-352f-9232-4d3281e18730},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-13T12:58:15.470Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {2bfb8d91-9fac-46f0-bd0c-93235d01dbed},
private_publication = {false},
bibtype = {article},
author = {Dodd, Ian C. and Hirons, Andrew D. and Puértolas, Jaime},
doi = {10.1016/B978-0-12-822974-3.00253-6},
journal = {Encyclopedia of Soils in the Environment}
}
Scalable early detection of grapevine virus infection with airborne imaging spectroscopy.
Romero Galvan, F.; Pavlick, R.; Trolley, G., R.; Aggarwal, S.; Sousa, D.; Starr, C.; Forrestel, E., J.; Bolton, S.; Alsina, M., d., M.; Dokoozlian, N.; and Gold, K., M.
Phytopathology®. 8 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Scalable early detection of grapevine virus infection with airborne imaging spectroscopy},
type = {article},
year = {2023},
keywords = {galvan2023scalableearlydetection},
websites = {https://apsjournals.apsnet.org/doi/10.1094/PHYTO-01-23-0030-R},
month = {8},
publisher = {Scientific Societies},
day = {25},
id = {6dd24140-fee6-30db-a64f-0c8d61a86efc},
created = {2023-11-14T11:17:40.710Z},
accessed = {2023-11-14},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-23T18:12:05.752Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {6b565182-74c4-44fd-98cc-10618152e2ae,4dd63b9a-2e78-4e16-b45d-b692d8e0c5c3,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {The US wine and grape industry suffers $3B in damages and losses annually due to viral diseases such as Grapevine Leafroll-associated Virus Complex 3 (GLRaV-3). Current detection methods are labor intensive and expensive. GLRaV-3 undergoes a latent period in which the vines are infected but do not yet display visible symptoms, making it an ideal model to evaluate the scalability of imaging spectroscopy-based disease detection. We deployed the NASA Airborne Visible and Infrared Imaging Spectrometer Next Generation (AVIRIS-NG) to detect GLRaV-3 in Cabernet Sauvignon grapevines in Lodi, CA in September 2020. Foliage was removed from the vines as part of mechanical harvest soon after imagery acquisition. In both Sept. 2020 and 2021, industry collaborators scouted 317 ac on a vine-by-vine basis for visible viral symptoms and collected a subset for molecular confirmation testing. Grapevines identified as visibly diseased in 2021, but not 2020, were assumed to have been latently infected at time of acquisition. We trained spectral models with random forest and synthetic minority oversampling technique to distinguish non-infected and GLRaV-3-infected grapevines. Non-infected and GLRaV-3-infected vines could be differentiated both pre- and post-symptomatically at 1 m through 5 m resolution. The best-performing models had 87% accuracy distinguishing between non-infected and asymptomatic vines, and 85% accuracy distinguishing between non-infected and asymptomatic + symptomatic vines. The importance of non-visible wavelengths suggests this capacity is driven by disease-induced changes to overall plant physiology. Our work sets a foundation for using the forthcoming hyperspectral satellite Surface Biology and Geology for regional disease monitoring.},
bibtype = {article},
author = {Romero Galvan, Fernando and Pavlick, Ryan and Trolley, Graham Richard and Aggarwal, Somil and Sousa, Daniel and Starr, Charlie and Forrestel, Elisabeth Jane and Bolton, Stephanie and Alsina, Maria del Mar and Dokoozlian, Nicholaus and Gold, Kaitlin Morey},
doi = {10.1094/PHYTO-01-23-0030-R},
journal = {Phytopathology®}
}
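The modelling recipe named in the abstract, a random forest with synthetic minority oversampling (SMOTE) to counter the scarcity of infected vines, can be sketched in a few lines with scikit-learn and imbalanced-learn; the spectra and labels below are synthetic stand-ins for the AVIRIS-NG data.

    import numpy as np
    from imblearn.over_sampling import SMOTE
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 200))           # 300 vines x 200 spectral bands
    y = (rng.random(300) < 0.15).astype(int)  # ~15% infected (imbalanced)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # oversample
    rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_bal, y_bal)
    print(f"test accuracy: {rf.score(X_te, y_te):.2f}")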
Low-Cost Handheld Spectrometry for Detecting Flavescence Dorée in Vineyards.
Imran, H., A.; Zeggada, A.; Ianniello, I.; Melgani, F.; Polverari, A.; Baroni, A.; Danzi, D.; and Goller, R.
Applied Sciences, 13(4): 2388. 2 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Low-Cost Handheld Spectrometry for Detecting Flavescence Dorée in Vineyards},
type = {article},
year = {2023},
keywords = {feature selection,genetic algorithms (GA),hyperspectral remote sensing,low-cost spectrometer,machine learning algorithms,precision farming,vineyard},
pages = {2388},
volume = {13},
websites = {https://www.mdpi.com/2076-3417/13/4/2388/htm,https://www.mdpi.com/2076-3417/13/4/2388},
month = {2},
publisher = {Multidisciplinary Digital Publishing Institute},
day = {13},
id = {cb5fbf36-efe0-3ca0-bea3-5f234031cc82},
created = {2023-11-14T12:13:43.997Z},
accessed = {2023-11-14},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-22T17:08:41.948Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {4dd63b9a-2e78-4e16-b45d-b692d8e0c5c3,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {This study was conducted to evaluate the potential of low-cost hyperspectral sensors for the early detection of Flavescence dorée (FD) from asymptomatic samples prior to symptom development. In total, 180 leaf spectra from 60 randomly selected plants (three leaves per plant) were collected by using two portable mini-spectrometers (Hamamatsu: 340–850 nm and NIRScan: 900–1700 nm) at five vegetative growth stages in a vineyard with the grape variety Garganega. Large differences between the Hamamatsu spectra of the two groups were found in the VIS-NIR (visible–near infrared) spectral region, while very small differences were observed in the NIRScan spectra. We analyzed the spectral data of the two sensors by using all bands, features reduced by an ensemble method, and features selected by genetic algorithms (GA) to discriminate the asymptomatic healthy (FD negative) and diseased (FD positive) leaves using five different classifiers. Overall, higher classification accuracies were found for the Hamamatsu sensor than for the NIRScan sensor. The feature selection techniques performed better than using all bands, and the highest classification accuracy of 96% was achieved when GA features of the Hamamatsu sensor were used with the logistic regression (LR) classifier on test samples. A slightly lower accuracy of 85% was achieved when the features (selected by the ensemble method) of the Hamamatsu sensor were used with the support vector machine (SVM) classifier under leave-one-out (LOO) cross-validation on the whole dataset. The results demonstrated that employing a feature selection technique can provide a valid tool for determining the optimal bands that can be used to identify FD disease in the vineyard. However, further validation studies are required, as this study was conducted using a small dataset and a single grapevine variety.},
bibtype = {article},
author = {Imran, Hafiz Ali and Zeggada, Abdallah and Ianniello, Ivan and Melgani, Farid and Polverari, Annalisa and Baroni, Alice and Danzi, Davide and Goller, Rino},
doi = {10.3390/APP13042388},
journal = {Applied Sciences},
number = {4}
}
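A compact, hedged sketch of the band-selection idea follows: a genetic search over boolean band masks scored by cross-validated logistic regression, matching the GA-plus-LR recipe in the abstract. The population size, rates, and synthetic spectra are assumptions; a production GA would add elitism and convergence checks.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    X = rng.normal(size=(180, 288))             # 180 leaf spectra x 288 bands
    y = rng.integers(0, 2, 180)                 # FD-positive / FD-negative

    def fitness(mask):
        """Cross-validated LR accuracy on the selected bands."""
        if mask.sum() == 0:
            return 0.0
        clf = LogisticRegression(max_iter=500)
        return cross_val_score(clf, X[:, mask], y, cv=3).mean()

    pop = rng.random((20, X.shape[1])) < 0.1    # population of sparse band masks
    for _ in range(15):                         # generations
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[-10:]] # keep the fittest half
        kids = parents.copy()
        cuts = rng.integers(1, X.shape[1], size=len(kids))
        for kid, cut, mate in zip(kids, cuts, parents[::-1]):
            kid[cut:] = mate[cut:]              # one-point crossover
        kids ^= rng.random(kids.shape) < 0.01   # bit-flip mutation
        pop = np.vstack([parents, kids])

    best = pop[np.argmax([fitness(m) for m in pop])]
    print(f"{int(best.sum())} bands selected")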
Detecting Grapevine Virus Infections in Red and White Winegrape Canopies Using Proximal Hyperspectral Sensing.
Wang, Y., M.; Ostendorf, B.; and Pagay, V.
Sensors, 23(5): 2851. 3 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Detecting Grapevine Virus Infections in Red and White Winegrape Canopies Using Proximal Hyperspectral Sensing},
type = {article},
year = {2023},
keywords = {GLD,GLRaV-1,GVA,PLS-DA,disease detection,proximal sensing,spectroradiometer},
pages = {2851},
volume = {23},
websites = {https://www.mdpi.com/1424-8220/23/5/2851/htm,https://www.mdpi.com/1424-8220/23/5/2851},
month = {3},
publisher = {Multidisciplinary Digital Publishing Institute},
day = {6},
id = {180d072b-732a-3909-96c7-60bd222d226f},
created = {2023-11-14T12:16:26.785Z},
accessed = {2023-11-14},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-23T14:06:51.070Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {4dd63b9a-2e78-4e16-b45d-b692d8e0c5c3,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {Grapevine virus-associated diseases such as grapevine leafroll disease (GLD) affect grapevine health worldwide. Current diagnostic methods are either highly costly (laboratory-based diagnostics) or can be unreliable (visual assessments). Hyperspectral sensing technology is capable of measuring leaf reflectance spectra that can be used for the non-destructive and rapid detection of plant diseases. The present study used proximal hyperspectral sensing to detect virus infection in Pinot Noir (red-berried winegrape cultivar) and Chardonnay (white-berried winegrape cultivar) grapevines. Spectral data were collected throughout the grape growing season at six timepoints per cultivar. Partial least squares-discriminant analysis (PLS-DA) was used to build a predictive model of the presence or absence of GLD. The temporal change of canopy spectral reflectance showed that the harvest timepoint had the best prediction result. Prediction accuracies of 96% and 76% were achieved for Pinot Noir and Chardonnay, respectively. Our results provide valuable information on the optimal time for GLD detection. This hyperspectral method can also be deployed on mobile platforms including ground-based vehicles and unmanned aerial vehicles (UAV) for large-scale disease surveillance in vineyards.},
bibtype = {article},
author = {Wang, Yeniu Mickey and Ostendorf, Bertram and Pagay, Vinay},
doi = {10.3390/S23052851},
journal = {Sensors},
number = {5}
}
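PLS-DA, the classifier used here, is commonly implemented as PLS regression onto one-hot class targets followed by an argmax over the predicted scores. A minimal scikit-learn sketch with synthetic spectra follows; the component count is an assumed hyperparameter that would normally be tuned by cross-validation.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 500))              # canopy reflectance spectra
    y = rng.integers(0, 2, 120)                  # 0 = healthy, 1 = GLD
    Y = np.eye(2)[y]                             # one-hot targets for PLS

    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
    pls = PLSRegression(n_components=10).fit(X_tr, Y_tr)
    pred = pls.predict(X_te).argmax(axis=1)      # discriminant step
    print(f"accuracy: {(pred == Y_te.argmax(axis=1)).mean():.2f}")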
Drones in Plant Disease Assessment, Efficient Monitoring, and Detection: A Way Forward to Smart Agriculture.
Abbas, A.; Zhang, Z.; Zheng, H.; Alami, M., M.; Alrefaei, A., F.; Abbas, Q.; Naqvi, S., A., H.; Rao, M., J.; Mosa, W., F.; Abbas, Q.; Hussain, A.; Hassan, M., Z.; and Zhou, L.
Agronomy, 13(6): 1524. 5 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Drones in Plant Disease Assessment, Efficient Monitoring, and Detection: A Way Forward to Smart Agriculture},
type = {article},
year = {2023},
keywords = {abbas2023dronesplantdisease},
pages = {1524},
volume = {13},
websites = {https://www.mdpi.com/2073-4395/13/6/1524/htm,https://www.mdpi.com/2073-4395/13/6/1524},
month = {5},
publisher = {Multidisciplinary Digital Publishing Institute},
day = {31},
id = {b8f73b64-870f-3191-baaa-e6336eafd99e},
created = {2023-11-14T13:34:25.055Z},
accessed = {2023-11-14},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-21T15:03:47.116Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {1619600c-2adf-4216-9e4c-d260d584753e,4dd63b9a-2e78-4e16-b45d-b692d8e0c5c3,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {Plant diseases are one of the major threats to global food production. Efficient monitoring and detection of plant pathogens are instrumental in restricting and effectively managing the spread of the disease and reducing the cost of pesticides. Traditional, molecular, and serological methods that are widely used for plant disease detection are often ineffective if not applied during the initial stages of pathogenesis, when no or very weak symptoms appear. Moreover, they are almost useless in acquiring spatialized diagnostic results on plant diseases. On the other hand, remote sensing (RS) techniques utilizing drones are very effective for the rapid identification of plant diseases in their early stages. Currently, drones play a pivotal role in the monitoring of plant pathogen spread, detection, and diagnosis to ensure crops’ health status. The advantages of drone technology include high spatial resolution (as several sensors are carried aboard), high efficiency, usage flexibility, and more significantly, quick detection of plant diseases across a large area with low cost, reliability, and provision of high-resolution data. Drone technology employs an automated procedure that begins with gathering images of diseased plants using various sensors and cameras. After extracting features, image processing approaches use the appropriate traditional machine learning or deep learning algorithms. Features are extracted from images of leaves using edge detection and histogram equalization methods. Drones have many potential uses in agriculture, including reducing manual labor and increasing productivity. Drones may be able to provide early warning of plant diseases, allowing farmers to prevent costly crop failures.},
bibtype = {article},
author = {Abbas, Aqleem and Zhang, Zhenhao and Zheng, Hongxia and Alami, Mohammad Murtaza and Alrefaei, Abdulmajeed F. and Abbas, Qamar and Naqvi, Syed Atif Hasan and Rao, Muhammad Junaid and Mosa, Walid F.A. and Abbas, Qamar and Hussain, Azhar and Hassan, Muhammad Zeeshan and Zhou, Lei},
doi = {10.3390/AGRONOMY13061524},
journal = {Agronomy},
number = {6}
}
Plant diseases are one of the major threats to global food production. Efficient monitoring and detection of plant pathogens are instrumental in restricting and effectively managing the spread of the disease and reducing the cost of pesticides. Traditional, molecular, and serological methods that are widely used for plant disease detection are often ineffective if not applied during the initial stages of pathogenesis, when no or very weak symptoms appear. Moreover, they are almost useless in acquiring spatialized diagnostic results on plant diseases. On the other hand, remote sensing (RS) techniques utilizing drones are very effective for the rapid identification of plant diseases in their early stages. Currently, drones play a pivotal role in the monitoring of plant pathogen spread, detection, and diagnosis to ensure crops’ health status. The advantages of drone technology include high spatial resolution (as several sensors are carried aboard), high efficiency, usage flexibility, and more significantly, quick detection of plant diseases across a large area with low cost, reliability, and provision of high-resolution data. Drone technology employs an automated procedure that begins with gathering images of diseased plants using various sensors and cameras. After extracting features, image processing approaches use the appropriate traditional machine learning or deep learning algorithms. Features are extracted from images of leaves using edge detection and histogram equalization methods. Drones have many potential uses in agriculture, including reducing manual labor and increasing productivity. Drones may be able to provide early warning of plant diseases, allowing farmers to prevent costly crop failures.
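The feature-extraction step the review mentions (histogram equalization followed by edge detection) can be reproduced in a few lines of OpenCV. The file names below are placeholders, and this is a sketch of the generic preprocessing only, not any specific system from the survey.

    # Sketch of leaf-image preprocessing: illumination normalization + edges.
    # "leaf.jpg" is a hypothetical aerial leaf crop, not a dataset file.
    import cv2

    img = cv2.imread("leaf.jpg", cv2.IMREAD_GRAYSCALE)
    eq = cv2.equalizeHist(img)        # histogram equalization evens out lighting
    edges = cv2.Canny(eq, 100, 200)   # edge map highlights lesion boundaries
    cv2.imwrite("leaf_edges.png", edges)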
GrapesNet: Indian RGB & RGB-D vineyard image datasets for deep learning applications.
Barbole, D., K.; and Jadhav, P., M.
Data in Brief, 48: 109100. 6 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {GrapesNet: Indian RGB & RGB-D vineyard image datasets for deep learning applications},
type = {article},
year = {2023},
keywords = {barbole2023grapesnetindianrgb},
pages = {109100},
volume = {48},
month = {6},
publisher = {Elsevier},
day = {1},
id = {480d539c-ec12-354e-b98f-8f46a5851286},
created = {2023-11-21T15:22:56.915Z},
accessed = {2023-11-21},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-21T15:32:10.183Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
folder_uuids = {e09e8a73-297c-40aa-8415-81ec386a90b0,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {In most of the countries, grapes are considered as a cash crop. Currently huge research is going on in development of automated grape harvesting systems. Speedy and reliable grape bunch detection is prime need for various deep learning based automated systems which deals with object detection and object segmentation tasks. But currently very few datasets are available on grape bunches in vineyard, because of which there is restriction to the research in this area. In comparison to the vineyard in outside countries, Indian vineyard structure is more complex, so it becomes hard to work in real-time. To overcome these problems and to make a vineyard dataset suitable for Indian vineyard scenarios, this paper proposed four different datasets on grape bunches in vineyard. For creating all datasets in GrapesNet, natural environmental conditions have been considered. GrapesNet includes total 11000+ images of grape bunches. Necessary data for weight prediction of grape cluster is also provided with dataset like height, width and real weight of cluster present in image. Proposed datasets can be used for prime tasks like grape bunch detection, grape bunch segmentation, and grape bunch weight estimation etc. of future generation automated vineyard harvesting technologies.},
bibtype = {article},
author = {Barbole, Dhanashree K. and Jadhav, Parul M.},
doi = {10.1016/J.DIB.2023.109100},
journal = {Data in Brief}
}
In most of the countries, grapes are considered as a cash crop. Currently huge research is going on in development of automated grape harvesting systems. Speedy and reliable grape bunch detection is prime need for various deep learning based automated systems which deals with object detection and object segmentation tasks. But currently very few datasets are available on grape bunches in vineyard, because of which there is restriction to the research in this area. In comparison to the vineyard in outside countries, Indian vineyard structure is more complex, so it becomes hard to work in real-time. To overcome these problems and to make a vineyard dataset suitable for Indian vineyard scenarios, this paper proposed four different datasets on grape bunches in vineyard. For creating all datasets in GrapesNet, natural environmental conditions have been considered. GrapesNet includes total 11000+ images of grape bunches. Necessary data for weight prediction of grape cluster is also provided with dataset like height, width and real weight of cluster present in image. Proposed datasets can be used for prime tasks like grape bunch detection, grape bunch segmentation, and grape bunch weight estimation etc. of future generation automated vineyard harvesting technologies.
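Since the dataset pairs each bunch image with its height, width, and measured weight, a first baseline for the weight-estimation task could be a plain linear regression on those two dimensions. The numbers below are invented stand-ins, not GrapesNet annotations.

    # Baseline bunch-weight regression from (height, width); synthetic values.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    hw = np.array([[18.0, 9.5], [22.5, 11.0], [15.0, 8.0], [25.0, 12.5]])  # cm
    weight = np.array([310.0, 455.0, 240.0, 540.0])                        # grams
    model = LinearRegression().fit(hw, weight)
    print(model.predict([[20.0, 10.0]]))  # estimated weight of an unseen bunch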
Deep Learning YOLO-Based Solution for Grape Bunch Detection and Assessment of Biophysical Lesions.
Pinheiro, I.; Moreira, G.; Queirós da Silva, D.; Magalhães, S.; Valente, A.; Moura Oliveira, P.; Cunha, M.; and Santos, F.
Agronomy, 13(4): 1120. 4 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Deep Learning YOLO-Based Solution for Grape Bunch Detection and Assessment of Biophysical Lesions},
type = {article},
year = {2023},
keywords = {computer vision,machine learning,object detection,precision agriculture,viticulture},
pages = {1120},
volume = {13},
websites = {https://www.mdpi.com/2073-4395/13/4/1120/htm,https://www.mdpi.com/2073-4395/13/4/1120},
month = {4},
publisher = {Multidisciplinary Digital Publishing Institute},
day = {14},
id = {bd30f66a-4817-310e-b81d-a4d2fa76ae36},
created = {2023-11-21T15:31:54.214Z},
accessed = {2023-11-21},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-22T12:32:57.248Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {e09e8a73-297c-40aa-8415-81ec386a90b0,bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {The world wine sector is a multi-billion dollar industry with a wide range of economic activities. Therefore, it becomes crucial to monitor the grapevine because it allows a more accurate estimation of the yield and ensures a high-quality end product. The most common way of monitoring the grapevine is through the leaves (preventive way) since the leaves first manifest biophysical lesions. However, this does not exclude the possibility of biophysical lesions manifesting in the grape berries. Thus, this work presents three pre-trained YOLO models (YOLOv5x6, YOLOv7-E6E, and YOLOR-CSP-X) to detect and classify grape bunches as healthy or damaged by the number of berries with biophysical lesions. Two datasets were created and made publicly available with original images and manual annotations to identify the complexity between detection (bunches) and classification (healthy or damaged) tasks. The datasets use the same 10,010 images with different classes. The Grapevine Bunch Detection Dataset uses the Bunch class, and The Grapevine Bunch Condition Detection Dataset uses the OptimalBunch and DamagedBunch classes. Regarding the three models trained for grape bunches detection, they obtained promising results, highlighting YOLOv7 with 77% of mAP and 94% of the F1-score. In the case of the task of detection and identification of the state of grape bunches, the three models obtained similar results, with YOLOv5 achieving the best ones with an mAP of 72% and an F1-score of 92%.},
bibtype = {article},
author = {Pinheiro, Isabel and Moreira, Germano and Queirós da Silva, Daniel and Magalhães, Sandro and Valente, António and Moura Oliveira, Paulo and Cunha, Mário and Santos, Filipe},
doi = {10.3390/AGRONOMY13041120},
journal = {Agronomy},
number = {4}
}
The world wine sector is a multi-billion dollar industry with a wide range of economic activities. Therefore, it becomes crucial to monitor the grapevine because it allows a more accurate estimation of the yield and ensures a high-quality end product. The most common way of monitoring the grapevine is through the leaves (preventive way) since the leaves first manifest biophysical lesions. However, this does not exclude the possibility of biophysical lesions manifesting in the grape berries. Thus, this work presents three pre-trained YOLO models (YOLOv5x6, YOLOv7-E6E, and YOLOR-CSP-X) to detect and classify grape bunches as healthy or damaged by the number of berries with biophysical lesions. Two datasets were created and made publicly available with original images and manual annotations to identify the complexity between detection (bunches) and classification (healthy or damaged) tasks. The datasets use the same 10,010 images with different classes. The Grapevine Bunch Detection Dataset uses the Bunch class, and The Grapevine Bunch Condition Detection Dataset uses the OptimalBunch and DamagedBunch classes. Regarding the three models trained for grape bunches detection, they obtained promising results, highlighting YOLOv7 with 77% of mAP and 94% of the F1-score. In the case of the task of detection and identification of the state of grape bunches, the three models obtained similar results, with YOLOv5 achieving the best ones with an mAP of 72% and an F1-score of 92%.
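For readers who want to reproduce the detection setup, two-class bunch-condition inference with an off-the-shelf YOLO toolkit might look like the sketch below. The checkpoint name is a hypothetical fine-tuned model, not the authors' release, and the ultralytics runner is used as one possible harness.

    # Hedged sketch of bunch-condition inference with a YOLO detector.
    # "bunch_condition.pt" is a hypothetical checkpoint fine-tuned on classes
    # such as OptimalBunch / DamagedBunch; train your own before running this.
    from ultralytics import YOLO

    model = YOLO("bunch_condition.pt")
    results = model("vineyard_row.jpg")           # single-image inference
    for box in results[0].boxes:
        label = results[0].names[int(box.cls)]    # class name for this box
        print(label, float(box.conf), box.xyxy.tolist())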
The Use of Computer Vision to Improve the Affinity of Rootstock-Graft Combinations and Identify Diseases of Grape Seedlings.
Rudenko, M.; Plugatar, Y.; Korzin, V.; Kazak, A.; Gallini, N.; and Gorbunova, N.
Inventions, 8(4): 92. 7 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {The Use of Computer Vision to Improve the Affinity of Rootstock-Graft Combinations and Identify Diseases of Grape Seedlings},
type = {article},
year = {2023},
keywords = {artificial intelligence,computer vision,environmental engineering,graft combinations’ affinity,grape diseases,grape seedlings,neural networks,rootstock,viticulture},
pages = {92},
volume = {8},
websites = {https://www.mdpi.com/2411-5134/8/4/92/htm,https://www.mdpi.com/2411-5134/8/4/92},
month = {7},
publisher = {Multidisciplinary Digital Publishing Institute},
day = {19},
id = {06a7b616-1426-3b7d-a39c-b12f78c0d74f},
created = {2023-11-23T18:10:02.287Z},
accessed = {2023-11-23},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-11-23T18:10:11.751Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {bd3c6f2e-3514-47cf-bc42-12db8b9abe45},
private_publication = {false},
abstract = {This study explores the application of computer vision for enhancing the selection of rootstock-graft combinations and detecting diseases in grape seedlings. Computer vision has various applications in viticulture, but publications and research have not reported the use of computer vision in rootstock-graft selection, which defines the novelty of this research. This paper presents elements of the technology for applying computer vision to rootstock-graft combinations and includes an analysis of grape seedling cuttings. This analysis allows for a more accurate determination of the compatibility between rootstock and graft, as well as the detection of potential seedling diseases. The utilization of computer vision to automate the grafting process of grape cuttings offers significant benefits in terms of increased efficiency, improved quality, and reduced costs. This technology can replace manual labor and ensure economic efficiency and reliability, among other advantages. It also facilitates monitoring the development of seedlings to determine the appropriate planting time. Image processing algorithms play a vital role in automatically determining seedling characteristics such as trunk diameter and the presence of any damage. Furthermore, computer vision can aid in the identification of diseases and defects in seedlings, which is crucial for assessing their overall quality. The automation of these processes offers several advantages, including increased efficiency, improved quality, and reduced costs through the reduction of manual labor and waste. To fulfill these objectives, a unique robotic assembly line is planned for the grafting of grape cuttings. This line will be equipped with two conveyor belts, a delta robot, and a computer vision system. The use of computer vision in automating the grafting process for grape cuttings offers significant benefits in terms of efficiency, quality improvement, and cost reduction. By incorporating image processing algorithms and advanced robotics, this technology has the potential to revolutionize the viticulture industry. Thanks to training a computer vision system to analyze data on rootstock and graft grape varieties, it is possible to reduce the number of defects by half. The implementation of a semi-automated computer vision system can improve crossbreeding efficiency by 90%. Reducing the time spent on pairing selection is also a significant advantage. While manual selection takes between 1 and 2 min, reducing the time to 30 s using the semi-automated system, and the prospect of further automation reducing the time to 10–15 s, will significantly increase the productivity and efficiency of the process. In addition to the aforementioned benefits, the integration of computer vision technology in grape grafting processes brings several other advantages. One notable advantage is the increased accuracy and precision in pairing selection. Computer vision algorithms can analyze a wide range of factors, including size, shape, color, and structural characteristics, to make more informed decisions when matching rootstock and graft varieties. This can lead to better compatibility and improved overall grafting success rates.},
bibtype = {article},
author = {Rudenko, Marina and Plugatar, Yurij and Korzin, Vadim and Kazak, Anatoliy and Gallini, Nadezhda and Gorbunova, Natalia},
doi = {10.3390/INVENTIONS8040092},
journal = {Inventions},
number = {4}
}
This study explores the application of computer vision for enhancing the selection of rootstock-graft combinations and detecting diseases in grape seedlings. Computer vision has various applications in viticulture, but publications and research have not reported the use of computer vision in rootstock-graft selection, which defines the novelty of this research. This paper presents elements of the technology for applying computer vision to rootstock-graft combinations and includes an analysis of grape seedling cuttings. This analysis allows for a more accurate determination of the compatibility between rootstock and graft, as well as the detection of potential seedling diseases. The utilization of computer vision to automate the grafting process of grape cuttings offers significant benefits in terms of increased efficiency, improved quality, and reduced costs. This technology can replace manual labor and ensure economic efficiency and reliability, among other advantages. It also facilitates monitoring the development of seedlings to determine the appropriate planting time. Image processing algorithms play a vital role in automatically determining seedling characteristics such as trunk diameter and the presence of any damage. Furthermore, computer vision can aid in the identification of diseases and defects in seedlings, which is crucial for assessing their overall quality. The automation of these processes offers several advantages, including increased efficiency, improved quality, and reduced costs through the reduction of manual labor and waste. To fulfill these objectives, a unique robotic assembly line is planned for the grafting of grape cuttings. This line will be equipped with two conveyor belts, a delta robot, and a computer vision system. The use of computer vision in automating the grafting process for grape cuttings offers significant benefits in terms of efficiency, quality improvement, and cost reduction. By incorporating image processing algorithms and advanced robotics, this technology has the potential to revolutionize the viticulture industry. Thanks to training a computer vision system to analyze data on rootstock and graft grape varieties, it is possible to reduce the number of defects by half. The implementation of a semi-automated computer vision system can improve crossbreeding efficiency by 90%. Reducing the time spent on pairing selection is also a significant advantage. While manual selection takes between 1 and 2 min, reducing the time to 30 s using the semi-automated system, and the prospect of further automation reducing the time to 10–15 s, will significantly increase the productivity and efficiency of the process. In addition to the aforementioned benefits, the integration of computer vision technology in grape grafting processes brings several other advantages. One notable advantage is the increased accuracy and precision in pairing selection. Computer vision algorithms can analyze a wide range of factors, including size, shape, color, and structural characteristics, to make more informed decisions when matching rootstock and graft varieties. This can lead to better compatibility and improved overall grafting success rates.
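One concrete piece of the pipeline the abstract describes, matching rootstock and scion by trunk diameter, can be sketched from binary stem masks. The mask files, the median-width measure, and the 10% tolerance below are all illustrative assumptions, not the authors' method.

    # Illustrative rootstock/scion diameter check from binary stem masks.
    import cv2
    import numpy as np

    def stem_diameter_px(mask: np.ndarray) -> float:
        """Median row-wise width (in pixels) of a binary stem mask."""
        widths = [np.count_nonzero(row) for row in mask if row.any()]
        return float(np.median(widths))

    rootstock = cv2.imread("rootstock_mask.png", cv2.IMREAD_GRAYSCALE)
    scion = cv2.imread("scion_mask.png", cv2.IMREAD_GRAYSCALE)
    d1, d2 = stem_diameter_px(rootstock > 0), stem_diameter_px(scion > 0)
    print("compatible" if abs(d1 - d2) / max(d1, d2) < 0.10 else "reject pair")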
Domain Generalization for Crop Segmentation with Knowledge Distillation.
Angarano, S.; Martini, M.; Navone, A.; and Chiaberge, M.
. 4 2023.
Paper
Website
link
bibtex
abstract
@article{
title = {Domain Generalization for Crop Segmentation with Knowledge Distillation},
type = {article},
year = {2023},
keywords = {Domain Generalization,Knowledge Distillation,Semantic Segmentation},
websites = {https://arxiv.org/abs/2304.01029v2},
month = {4},
day = {3},
id = {c276e262-94b0-3e54-a265-45792cf6834d},
created = {2023-12-13T07:01:13.813Z},
accessed = {2023-12-13},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-12-13T07:45:24.430Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {d25a2be2-b54f-400b-918b-b254e8044e39},
private_publication = {false},
abstract = {In recent years, precision agriculture has gradually oriented farming closer to automation processes to support all the activities related to field management. Service robotics plays a predominant role in this evolution by deploying autonomous agents that can navigate fields while performing tasks without human intervention, such as monitoring, spraying, and harvesting. To execute these precise actions, mobile robots need a real-time perception system that understands their surroundings and identifies their targets in the wild. Generalizing to new crops and environmental conditions is critical for practical applications, as labeled samples are rarely available. In this paper, we investigate the problem of crop segmentation and propose a novel approach to enhance domain generalization using knowledge distillation. In the proposed framework, we transfer knowledge from an ensemble of models individually trained on source domains to a student model that can adapt to unseen target domains. To evaluate the proposed method, we present a synthetic multi-domain dataset for crop segmentation containing plants of variegate shapes and covering different terrain styles, weather conditions, and light scenarios for more than 50,000 samples. We demonstrate significant improvements in performance over state-of-the-art methods and superior sim-to-real generalization. Our approach provides a promising solution for domain generalization in crop segmentation and has the potential to enhance a wide variety of precision agriculture applications.},
bibtype = {article},
author = {Angarano, Simone and Martini, Mauro and Navone, Alessandro and Chiaberge, Marcello}
}
In recent years, precision agriculture has gradually oriented farming closer to automation processes to support all the activities related to field management. Service robotics plays a predominant role in this evolution by deploying autonomous agents that can navigate fields while performing tasks without human intervention, such as monitoring, spraying, and harvesting. To execute these precise actions, mobile robots need a real-time perception system that understands their surroundings and identifies their targets in the wild. Generalizing to new crops and environmental conditions is critical for practical applications, as labeled samples are rarely available. In this paper, we investigate the problem of crop segmentation and propose a novel approach to enhance domain generalization using knowledge distillation. In the proposed framework, we transfer knowledge from an ensemble of models individually trained on source domains to a student model that can adapt to unseen target domains. To evaluate the proposed method, we present a synthetic multi-domain dataset for crop segmentation containing plants of variegate shapes and covering different terrain styles, weather conditions, and light scenarios for more than 50,000 samples. We demonstrate significant improvements in performance over state-of-the-art methods and superior sim-to-real generalization. Our approach provides a promising solution for domain generalization in crop segmentation and has the potential to enhance a wide variety of precision agriculture applications.
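The core idea, distilling an ensemble of source-domain teachers into one student, reduces to a small loss function. The PyTorch sketch below uses a temperature-scaled KL divergence against the averaged teacher distribution, which is a common formulation and not necessarily the paper's exact loss.

    # Sketch of ensemble-to-student distillation for per-pixel segmentation.
    import torch
    import torch.nn.functional as F

    def distill_loss(student_logits, teacher_logits_list, T=2.0):
        """KL between student and the averaged teacher distribution."""
        teacher_prob = torch.stack(
            [F.softmax(t / T, dim=1) for t in teacher_logits_list]).mean(0)
        log_student = F.log_softmax(student_logits / T, dim=1)
        return F.kl_div(log_student, teacher_prob, reduction="batchmean") * T * T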
Applying Knowledge Distillation on Pre-Trained Model for Early Grapevine Detection.
Hollard, L.; and Mohimont, L.
149-156. 6 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Applying Knowledge Distillation on Pre-Trained Model for Early Grapevine Detection},
type = {article},
year = {2023},
keywords = {Deep Learning,Fine-tuning,Knowledge Distillation,Pseudo-labelling,Yield forecast},
pages = {149-156},
websites = {https://ebooks.iospress.nl/doi/10.3233/AISE230024},
month = {6},
publisher = {IOS Press},
day = {23},
id = {5bc00688-4c07-3da7-9171-c8c832e4c790},
created = {2023-12-13T07:02:00.181Z},
accessed = {2023-12-13},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-12-13T07:45:24.412Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {d25a2be2-b54f-400b-918b-b254e8044e39},
private_publication = {false},
abstract = {The development of Artificial Intelligence has raised interesting opportunities for improved automation in smart agriculture. Smart viticulture is one of the domains that can benefit from Computer-vision tasks through field sustainability. Computer-vision solutions present additional constraints as the amount of data for good training convergence has to be complex enough to cover sufficient features from desired inputs. In this paper, we present a study to implement a grapevine detection improvement for early grapes detection and grape yield prediction whose interest in Champagne and wine companies is undeniable. Earlier yield predictions allow a better market assessment, the harvest work's organization and help decision-making about plant management. Our goal is to carry estimations 5 to 6 weeks before the harvest. Furthermore, the grapevines growing condition and the large amount of data to process for yield estimation require an embedded device to acquire and compute deep learning inference. Thus, the grapes detection model has to be lightweight enough to run on an embedded device. These models were subsequently pre-trained on two different types of datasets and several layer depth of deep learning models to propose a pseudo-labelling Teacher-Student related Knowledge Distillation. Overall solutions proposed an improvement of 7.56%, 6.98%, 8.279%, 7.934% and 13.63% for F1 score, precision, recall, mean average precision at 50 and mean average precision 50-95, respectively, on the BBCH77 phenological stage.},
bibtype = {article},
author = {Hollard, Lilian and Mohimont, Lucas},
doi = {10.3233/AISE230024}
}
The development of Artificial Intelligence has raised interesting opportunities for improved automation in smart agriculture. Smart viticulture is one of the domains that can benefit from Computer-vision tasks through field sustainability. Computer-vision solutions present additional constraints as the amount of data for good training convergence has to be complex enough to cover sufficient features from desired inputs. In this paper, we present a study to implement a grapevine detection improvement for early grapes detection and grape yield prediction whose interest in Champagne and wine companies is undeniable. Earlier yield predictions allow a better market assessment, the harvest work's organization and help decision-making about plant management. Our goal is to carry estimations 5 to 6 weeks before the harvest. Furthermore, the grapevines growing condition and the large amount of data to process for yield estimation require an embedded device to acquire and compute deep learning inference. Thus, the grapes detection model has to be lightweight enough to run on an embedded device. These models were subsequently pre-trained on two different types of datasets and several layer depth of deep learning models to propose a pseudo-labelling Teacher-Student related Knowledge Distillation. Overall solutions proposed an improvement of 7.56%, 6.98%, 8.279%, 7.934% and 13.63% for F1 score, precision, recall, mean average precision at 50 and mean average precision 50-95, respectively, on the BBCH77 phenological stage.
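The pseudo-labelling step at the heart of this teacher-student setup can be sketched as follows: a large teacher detector labels unlabeled vineyard frames, and only its confident boxes are kept as training targets for the lightweight student. The paths, the 0.6 confidence cutoff, and the use of the ultralytics runner are assumptions, not the authors' code.

    # Sketch of teacher-driven pseudo-labelling for early grape detection.
    from ultralytics import YOLO

    teacher = YOLO("teacher_grapes.pt")              # hypothetical large teacher
    for r in teacher("unlabeled_frames/", stream=True):
        keep = [b for b in r.boxes if float(b.conf) > 0.6]  # confidence filter
        # `keep` would be written out as YOLO-format labels to train the student
        print(r.path, len(keep), "pseudo-labels kept")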
Multi-Task Learning with Knowledge Distillation for Dense Prediction.
Xu, Y.; Yang, Y.; and Zhang, L.
2023.
Paper
link
bibtex
abstract
@misc{
title = {Multi-Task Learning with Knowledge Distillation for Dense Prediction},
type = {misc},
year = {2023},
pages = {21550-21559},
id = {ae105343-eeb4-308c-999b-a8324a26099f},
created = {2023-12-13T07:45:14.990Z},
accessed = {2023-12-13},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2023-12-13T07:45:17.960Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {d25a2be2-b54f-400b-918b-b254e8044e39},
private_publication = {false},
abstract = {While multi-task learning (MTL) has become an attractive topic, its training usually poses more difficulties than the single-task case. How to successfully apply knowledge distillation into MTL to improve training efficiency and model performance is still a challenging problem. In this paper, we introduce a new knowledge distillation procedure with an alternative match for MTL of dense prediction based on two simple design principles. First, for memory and training efficiency, we use a single strong multi-task model as a teacher during training instead of multiple teachers, as widely adopted in existing studies. Second, we employ a less sensitive Cauchy-Schwarz (CS) divergence instead of the Kullback-Leibler (KL) divergence and propose a CS distillation loss accordingly. With the less sensitive divergence, our knowledge distillation with an alternative match is applied for capturing inter-task and intra-task information between the teacher model and the student model of each task, thereby learning more "dark knowledge" for effective distillation. We conducted extensive experiments on dense prediction datasets, including NYUD-v2 and PASCAL-Context, for multiple vision tasks, such as semantic segmentation, human parts segmentation, depth estimation, surface normal estimation, and boundary detection. The results show that our proposed method decidedly improves model performance and the practical inference efficiency.},
bibtype = {misc},
author = {Xu, Yangyang and Yang, Yibo and Zhang, Lefei}
}
While multi-task learning (MTL) has become an attractive topic, its training usually poses more difficulties than the single-task case. How to successfully apply knowledge distillation into MTL to improve training efficiency and model performance is still a challenging problem. In this paper, we introduce a new knowledge distillation procedure with an alternative match for MTL of dense prediction based on two simple design principles. First, for memory and training efficiency, we use a single strong multi-task model as a teacher during training instead of multiple teachers, as widely adopted in existing studies. Second, we employ a less sensitive Cauchy-Schwarz (CS) divergence instead of the Kullback-Leibler (KL) divergence and propose a CS distillation loss accordingly. With the less sensitive divergence, our knowledge distillation with an alternative match is applied for capturing inter-task and intra-task information between the teacher model and the student model of each task, thereby learning more "dark knowledge" for effective distillation. We conducted extensive experiments on dense prediction datasets, including NYUD-v2 and PASCAL-Context, for multiple vision tasks, such as semantic segmentation, human parts segmentation, depth estimation, surface normal estimation, and boundary detection. The results show that our proposed method decidedly improves model performance and the practical inference efficiency.
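The Cauchy-Schwarz divergence the authors substitute for KL has a compact closed form over discrete distributions, D_CS(p, q) = -log(⟨p, q⟩ / (‖p‖ ‖q‖)). A minimal PyTorch version is sketched below; the per-sample reduction and the eps guard are our assumptions rather than the paper's exact loss.

    # Sketch of a Cauchy-Schwarz divergence between student/teacher outputs.
    import torch
    import torch.nn.functional as F

    def cs_divergence(student_logits, teacher_logits, eps=1e-8):
        p = F.softmax(student_logits, dim=1)           # student class distribution
        q = F.softmax(teacher_logits, dim=1)           # teacher class distribution
        num = (p * q).sum(dim=1)                       # inner product <p, q>
        den = p.pow(2).sum(dim=1).sqrt() * q.pow(2).sum(dim=1).sqrt()
        return -torch.log(num / (den + eps)).mean()    # -log cosine similarity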
Research progress of autonomous navigation technology for multi-agricultural scenes.
Xie, B.; Jin, Y.; Faheem, M.; Gao, W.; Liu, J.; Jiang, H.; Cai, L.; and Li, Y.
8 2023.
Paper
doi
link
bibtex
abstract
@misc{
title = {Research progress of autonomous navigation technology for multi-agricultural scenes},
type = {misc},
year = {2023},
source = {Computers and Electronics in Agriculture},
keywords = {Agricultural autonomous navigation,Agriculture,Differentiated,Scenes},
volume = {211},
month = {8},
publisher = {Elsevier B.V.},
day = {1},
id = {308ff033-5c45-3017-8758-14f0ae00cc0e},
created = {2024-01-09T11:55:49.228Z},
file_attached = {true},
profile_id = {c3c41a69-4b45-352f-9232-4d3281e18730},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-01-09T14:11:54.625Z},
read = {false},
starred = {true},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {2bfb8d91-9fac-46f0-bd0c-93235d01dbed},
private_publication = {false},
abstract = {Due to the increasing demand for food, the global shortage of agricultural labor and the reduction in the utilization of agricultural resources, the demand for agricultural autonomous navigation technology and equipment has become increasingly urgent. Compared to the closed express structured environment for self-driving cars, the agricultural scene is complex and diverse, and its navigation modes and core technologies are also significantly different. In view of this, it is of great value to summarize the development characteristics and trends of autonomous navigation technology in fields, orchards, greenhouses and other scenarios. In this paper, (1) firstly, the types and characteristics of multi-agricultural scenes are analyzed, the principle and mode of agricultural autonomous navigation are expounded, and the agricultural autonomous navigation system and architecture are introduced; (2) secondly, the research status of autonomous navigation technology in open field agriculture, orchard agriculture, facility agriculture, livestock and poultry agriculture and aquatic agriculture is investigated; (3) finally, it is predicted that the differentiated autonomous navigation technology scheme for multi-scene agriculture, multi-dimensional perception, self-understanding navigation technology in agricultural scenarios, selective autonomous navigation technology for task self-matching fusion, the cooperative autonomous navigation technology for multi machine ad hoc network clusters, and the fault self-checking remote diagnosis navigation system are the future research hotspots and development trends. At the same time, it is clear that the in-depth practical application of artificial intelligence, the in-depth research and development of high-precision and low-cost sensors, and the collaborative integration technology of agricultural machinery and agronomy are the critical path to promote the breakthrough of agricultural autonomous navigation technology. The summary and prospect of this paper have positive significance for promoting the overall development of agricultural autonomous navigation technology.},
bibtype = {misc},
author = {Xie, Binbin and Jin, Yucheng and Faheem, Muhammad and Gao, Wenjie and Liu, Jizhan and Jiang, Houkang and Cai, Lianjiang and Li, Yuanxiang},
doi = {10.1016/j.compag.2023.107963}
}
Due to the increasing demand for food, the global shortage of agricultural labor and the reduction in the utilization of agricultural resources, the demand for agricultural autonomous navigation technology and equipment has become increasingly urgent. Compared to the closed express structured environment for self-driving cars, the agricultural scene is complex and diverse, and its navigation modes and core technologies are also significantly different. In view of this, it is of great value to summarize the development characteristics and trends of autonomous navigation technology in fields, orchards, greenhouses and other scenarios. In this paper, (1) firstly, the types and characteristics of multi-agricultural scenes are analyzed, the principle and mode of agricultural autonomous navigation are expounded, and the agricultural autonomous navigation system and architecture are introduced; (2) secondly, the research status of autonomous navigation technology in open field agriculture, orchard agriculture, facility agriculture, livestock and poultry agriculture and aquatic agriculture is investigated; (3) finally, it is predicted that the differentiated autonomous navigation technology scheme for multi-scene agriculture, multi-dimensional perception, self-understanding navigation technology in agricultural scenarios, selective autonomous navigation technology for task self-matching fusion, the cooperative autonomous navigation technology for multi machine ad hoc network clusters, and the fault self-checking remote diagnosis navigation system are the future research hotspots and development trends. At the same time, it is clear that the in-depth practical application of artificial intelligence, the in-depth research and development of high-precision and low-cost sensors, and the collaborative integration technology of agricultural machinery and agronomy are the critical path to promote the breakthrough of agricultural autonomous navigation technology. The summary and prospect of this paper have positive significance for promoting the overall development of agricultural autonomous navigation technology.
Robots in the Garden: Artificial Intelligence and Adaptive Landscapes.
Zhang, Z.; Epstein, S., L.; Breen, C.; Xia, S.; Zhu, Z.; and Volkmann, C.
Journal of Digital Landscape Architecture, 2023(8): 264-272. 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Robots in the Garden: Artificial Intelligence and Adaptive Landscapes},
type = {article},
year = {2023},
keywords = {Resilience,artificial intelligence,climate change,machine learning,robotics,technology},
pages = {264-272},
volume = {2023},
publisher = {VDE VERLAG GMBH},
id = {82bae6f1-ed8b-3504-b125-7583e135ac2e},
created = {2024-01-29T10:34:18.068Z},
file_attached = {true},
profile_id = {c3c41a69-4b45-352f-9232-4d3281e18730},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-01-29T10:36:29.987Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {9e28b3ab-dc65-4cb8-8373-439f2758a335},
private_publication = {false},
abstract = {This paper introduces ELUA, the Ecological Laboratory for Urban Agriculture, a collaboration among landscape architects, architects and computer scientists who specialize in artificial intelligence, robotics and computer vision. ELUA has two gantry robots, one indoors and the other outside on the rooftop of a 6-story campus building. Each robot can seed, water, weed, and prune in its garden. To support responsive landscape research, ELUA also includes sensor arrays, an AI-powered camera, and an extensive network infrastructure. This project demonstrates a way to integrate artificial intelligence into an evolving urban ecosystem, and encourages landscape architects to develop an adaptive design framework where design becomes a long-term engagement with the environment.},
bibtype = {article},
author = {Zhang, Zihao and Epstein, Susan L. and Breen, Casey and Xia, Sophia and Zhu, Zhigang and Volkmann, Christian},
doi = {10.14627/537740028},
journal = {Journal of Digital Landscape Architecture},
number = {8}
}
This paper introduces ELUA, the Ecological Laboratory for Urban Agriculture, a collaboration among landscape architects, architects and computer scientists who specialize in artificial intelligence, robotics and computer vision. ELUA has two gantry robots, one indoors and the other outside on the rooftop of a 6-story campus building. Each robot can seed, water, weed, and prune in its garden. To support responsive landscape research, ELUA also includes sensor arrays, an AI-powered camera, and an extensive network infrastructure. This project demonstrates a way to integrate artificial intelligence into an evolving urban ecosystem, and encourages landscape architects to develop an adaptive design framework where design becomes a long-term engagement with the environment.
Efficient Grapevine Structure Estimation in Vineyards Conditions.
Gentilhomme, T.; Villamizar, M.; Corre, J.; and Odobez, J.
In
Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, pages 712-720, 2023.
link
bibtex
@inproceedings{
title = {Efficient Grapevine Structure Estimation in Vineyards Conditions},
type = {inproceedings},
year = {2023},
pages = {712-720},
id = {5138ec35-be6d-3fbf-8abe-d94b096f35dc},
created = {2024-02-14T13:08:56.410Z},
file_attached = {false},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-02-14T13:08:56.410Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
citation_key = {gentilhomme2023efficientgrapevinestructure},
source_type = {inproceedings},
private_publication = {false},
bibtype = {inproceedings},
author = {Gentilhomme, Théophile and Villamizar, Michael and Corre, Jerome and Odobez, Jean-Marc},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops}
}
Using a Camera System for the In-Situ Assessment of Cordon Dieback due to Grapevine Trunk Diseases.
Tang, J.; Yem, O.; Russell, F.; Stewart, C., A.; Lin, K.; Jayakody, H.; Ayres, M., R.; Sosnowski, M., R.; Whitty, M.; Petrie, P., R.; and others
Australian Journal of Grape and Wine Research, 2023: 8634742. 2023.
link
bibtex
@article{
title = {Using a Camera System for the In-Situ Assessment of Cordon Dieback due to Grapevine Trunk Diseases},
type = {article},
year = {2023},
pages = {8634742},
volume = {2023},
id = {b8c4317e-4a26-3de6-8b04-07bc437f8f4e},
created = {2024-02-14T13:08:56.618Z},
file_attached = {false},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-02-14T13:08:56.618Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
citation_key = {tang2023usingcamerasystem},
source_type = {article},
private_publication = {false},
bibtype = {article},
author = {Tang, Julie and Yem, Olivia and Russell, Finn and Stewart, Cameron A and Lin, Kangying and Jayakody, Hiranya and Ayres, Matthew R and Sosnowski, Mark R and Whitty, Mark and Petrie, Paul R and others},
journal = {Australian Journal of Grape and Wine Research}
}
Dataset on unmanned aerial vehicle multispectral images acquired over a vineyard affected by Botrytis cinerea in northern Spain.
Vélez, S.; Ariza-Sentís, M.; and Valente, J.
Data in Brief, 46: 108876. 2023.
link
bibtex
@article{
title = {Dataset on unmanned aerial vehicle multispectral images acquired over a vineyard affected by Botrytis cinerea in northern Spain},
type = {article},
year = {2023},
pages = {108876},
volume = {46},
id = {45d0cfb7-d133-3eb9-81a2-af2ec5cfbf45},
created = {2024-02-14T13:08:56.965Z},
file_attached = {false},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-02-14T13:08:56.965Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
citation_key = {velez2023datasetuav},
source_type = {article},
private_publication = {false},
bibtype = {article},
author = {Vélez, Sergio and Ariza-Sentís, Mar and Valente, João},
journal = {Data in Brief}
}
Data acquisition for testing potential detection of Flavescence dorée with a designed, affordable multispectral camera.
Barjaktarović, M.; Santoni, M.; Faralli, M.; Bertamini, M.; Bruzzone, L.; and others
Telfor Journal, 2023(1): 2-7. 2023.
link
bibtex
@article{
title = {Data acquisition for testing potential detection of Flavescence dorée with a designed, affordable multispectral camera},
type = {article},
year = {2023},
pages = {2-7},
volume = {2023},
id = {2666545a-fbbd-3ebc-baeb-f2b6781a3134},
created = {2024-02-14T13:08:57.344Z},
file_attached = {false},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-02-14T13:08:57.344Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
citation_key = {barjaktarovic2023dataacquisitiontesting},
source_type = {article},
private_publication = {false},
bibtype = {article},
author = {Barjaktarović, Marko and Santoni, Massimo and Faralli, Michele and Bertamini, Massimo and Bruzzone, Lorenzo and others},
journal = {Telfor Journal},
number = {1}
}
Detecting Grapevine Virus Infections in Red and White Winegrape Canopies Using Proximal Hyperspectral Sensing.
Wang, Y., M.; Ostendorf, B.; and Pagay, V.
Sensors, 23(5): 2851. 2023.
link
bibtex
@article{
title = {Detecting Grapevine Virus Infections in Red and White Winegrape Canopies Using Proximal Hyperspectral Sensing},
type = {article},
year = {2023},
pages = {2851},
volume = {23},
id = {22c3356e-f390-3c90-ae8f-35b13ed96315},
created = {2024-02-14T13:08:58.486Z},
file_attached = {false},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-02-14T13:08:58.486Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
citation_key = {wang2023detectinggrapevinevirus},
source_type = {article},
private_publication = {false},
bibtype = {article},
author = {Wang, Yeniu Mickey and Ostendorf, Bertram and Pagay, Vinay},
journal = {Sensors},
number = {5}
}
Drones in Plant Disease Assessment, Efficient Monitoring, and Detection: A Way Forward to Smart Agriculture.
Abbas, A.; Zhang, Z.; Zheng, H.; Alami, M., M.; Alrefaei, A., F.; Abbas, Q.; Naqvi, S., A., H.; Rao, M., J.; Mosa, W., F., A.; Abbas, Q.; Hussain, A.; Hassan, M., Z.; and Zhou, L.
Agronomy, 13(6): 1524. 2023.
link
bibtex
@article{
title = {Drones in Plant Disease Assessment, Efficient Monitoring, and Detection: A Way Forward to Smart Agriculture},
type = {article},
year = {2023},
pages = {1524},
volume = {13},
id = {bcea88b2-faac-3f75-b922-028485bc6ea7},
created = {2024-02-14T13:08:58.499Z},
file_attached = {false},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-02-14T13:08:58.499Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
citation_key = {abbas2023dronesplantdisease},
source_type = {article},
private_publication = {false},
bibtype = {article},
author = {Abbas, Aqleem and Zhang, Zhenhao and Zheng, Hongxia and Alami, Mohammad Murtaza and Alrefaei, Abdulmajeed F and Abbas, Qamar and Naqvi, Syed Atif Hasan and Rao, Muhammad Junaid and Mosa, Walid F A and Abbas, Qamar and Hussain, Azhar and Hassan, Muhammad Zeeshan and Zhou, Lei},
journal = {Agronomy},
number = {6}
}
GrapesNet: Indian RGB & RGB-D vineyard image datasets for deep learning applications.
Barbole, D., K.; and Jadhav, P., M.
Data in Brief, 48: 109100. 2023.
link
bibtex
@article{
title = {GrapesNet: Indian RGB \& RGB-D vineyard image datasets for deep learning applications},
type = {article},
year = {2023},
pages = {109100},
volume = {48},
id = {d227a7ca-2b79-3b29-86b7-680ad8f52eb3},
created = {2024-02-14T13:08:58.932Z},
file_attached = {false},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-02-14T13:08:58.932Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
citation_key = {barbole2023grapesnetindian},
source_type = {article},
private_publication = {false},
bibtype = {article},
author = {Barbole, Dhanashree K and Jadhav, Parul M},
journal = {Data in Brief}
}
Segmentation Methods Evaluation on Grapevine Leaf Diseases.
Molnár, S.; and Tamás, L.
In
Proceedings of the 18th Conference on Computer Science and Intelligence Systems, FedCSIS 2023, Warsaw, Poland, September 17-20, 2023, volume 35, of
Annals of Computer Science and Information Systems, pages 1081-1085, 2023.
link
bibtex
@inproceedings{
title = {Segmentation Methods Evaluation on Grapevine Leaf Diseases},
type = {inproceedings},
year = {2023},
pages = {1081-1085},
volume = {35},
series = {Annals of Computer Science and Information Systems},
id = {f7deb781-84fd-3c01-bb41-e9ddeeaf4247},
created = {2024-02-14T13:08:59.004Z},
file_attached = {false},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-02-14T13:08:59.004Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
citation_key = {molnar2023segmentationmethodsevaluation},
source_type = {inproceedings},
private_publication = {false},
bibtype = {inproceedings},
author = {Molnár, Szilárd and Tamás, Levente},
booktitle = {Proceedings of the 18th Conference on Computer Science and Intelligence Systems, FedCSIS 2023, Warsaw, Poland, September 17-20, 2023}
}
A survey on deep learning-based identification of plant and crop diseases from UAV-based aerial images.
Bouguettaya, A.; Zarzour, H.; Kechida, A.; and Taberkit, A., M.
Cluster Computing, 26(2): 1297-1317. 2023.
link
bibtex
@article{
title = {A survey on deep learning-based identification of plant and crop diseases from UAV-based aerial images},
type = {article},
year = {2023},
pages = {1297-1317},
volume = {26},
id = {278e00dc-e2f0-30f5-972b-40134694cf86},
created = {2024-02-14T13:08:59.020Z},
file_attached = {false},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-02-14T13:08:59.020Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
citation_key = {bouguettaya2022surveydeeplearning},
source_type = {article},
private_publication = {false},
bibtype = {article},
author = {Bouguettaya, Abdelmalek and Zarzour, Hafed and Kechida, Ahmed and Taberkit, Amine Mohammed},
journal = {Cluster Computing},
number = {2}
}
Evaluating the Potential of High-Resolution Visible Remote Sensing to Detect Shiraz Disease in Grapevines.
Wang, Y., M.; Ostendorf, B.; Pagay, V.; and others
Australian Journal of Grape and Wine Research, 2023. 2023.
link
bibtex
@article{
title = {Evaluating the Potential of High-Resolution Visible Remote Sensing to Detect Shiraz Disease in Grapevines},
type = {article},
year = {2023},
volume = {2023},
id = {15b6c3b5-5078-3e0f-9d3f-c2409fce4635},
created = {2024-02-14T13:08:59.468Z},
file_attached = {false},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-02-14T13:08:59.468Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
citation_key = {wang2023evaluatingpotentialhighresolution},
source_type = {article},
private_publication = {false},
bibtype = {article},
author = {Wang, Yeniu Mickey and Ostendorf, Bertram and Pagay, Vinay and others},
journal = {Australian Journal of Grape and Wine Research}
}
Amazon SageMaker.
AWS
11 2023.
link
bibtex
@misc{
title = {Amazon SageMaker},
type = {misc},
year = {2023},
month = {11},
id = {76440691-a4e0-31ce-97e8-0a05f8ac765d},
created = {2024-02-14T13:08:59.627Z},
file_attached = {false},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-02-14T13:08:59.627Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
citation_key = {amazonsagemaker2023},
source_type = {misc},
medium = {https://aws.amazon.com/sagemaker/},
private_publication = {false},
bibtype = {misc},
author = {{AWS}}
}
An expertized grapevine disease image database including five grape varieties focused on Flavescence dorée and its confounding diseases, biotic and abiotic stresses.
Tardif, M.; Amri, A.; Deshayes, A.; Greven, M.; Keresztes, B.; Fontaine, G.; Sicaud, L.; Paulhac, L.; Bentejac, S.; and da Costa, J.
Data in Brief, 48: 109230. 2023.
link
bibtex
@article{
title = {An expertized grapevine disease image database including five grape varieties focused on Flavescence dorée and its confounding diseases, biotic and abiotic stresses},
type = {article},
year = {2023},
pages = {109230},
volume = {48},
id = {28afb888-298f-35fb-96b8-81159c8ca094},
created = {2024-02-14T13:08:59.772Z},
file_attached = {false},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-02-14T13:08:59.772Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
citation_key = {tardif2023expertizedgrapevinedisease},
source_type = {article},
private_publication = {false},
bibtype = {article},
author = {Tardif, Malo and Amri, Ahmed and Deshayes, Aymeric and Greven, Marc and Keresztes, Barna and Fontaine, Gaël and Sicaud, Laetitia and Paulhac, Laetitia and Bentejac, Sophie and da Costa, Jean-Pierre},
journal = {Data in Brief}
}
Scalable Early Detection of Grapevine Viral Infection with Airborne Imaging Spectroscopy.
Galvan, F., E., R.; Pavlick, R.; Trolley, G.; Aggarwal, S.; Sousa, D.; Starr, C.; Forrestel, E.; Bolton, S.; Alsina, M., d., M.; Dokoozlian, N.; and Gold, K., M.
Phytopathology, 113(8): 1439-1446. 2023.
link
bibtex
@article{
title = {Scalable Early Detection of Grapevine Viral Infection with Airborne Imaging Spectroscopy},
type = {article},
year = {2023},
pages = {1439-1446},
volume = {113},
id = {1daec3c3-127f-39b7-8b9b-53eeb98b03cb},
created = {2024-02-14T13:09:00.034Z},
file_attached = {false},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-02-14T13:09:00.034Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
citation_key = {galvan2023scalableearlydetection},
source_type = {article},
private_publication = {false},
bibtype = {article},
author = {Galvan, Fernando E Romero and Pavlick, Ryan and Trolley, Graham and Aggarwal, Somil and Sousa, Daniel and Starr, Charles and Forrestel, Elisabeth and Bolton, Stephanie and Alsina, Maria del Mar and Dokoozlian, Nick and Gold, Kaitlin M},
journal = {Phytopathology},
number = {8}
}
Intelligent Monitoring System to Assess Plant Development State Based on Computer Vision in Viticulture.
Rudenko, M.; Kazak, A.; Oleinikov, N.; Mayorova, A.; Dorofeeva, A.; Nekhaychuk, D.; and Shutova, O.
Computation, 11(9): 171. 2023.
link
bibtex
@article{
title = {Intelligent Monitoring System to Assess Plant Development State Based on Computer Vision in Viticulture},
type = {article},
year = {2023},
pages = {171},
volume = {11},
id = {09397c77-6d21-361d-bbdb-9f071699e695},
created = {2024-02-14T13:09:00.957Z},
file_attached = {false},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-02-14T13:09:00.957Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
citation_key = {rudenko2023intelligentmonitoringsystem},
source_type = {article},
private_publication = {false},
bibtype = {article},
author = {Rudenko, Marina and Kazak, Anatoliy and Oleinikov, Nikolay and Mayorova, Angela and Dorofeeva, Anna and Nekhaychuk, Dmitry and Shutova, Olga},
journal = {Computation},
number = {9}
}
Potential detection of Flavescence dorée in the vineyard using close-range hyperspectral imaging.
Barjaktarović, M.; Santoni, M.; Faralli, M.; Bertamini, M.; and Bruzzone, L.
In
2023 3rd International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME), pages 1-6, 2023.
link
bibtex
@inproceedings{
title = {Potential detection of Flavescence dorée in the vineyard using close-range hyperspectral imaging},
type = {inproceedings},
year = {2023},
pages = {1-6},
id = {2b58ffec-47dd-3310-9672-9315fb9b3eb5},
created = {2024-02-14T13:09:01.891Z},
file_attached = {false},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-02-14T13:09:01.891Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
citation_key = {barjaktarovic2023potentialdetectionflavescence},
source_type = {inproceedings},
private_publication = {false},
bibtype = {inproceedings},
author = {Barjaktarović, Marko and Santoni, Massimo and Faralli, Michele and Bertamini, Massimo and Bruzzone, Lorenzo},
booktitle = {2023 3rd International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME)}
}
Automatic diagnosis of a multi-symptom grape vine disease using computer vision.
Tardif, M.; Amri, A.; Keresztes, B.; Deshayes, A.; Martin, D.; Greven, M.; and da Costa, J.
In
Acta Horticulturae, pages 53-60, 2023. International Society for Horticultural Science (ISHS), Leuven, Belgium
link
bibtex
@inproceedings{
title = {Automatic diagnosis of a multi-symptom grape vine disease using computer vision},
type = {inproceedings},
year = {2023},
pages = {53-60},
publisher = {International Society for Horticultural Science (ISHS), Leuven, Belgium},
id = {031bb480-497f-3641-a7cf-28dd16143f8d},
created = {2024-02-14T13:09:02.071Z},
file_attached = {false},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-02-14T13:09:02.071Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
citation_key = {tardif2023automaticdiagnosismulti},
source_type = {inproceedings},
private_publication = {false},
bibtype = {inproceedings},
author = {Tardif, Malo and Amri, Ahmed and Keresztes, Barna and Deshayes, Aymeric and Martin, Damian and Greven, Marc and da Costa, Jean-Pierre},
booktitle = {Acta Horticulturae}
}
Deep Learning YOLO-Based Solution for Grape Bunch Detection and Assessment of Biophysical Lesions.
Pinheiro, I.; Moreira, G.; da Silva, D.; Magalhães, S.; Valente, A.; Moura Oliveira, P.; Cunha, M.; and Santos, F.
Agronomy, 13(4): 1120. 2023.
link
bibtex
@article{
title = {Deep Learning YOLO-Based Solution for Grape Bunch Detection and Assessment of Biophysical Lesions},
type = {article},
year = {2023},
pages = {1120},
volume = {13},
id = {067e5554-f72b-36dc-ab02-319f6c8a20a4},
created = {2024-02-14T13:09:02.588Z},
file_attached = {false},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-02-14T13:09:02.588Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
citation_key = {pinheiro2023deeplearningyolobased},
source_type = {article},
private_publication = {false},
bibtype = {article},
author = {Pinheiro, Isabel and Moreira, Germano and da Silva, Daniel and Magalhães, Sandro and Valente, António and Moura Oliveira, Paulo and Cunha, Mário and Santos, Filipe},
journal = {Agronomy},
number = {4}
}
Automatic diagnosis of a multi-symptom grapevine disease by decision trees and Graph Neural Networks.
Tardif, M.; Keresztes, B.; Deshayes, A.; Martin, D.; Greven, M.; and da Costa, J.
Precision agriculture '23, pages 1011-1017. Wageningen Academic, 2023.
link
bibtex
@inbook{
type = {inbook},
year = {2023},
pages = {1011-1017},
publisher = {Wageningen Academic},
city = {Leiden, The Netherlands},
id = {d3c70f35-47c4-37b2-890a-a236101fc08d},
created = {2024-02-14T13:09:03.001Z},
file_attached = {false},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-02-14T13:09:03.001Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
citation_key = {tardif2023automaticdiagnosismultigraph},
source_type = {inbook},
private_publication = {false},
bibtype = {inbook},
author = {Tardif, Malo and Keresztes, Barna and Deshayes, Aymeric and Martin, Damian and Greven, Marc and da Costa, Jean-Pierre},
chapter = {Automatic diagnosis of a multi-symptom grapevine disease by decision trees and Graph Neural Networks},
title = {Precision agriculture '23}
}
Dataset on UAV RGB videos acquired over a vineyard including bunch labels for object detection and tracking.
Ariza-Sentís, M.; Vélez, S.; and Valente, J.
Data in Brief, 46: 108848. 2023.
link
bibtex
@article{
title = {Dataset on UAV RGB videos acquired over a vineyard including bunch labels for object detection and tracking},
type = {article},
year = {2023},
pages = {108848},
volume = {46},
id = {5ddb0849-78f8-3211-8ecb-9bae6d55eae4},
created = {2024-02-14T13:09:03.112Z},
file_attached = {false},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-02-14T13:09:03.112Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
citation_key = {arizasentis2023datasetuavrgb},
source_type = {article},
private_publication = {false},
bibtype = {article},
author = {Ariza-Sentís, Mar and Vélez, Sergio and Valente, João},
journal = {Data in Brief}
}
Implementation of drone technology for farm monitoring & pesticide spraying: A review.
Hafeez, A.; Husain, M., A.; Singh, S., P.; Chauhan, A.; Khan, M., T.; Kumar, N.; Chauhan, A.; and Soni, S., K.
Information Processing in Agriculture, 10(2): 192-203. 2023.
link
bibtex
@article{
title = {Implementation of drone technology for farm monitoring \& pesticide spraying: A review},
type = {article},
year = {2023},
pages = {192-203},
volume = {10},
id = {b8b193bb-7e05-3c66-aae4-f45f24f541ea},
created = {2024-02-14T13:09:03.187Z},
file_attached = {false},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-02-14T13:09:03.187Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
citation_key = {hafeez2023implementationdronetechnology},
source_type = {article},
private_publication = {false},
bibtype = {article},
author = {Hafeez, Abdul and Husain, Mohammed Aslam and Singh, S P and Chauhan, Anurag and Khan, Mohd. Tauseef and Kumar, Navneet and Chauhan, Abhishek and Soni, S K},
journal = {Information Processing in Agriculture},
number = {2}
}
Evaluating Critical Disease Occurrence in Grapevine Leaves using CNN: Use-Case in Eastern Europe.
Oprea, C.; Drăgulinescu, A., C.; Marcu, I.; and Pirnog, I.
In
2023 17th International Conference on Engineering of Modern Electric Systems (EMES), pages 1-4, 2023.
link
bibtex
@inproceedings{
title = {Evaluating Critical Disease Occurrence in Grapevine Leaves using CNN: Use-Case in Eastern Europe},
type = {inproceedings},
year = {2023},
pages = {1-4},
id = {b7a00f2b-81ad-3a99-b33e-3643f80746c7},
created = {2024-02-14T13:09:03.679Z},
file_attached = {false},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-02-14T13:09:03.679Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
citation_key = {oprea2023evaluatingcriticaldisease},
source_type = {inproceedings},
private_publication = {false},
bibtype = {inproceedings},
author = {Oprea, Cristina-Claudia and Drăgulinescu, Ana-Maria Claudia and Marcu, Ioana-Manuela and Pirnog, Ionuţ},
booktitle = {2023 17th International Conference on Engineering of Modern Electric Systems (EMES)}
}
Proximal sensing for geometric characterization of vines: A review of the latest advances.
Moreno, H.; and Andújar, D.
Computers and Electronics in Agriculture, 210: 107901. 2023.
link
bibtex
@article{
title = {Proximal sensing for geometric characterization of vines: A review of the latest advances},
type = {article},
year = {2023},
pages = {107901},
volume = {210},
id = {779e0de3-6eb2-314f-b3b0-a4076257a924},
created = {2024-02-14T13:09:03.851Z},
file_attached = {false},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-02-14T13:09:03.851Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
citation_key = {moreno2023proximalsensinggeometric},
source_type = {article},
private_publication = {false},
bibtype = {article},
author = {Moreno, Hugo and Andújar, Dionisio},
journal = {Computers and Electronics in Agriculture}
}
AOGC: Anchor-Free Oriented Object Detection Based on Gaussian Centerness.
Xia, G.; Cheng, G.; Feng, J.; Mou, L.; Wang, Z.; Bao, C.; Cao, J.; and Hao, Q.
Remote Sensing, 15(19): 4690. 9 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {AOGC: Anchor-Free Oriented Object Detection Based on Gaussian Centerness},
type = {article},
year = {2023},
keywords = {Gaussian kernal,anchor,free,one,orientated object detection,remote sensing images,stage},
pages = {4690},
volume = {15},
websites = {https://www.mdpi.com/2072-4292/15/19/4690/htm,https://www.mdpi.com/2072-4292/15/19/4690},
month = {9},
publisher = {Multidisciplinary Digital Publishing Institute},
day = {25},
id = {5d858a6f-91f7-32ae-9b36-df3d9e9948eb},
created = {2024-02-16T10:13:40.203Z},
accessed = {2024-02-16},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-03-04T11:58:02.971Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {df28411a-ed7f-4991-8358-d39685eb4bf0,5a010301-acb6-4642-a6b2-8afaee1b741c},
private_publication = {false},
abstract = {Oriented object detection is a challenging task in scene text detection and remote sensing image analysis, and it has attracted extensive attention due to the development of deep learning in recent years. Currently, mainstream oriented object detectors are anchor-based methods. These methods increase the computational load of the network and cause a large amount of anchor box redundancy. In order to address this issue, we proposed an anchor-free oriented object detection method based on Gaussian centerness (AOGC), which is a single-stage anchor-free detection method. Our method uses contextual attention FPN (CAFPN) to obtain the contextual information of the target. Then, we designed a label assignment method for the oriented objects, which can select positive samples with higher quality and is suitable for large aspect ratio targets. Finally, we developed a Gaussian kernel-based centerness branch that can effectively determine the significance of different anchors. AOGC achieved a mAP of 74.30% on the DOTA-1.0 datasets and 89.80% on the HRSC2016 datasets, respectively. Our experimental results show that AOGC exhibits superior performance to other methods in single-stage oriented object detection and achieves similar performance to the two-stage methods.},
bibtype = {article},
author = {Xia, Gui-Song and Cheng, Gong and Feng, Jie and Mou, Lichao and Wang, Zechen and Bao, Chun and Cao, Jie and Hao, Qun},
doi = {10.3390/RS15194690},
journal = {Remote Sensing},
number = {19}
}
Oriented object detection is a challenging task in scene text detection and remote sensing image analysis, and it has attracted extensive attention due to the development of deep learning in recent years. Currently, mainstream oriented object detectors are anchor-based methods. These methods increase the computational load of the network and cause a large amount of anchor box redundancy. In order to address this issue, we proposed an anchor-free oriented object detection method based on Gaussian centerness (AOGC), which is a single-stage anchor-free detection method. Our method uses contextual attention FPN (CAFPN) to obtain the contextual information of the target. Then, we designed a label assignment method for the oriented objects, which can select positive samples with higher quality and is suitable for large aspect ratio targets. Finally, we developed a Gaussian kernel-based centerness branch that can effectively determine the significance of different anchors. AOGC achieved a mAP of 74.30% on the DOTA-1.0 datasets and 89.80% on the HRSC2016 datasets, respectively. Our experimental results show that AOGC exhibits superior performance to other methods in single-stage oriented object detection and achieves similar performance to the two-stage methods.
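The Gaussian kernel-based centerness described in this abstract can be made concrete with a minimal sketch. The kernel form below, with one standard deviation per box half-extent, is an assumption for illustration rather than AOGC's exact formulation:

import math

def gaussian_centerness(x, y, cx, cy, w, h):
    # Weight a feature-map location (x, y) against a box centred at
    # (cx, cy) with width w and height h. Tying the sigmas to the box
    # half-extents is an assumed convention, not the paper's exact choice.
    sx, sy = w / 2.0, h / 2.0
    return math.exp(-(((x - cx) ** 2) / (2 * sx ** 2)
                      + ((y - cy) ** 2) / (2 * sy ** 2)))

print(gaussian_centerness(50, 50, 50, 50, 20, 10))  # centre scores 1.0
print(gaussian_centerness(60, 50, 50, 50, 20, 10))  # one half-width away: ~0.61

Locations near the box centre score close to 1 and are treated as more significant samples, which is the role the abstract assigns to the centerness branch.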
A comprehensive survey of oriented object detection in remote sensing images.
Wen, L.; Cheng, Y.; Fang, Y.; and Li, X.
Expert Systems with Applications, 224: 119960. 8 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {A comprehensive survey of oriented object detection in remote sensing images},
type = {article},
year = {2023},
keywords = {Anchor-free,Oriented object detection,Rotation invariance},
pages = {119960},
volume = {224},
month = {8},
publisher = {Pergamon},
day = {15},
id = {ca798cae-1aa3-32ab-9a6f-fb8bfbd9e8a5},
created = {2024-03-14T10:28:19.463Z},
accessed = {2024-03-14},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-03-15T07:41:31.204Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {df28411a-ed7f-4991-8358-d39685eb4bf0},
private_publication = {false},
abstract = {With the rapid development of object detection, it is widely used in many scenes and images. However, the dense arrangement of objects with different dimensions, orientations and aspect ratios in remote sensing and aerial images undoubtedly poses many problems for detection. Anchor-based oriented object detection to maintain rotational invariance has to solve the problem of object orientation dimension and also to consider the calculation of angular periodicity in the regression calculation. To achieve accurate detection of objects, it is necessary to obtain the precise frame surrounding the object and the precise features. Anchor-free methods do not require a predefined anchor, but only need to learn the object feature parameters to get an accurate frame for detection. In this paper we first introduce the technical approaches to object detection, both traditional and deep learning-based methods. Then we summarize the main problems and methods solved in oriented object detection in anchor-based and anchor-free based detection. We present some datasets using oriented bounding box (OBB) annotation that are suitable for oriented object detection, as well as introduce the accepted benchmarks and evaluation metrics for object detection. Finally, we discuss potential trends in oriented object detection for the benefit of researchers who are new to the field.},
bibtype = {article},
author = {Wen, Long and Cheng, Yu and Fang, Yi and Li, Xinyu},
doi = {10.1016/J.ESWA.2023.119960},
journal = {Expert Systems with Applications}
}
With the rapid development of object detection, it is widely used in many scenes and images. However, the dense arrangement of objects with different dimensions, orientations and aspect ratios in remote sensing and aerial images undoubtedly poses many problems for detection. Anchor-based oriented object detection to maintain rotational invariance has to solve the problem of object orientation dimension and also to consider the calculation of angular periodicity in the regression calculation. To achieve accurate detection of objects, it is necessary to obtain the precise frame surrounding the object and the precise features. Anchor-free methods do not require a predefined anchor, but only need to learn the object feature parameters to get an accurate frame for detection. In this paper we first introduce the technical approaches to object detection, both traditional and deep learning-based methods. Then we summarize the main problems and methods solved in oriented object detection in anchor-based and anchor-free based detection. We present some datasets using oriented bounding box (OBB) annotation that are suitable for oriented object detection, as well as introduce the accepted benchmarks and evaluation metrics for object detection. Finally, we discuss potential trends in oriented object detection for the benefit of researchers who are new to the field.
Benchmarking Generations of You Only Look Once Architectures for Detection of Defective and Normal Long Rod Insulators.
Békési, G., B.
Journal of Control, Automation and Electrical Systems, 34(5): 1093-1107. 10 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Benchmarking Generations of You Only Look Once Architectures for Detection of Defective and Normal Long Rod Insulators},
type = {article},
year = {2023},
keywords = {Insulator detection,Insulator fault detection,YOLOv3,YOLOv4,YOLOv5},
pages = {1093-1107},
volume = {34},
websites = {https://link.springer.com/article/10.1007/s40313-023-01023-3},
month = {10},
publisher = {Springer},
day = {1},
id = {751462e4-662d-3915-89e3-c97cafb131c6},
created = {2024-03-16T12:43:07.789Z},
accessed = {2024-03-16},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-03-16T12:43:11.012Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {df28411a-ed7f-4991-8358-d39685eb4bf0},
private_publication = {false},
abstract = {Effective infrastructure monitoring is a priority in all technical fields in this century. In high-voltage transmission networks, line inspection is one such task. Fault detection of insulators is crucial, and object detection algorithms can handle this problem. This work presents a comparison of You Only Look Once architectures. The different subtypes of the last three generations (v3, v4, and v5) are compared in terms of losses, precision, recall, and mean average precision on an open-source, augmented dataset of normal and defective insulators from the State Grid Corporation of China. The primary focus of this work is a comprehensive subtype analysis, providing a useful resource for academics and industry professionals involved in insulator detection and surveillance projects. This study aims to enhance the monitoring of insulator health and maintenance for industries relying on power grid stability. YOLOv5 subtypes are found to be the most suitable for this computer vision task, considering their mean average precision, which ranges between 98.1 and 99.0%, and a frame per second rate between 27.1 and 212.8, depending on the architecture size. While their predecessors are faster, they are less accurate. It is also discovered that, for all generations, normal-sized and large architectures generally demonstrate better accuracy. However, small architectures are noted for their significantly faster processing speeds.},
bibtype = {article},
author = {Békési, Gergő Bendegúz},
doi = {10.1007/S40313-023-01023-3/TABLES/2},
journal = {Journal of Control, Automation and Electrical Systems},
number = {5}
}
Effective infrastructure monitoring is a priority in all technical fields in this century. In high-voltage transmission networks, line inspection is one such task. Fault detection of insulators is crucial, and object detection algorithms can handle this problem. This work presents a comparison of You Only Look Once architectures. The different subtypes of the last three generations (v3, v4, and v5) are compared in terms of losses, precision, recall, and mean average precision on an open-source, augmented dataset of normal and defective insulators from the State Grid Corporation of China. The primary focus of this work is a comprehensive subtype analysis, providing a useful resource for academics and industry professionals involved in insulator detection and surveillance projects. This study aims to enhance the monitoring of insulator health and maintenance for industries relying on power grid stability. YOLOv5 subtypes are found to be the most suitable for this computer vision task, considering their mean average precision, which ranges between 98.1 and 99.0%, and a frame per second rate between 27.1 and 212.8, depending on the architecture size. While their predecessors are faster, they are less accurate. It is also discovered that, for all generations, normal-sized and large architectures generally demonstrate better accuracy. However, small architectures are noted for their significantly faster processing speeds.
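The mean average precision figures quoted above rest on an intersection-over-union (IoU) match between predicted and ground-truth boxes. A minimal sketch of that test, assuming the usual corner-coordinate convention (x1, y1, x2, y2):

def iou(a, b):
    # Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.333...; a detection typically
                                            # counts as correct at IoU >= 0.5

Averaging precision over recall levels (and, for mAP, over classes) under this match rule yields the benchmark numbers reported for each YOLO subtype.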
Investigation of You Only Look Once Networks for Vision-based Small Object Detection.
Yang, L.
International Journal of Advanced Computer Science and Applications, 14(4): 69-82. 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Investigation of You Only Look Once Networks for Vision-based Small Object Detection},
type = {article},
year = {2023},
keywords = {YOLOv5,YOLOv6,YOLOv7,computer vision,jewellery detection,real-time detection,small object detection},
pages = {69-82},
volume = {14},
websites = {www.ijacsa.thesai.org},
publisher = {The Science and Information (SAI) Organization Limited},
day = {29},
id = {ec6503e1-425e-36f5-b756-78d4bf05d3f1},
created = {2024-03-16T12:50:26.559Z},
accessed = {2024-03-16},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-04-10T06:59:40.618Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {df28411a-ed7f-4991-8358-d39685eb4bf0},
private_publication = {false},
abstract = {Small object detection is a challenging issue in computer vision-based algorithms. Although various methods have been investigated for common objects including person, car and others, small object are not addressed in this issue. Therefore, it is necessary to conduct more researches on them. This paper is focused on small object detection especially jewellery as current object detection methods suffer from low accuracy in this domain. This paper introduces a new dataset whose images were taken by a web camera from a jewellery store and data augmentation procedure. It comprises three classes, namely, ring, earrings, and pendant. In view of the small target of jewellery and the real-time detection, this study adopted the You Only Look Once (Yolo) algorithms. Different Yolo based model including eight versions are implemented and train them using our dataset to address most effective one. Evaluation criteria, including accuracy, F1 score, recall, and mAP, are used to evaluate the performance of the various YOLOv5, YOLOv6, and YOLOv7 versions. According to the experimental findings, utilizing YOLOv6 is significantly superior to YOLOv7 and marginally superior to YOLOv5.},
bibtype = {article},
author = {Yang, Li},
doi = {10.14569/IJACSA.2023.0140410},
journal = {International Journal of Advanced Computer Science and Applications},
number = {4}
}
Small object detection is a challenging issue in computer vision-based algorithms. Although various methods have been investigated for common objects including person, car and others, small object are not addressed in this issue. Therefore, it is necessary to conduct more researches on them. This paper is focused on small object detection especially jewellery as current object detection methods suffer from low accuracy in this domain. This paper introduces a new dataset whose images were taken by a web camera from a jewellery store and data augmentation procedure. It comprises three classes, namely, ring, earrings, and pendant. In view of the small target of jewellery and the real-time detection, this study adopted the You Only Look Once (Yolo) algorithms. Different Yolo based model including eight versions are implemented and train them using our dataset to address most effective one. Evaluation criteria, including accuracy, F1 score, recall, and mAP, are used to evaluate the performance of the various YOLOv5, YOLOv6, and YOLOv7 versions. According to the experimental findings, utilizing YOLOv6 is significantly superior to YOLOv7 and marginally superior to YOLOv5.
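The evaluation criteria named in the abstract reduce, accuracy aside, to simple ratios over true-positive, false-positive and false-negative counts. A toy example with assumed counts, not the paper's numbers:

def precision_recall_f1(tp, fp, fn):
    # Standard detection metrics from raw match counts.
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

print(precision_recall_f1(tp=90, fp=10, fn=30))  # (0.9, 0.75, ~0.82)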
Lightweight You Only Look Once v8: An Upgraded You Only Look Once v8 Algorithm for Small Object Identification in Unmanned Aerial Vehicle Images.
Huangfu, Z.; and Li, S.
Applied Sciences, 13(22): 12369. 11 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {Lightweight You Only Look Once v8: An Upgraded You Only Look Once v8 Algorithm for Small Object Identification in Unmanned Aerial Vehicle Images},
type = {article},
year = {2023},
keywords = {YOLO v8,attention mechanism,small targets,target detection,unmanned aerial vehicle},
pages = {12369},
volume = {13},
websites = {https://www.mdpi.com/2076-3417/13/22/12369/htm,https://www.mdpi.com/2076-3417/13/22/12369},
month = {11},
publisher = {Multidisciplinary Digital Publishing Institute},
day = {15},
id = {1b3f9f34-4e59-3367-bad9-6fb9653a91d4},
created = {2024-03-16T12:51:54.758Z},
accessed = {2024-03-16},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-03-16T12:57:43.065Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {df28411a-ed7f-4991-8358-d39685eb4bf0},
private_publication = {false},
abstract = {In order to solve the problems of high leakage rate, high false detection rate, low detection success rate and large model volume of small targets in the traditional target detection algorithm for Unmanned Aerial Vehicle (UAV) aerial images, a lightweight You Only Look Once (YOLO) v8 algorithm model Lightweight (LW)-YOLO v8 is proposed. By increasing the channel attention mechanism Squeeze-and-Excitation (SE) module, this method can adaptively improves the model’s ability to extract features from small targets; at the same time, the lightweight convolution technology is introduced into the Conv module, where the ordinary convolution is replaced by the GSConv module, which can effectively reduce the model computational volume; on the basis of the GSConv module, a single aggregation module VoV-GSCSPC is designed to optimize the model structure in order to achieve a higher computational cost-effectiveness. The experimental results show that the LW-YOLO v8 model’s mAP@0.5 metrics on the VisDrone2019 dataset are more favorable than those on the YOLO v8n model, improving by 3.8 percentage points, and the computational amount is reduced to 7.2 GFLOPs. The LW-YOLO v8 model proposed in this work can effectively accomplish the task of detecting small targets in aerial images from UAV at a lower cost.},
bibtype = {article},
author = {Huangfu, Zhongmin and Li, Shuqing},
doi = {10.3390/APP132212369},
journal = {Applied Sciences},
number = {22}
}
In order to solve the problems of high leakage rate, high false detection rate, low detection success rate and large model volume of small targets in the traditional target detection algorithm for Unmanned Aerial Vehicle (UAV) aerial images, a lightweight You Only Look Once (YOLO) v8 algorithm model Lightweight (LW)-YOLO v8 is proposed. By increasing the channel attention mechanism Squeeze-and-Excitation (SE) module, this method can adaptively improves the model’s ability to extract features from small targets; at the same time, the lightweight convolution technology is introduced into the Conv module, where the ordinary convolution is replaced by the GSConv module, which can effectively reduce the model computational volume; on the basis of the GSConv module, a single aggregation module VoV-GSCSPC is designed to optimize the model structure in order to achieve a higher computational cost-effectiveness. The experimental results show that the LW-YOLO v8 model’s mAP@0.5 metrics on the VisDrone2019 dataset are more favorable than those on the YOLO v8n model, improving by 3.8 percentage points, and the computational amount is reduced to 7.2 GFLOPs. The LW-YOLO v8 model proposed in this work can effectively accomplish the task of detecting small targets in aerial images from UAV at a lower cost.
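The Squeeze-and-Excitation (SE) channel attention that LW-YOLO v8 adds is a standard published block; a compact PyTorch rendering is sketched below. The reduction ratio of 16 is the common default and an assumption here, and the block's exact placement inside the YOLO v8 backbone is not reproduced:

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: global average pool
        self.fc = nn.Sequential(                 # excite: per-channel gate
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                             # reweight feature channels

x = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(x).shape)  # torch.Size([2, 64, 32, 32])

Gating channels this way lets the network emphasize feature maps that respond to small targets, which is the effect the abstract attributes to the SE module.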
YOLO-SE: Improved YOLOv8 for Remote Sensing Object Detection and Recognition.
Wu, T.; and Dong, Y.
Applied Sciences, 13(24): 12977. 12 2023.
Paper
Website
doi
link
bibtex
abstract
@article{
title = {YOLO-SE: Improved YOLOv8 for Remote Sensing Object Detection and Recognition},
type = {article},
year = {2023},
keywords = {loss functions,multi,object detection,remote sensing images,scale},
pages = {12977},
volume = {13},
websites = {https://www.mdpi.com/2076-3417/13/24/12977/htm,https://www.mdpi.com/2076-3417/13/24/12977},
month = {12},
publisher = {Multidisciplinary Digital Publishing Institute},
day = {5},
id = {29fa1003-d5cc-3fc0-9fca-cb7120e46000},
created = {2024-03-16T15:17:53.484Z},
accessed = {2024-03-16},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-03-16T15:17:58.593Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {df28411a-ed7f-4991-8358-d39685eb4bf0},
private_publication = {false},
abstract = {Object detection remains a pivotal aspect of remote sensing image analysis, and recent strides in Earth observation technology coupled with convolutional neural networks (CNNs) have propelled the field forward. Despite advancements, challenges persist, especially in detecting objects across diverse scales and pinpointing small-sized targets. This paper introduces YOLO-SE, a novel YOLOv8-based network that innovatively addresses these challenges. First, the introduction of a lightweight convolution SEConv in lieu of standard convolutions reduces the network’s parameter count, thereby expediting the detection process. To tackle multi-scale object detection, the paper proposes the SEF module, an enhancement based on SEConv. Second, an ingenious Efficient Multi-Scale Attention (EMA) mechanism is integrated into the network, forming the SPPFE module. This addition augments the network’s feature extraction capabilities, adeptly handling challenges in multi-scale object detection. Furthermore, a dedicated prediction head for tiny object detection is incorporated, and the original detection head is replaced by a transformer prediction head. To address adverse gradients stemming from low-quality instances in the target detection training dataset, the paper introduces the Wise-IoU bounding box loss function. YOLO-SE showcases remarkable performance, achieving an average precision at IoU threshold 0.5 (AP50) of 86.5% on the optical remote sensing dataset SIMD. This represents a noteworthy 2.1% improvement over YOLOv8 and YOLO-SE outperforms the state-of-the-art model by 0.91%. In further validation, experiments on the NWPU VHR-10 dataset demonstrated YOLO-SE’s superiority with an accuracy of 94.9%, surpassing that of YOLOv8 by 2.6%. The proposed advancements position YOLO-SE as a compelling solution in the realm of deep learning-based remote sensing image object detection.},
bibtype = {article},
author = {Wu, Tianyong and Dong, Youkou},
doi = {10.3390/APP132412977},
journal = {Applied Sciences},
number = {24}
}
Object detection remains a pivotal aspect of remote sensing image analysis, and recent strides in Earth observation technology coupled with convolutional neural networks (CNNs) have propelled the field forward. Despite advancements, challenges persist, especially in detecting objects across diverse scales and pinpointing small-sized targets. This paper introduces YOLO-SE, a novel YOLOv8-based network that innovatively addresses these challenges. First, the introduction of a lightweight convolution SEConv in lieu of standard convolutions reduces the network’s parameter count, thereby expediting the detection process. To tackle multi-scale object detection, the paper proposes the SEF module, an enhancement based on SEConv. Second, an ingenious Efficient Multi-Scale Attention (EMA) mechanism is integrated into the network, forming the SPPFE module. This addition augments the network’s feature extraction capabilities, adeptly handling challenges in multi-scale object detection. Furthermore, a dedicated prediction head for tiny object detection is incorporated, and the original detection head is replaced by a transformer prediction head. To address adverse gradients stemming from low-quality instances in the target detection training dataset, the paper introduces the Wise-IoU bounding box loss function. YOLO-SE showcases remarkable performance, achieving an average precision at IoU threshold 0.5 (AP50) of 86.5% on the optical remote sensing dataset SIMD. This represents a noteworthy 2.1% improvement over YOLOv8 and YOLO-SE outperforms the state-of-the-art model by 0.91%. In further validation, experiments on the NWPU VHR-10 dataset demonstrated YOLO-SE’s superiority with an accuracy of 94.9%, surpassing that of YOLOv8 by 2.6%. The proposed advancements position YOLO-SE as a compelling solution in the realm of deep learning-based remote sensing image object detection.
G-Rep: Gaussian Representation for Arbitrary-Oriented Object Detection.
Hou, L.; Lu, K.; Yang, X.; Li, Y.; and Xue, J.
Remote Sensing, 15(3): 1-21. 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {G-Rep: Gaussian Representation for Arbitrary-Oriented Object Detection},
type = {article},
year = {2023},
keywords = {Gaussian metrics,Gaussian representation,arbitrary-oriented object detection,convolutional neural networks},
pages = {1-21},
volume = {15},
id = {545989fb-8ae1-39c1-aadd-31d1562b6398},
created = {2024-03-27T10:37:21.871Z},
file_attached = {true},
profile_id = {bfbbf840-4c42-3914-a463-19024f50b30c},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-04-08T06:04:34.526Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
folder_uuids = {df28411a-ed7f-4991-8358-d39685eb4bf0},
private_publication = {false},
abstract = {Typical representations for arbitrary-oriented object detection tasks include the oriented bounding box (OBB), the quadrilateral bounding box (QBB), and the point set (PointSet). Each representation encounters problems that correspond to its characteristics, such as boundary discontinuity, square-like problems, representation ambiguity, and isolated points, which lead to inaccurate detection. Although many effective strategies have been proposed for various representations, there is still no unified solution. Current detection methods based on Gaussian modeling have demonstrated the possibility of resolving this dilemma; however, they remain limited to OBB. To go further, in this paper, we propose a unified Gaussian representation called G-Rep to construct Gaussian distributions for OBB, QBB, and PointSet, which achieves a unified solution to various representations and problems. Specifically, PointSet- or QBB-based object representations are converted into Gaussian distributions and their parameters are optimized using the maximum likelihood estimation algorithm. Then, three optional Gaussian metrics are explored to optimize the regression loss of the detector because of their excellent parameter optimization mechanisms. Furthermore, we also use Gaussian metrics for sampling to align label assignment and regression loss. Experimental results obtained on several publicly available datasets, such as DOTA, HRSC2016, UCAS-AOD, and ICDAR2015, show the excellent performance of the proposed method for arbitrary-oriented object detection.},
bibtype = {article},
author = {Hou, Liping and Lu, Ke and Yang, Xue and Li, Yuqiu and Xue, Jian},
doi = {10.3390/rs15030757},
journal = {Remote Sensing},
number = {3}
}
Typical representations for arbitrary-oriented object detection tasks include the oriented bounding box (OBB), the quadrilateral bounding box (QBB), and the point set (PointSet). Each representation encounters problems that correspond to its characteristics, such as boundary discontinuity, square-like problems, representation ambiguity, and isolated points, which lead to inaccurate detection. Although many effective strategies have been proposed for various representations, there is still no unified solution. Current detection methods based on Gaussian modeling have demonstrated the possibility of resolving this dilemma; however, they remain limited to OBB. To go further, in this paper, we propose a unified Gaussian representation called G-Rep to construct Gaussian distributions for OBB, QBB, and PointSet, which achieves a unified solution to various representations and problems. Specifically, PointSet- or QBB-based object representations are converted into Gaussian distributions and their parameters are optimized using the maximum likelihood estimation algorithm. Then, three optional Gaussian metrics are explored to optimize the regression loss of the detector because of their excellent parameter optimization mechanisms. Furthermore, we also use Gaussian metrics for sampling to align label assignment and regression loss. Experimental results obtained on several publicly available datasets, such as DOTA, HRSC2016, UCAS-AOD, and ICDAR2015, show the excellent performance of the proposed method for arbitrary-oriented object detection.
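The box-to-Gaussian construction underlying G-Rep is standard in this line of work: the box centre becomes the mean and the rotated half-extents become the covariance. A NumPy sketch under that common convention (the maximum-likelihood fitting for PointSet and QBB inputs described in the abstract is not reproduced here):

import numpy as np

def obb_to_gaussian(cx, cy, w, h, theta):
    # Oriented box (cx, cy, w, h, theta[rad]) -> 2D Gaussian (mu, Sigma),
    # with Sigma = R diag((w/2)^2, (h/2)^2) R^T.
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    L = np.diag([(w / 2.0) ** 2, (h / 2.0) ** 2])
    return np.array([cx, cy]), R @ L @ R.T

mu, sigma = obb_to_gaussian(10.0, 5.0, 8.0, 2.0, np.pi / 6)
print(mu)     # [10.  5.]
print(sigma)  # symmetric 2x2 covariance encoding size and orientation

Because the angle is absorbed into a smooth covariance matrix, distances between Gaussians sidestep the boundary-discontinuity and square-like problems the abstract lists.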
H2RBox-v2: Incorporating Symmetry for Boosting Horizontal Box Supervised Oriented Object Detection.
Yu, Y.; Yang, X.; Li, Q.; Zhou, Y.; Zhang, G.; Da, F.; and Yan, J.
Advances in Neural Information Processing Systems, 36: 59137-59150. 12 2023.
Paper
link
bibtex
abstract
@article{
title = {H2RBox-v2: Incorporating Symmetry for Boosting Horizontal Box Supervised Oriented Object Detection},
type = {article},
year = {2023},
pages = {59137-59150},
volume = {36},
month = {12},
day = {15},
id = {3615db2d-cfc8-352d-91c2-a34c9a555d70},
created = {2024-06-26T06:38:18.354Z},
accessed = {2024-06-26},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-09-02T07:05:11.216Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {df28411a-ed7f-4991-8358-d39685eb4bf0},
private_publication = {false},
abstract = {With the rapidly increasing demand for oriented object detection, e.g. in autonomous driving and remote sensing, the recently proposed paradigm involving weakly-supervised detector H2RBox for learning rotated box (RBox) from the more readily-available horizontal box (HBox) has shown promise. This paper presents H2RBox-v2, to further bridge the gap between HBox-supervised and RBox-supervised oriented object detection. Specifically, we propose to leverage the reflection symmetry via flip and rotate consistencies, using a weakly-supervised network branch similar to H2RBox, together with a novel self-supervised branch that learns orientations from the symmetry inherent in visual objects. The detector is further stabilized and enhanced by practical techniques to cope with peripheral issues e.g. angular periodicity. To our best knowledge, H2RBox-v2 is the first symmetry-aware self-supervised paradigm for oriented object detection. In particular, our method shows less susceptibility to low-quality annotation and insufficient training data compared to H2RBox. Specifically, H2RBox-v2 achieves very close performance to a rotation annotation trained counterpart-Rotated FCOS: 1) DOTA-v1.0/1.5/2.0: 72.31%/64.76%/50.33% vs. 72.44%/64.53%/51.77%; 2) HRSC: 89.66% vs. 88.99%; 3) FAIR1M: 42.27% vs. 41.25%.},
bibtype = {article},
author = {Yu, Yi and Yang, Xue and Li, Qingyun and Zhou, Yue and Zhang, Gefan and Da, Feipeng and Yan, Junchi},
journal = {Advances in Neural Information Processing Systems}
}
With the rapidly increasing demand for oriented object detection, e.g. in autonomous driving and remote sensing, the recently proposed paradigm involving weakly-supervised detector H2RBox for learning rotated box (RBox) from the more readily-available horizontal box (HBox) has shown promise. This paper presents H2RBox-v2, to further bridge the gap between HBox-supervised and RBox-supervised oriented object detection. Specifically, we propose to leverage the reflection symmetry via flip and rotate consistencies, using a weakly-supervised network branch similar to H2RBox, together with a novel self-supervised branch that learns orientations from the symmetry inherent in visual objects. The detector is further stabilized and enhanced by practical techniques to cope with peripheral issues e.g. angular periodicity. To our best knowledge, H2RBox-v2 is the first symmetry-aware self-supervised paradigm for oriented object detection. In particular, our method shows less susceptibility to low-quality annotation and insufficient training data compared to H2RBox. Specifically, H2RBox-v2 achieves very close performance to a rotation annotation trained counterpart-Rotated FCOS: 1) DOTA-v1.0/1.5/2.0: 72.31%/64.76%/50.33% vs. 72.44%/64.53%/51.77%; 2) HRSC: 89.66% vs. 88.99%; 3) FAIR1M: 42.27% vs. 41.25%.
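The flip and rotate consistencies the abstract leverages can be stated compactly: under a horizontal flip a symmetric object's predicted angle should negate, and under a rotation by r it should shift by r, with angles compared modulo pi to respect the periodicity the paper has to handle. A toy sketch under those assumptions (the actual loss form in H2RBox-v2 may differ):

import math

def wrap(a):
    # Wrap an angle into [-pi/2, pi/2) to respect pi-periodicity.
    return (a + math.pi / 2) % math.pi - math.pi / 2

def consistency_losses(theta, theta_flip, theta_rot, r):
    flip_loss = abs(wrap(theta_flip + theta))      # expect theta_flip = -theta
    rot_loss = abs(wrap(theta_rot - (theta + r)))  # expect theta_rot = theta + r
    return flip_loss, rot_loss

print(consistency_losses(0.3, -0.3, 0.8, 0.5))  # (0.0, 0.0) when consistent

Penalizing these residuals lets the self-supervised branch learn orientation without any rotated-box annotation.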
AgriDet: Plant Leaf Disease severity classification using agriculture detection framework.
Pal, A.; and Kumar, V.
Engineering Applications of Artificial Intelligence, 119: 105754. 3 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {AgriDet: Plant Leaf Disease severity classification using agriculture detection framework},
type = {article},
year = {2023},
keywords = {Classification,Deep learning,Occlusion,Plant disease detection,Segmentation,Severity classes},
pages = {105754},
volume = {119},
month = {3},
publisher = {Pergamon},
day = {1},
id = {c881c0e7-96e2-38c6-80b5-4b0ac2fdf8ca},
created = {2024-09-20T11:52:45.872Z},
accessed = {2024-09-20},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-09-20T11:52:49.023Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {In the field of modern agriculture, plant disease detection plays a vital role in improving crop productivity. To increase the yield on a large scale, it is necessary to predict the onset of the disease and give advice to farmers. Previous methods for detecting plant diseases rely on manual feature extraction, which is more expensive. Therefore, image-based techniques are gaining interest in the research area of plant disease detection. However, existing methods have several problems due to the improper nature of the captured image, including improper background conditions that lead to occlusion, illumination, orientation, and size. Also, cost complexity, misclassifications, and overfitting problems occur in several real-time applications. To solve these issues, we proposed an Agriculture Detection (AgriDet) framework that incorporates conventional Inception-Visual Geometry Group Network (INC-VGGN) and Kohonen-based deep learning networks to detect plant diseases and classify the severity level of diseased plants. In this framework, image pre-processing is done to remove all the constraints in the captured image. Then, the occlusion problem is tackled by the proposed multi-variate grabcut algorithm for effective segmentation. Furthermore, the framework performs accurate disease detection and classification by utilizing an improved base network, namely a pre-trained conventionally based INC-VGGN model. Here, the pre-trained INC-VGGN model is a deep convolutional neural network for prediction of plant diseases that was previously trained for the distinctive dataset. The pre-trained weights and the features learned in this base network are transferred into the newly developed neural network to perform the specific task of plant disease detection for our dataset. In order to overcome the overfitting problem, a dropout layer is introduced, and the deep learning of features is performed using the Kohonen learning layer. After percentage computation, the improved base network classifies the severity classes in the training sets. Finally, the performance of the framework is computed for different performance metrics and achieves better accuracy than previous models. Also, the performance of the statistical analysis is validated to prove the results in terms of accuracy, specificity, and sensitivity.},
bibtype = {article},
author = {Pal, Arunangshu and Kumar, Vinay},
doi = {10.1016/J.ENGAPPAI.2022.105754},
journal = {Engineering Applications of Artificial Intelligence}
}
In the field of modern agriculture, plant disease detection plays a vital role in improving crop productivity. To increase the yield on a large scale, it is necessary to predict the onset of the disease and give advice to farmers. Previous methods for detecting plant diseases rely on manual feature extraction, which is more expensive. Therefore, image-based techniques are gaining interest in the research area of plant disease detection. However, existing methods have several problems due to the improper nature of the captured image, including improper background conditions that lead to occlusion, illumination, orientation, and size. Also, cost complexity, misclassifications, and overfitting problems occur in several real-time applications. To solve these issues, we proposed an Agriculture Detection (AgriDet) framework that incorporates conventional Inception-Visual Geometry Group Network (INC-VGGN) and Kohonen-based deep learning networks to detect plant diseases and classify the severity level of diseased plants. In this framework, image pre-processing is done to remove all the constraints in the captured image. Then, the occlusion problem is tackled by the proposed multi-variate grabcut algorithm for effective segmentation. Furthermore, the framework performs accurate disease detection and classification by utilizing an improved base network, namely a pre-trained conventionally based INC-VGGN model. Here, the pre-trained INC-VGGN model is a deep convolutional neural network for prediction of plant diseases that was previously trained for the distinctive dataset. The pre-trained weights and the features learned in this base network are transferred into the newly developed neural network to perform the specific task of plant disease detection for our dataset. In order to overcome the overfitting problem, a dropout layer is introduced, and the deep learning of features is performed using the Kohonen learning layer. After percentage computation, the improved base network classifies the severity classes in the training sets. Finally, the performance of the framework is computed for different performance metrics and achieves better accuracy than previous models. Also, the performance of the statistical analysis is validated to prove the results in terms of accuracy, specificity, and sensitivity.
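The transfer-learning pattern the abstract describes (a pre-trained convolutional backbone, a dropout layer against overfitting, and a new head for the severity classes) can be sketched as follows; torchvision's VGG16 stands in for the paper's INC-VGGN backbone, and the class count is an assumption:

import torch.nn as nn
from torchvision import models

num_classes = 4  # assumed number of severity classes

# Load ImageNet-pre-trained weights and freeze the convolutional features.
backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in backbone.features.parameters():
    p.requires_grad = False

# Replace the final classifier layer with dropout plus a task-specific head.
backbone.classifier[-1] = nn.Sequential(
    nn.Dropout(p=0.5),
    nn.Linear(4096, num_classes),
)

Only the new head (and any layers left unfrozen) is then trained on the disease dataset, mirroring the weight-transfer step the abstract outlines.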
Plant Disease Severity Detection and Fertilizer Recommendation using Deep Learning Techniques.
Sudhir, B.; Teja, D. C.; Sai, K.; Sridhar, P.; and Daniya, T.
Proceedings of the 2nd International Conference on Applied Artificial Intelligence and Computing, ICAAIC 2023, 215-221. 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Plant Disease Severity Detection and Fertilizer Recommendation using Deep Learning Techniques},
type = {article},
year = {2023},
keywords = {ANN,CNN,Deep Learning,KNN,Machine Learning,Prediction,SVM,Severity},
pages = {215-221},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
id = {a8f33efb-bc2f-36f7-84e5-6cc3b8e67971},
created = {2024-09-20T11:59:05.739Z},
accessed = {2024-09-20},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-09-23T09:20:38.430Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {b2e80568-e430-43c3-9f7c-33cc6051f581},
private_publication = {false},
abstract = {In India, the agriculture industry plays a significant role in the economy and employs a sizable section of the workforce. The demand for food is increasing, and analysis of agricultural data can help improve practices and increase productivity by providing insights into crop diseases and weather conditions. Plant diseases can greatly impact agricultural productivity, and early detection is crucial to avoiding losses. The proposed project makes use of ML techniques such as KNN and SVM, and DL techniques such as CNN and ANN, to detect plant diseases efficiently and effectively. These techniques can be trained on large datasets to learn patterns and make predictions, making them well suited for this task. The deep learning system automatically scans leaf images and detects disease based on visual symptoms. It also calculates the severity level of the disease and suggests a suitable amount of fertilizer for the crop according to that severity level. A user interface was created so that farmers and agricultural workers can simply capture a leaf image and receive suggestions, helping them increase crop production and maintain crop quality.},
bibtype = {article},
author = {Sudhir, Buddepu and Teja, Devalaraju Charan and Sai, Kurra and Sridhar, Peddinti and Daniya, T.},
doi = {10.1109/ICAAIC56838.2023.10140467},
journal = {Proceedings of the 2nd International Conference on Applied Artificial Intelligence and Computing, ICAAIC 2023}
}
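The severity step described above reduces to computing the lesion share of the leaf area and mapping it to a dose band. A minimal sketch with purely hypothetical thresholds and dose labels (the paper's actual rules and dosages are not given in the abstract):

    # Severity as percentage of leaf area covered by lesions, then a
    # threshold-based fertilizer suggestion. All bands below are invented.
    def severity_percent(lesion_pixels: int, leaf_pixels: int) -> float:
        return 100.0 * lesion_pixels / max(leaf_pixels, 1)

    def recommend_dose(severity: float) -> str:
        # Hypothetical bands; a real system would calibrate these against
        # agronomic guidance for the specific crop and disease.
        if severity < 10:
            return "low dose"
        if severity < 30:
            return "medium dose"
        return "high dose"

    print(recommend_dose(severity_percent(lesion_pixels=1200, leaf_pixels=9000)))
    # -> "medium dose" (severity ~13.3%)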
Apparatus and method for image-guided agriculture.
. 2 2023.
Paper
link
bibtex
abstract
@article{
title = {Apparatus and method for image-guided agriculture},
type = {article},
year = {2023},
month = {2},
day = {13},
id = {8925d083-898f-3990-adb8-9393dde541d4},
created = {2024-09-24T11:53:56.226Z},
accessed = {2024-09-24},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-10-06T11:39:58.820Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {d1530310-ca4f-487a-aa15-294fe1ffea82},
private_publication = {false},
abstract = {A method for image-guided agriculture includes receiving images; processing the images to generate reflectance maps respectively corresponding to spectral bands; synthesizing the reflectance maps to generate a multispectral image including vegetation index information of a target area; receiving crop information in regions of the target area; and assessing crop conditions for the regions based on the identified crop information and the vegetation index information.},
bibtype = {article},
author = {}
}
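The claim above chains per-band reflectance maps into a vegetation-index layer for assessing crop condition. NDVI is the standard example of such an index, computed per pixel from the near-infrared and red bands; the patent does not commit to a specific index, so this sketch is illustrative only:

    # NDVI from two calibrated reflectance maps; values fall in [-1, 1],
    # with higher values indicating denser, healthier vegetation.
    import numpy as np

    def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-6) -> np.ndarray:
        return (nir - red) / (nir + red + eps)

    nir = np.random.rand(64, 64)  # stand-ins for real reflectance maps
    red = np.random.rand(64, 64)
    index_map = ndvi(nir, red)    # per-pixel crop-condition layer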
Behavior-Aware Pedestrian Trajectory Prediction in Ego-Centric Camera Views with Spatio-Temporal Ego-Motion Estimation †.
Czech, P.; Braun, M.; Kreßel, U.; and Yang, B.
Machine Learning and Knowledge Extraction, 5(3): 957-978. 9 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Behavior-Aware Pedestrian Trajectory Prediction in Ego-Centric Camera Views with Spatio-Temporal Ego-Motion Estimation †},
type = {article},
year = {2023},
keywords = {autonomous driving,behavioral features,ego-motion compensation,pedestrian trajectory prediction},
pages = {957-978},
volume = {5},
month = {9},
publisher = {Multidisciplinary Digital Publishing Institute (MDPI)},
day = {1},
id = {061e221e-ce16-3509-b461-490878c857c2},
created = {2024-11-06T13:04:01.465Z},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-11-07T10:52:37.948Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {4cda9297-f98e-4246-88d6-ffeeade205c3},
private_publication = {false},
abstract = {With the ongoing development of automated driving systems, the crucial task of predicting pedestrian behavior is attracting growing attention. The prediction of future pedestrian trajectories from the ego-vehicle camera perspective is particularly challenging due to the dynamically changing scene. Therefore, we present Behavior-Aware Pedestrian Trajectory Prediction (BA-PTP), a novel approach to pedestrian trajectory prediction for ego-centric camera views. It incorporates behavioral features extracted from real-world traffic scene observations such as the body and head orientation of pedestrians, as well as their pose, in addition to positional information from body and head bounding boxes. For each input modality, we employed independent encoding streams that are combined through a modality attention mechanism. To account for the ego-motion of the camera in an ego-centric view, we introduced the Spatio-Temporal Ego-Motion Module (STEMM), a novel approach to ego-motion prediction. Compared to related work, it utilizes spatial goal points of the ego-vehicle that are sampled from its intended route. We experimentally validated the effectiveness of our approach using two datasets for pedestrian behavior prediction in urban traffic scenes. Based on ablation studies, we show the advantages of incorporating different behavioral features for pedestrian trajectory prediction in the image plane. Moreover, we demonstrate the benefit of integrating STEMM into our pedestrian trajectory prediction method, BA-PTP. BA-PTP achieves state-of-the-art performance on the PIE dataset, outperforming prior work by 7% in MSE-1.5 s and CMSE as well as 9% in CFMSE.},
bibtype = {article},
author = {Czech, Phillip and Braun, Markus and Kreßel, Ulrich and Yang, Bin},
doi = {10.3390/make5030050},
journal = {Machine Learning and Knowledge Extraction},
number = {3}
}
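The modality attention mechanism mentioned above can be sketched generically: each input modality (e.g., bounding boxes, head orientation, pose) is encoded independently, and learned softmax weights decide how much each encoding contributes to the fused representation. A minimal PyTorch sketch under that assumption (BA-PTP's actual encoders, dimensions, and training setup are not reproduced here):

    # Attention over modality encodings: score each, softmax, weighted sum.
    import torch
    import torch.nn as nn

    class ModalityAttention(nn.Module):
        def __init__(self, dim: int):
            super().__init__()
            self.score = nn.Linear(dim, 1)  # one scalar score per modality

        def forward(self, encodings: torch.Tensor) -> torch.Tensor:
            # encodings: (batch, n_modalities, dim)
            weights = torch.softmax(self.score(encodings), dim=1)  # (B, M, 1)
            return (weights * encodings).sum(dim=1)                # (B, dim)

    fuse = ModalityAttention(dim=128)
    fused = fuse(torch.randn(8, 4, 128))  # e.g. 4 modality streams, batch of 8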
Learning power Gaussian modeling loss for dense rotated object detection in remote sensing images.
Li, Y.; Wang, H.; Fang, Y.; Wang, S.; Li, Z.; and Jiang, B.
Chinese Journal of Aeronautics, 36(10): 353-365. 10 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Learning power Gaussian modeling loss for dense rotated object detection in remote sensing images},
type = {article},
year = {2023},
keywords = {Convolutional neural networks,Distribution metric,Losses,Remote sensing,Rotated object detection},
pages = {353-365},
volume = {36},
month = {10},
publisher = {Elsevier},
day = {1},
id = {6eb5e6c0-09cd-3c47-98ae-f865e97f7f5d},
created = {2024-11-27T13:16:40.172Z},
accessed = {2024-11-27},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-12-13T08:38:16.502Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {df28411a-ed7f-4991-8358-d39685eb4bf0},
private_publication = {false},
abstract = {Object detection in Remote Sensing (RS) has achieved tremendous advances in recent years, but rotated object detection remains challenging due to cluttered backgrounds, dense object arrangements, and the wide range of size variations among objects. To tackle this problem, a Dense Context Feature Pyramid Network (DCFPN) and a power α-Gaussian loss are designed for rotated object detection in this paper. The proposed DCFPN can extract multi-scale information densely and accurately by leveraging a dense multi-path dilation layer to cover all sizes of objects in remote sensing scenarios. For more accurate detection while avoiding bottlenecks such as boundary discontinuity in rotated bounding box regression, the α-Gaussian loss, a unified power generalization of existing Gaussian modeling losses, is proposed. Furthermore, the properties of the α-Gaussian loss are analyzed comprehensively for a wider range of applications. Experimental results on four datasets (UCAS-AOD, HRSC2016, DIOR-R, and DOTA) show the effectiveness of the proposed method with different detectors and its superiority over existing methods in both feature extraction and bounding box regression.},
bibtype = {article},
author = {Li, Yang and Wang, Haining and Fang, Yuqiang and Wang, Shengjin and Li, Zhi and Jiang, Bitao},
doi = {10.1016/J.CJA.2023.04.022},
journal = {Chinese Journal of Aeronautics},
number = {10}
}
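Gaussian modeling losses of the kind generalized above share one building block: a rotated box (cx, cy, w, h, θ) is mapped to a 2-D Gaussian whose mean is the box center and whose covariance encodes extent and orientation, so a distance between distributions can replace coordinate-wise box regression and its boundary discontinuities. A sketch of that conversion only (the paper's α-power generalization itself is not reproduced here):

    # Rotated box -> 2-D Gaussian N(mu, Sigma), the common basis of
    # Gaussian modeling losses for rotated object detection.
    import numpy as np

    def box_to_gaussian(cx, cy, w, h, theta):
        """theta in radians; returns (mu, Sigma)."""
        mu = np.array([cx, cy])
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        Lam = np.diag([w ** 2 / 4.0, h ** 2 / 4.0])  # squared half-extents
        return mu, R @ Lam @ R.T

    mu, Sigma = box_to_gaussian(10.0, 5.0, 8.0, 2.0, np.pi / 6)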
Evaluating the Economic and Sustainability Impacts of Drones in Viticulture using BPMN-based Simulation.
Schieck, M.; Roemer, I.; Oertel, A.; and Franczyk, B.
Procedia Computer Science, 225: 892-901. 1 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Evaluating the Economic and Sustainability Impacts of Drones in Viticulture using BPMN-based Simulation},
type = {article},
year = {2023},
keywords = {Business process management,digitalisation,plant protection,simulation,viticulture},
pages = {892-901},
volume = {225},
month = {1},
publisher = {Elsevier},
day = {1},
id = {c23d0251-7433-3afa-9fcc-16269d4785c9},
created = {2024-12-06T10:29:24.434Z},
accessed = {2024-12-06},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-12-06T10:29:27.091Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {1619600c-2adf-4216-9e4c-d260d584753e},
private_publication = {false},
abstract = {This paper presents an investigation into the economic impact of drones in viticulture, an area that has not previously been researched. The authors calculate the economic impacts of drones in viticulture and use these results in an overall sustainability impact assessment of the technology. The study aims to explore a method for investigating the impact of technological changes on business processes in viticulture. This involves selecting viticultural business processes, representing them using Business Process Model and Notation (BPMN), and simulating them. Backpack sprayers and trailed sprayers were considered conventional application methods, while the application of crop protection products by drone was considered the digitalized variant. Literature research and guideline-based expert interviews with vinegrowers provided the information basis. The study focuses on plant protection in viticulture, but the results can be applied to other agricultural processes. By integrating all three sustainability indicators, the study provides an evidence-based method for evaluating the impact of drones on viticultural business processes.},
bibtype = {article},
author = {Schieck, Martin and Roemer, Ingolf and Oertel, Anika and Franczyk, Bogdan},
doi = {10.1016/J.PROCS.2023.10.076},
journal = {Procedia Computer Science}
}
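A BPMN-based simulation of the kind described above ultimately walks each process variant through its activities and accumulates time and cost per hectare, which is what makes the application methods comparable. A toy illustration with invented placeholder figures, not the paper's data:

    # Each method is a list of (activity, hours/ha, EUR/ha); the "simulation"
    # here is just sequential accumulation. All numbers are placeholders.
    methods = {
        "backpack sprayer": [("mixing", 0.3, 5.0), ("spraying", 4.0, 60.0)],
        "trailed sprayer":  [("mixing", 0.3, 5.0), ("spraying", 1.0, 35.0)],
        "drone":            [("mission planning", 0.2, 4.0),
                             ("spraying", 0.5, 35.0),
                             ("battery swaps", 0.3, 6.0)],
    }
    for name, activities in methods.items():
        hours = sum(t for _, t, _ in activities)
        euros = sum(c for _, _, c in activities)
        print(f"{name}: {hours:.1f} h/ha, {euros:.0f} EUR/ha")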
Cost Analysis of Using UAV Sprayers for Olive Fruit Fly Control.
Cavalaris, C.; Tagarakis, A. C.; Kateris, D.; and Bochtis, D.
AgriEngineering, 5(4): 1925-1942. 12 2023.
Paper
doi
link
bibtex
abstract
@article{
title = {Cost Analysis of Using UAV Sprayers for Olive Fruit Fly Control},
type = {article},
year = {2023},
keywords = {UAV sprayers,economic feasibility,olive fruit fly},
pages = {1925-1942},
volume = {5},
month = {12},
publisher = {Multidisciplinary Digital Publishing Institute (MDPI)},
day = {1},
id = {6d74b4eb-7992-3c9d-b7b5-be10f91b43a0},
created = {2024-12-06T10:30:17.395Z},
file_attached = {true},
profile_id = {f1f70cad-e32d-3de2-a3c0-be1736cb88be},
group_id = {5ec9cc91-a5d6-3de5-82f3-3ef3d98a89c1},
last_modified = {2024-12-06T10:30:18.316Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {1619600c-2adf-4216-9e4c-d260d584753e},
private_publication = {false},
abstract = {Unmanned Aerial Vehicles (UAVs) are an up-and-coming technology with wide applicability and great potential for use in agricultural spraying applications. However, the cost-effectiveness of this application is still rather uncertain. The present study utilized actual data from field applications to analyze the critical components and parameters in the potential case of using UAV sprayers for the control of olive fruit flies in order to assess the operational costs. The results are compared with the costs of two traditional spraying methods: manual spraying by workers using backpack sprayers and manual spraying assisted by a tractor. The case of the olive fruit fly was selected because it involves costly, time-consuming, and laborious manual spraying. Furthermore, the bait character of spraying in these applications does not require full canopy coverage, making it ideal for UAV applications. A parameterized computational model was developed to assess the costs of labor, capital spending, repair and maintenance, energy, licences, fees and taxes, and storage for each of the three methods. In addition, the cost of surveillance was also accounted for with the UAV method. A sensitivity analysis was then performed to examine the impact of the most crucial parameters. The results showed that the cost of spraying with a UAV was 1.45 to 2 times higher than with the traditional methods, mainly due to the high capital spending resulting from a short economic life. There are, however, opportunities to improve the economic performance and make it comparable to the traditional methods, by using a smaller UAV with longer-lasting batteries and by expanding its annual use beyond the needs of olive fruit fly control.},
bibtype = {article},
author = {Cavalaris, Chris and Tagarakis, Aristotelis C. and Kateris, Dimitrios and Bochtis, Dionysis},
doi = {10.3390/agriengineering5040118},
journal = {AgriEngineering},
number = {4}
}
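The parameterized cost model described above annualizes capital spending and adds the operating items. One common way to annualize a purchase price is the capital recovery factor; whether the paper uses exactly this formula is not stated, and every figure below is an illustrative assumption:

    # Annualized UAV spraying cost = capital recovery + operating items.
    def capital_recovery_factor(rate: float, years: int) -> float:
        """Annuity factor spreading a purchase price into equal annual payments."""
        q = (1 + rate) ** years
        return rate * q / (q - 1)

    purchase = 15000.0  # hypothetical UAV sprayer price, EUR
    crf = capital_recovery_factor(rate=0.05, years=5)  # short economic life
    annual = (purchase * crf  # annualized capital spending
              + 600.0         # repair & maintenance (assumed)
              + 250.0         # energy / batteries (assumed)
              + 300.0)        # licences, fees, taxes, storage (assumed)
    print(f"annual cost: {annual:.0f} EUR")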