NARISETTI, N., Michael HENKE, C. SEILER, A. JUNKER, J. OSTERMANN, T. ALTMANN and E. GLADILIN. Fully-automated root image analysis (faRIA). Nature Scientific Reports. London: NATURE RESEARCH, 2021, vol. 11, No 1, p. 16047-16061. ISSN 2045-2322. Available from: https://dx.doi.org/10.1038/s41598-021-95480-y.
Basic information
Original name Fully-automated root image analysis (faRIA)
Authors NARISETTI, N., Michael HENKE (276 Germany, guarantor, belonging to the institution), C. SEILER, A. JUNKER, J. OSTERMANN, T. ALTMANN and E. GLADILIN.
Edition Nature Scientific Reports, London, NATURE RESEARCH, 2021, ISSN 2045-2322.
Other information
Original language English
Type of outcome Article in a journal
Field of Study 10609 Biochemical research methods
Country of publisher Germany
Confidentiality degree is not subject to a state or trade secret
Impact factor Impact factor: 4.996
RIV identification code RIV/00216224:14740/21:00124293
Organization unit Central European Institute of Technology
Doi http://dx.doi.org/10.1038/s41598-021-95480-y
UT WoS 000684558900004
Keywords in English SEGMENTATION; ARCHITECTURE; GROWTH; RHIZOTOOL
Tags rivok
Tags International impact, Reviewed
Changed by Mgr. Pavla Foltynová, Ph.D., učo 106624. Changed: 26/2/2022 11:04.
Abstract
High-throughput root phenotyping in soil has become an indispensable quantitative tool for assessing the effects of climatic factors and molecular perturbation on plant root morphology, development and function. To efficiently analyse large amounts of structurally complex soil-root images, advanced methods for automated image segmentation are required. Due to the often unavoidable overlap between the intensities of foreground and background regions, simple thresholding methods are generally not suitable for the segmentation of root regions. Higher-level cognitive models such as convolutional neural networks (CNN) provide capabilities for segmenting roots from heterogeneous and noisy background structures; however, they require a representative set of manually segmented (ground truth) images. Here, we present a GUI-based tool for fully automated quantitative analysis of root images using a pre-trained CNN model, which relies on an extension of the U-Net architecture. The developed CNN framework was designed to efficiently segment root structures of different size, shape and optical contrast using low-budget hardware systems. The CNN model was trained on a set of 6465 masks derived from 182 manually segmented near-infrared (NIR) maize root images. Our experimental results show that the proposed approach achieves a Dice coefficient of 0.87 and outperforms existing tools (e.g., SegRoot, with a Dice coefficient of 0.67), and that it is applicable not only to NIR images but also to other imaging modalities and plant species, such as barley and arabidopsis soil-root images from LED-rhizotron and UV imaging systems, respectively. In summary, the developed software framework enables users to efficiently analyse soil-root images in an automated manner (i.e. without manual interaction with data and/or parameter tuning), providing quantitative plant scientists with a powerful analytical tool.
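Note: the abstract reports segmentation accuracy as a Dice coefficient (0.87 for the proposed tool versus 0.67 for SegRoot). For readers unfamiliar with the metric, the following minimal Python sketch (not part of the faRIA code base; the toy masks are hypothetical) shows how the Dice coefficient is computed for a pair of binary segmentation masks.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary masks: 2*|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement by convention
    return 2.0 * intersection / total

# Toy 4x4 masks for illustration only (hypothetical values).
predicted = np.array([[0, 1, 1, 0],
                      [0, 1, 1, 0],
                      [0, 0, 1, 0],
                      [0, 0, 0, 0]])
ground_truth = np.array([[0, 1, 1, 0],
                         [0, 1, 0, 0],
                         [0, 0, 1, 0],
                         [0, 0, 0, 0]])
print(f"Dice = {dice_coefficient(predicted, ground_truth):.2f}")  # -> 0.89
```

A Dice value of 1.0 indicates a perfect overlap between the predicted and ground-truth root masks, while 0.0 indicates no overlap at all.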
Links
EF16_026/0008446, research and development project. Name: Integrace signálu a epigenetické reprogramování pro produktivitu rostlin (Signal integration and epigenetic reprogramming for plant productivity)