The Radiolarian Database for Automated Recognition:
A free-access, participative radiolarian image database for automated recognition using convolutional neural networks.
Automated radiolarian recognition is part of a new automated workflow for radiolarian image acquisition, stacking, processing, segmentation, and identification, described below and developed at CEREGE in partnership with IODP-France and CNRS.
“These classes were trained to be recognised with an overall accuracy of about 90 %. This whole procedure, including the image acquisition, stacking, processing, segmentation and recognition, is entirely automated, piloted by a LabVIEW interface, and lasts roughly 1 hour per sample.”
Radiolarian slides are prepared using a new 3D-printed decanter system developed at CEREGE, composed of eight 3.5 ml tanks.
A 12×12 mm cover slide is placed in each tank, and a solution containing radiolarians in suspension is poured on top.
After a few minutes, the radiolarians have settled on the cover slide, and the water can be vacuumed out through the hole on the side of each tank.
Each cover slide can then be mounted with optical glue on a glass slide (8 samples per slide) and is ready for image acquisition.
3D files can be freely downloaded here.
Automated image acquisition
Radiolarian slides are automatically imaged at CEREGE using a transmitted light microscope piloted with a LabVIEW interface.
It automatically scans 18×18 fields of view (324 FOVs) on each of the 8 samples of the slide.
Each FOV is scanned 15 times at different focal depths (every 10 µm) to cover the full thickness of the radiolarian specimens.
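The acquisition itself is driven by the LabVIEW interface; purely as an illustration of the scan pattern described above, the stage visits an 18×18 grid of fields of view and 15 focal planes spaced 10 µm apart. A minimal Python sketch, where the field-of-view pitch (`fov_um`) is a hypothetical value, not the actual spacing used at CEREGE:

```python
def scan_positions(n_x=18, n_y=18, n_z=15, fov_um=500.0, z_step_um=10.0):
    """Yield (x, y, z) stage coordinates in micrometres for one sample.

    fov_um is a hypothetical field-of-view pitch; the real value depends
    on the objective and camera used.
    """
    for iy in range(n_y):
        for ix in range(n_x):
            for iz in range(n_z):  # 15 focal depths, 10 µm apart
                yield (ix * fov_um, iy * fov_um, iz * z_step_um)

positions = list(scan_positions())
assert len(positions) == 18 * 18 * 15  # 4860 raw images per sample
```

This makes the data volume explicit: each of the 8 samples on a slide yields 4 860 raw images before stacking.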
For each FOV, the batch of 15 images is automatically stacked using Helicon Focus and the depth map approach, piloted with the LabVIEW interface.
This step yields a single, fully in-focus image per FOV.
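In the actual pipeline the stacking is done by Helicon Focus; as a hedged illustration of the depth-map idea, here is a minimal numpy sketch that, for every pixel, keeps the value from the focal slice with the highest local sharpness (squared gradient magnitude as a crude focus measure):

```python
import numpy as np

def depth_map_stack(stack):
    """Naive depth-map focus stacking (illustrative only).

    stack: array of shape (n_slices, H, W), one grayscale image per
    focal depth. For each pixel, pick the slice where a simple focus
    measure (squared gradient magnitude) is largest.
    """
    stack = np.asarray(stack, dtype=float)
    gy, gx = np.gradient(stack, axis=(1, 2))   # gradients within each slice
    sharpness = gx ** 2 + gy ** 2
    depth = np.argmax(sharpness, axis=0)       # per-pixel best-focus slice
    rows, cols = np.indices(depth.shape)
    return stack[depth, rows, cols]            # composite all-in-focus image
```

Real depth-map stacking also smooths the depth map to avoid pixel-level noise; this sketch omits that step.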
Automated image processing and segmentation
A dedicated image-processing plugin enhances contrast, inverts colours, subtracts the background, creates a Region of Interest (ROI) for each specimen, separates specimens in contact with each other, and saves every specimen as a single individual image, for each sample.
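The ROI-extraction step can be sketched in Python with scipy's connected-component labelling; this is a simplified stand-in for the plugin, not its actual implementation. It assumes an already inverted, background-subtracted image (bright specimens on a dark background), and it omits the splitting of touching specimens, which the real plugin handles:

```python
import numpy as np
from scipy import ndimage

def extract_specimens(image, threshold=0.5, min_pixels=20):
    """Crop one sub-image per specimen from a processed FOV image.

    threshold and min_pixels are hypothetical values; real settings
    depend on the imaging conditions.
    """
    mask = image > threshold
    labels, n = ndimage.label(mask)            # connected components
    crops = []
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        if sl is None:
            continue
        if (labels[sl] == i).sum() >= min_pixels:  # reject tiny debris
            crops.append(image[sl])                # bounding-box crop
    return crops
```

Each returned crop corresponds to one ROI and would be saved as an individual specimen image.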
Here is an example of segmentation on a few FOVs using the isolated plugin, slow down to see each step.
Individual images are then automatically sent to dedicated software developed at CEREGE (Marchant et al., 2020), which uses a trained convolutional neural network (CNN) to assign each image to a class and return its identification.
This CNN currently recognises 109 classes, of which 101 are Neogene to modern radiolarian classes, with an accuracy of 90 %. The database used to train this CNN comprises 21,746 images distributed in 132 classes; it can be browsed here and downloaded here.
Automated image storage and census counts
Identified individual images are then saved into folders corresponding to their assigned class and sample.
Census data and morphometric measurements are also automatically saved for each sample and core.
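Given the one-folder-per-class layout described above, the census count for a sample reduces to counting images per class folder. A minimal sketch of that bookkeeping step (the folder layout is taken from the text; the CSV format is an assumption for illustration):

```python
import csv
from collections import Counter
from pathlib import Path

def census_counts(sample_dir, out_csv):
    """Count identified images per class folder and write a census CSV.

    Assumes one sub-folder per assigned class inside sample_dir, each
    containing the individual specimen images.
    """
    sample_dir = Path(sample_dir)
    counts = Counter()
    for class_dir in sorted(p for p in sample_dir.iterdir() if p.is_dir()):
        counts[class_dir.name] = sum(1 for f in class_dir.iterdir() if f.is_file())
    with open(out_csv, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["class", "count"])
        for name, n in counts.items():
            writer.writerow([name, n])
    return counts
```

Running this once per sample produces the per-sample census tables that are then aggregated per core.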
Do you wish to help with the building of the AutoRadio Database? Do you have any radiolarian images you are willing to share? Contact us!