FlowerPower makes biodiversity measurable with AI image analysis

Fontys Information and Communication Technology
What is the state of nature in the Netherlands? To answer that question, we look at biodiversity, but mapping it with volunteers and professionals who conduct counts and inventories by hand takes a lot of time and cannot be done at landscape scale. Could we not use drones to collect images and then analyse them with artificial intelligence? It sounds simple, but there is a lot involved.

The FlowerPower project is a multi-year research project at Fontys. Last semester, student Filip Vangelov and his student team did research on image analysis algorithms for the project.

Biodiversity as a yardstick  
Around 85% of the native flora of the Netherlands has disappeared since 1900, mainly due to agriculture and urbanisation. Growing ecological awareness is driving us to try to turn the tide. To this end, we measure biodiversity, explains Gerard Schouten, Professor of AI & Big Data: "Biodiversity is the yardstick for measuring the health of an ecosystem. Simply put, the more species of flora in an area, the stronger and more resilient the ecosystem. In the event of a pest outbreak, for example, you will see that a monoculture such as the one we encounter in agriculture is much more vulnerable and less resilient. Biodiversity also has great intrinsic value and contributes to a healthy living environment, including for people. If you want to do something about it, you have to be able to measure it first, and that is what we are trying to do in the FlowerPower project, using drone images and image analysis."

In the third edition of Eyes on AI, Filip explains exactly what he and his team did in the FlowerPower III project.

Convolutional Neural Networks
In previous iterations of the project, students investigated the use of drones and mapped the process from image to data. Filip's group started with the insights and findings from those earlier student projects. Their challenge was to develop a model that can analyse images with AI efficiently and correctly. For this, you use a convolutional neural network (CNN): in essence, a series of algorithms that can 'understand' and recognise an image. Filip: "The idea of a CNN has existed for a long time. What such a neural network does is translate the data, the image, into different layers in which it learns to recognise an object, like a heat map. There are various models for this, all of which work slightly differently. It is up to us to find out which CNN can most efficiently and accurately recognise and count flowers in photos."
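The core operation of a CNN is convolution: sliding a small kernel over the image and recording how strongly each patch responds, which produces the "heat map" Filip describes. A minimal numpy sketch, where the image, the kernel, and all values are invented for illustration (a real CNN learns its kernel values during training):

```python
import numpy as np

# A toy 6x6 "image" with a bright 2x2 blob, like a flower against grass.
image = np.zeros((6, 6))
image[2:4, 2:4] = 1.0

# A fixed 2x2 averaging kernel (illustrative; real CNNs learn these weights).
kernel = np.ones((2, 2)) / 4.0

def convolve2d(img, k):
    """Slide the kernel over the image and record the response at each position."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

heat_map = convolve2d(image, kernel)
# The response peaks exactly where the blob sits.
peak = np.unravel_index(np.argmax(heat_map), heat_map.shape)
```

Stacking many such learned kernels in successive layers is what lets a CNN go from raw pixels to "there is a flower here".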

Structured approach 
This seems simple, but a lot of work goes into testing a model. To keep the approach structured, the team chose the CRISP-DM model (Cross-Industry Standard Process for Data Mining), which consists of the following steps:

  • Business understanding
  • Data understanding
  • Data preparation
  • Modelling 
  • Evaluation
  • Deployment
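The steps above form a pipeline in which each phase hands its result to the next. A minimal sketch of that flow (all function names and values are illustrative placeholders, not the project's actual code):

```python
# Each CRISP-DM phase enriches a shared context and passes it on.
def business_understanding():
    return {"goal": "count flowers per species"}

def data_understanding(ctx):
    return {**ctx, "images": "drone photos"}

def data_preparation(ctx):
    return {**ctx, "annotated": True}

def modelling(ctx):
    return {**ctx, "model": "CNN"}

def evaluation(ctx):
    return {**ctx, "metrics": "precision/recall"}

def deployment(ctx):
    return {**ctx, "deployed": True}

ctx = business_understanding()
for stage in (data_understanding, data_preparation, modelling, evaluation, deployment):
    ctx = stage(ctx)
```

In practice the process is iterative: an unsatisfying evaluation sends the team back to data preparation or modelling rather than straight on to deployment.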

Data preparation proved quite challenging. We can see at a glance that a photo contains flowers, but an AI does not know whether a cluster of flowers is one flower or several, or whether two yellow flowers are different species. The data had to be prepared manually, Filip explains: "Your data has to be readable, so your model has to learn what a flower is. We received 11 GB of image material, supplied by the research group. A number of photos were manually annotated with so-called bounding boxes. These are hand-drawn frames that tell the model where in the image the flowers are to be found. This is labour-intensive work, so on the advice of our supervisor, we looked into methods of automating it. Applying colour filtering, for example by making all the green invisible, made this a faster process."
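The colour-filtering trick Filip mentions can be sketched in a few lines: treat any pixel whose green channel clearly dominates red and blue as background grass and discard it, leaving only candidate flower pixels for annotation. The threshold of 20 and the tiny test image are illustrative assumptions, not the project's actual values:

```python
import numpy as np

def mask_green(rgb):
    """Return a boolean mask: True = keep pixel (not dominantly green)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    # Illustrative threshold: green counts as "dominant" if it exceeds
    # both other channels by more than 20.
    green_dominant = (g > r + 20) & (g > b + 20)
    return ~green_dominant

# Tiny 1x3 "image": a grass pixel, a yellow flower pixel, a white flower pixel.
pixels = np.array([[[ 40, 160,  50],     # grass: green dominates -> dropped
                    [230, 210,  40],     # yellow flower: kept
                    [250, 250, 250]]],   # white flower: kept
                  dtype=np.uint8)

keep = mask_green(pixels)
```

Such a filter will not find the flowers by itself, but it shrinks the area a human (or model) has to inspect, which is why it sped up the annotation work.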

Training the models 
With a subset of the data ready, it was time to choose the models. Three CNN models were tested. The more complex Faster R-CNN model requires more computing power, and its results were promising according to the students: "This model is particularly good at recognising different types of flowers in a single image. Other models lack this, but it takes a lot of computing power and could not yet be fully exploited." "In a follow-up project, we are going to work this out in more detail with our teacher-researchers, connected to the professorship AI & Big Data," says Gerard Schouten. The students also tested a custom and a pre-trained CNN model. The first was developed by the group itself; the second is an existing model. Supervisor and researcher Nico Kuijpers explains the difference: "The easiest way to explain it is like this: when a child learns to look, it learns to recognise different colours and objects. That is what a custom model does; it still has to 'learn' and make connections in the brain. Now suppose there is another child, who has already learned to look and make connections in the brain, but in a completely different part of the world. That child does not have to learn to recognise and make connections all over again, but has to integrate new knowledge into what it already knows. That is what a pre-trained model does." Again, the results were promising, but the work is far from over. Filip and his group successfully explored the different models and documented their findings, including recommendations for the next group of students to refine.
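The child analogy corresponds to what is commonly called transfer learning: the pre-trained "backbone" is frozen, and only a small head on top is trained on the new task. A toy numpy sketch, in which a fixed random projection stands in for the pre-trained backbone; all names, shapes, and data are illustrative assumptions, not the project's code:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained backbone": fixed weights mapping raw inputs to features.
# In transfer learning these weights are NOT updated.
backbone = rng.normal(size=(16, 8))    # 16 raw inputs -> 8 features

def extract_features(x):
    """Frozen feature extractor: linear map plus ReLU, never updated."""
    return np.maximum(x @ backbone, 0.0)

# Toy data: 32 samples with 16 "pixels" each, and a binary label.
X = rng.normal(size=(32, 16))
y = (X[:, 0] > 0).astype(float)

# Fine-tuning trains ONLY the small head on top of the frozen features
# (here a closed-form least-squares fit instead of gradient descent).
features = extract_features(X)
head, *_ = np.linalg.lstsq(features, y, rcond=None)

preds = (features @ head > 0.5).astype(float)
accuracy = (preds == y).mean()
```

A custom model, by contrast, would have to learn the backbone weights as well, which needs far more annotated data and computing power than a student project typically has.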

Learning in practice 
The project has now been successfully completed. An instructive experience for the group, who gained both knowledge and practical experience. Filip: "The FlowerPower project was very instructive, but there is much more to do. We will pass that on to the next group. I have learned a lot about techniques, but also about cooperation, processes and desk research. All skills that are useful as professionals. I also now know more about flowers than ever before."

The FlowerPower III project was carried out by students Veneta Angelova, Lia Boyadzhieva, Kristina Krasteva and Filip Vangelov under the supervision of teacher-researcher Nico Kuijpers. The project is part of the research line "AI for People and Planet" of the professorship AI & Big Data, and was made possible by co-financing from partner cooperative SPARC.