Convolutional Neural Network to detect deforestation in the Amazon Rainforest

Overview


This project is part of my final work as an Aerospace Engineering student, and it comprises the development of a convolutional neural network (CNN) capable of classifying SAR images of deforestation in the Amazon Rainforest. The database used to train the CNN was created using the imagery available on Copernicus, the European Space Agency (ESA) portal.

Choosing the target area

The target area is located in the municipality of São Félix do Xingu, in the state of Pará, Brazil, and the sensing was performed on July 5th, 2021. This municipality is particularly suitable for this project since it ranks first in cumulative forest degradation up to 2020, according to the National Institute for Space Research (INPE). Around 24% of São Félix's territory (which spans more than 83 thousand square kilometers, an area larger than Austria) has already been deforested.

Collecting the dataset

Synthetic Aperture Radar (SAR) imaging is a remote sensing method that operates beyond the visible light spectrum, using microwaves to form the image. Radiation in this wavelength range is less susceptible to atmospheric interference than radiation in the optical range, which makes it particularly fitting for monitoring the Amazon Rainforest, a considerably humid region prone to cloud formation during a great part of the year. The downside is that a SAR image is less intuitive for the human eye to interpret than an optical image.

In order to remove the aspect of a television tuned to a dead channel, it is necessary to pre-process the collected images. More details on this process will be available in the near future (when my work is published by the library of Universidade de Brasília). For the time being, it suffices to say that the original image was turned into 27 new images like the one that follows:

Each of these new images was sliced into small chunks, resulting in about 1,800 samples; these make up the dataset used to train the neural network developed later in this work.
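
The slicing itself only takes a few lines of NumPy. The sketch below assumes the 27 pre-processed images are stored as PNG files in a `preprocessed/` folder and uses 64x64-pixel tiles; both the folder layout and the tile size are illustrative guesses, not the values actually used in this work.

```python
# Minimal slicing sketch: cut each pre-processed image into non-overlapping
# square tiles. Folder names and the tile size (64 px) are assumptions.
from pathlib import Path

import numpy as np
from PIL import Image

TILE = 64  # assumed tile side, in pixels

def slice_image(path: Path, out_dir: Path) -> int:
    """Save every TILE x TILE chunk of the image at `path` into `out_dir`."""
    img = np.asarray(Image.open(path))
    height, width = img.shape[:2]
    count = 0
    for row in range(0, height - TILE + 1, TILE):
        for col in range(0, width - TILE + 1, TILE):
            chunk = img[row:row + TILE, col:col + TILE]
            Image.fromarray(chunk).save(out_dir / f"{path.stem}_{row}_{col}.png")
            count += 1
    return count

if __name__ == "__main__":
    out_dir = Path("dataset/samples")
    out_dir.mkdir(parents=True, exist_ok=True)
    total = sum(slice_image(p, out_dir) for p in sorted(Path("preprocessed").glob("*.png")))
    print(f"{total} samples generated")
```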

Labelling the samples

As the picture above demonstrates, the false colors resulting from the pre-processing step highlight the contrast between areas where the forest is preserved and those with deforestation hotspots. Using high-resolution optical images of the same region as a complementary guide, every sample was manually labelled as one of the following four categories (a minimal label-encoding sketch follows the list):

  • totally preserved, when there is no trace of deforestation;
  • partially preserved, when there is some trace of deforestation, but the preserved areas prevail;
  • partially deforested, when the deforested area is the prevailing feature;
  • totally deforested, when there is no trace of preserved area.
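
A simple way to store these manual annotations is a CSV file mapping each sample file to its class. The integer codes and the CSV layout below are assumptions made for this sketch, not the project's actual format.

```python
# Hypothetical encoding of the four manual labels plus a small CSV reader.
import csv
from pathlib import Path

LABELS = {
    "totally_preserved": 0,
    "partially_preserved": 1,
    "partially_deforested": 2,
    "totally_deforested": 3,
}

def load_labelled_samples(csv_path: str, root: str = "dataset/samples"):
    """Read 'filename,label' rows and return sample paths with integer targets."""
    paths, targets = [], []
    with open(csv_path, newline="") as fh:
        for filename, label in csv.reader(fh):
            paths.append(Path(root) / filename)
            targets.append(LABELS[label])
    return paths, targets
```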

Later, in the CNN training step, it will become clear that this categorization was not optimal, to say the least.

Developing the convolutional neural network

A CNN is a deep neural network specifically designed for visual pattern recognition. Its architecture is built from two kinds of hidden layers (a minimal stack is sketched in code after the list):

  • a convolutional layer (as the name goes), which passes a small window (the filter) over the input image, computing a convolution operation (dot product) between the filter and each chunk of pixels inside the perception window;
  • a pooling layer, which takes the output of the convolutional layer as input and performs a dimensionality reduction, typically averaging over a window of a given size (2x2, 3x3, depending on the desired reduction).
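
Below is a minimal Keras sketch of how such convolutional and pooling layers are typically stacked. The input shape, number of filters, and layer count are illustrative assumptions; they are not the architecture actually used in this work, which is shown in the picture further down.

```python
# Illustrative conv + pooling stack in Keras; hyperparameters are assumptions,
# not the ones used in this work.
from tensorflow.keras import layers, models

def build_cnn(input_shape=(64, 64, 3), num_classes=4):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Convolutional layer: slides 3x3 filters over the image (dot products).
        layers.Conv2D(16, (3, 3), activation="relu"),
        # Pooling layer: 2x2 dimensionality reduction by averaging.
        layers.AveragePooling2D((2, 2)),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.AveragePooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```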

These operations are well suited to finding patterns in a picture with a good amount of generalization, which helps prevent overfitting. The CNN developed for this work can be seen in the following picture:

Training, testing and results

Using four labels to pre-classify the dataset used to train the CNN turned out to be a bad idea. The CNN architecture is good at finding common patterns in a set of pictures, as long as these patterns are well generalized. Trying to differentiate between 'partially preserved' and 'partially deforested' proved unfruitful, with low accuracy (75%) after few epochs and increasing overfitting with more epochs.
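
For reference, a hypothetical four-class training run could look like the sketch below, reusing the `load_labelled_samples` and `build_cnn` helpers from the earlier sketches; the train/test split ratio, batch size, and epoch count are assumptions, not the settings actually used.

```python
# Hypothetical training run for the four-class attempt; the split ratio,
# batch size and epoch count are illustrative.
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split

paths, targets = load_labelled_samples("dataset/labels.csv")
x = np.stack([np.asarray(Image.open(p), dtype=np.float32) / 255.0 for p in paths])
y = np.asarray(targets)
if x.ndim == 3:
    x = x[..., np.newaxis]  # add a channel axis for single-band samples

x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.2, stratify=y, random_state=42)

model = build_cnn(input_shape=x.shape[1:], num_classes=4)
history = model.fit(x_train, y_train,
                    validation_data=(x_test, y_test),
                    epochs=20, batch_size=32)
# A widening gap between training and validation accuracy over the epochs is
# the overfitting symptom described above.
```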

Thus, these two labels were merged, with considerably better results. Encouraged by this, the new merged label was merged once again with the label 'totally deforested', creating a binary scenario ('preserved', 'not preserved') with even better results (an accuracy of about 90%).
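
A sketch of that label merging, using the hypothetical integer encoding from the labelling sketch (0 = totally preserved, 1 = partially preserved, 2 = partially deforested, 3 = totally deforested):

```python
# Collapse the four original labels, first into three classes and then into
# the binary 'preserved' / 'not preserved' scenario described above.
import numpy as np

def merge_partials(y: np.ndarray) -> np.ndarray:
    """Three classes: 0 totally preserved, 1 partially affected, 2 totally deforested."""
    return np.select([y == 0, (y == 1) | (y == 2), y == 3], [0, 1, 2])

def to_binary(y: np.ndarray) -> np.ndarray:
    """Binary scenario: 0 = preserved, 1 = not preserved."""
    return (y != 0).astype(np.int64)

# Retraining only requires the new targets and the matching number of classes,
# e.g. model = build_cnn(num_classes=2); model.fit(x_train, to_binary(y_train), ...)
```

These results are shown in the following graphics: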

Releases
  • v1.0.0 (Feb 6, 2022)

    What's Changed

    • Update README.md by @diogosens in https://github.com/diogosens/cnn_sar_image_classification/pull/1
    • Add files via upload by @diogosens in https://github.com/diogosens/cnn_sar_image_classification/pull/2
    • Update readme by @diogosens in https://github.com/diogosens/cnn_sar_image_classification/pull/3
    • Update README.md by @diogosens in https://github.com/diogosens/cnn_sar_image_classification/pull/4
    • Update readme by @diogosens in https://github.com/diogosens/cnn_sar_image_classification/pull/5

    New Contributors

    • @diogosens made their first contribution in https://github.com/diogosens/cnn_sar_image_classification/pull/1

    Full Changelog: https://github.com/diogosens/cnn_sar_image_classification/commits/v1.0.0
