Autonomous Driving Object Detection on the Raspberry Pi 4

Repository to run object detection with a model trained on an autonomous driving dataset.

Description of Repository

This repository contains code and instructions to configure the necessary hardware and software for running autonomous driving object detection on the Raspberry Pi 4!

Details of Software and Neural Network Model for Object Detection:

  • Language: Python
  • Framework: TensorFlow Lite
  • Network: SSD MobileNet-V2
  • Training Dataset: Berkeley DeepDrive (BDD100K)
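
For reference, here is a minimal sketch of how an SSD MobileNet-V2 TFLite model like this one is typically loaded and run on a single frame with the tflite_runtime interpreter. The file name detect.tflite, the label file labelmap.txt, and the 300x300 input size are assumptions about the exported model, not something this repository guarantees:

# Minimal TFLite inference sketch -- file names and input size are assumptions
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="TFLite_model_bbd/detect.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
height, width = input_details[0]['shape'][1:3]   # typically 300x300 for SSD MobileNet-V2

frame = cv2.imread("test.jpg")
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
input_data = np.expand_dims(cv2.resize(rgb, (width, height)), axis=0)  # uint8 input for a quantized model

interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()

# Standard TFLite SSD postprocess outputs (output order can vary between exports):
boxes = interpreter.get_tensor(output_details[0]['index'])[0]    # normalized [ymin, xmin, ymax, xmax]
classes = interpreter.get_tensor(output_details[1]['index'])[0]  # indices into the label map (e.g. labelmap.txt)
scores = interpreter.get_tensor(output_details[2]['index'])[0]   # confidence scores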

Motivation for the Project

The goal of this project was to train a neural network to detect the objects an autonomous vehicle would encounter on the road (e.g. bus, traffic light, traffic sign, person, bike, truck, motor, car, train, rider). The trained network was then tested on lightweight hardware (a Raspberry Pi 4) to see how it performs in terms of processing speed and detection accuracy.

Additional Resources

Source

Reference source code for the project: https://github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi/blob/master/Raspberry_Pi_Guide.md

Special thanks to Evan from EdjeElectronics for the instructions and the majority of the code used in this project! :)

Results

Vehicle Testing Configuration

Core

  • Raspberry Pi 4 (4 GB RAM)
  • Raspberry Pi 5MP Camera (rev 1.3)

Other

  • LED
  • 470 Ohm Resistor
  • Small breadboard
  • GPIO push button
  • 3.5 Amp USB-C Power Supply

This tissue box setup isn't the greatest, but it's what I used to mount the Pi on the dashboard of my car. I then used the USB-C power supply plugged into the AC outlet of my car while I drove around recording and processing footage.

Issues

1.) If you get the following error when trying to run the program:

ImportError: No module named cv2

Try using this tutorial to install and build OpenCV: https://pimylifeup.com/raspberry-pi-opencv/ The software setup steps below should install OpenCV, but installing it on the Raspberry Pi can sometimes be finicky.
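
A quick way to check whether OpenCV is importable from inside the TFLite-venv environment:

python3 -c "import cv2; print(cv2.__version__)"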

Setting Up Software

1.) Clone Repository:

git clone https://github.com/ecd1012/rpi_road_object_detection.git

2.) Change directory to source code:

cd rpi_road_object_detection

3.) Open a command prompt and make sure the Pi is up to date:

sudo apt-get update && sudo apt-get upgrade

4.) Install virtual environment:

sudo pip3 install virtualenv

5.) Make virtual environment:

python3.7 -m venv TFLite-venv

6.) Activate Environment:

source TFLite-venv/bin/activate

7.) Install the dependencies:

bash get_py_requirements.sh

8.) Make sure the camera module is enabled:

sudo raspi-config

9.) Go to Interface Options and make sure the Pi Camera is enabled.
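
If you prefer to skip the menus, older Raspberry Pi OS releases also expose a non-interactive switch for this (availability varies by OS version, so treat it as an optional shortcut rather than a guaranteed command):

sudo raspi-config nonint do_camera 0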

Setting Up Hardware

10.) Connect a push button to GPIO pin 17. This will be used as input.

Help: https://www.youtube.com/watch?v=BWYy3qZ315U&ab_channel=O%27Reilly

11.) Connect an LED to GPIO pin 4. This LED will turn on to indicate when the program is running. Make sure you use a resistor with the LED! (A quick wiring test for the button and LED is sketched after step 12.)

Help: https://www.youtube.com/watch?v=3TDJ4FmtGgk&ab_channel=O%27Reilly

12.) Connect Pi Camera Module to Raspberry Pi. Help: https://www.youtube.com/watch?v=0hrF8Wq8SSQ&ab_channel=BINARYUPDATES
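
Before running the full detection script, you can sanity-check the wiring from steps 10 and 11 with a short RPi.GPIO test. This is a minimal sketch that assumes the button pulls GPIO 17 low when pressed; adjust the pull-up/pull-down settings to match your wiring:

# Quick wiring test for the push button (GPIO 17) and LED (GPIO 4)
import time
import RPi.GPIO as GPIO

BUTTON_PIN = 17
LED_PIN = 4

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)  # assumes the button connects the pin to ground
GPIO.setup(LED_PIN, GPIO.OUT)

print("Press the button to light the LED for 2 seconds...")
GPIO.wait_for_edge(BUTTON_PIN, GPIO.FALLING)  # blocks until the button is pressed
GPIO.output(LED_PIN, GPIO.HIGH)
time.sleep(2)
GPIO.output(LED_PIN, GPIO.LOW)
GPIO.cleanup()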

Running Detection

15.) After all your hardware and software are configured correctly, run the following command:

python TFLite_detection_webcam_loop.py --modeldir=TFLite_model_bbd --output_path=processed_images

Here, --output_path specifies the directory where the processed images will be saved.

16.) The script will start running and wait for you to press the GPIO input button to start processing the video feed from the camera. Once you press the button, the green LED will turn on and the Pi will start feeding the video stream through the neural network. Processed images will be saved to the --output_path you specified on the command line.

17.) If you like, make a video out of the images. You can do this with GIF-making software, video-editing software, or ffmpeg. Help: https://stackoverflow.com/questions/24961127/how-to-create-a-video-from-images-with-ffmpeg
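
For example, with ffmpeg a command along these lines stitches the saved frames into a video (the .jpg glob pattern is an assumption about the output file names, and the framerate should roughly match how fast frames were processed):

ffmpeg -framerate 10 -pattern_type glob -i 'processed_images/*.jpg' -c:v libx264 -pix_fmt yuv420p output.mp4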

18.) Enjoy!! :)

Running on Boot

19.) If you want to run the Python script on boot, do the following:

nano ~/.bashrc

Add the following to the end of your .bashrc:

# Change directories to where you cloned the repo
cd ~/rpi_road_object_detection
source TFLite-venv/bin/activate
python TFLite_detection_webcam_loop.py --modeldir=TFLite_model_bbd --output_path=processed_images

Then press CTRL+X, then Y, then Enter to save.

Owner
Ethan
Personal Site: https://ethandell.tech/