crysx_nn

A simplistic and efficient pure-python neural network library from Phys Whiz with CPU and GPU support.
Explore the docs »

View Demo · Report Bug · Request Feature

Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. Features
  5. Roadmap
  6. Contributing
  7. License
  8. Contact
  9. Acknowledgments
  10. Citation

About The Project


Neural networks are an integral part of machine learning. This project provides an easy-to-use yet efficient implementation that can be used in your own projects or for teaching and learning purposes.

The library is written in pure Python, with optimizations using NumPy, opt_einsum, and Numba on the CPU, and CuPy for CUDA (GPU) support.

The goal was to create a framework that is efficient yet easy to understand, so that everyone can see and learn what goes on inside a neural network. After all, the project spawned from a semester project for the CP_IV: Machine Learning course at the University of Jena, Germany.

(back to top)

Built With

  • NumPy
  • opt_einsum
  • Numba
  • CuPy (optional, for GPU support)

(back to top)

Getting Started

To get a local copy up and running, follow these simple steps.

Prerequisites

You need to have python3 installed along with pip.

Installation

There are several ways to install crysx_nn:

  1. Install the release (stable) version from PyPI
    pip install crysx_nn
  2. Install the latest development version by cloning the git repo and installing it. This requires git to be installed.
    git clone https://github.com/manassharma07/crysx_nn.git
    cd crysx_nn
    pip install .
  3. Install the latest development version without git.
    pip install --upgrade https://github.com/manassharma07/crysx_nn/tarball/main

Check whether the installation was successful by starting a Python shell and importing the package:

python3
Python 3.7.11 (default, Jul 27 2021, 07:03:16) 
[Clang 10.0.0 ] :: Anaconda, Inc. on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import crysx_nn
>>> crysx_nn.__version__
'0.1.0'
>>> 

Finally, download the example script (here) for simulating logic gates like AND, XOR, NAND, and OR, and try running it:

python Simluating_logic_gates.py

(back to top)

Usage

The most important thing for using this library properly is to use 2D NumPy arrays for defining the inputs and expected outputs (targets) of a network. 1D arrays for inputs and targets are not supported and will result in an error.
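
If your targets happen to be 1D, they can be promoted to the required 2D column form with plain NumPy before being handed to the library (a minimal sketch; the variable names are only illustrative):

import numpy as np

# A 1D array of targets (shape (4,)) would be rejected by the library ...
targets_1d = np.array([0., 0., 0., 1.], dtype='float32')

# ... so promote it to a 2D column vector of shape (4, 1) first
targets_2d = targets_1d.reshape(-1, 1)
print(targets_2d.shape)   # (4, 1)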

For example, let us try to simulate the AND logic gate. The AND gate takes two input bits and returns a single output bit. The bits can take a value of either 0 or 1. The AND gate returns 1 only if both inputs are 1; otherwise it returns 0.

The truth table of the AND gate is as follows

x1 x2 output
0 0 0
0 1 0
1 0 0
1 1 1

The four possible sets of inputs are:

import numpy as np

inputs = np.array([[0.,0.,1.,1.],[0.,1.,0.,1.]]).T.astype('float32')
print(inputs)
print(inputs.dtype)

Output:

[[0. 0.]
 [0. 1.]
 [1. 0.]
 [1. 1.]]
float32

Similarly, set the corresponding four possible outputs as a 2D NumPy array:

# AND outputs
outputAND = np.array([0.,0.,0.,1.]) # 1D array
outputAND = np.asarray([outputAND]).T # 2D array
print('AND outputs\n', outputAND)

Output:

AND outputs
 [[0.]
 [0.]
 [0.]
 [1.]]

Next, we need to set some parameters of our neural network:

nInputs = 2 # No. of nodes in the input layer
neurons_per_layer = [3,1] # Neurons per layer (excluding the input layer)
activation_func_names = ['Sigmoid', 'Sigmoid']
nLayers = len(neurons_per_layer)
eeta = 0.5 # Learning rate
nEpochs=10**4 # For stochastic gradient descent
batchSize = 4 # No. of input samples to process at a time for optimization

For a better understanding, let us visualize it.

visualize(nInputs, neurons_per_layer, activation_func_names)

Output:

Now let us initialize the weights and biases. Weights and biases are provided as lists of 2D and 1D NumPy arrays, respectively (one NumPy array per layer). In our case, we have 2 layers (1 hidden + 1 output), so the lists of weights and biases will each contain 2 NumPy arrays.

# Initial guesses for weights
w1 = 0.30
w2 = 0.55
w3 = 0.20
w4 = 0.45
w5 = 0.50
w6 = 0.35
w7 = 0.15
w8 = 0.40
w9 = 0.25

# Initial guesses for biases
b1 = 0.60
b2 = 0.05

# We need to use a list instead of a NumPy array, since the
# weight matrices at each layer are not of the same dimensions
weights = [] 
# Weights for layer 1 --> 2
weights.append(np.array([[w1,w4],[w2, w5], [w3, w6]]))
# Weights for layer 2 --> 3
weights.append(np.array([[w7, w8, w9]]))
# List of biases at each layer
biases = []
biases.append(np.array([b1,b1,b1]))
biases.append(np.array([b2]))

weightsOriginal = weights
biasesOriginal = biases

print('Weights matrices: ',weights)
print('Biases: ',biases)

Output:

Weights matrices:  [array([[0.3 , 0.45],
       [0.55, 0.5 ],
       [0.2 , 0.35]]), array([[0.15, 0.4 , 0.25]])]
Biases:  [array([0.6, 0.6, 0.6]), array([0.05])]

Finally, it is time to train our neural network. We will use the mean squared error (MSE) loss function as the metric of performance. Currently, only stochastic gradient descent is supported.
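
For reference, the MSE loss and its gradient amount to the following simple NumPy expressions (a sketch of the underlying math only; the library's MSE_loss and MSE_loss_grad may differ in details such as normalization or JIT compilation):

import numpy as np

def mse_loss_sketch(predictions, targets):
    # Mean of the squared differences over all samples and outputs
    return np.mean((predictions - targets)**2)

def mse_loss_grad_sketch(predictions, targets):
    # Gradient of the mean squared error with respect to the predictions
    return 2.0 * (predictions - targets) / predictions.size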

# Run optimization
optWeights, optBiases, errorPlot = nn_optimize_fast(
    inputs, outputAND, activation_func_names, nLayers,
    nEpochs=nEpochs, batchSize=batchSize, eeta=eeta,
    weights=weightsOriginal, biases=biasesOriginal,
    errorFunc=MSE_loss, gradErrorFunc=MSE_loss_grad,
    miniterEpoch=1, batchProgressBar=False, miniterBatch=100)

The function nn_optimize_fast returns the optimized weights and biases, as well as the error at each epoch of the optimization.
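
With the optimized parameters in hand, a quick sanity check can be done by evaluating the 2-3-1 sigmoid network by hand with plain NumPy, independently of the library (a sketch; it assumes, consistently with the weight shapes printed above, that each weight matrix has one row per neuron of the current layer):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Manual forward pass through the trained 2-3-1 network
hidden = sigmoid(inputs @ optWeights[0].T + optBiases[0])   # shape (4, 3)
output = sigmoid(hidden @ optWeights[1].T + optBiases[1])   # shape (4, 1)
print(np.round(output))  # close to [[0.], [0.], [0.], [1.]] if training converged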

We can then plot the training loss at each epoch

# Plot the error vs epochs
plt.plot(errorPlot)
plt.yscale('log')
plt.show()

Output: a plot of the training error versus epochs (on a logarithmic scale).

For more examples, please refer to the Examples section.

CrysX-NN (crysx_nn) also provides CUDA support through CuPy versions of all the features like activation functions, loss functions, neural network calculations, etc. Note: for small networks the CuPy versions may actually be slower than the CPU versions, but the benefit becomes evident as you go beyond roughly 1.5 million parameters.
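
Moving data between the host and the GPU follows the usual CuPy pattern (a minimal sketch, assuming cupy is installed and a CUDA device is available; see the docs and examples for the exact GPU entry points of crysx_nn):

import cupy as cp

inputs_gpu = cp.asarray(inputs)          # copy the NumPy inputs to the GPU
outputAND_gpu = cp.asarray(outputAND)    # same for the targets
# ... run the CuPy-backed routines on the GPU arrays, then copy results back, e.g.:
# result_cpu = cp.asnumpy(result_gpu)    # result_gpu is a placeholder name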

(back to top)

Features

  • Efficient implementations of activation functions and their gradients
    • Sigmoid, Sigmoid_grad
    • ReLU, ReLU_grad
    • Softmax, Softmax_grad
    • Softplus, Softplus_grad
    • Tanh, Tanh_grad
    • Tanh_offset, Tanh_offset_grad
    • Identity, Identity_grad
  • Efficient implementations of loss functions and their gradients
    • Mean squared error
    • Binary cross entropy
  • Neural network optimization using
    • Stochastic Gradient Descent
  • Support for batched inputs, i.e., supplying a matrix of inputs where the columns correspond to features and the rows to samples (see the sketch after this list)
  • Support for GPU through CuPy (pip install cupy-cuda102; tested with CUDA 10.2)
  • JIT compiled functions when possible for efficiency
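
As an illustration of the batched-input convention and of what an activation/gradient pair computes, here is a plain NumPy sketch of Sigmoid and its derivative (the library's own versions may differ, e.g., through Numba JIT compilation or CuPy support):

import numpy as np

def sigmoid_sketch(x):
    # x has shape (n_samples, n_features); the activation acts element-wise
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad_sketch(x):
    # Element-wise derivative: sigma'(x) = sigma(x) * (1 - sigma(x))
    s = sigmoid_sketch(x)
    return s * (1.0 - s)

batch = np.array([[0.0, 1.0], [2.0, -3.0]], dtype='float32')  # 2 samples, 2 features
print(sigmoid_sketch(batch))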

(back to top)

Roadmap

  • Weights and biases initialization
  • More activation functions
    • Identity, LeakyReLU, Tanh, etc.
  • More loss functions
    • categorical cross entropy, and others
  • Optimization algorithms apart from Stochastic Gradient Descent, like ADAM, RMSprop, etc.
  • Implement regularizers
  • Batch normalization
  • Dropout
  • Early stopping
  • A predict function that returns the output of the last layer and the loss/accuracy
  • Some metric functions, although there is no harm in using sklearn for that

See the open issues for a full list of proposed features (and known issues).

(back to top)

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

(back to top)

License

Distributed under the MIT License. See LICENSE.txt for more information.

(back to top)

Contact

Manas Sharma - @manassharma07 - [email protected]

Project Link: https://github.com/manassharma07/crysx_nn

Project Documentation: https://bragitoff.com

Blog: https://bragitoff.com

(back to top)

Acknowledgments

(back to top)

Citation

If you use this library and would like to cite it, you can use:

 M. Sharma, "CrysX-NN: Neural Network library", 2021. [Online]. Available: https://github.com/manassharma07/crysx_nn. [Accessed: DD-Month-20YY].

or:

@Misc{,
  author = {Manas Sharma},
  title  = {CrysX-NN: Neural Network library},
  month  = dec,
  year   = {2021},
  note   = {Online; accessed},
  url    = {https://github.com/manassharma07/crysx_nn},
}

(back to top)


Comments
  • NAN loss or loss gradient when using Binary Cross Entropy or Categorical Cross Entropy sometimes


    This is a strange bug: using a batch size of 32 or smaller results in NaN values in the loss-gradient calculations, but the bug does not appear with a larger batch size, like 60-200.

    The bug was observed when training on MNIST dataset.

    Using ReLU (Hidden, size=256) and Softmax (Output, size=10) activation layers.

    bug 
    opened by manassharma07 2
  • Reduce the number of parameters required for `nn_optimize` function


    Add defaults for the parameters; they could even be None, and if a parameter's value remains None (i.e., the user didn't provide it), use our default values.

    For example,

    • [ ] for batchSize we can use: min(32,nSamples)

    • [ ] for weights and biases we can use an initialisation function.

    enhancement 
    opened by manassharma07 2
Releases (v_0.1.7)
  • v_0.1.7(Jan 16, 2022)

    Finalized the example for MNIST and MNIST_Plus.

    Added the ability to calculate accuracy during training as well as during prediction.

    Added confusion matrix calculation and visualization functions in utils.py.

    Source code(tar.gz)
    Source code(zip)
  • v_0.1.6(Jan 2, 2022)

    1. Both the forward pass and backpropagation are now significantly faster, for both the NumPy and CuPy versions.
    2. Several more activation and loss functions are now available.
    Source code(tar.gz)
    Source code(zip)
  • v_0.1.5(Dec 27, 2021)

    Support for CUDA is here via Cupy.

    Slower than CPU for smaller networks but the benefits are very evident for larger networks with more than 1.5 Million parameters.

    Tested on

    • XPS i7 11800H + 3050 Ti,
    • Google Colab K80
    • Kaggle
    Source code(tar.gz)
    Source code(zip)
  • v_0.1.2(Dec 25, 2021)

  • v_0.1.1(Dec 25, 2021)

  • v_0.0.1(Dec 25, 2021)
