Free book about deep-learning approaches to chess (AlphaZero, Leela Chess Zero and Stockfish NNUE)

Overview

Neural Networks For Chess

Free Book

  • Grab your free PDF copy HERE
  • Buy a printed copy HERE or HERE

Donations via PayPal are welcome.

Contents

AlphaZero, Leela Chess Zero and Stockfish NNUE revolutionized computer chess. This book gives a complete introduction to the technical inner workings of these engines.

The book is split into four chapters:

  1. The first chapter introduces neural networks and covers all the basic building blocks that are used to build deep networks such as those used by AlphaZero. Contents include the perceptron, back-propagation and gradient descent, classification, regression, the multilayer perceptron, vectorization techniques, convolutional networks, squeeze-and-excitation networks, fully connected networks, batch normalization and rectified linear units, residual layers, and overfitting and underfitting.

  2. The second chapter introduces the classical search techniques used by chess engines as well as those used by AlphaZero. Contents include minimax, alpha-beta search, and Monte Carlo tree search; a minimal sketch of alpha-beta search follows this list.

  3. The third chapter shows how modern chess engines are designed. Aside from the ground-breaking AlphaGo, AlphaGo Zero and AlphaZero, we cover Leela Chess Zero, Fat Fritz, Fat Fritz 2 and Efficiently Updatable Neural Networks (NNUE), as well as Maia.

  4. The fourth chapter is about implementing a miniaturized AlphaZero. Hexapawn, a minimalistic version of chess, is used as the example. Hexapawn is first solved by minimax search, and training positions for supervised learning are generated from that solution. Then, as a comparison, an AlphaZero-like training loop is implemented, where training is done via self-play combined with reinforcement learning. Finally, AlphaZero-like training and supervised training are compared.
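
To make the search topics of the second chapter a bit more concrete, here is a minimal, generic sketch of minimax with alpha-beta pruning in negamax form. It is an illustration only, not code from the book or this repository; the GameState interface used here (legal_moves(), apply(), is_terminal(), evaluate()) is a hypothetical placeholder.

    def negamax(state, depth, alpha=float("-inf"), beta=float("inf")):
        # Value of `state` from the point of view of the side to move.
        if depth == 0 or state.is_terminal():
            # evaluate() is assumed to score the position for the side to move
            return state.evaluate()
        best = float("-inf")
        for move in state.legal_moves():
            # Negate the child's value: what is good for the opponent is bad for us.
            score = -negamax(state.apply(move), depth - 1, -beta, -alpha)
            best = max(best, score)
            alpha = max(alpha, score)
            if alpha >= beta:
                break  # beta cutoff: the opponent can already avoid this line
        return best

The book develops this idea step by step and then contrasts it with Monte Carlo tree search, which replaces the depth-limited exhaustive search with guided playouts.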

Source Code

Just clone this repository or browse the files directly. It contains the full source code for all examples in the book.

About

During COVID, I worked a lot from home and saved approximately 1.5 hours of commuting time each day. I decided to use that time to do something useful (?) and wrote a book about computer chess. In the end I decided to release the book for free.

Profits

To be completely transparent, here is what I make from every paper copy sold on Amazon. The book retails for $16.95 (about 15 Euro).

  • printing costs $4.04
  • Amazon takes $6.78
  • my royalties are $6.13
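
As a sanity check, these three amounts add up exactly to the list price: $4.04 + $6.78 + $6.13 = $16.95.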

Errata

If you find mistakes, please report them here - your help is appreciated!

Comments
  • 'Board' object has no attribute 'outcome'

    I just executed python mcts.py and received an error message:

        34 0
        Traceback (most recent call last):
          File "mcts.py", line 134, in <module>
            payout = simulate(node)
          File "mcts.py", line 63, in simulate
            while(board.outcome(claim_draw = True) == None):
        AttributeError: 'Board' object has no attribute 'outcome'

    opened by barvinog
  • Invalid Reduction Key auto.

    Thank you for the source code of Chapter 5. I executed python mnx_generateTrainingData.py - OK. Then python sup_network.py - OK.

    Then I executed python sup_eval.py and got the error:

        Traceback (most recent call last):
          File "sup_eval.py", line 6, in <module>
            model = keras.models.load_model("supervised_model.keras")
          File "/home/barvinog/anaconda3/lib/python3.7/site-packages/keras/engine/saving.py", line 492, in load_wrapper
            return load_function(*args, **kwargs)
          File "/home/barvinog/anaconda3/lib/python3.7/site-packages/keras/engine/saving.py", line 584, in load_model
            model = _deserialize_model(h5dict, custom_objects, compile)
          File "/home/barvinog/anaconda3/lib/python3.7/site-packages/keras/engine/saving.py", line 369, in _deserialize_model
            sample_weight_mode=sample_weight_mode)
          File "/home/barvinog/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py", line 75, in symbolic_fn_wrapper
            return func(*args, **kwargs)
          File "/home/barvinog/anaconda3/lib/python3.7/site-packages/keras/engine/training.py", line 229, in compile
            self.total_loss = self._prepare_total_loss(masks)
          File "/home/barvinog/anaconda3/lib/python3.7/site-packages/keras/engine/training.py", line 692, in _prepare_total_loss
            y_true, y_pred, sample_weight=sample_weight)
          File "/home/barvinog/anaconda3/lib/python3.7/site-packages/keras/losses.py", line 73, in call
            losses, sample_weight, reduction=self.reduction)
          File "/home/barvinog/anaconda3/lib/python3.7/site-packages/keras/utils/losses_utils.py", line 156, in compute_weighted_loss
            Reduction.validate(reduction)
          File "/home/barvinog/anaconda3/lib/python3.7/site-packages/keras/utils/losses_utils.py", line 35, in validate
            raise ValueError('Invalid Reduction Key %s.' % key)
        ValueError: Invalid Reduction Key auto.

    opened by barvinog
  • Chapter 2 convolution.py

    Hello Dominik, I'm a Python novice, but an experienced chess player and, long ago, a developer of software for infinite-dimensional optimization. I've installed the latest Python on a 64-core Ryzen Threadripper with two NVIDIA 3090 graphics cards. I am studying your very helpful overview of modern chess engine programming and started with Chapter 2, where all examples except convolution.py work fine. I installed the module scikit-image because skimage didn't load correctly. Then (without changing the source of convolution.py) I get the following warning:

        PS C:\Users\diete\Downloads\neural_network_chess-1.3\chapter_02> python.exe .\convolution.py
        (640, 480)
        Lossy conversion from float64 to uint8. Range [-377.0, 433.0]. Convert image to uint8 prior to saving to suppress this warning.
        PS C:\Users\diete\Downloads\neural_network_chess-1.3\chapter_02>

    and after a few seconds Python exits without any further output. Help with this problem is kindly appreciated. Dieter

    opened by d-kraft
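
A note on the first two reports above: both look like library version mismatches rather than bugs in the book's code. Board.outcome() was, as far as I know, added in python-chess 1.1, so the first error usually means an older python-chess release is installed; upgrading python-chess, or falling back to the long-available is_game_over(), should resolve it. The snippet below is a minimal illustration of such a fallback, not code taken from mcts.py:

    import random
    import chess

    board = chess.Board()
    # Loop condition that also works on python-chess releases predating Board.outcome():
    # is_game_over(claim_draw=True) covers the same termination conditions.
    while not board.is_game_over(claim_draw=True):
        board.push(random.choice(list(board.legal_moves)))
    print(board.result(claim_draw=True))

The "Invalid Reduction Key auto." error in the second report likewise looks like a Keras/TensorFlow version mismatch: a model saved with one Keras implementation (for example tf.keras from TensorFlow 2.x) is being loaded with an incompatible standalone Keras release. Loading the model with the same Keras/TensorFlow combination that saved it is the usual remedy; this has not been verified against the repository's exact requirements.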