Free book about deep-learning approaches to chess (such as AlphaZero, Leela Chess Zero and Stockfish NNUE)

Overview

Neural Networks For Chess

Free Book

  • Grab your free PDF copy HERE
  • Buy a printed copy HERE or HERE

Donations are welcome via PayPal.

Contents

AlphaZero, Leela Chess Zero and Stockfish NNUE revolutionized computer chess. This book gives a complete introduction to the technical inner workings of such engines.

The book is split into four chapters:

  1. The first chapter introduces neural networks and covers all the basic building blocks used to create deep networks such as those used by AlphaZero. Contents include the perceptron, back-propagation and gradient descent, classification, regression, the multilayer perceptron, vectorization techniques, convolutional networks, squeeze-and-excitation networks, fully connected networks, batch normalization and rectified linear units, residual layers, and overfitting and underfitting (a tiny perceptron and gradient-descent sketch follows this list).

  2. The second chapter introduces classical search techniques used in chess engines as well as those used by AlphaZero. Contents include minimax, alpha-beta search, and Monte Carlo tree search (see the alpha-beta sketch after this list).

  3. The third chapter shows how modern chess engines are designed. Aside from the ground-breaking AlphaGo, AlphaGo Zero and AlphaZero, we cover Leela Chess Zero, Fat Fritz, Fat Fritz 2 and Efficiently Updatable Neural Networks (NNUE), as well as Maia (a sketch of NNUE's incremental accumulator update follows this list).

  4. The fourth chapter is about implementing a miniaturized AlphaZero. Hexapawn, a minimalistic version of chess, is used as the example. Hexapawn is first solved by minimax search, and training positions for supervised learning are generated. Then, as a comparison, an AlphaZero-like training loop is implemented in which training is done by self-play combined with reinforcement learning. Finally, AlphaZero-like training and supervised training are compared (a sketch of the underlying policy/value network follows this list).
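
As a taste of the first chapter, here is a minimal sketch (my own illustration, not code from the book) of a single perceptron with a sigmoid activation, trained by gradient descent on the logical OR function using plain NumPy:

    import numpy as np

    # Tiny training set: the logical OR function.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0.0, 1.0, 1.0, 1.0])

    rng = np.random.default_rng(0)
    w = rng.normal(size=2)   # weights
    b = 0.0                  # bias
    lr = 1.0                 # learning rate

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for epoch in range(10000):
        z = X @ w + b                        # weighted sums for all four inputs
        p = sigmoid(z)                       # predictions in (0, 1)
        grad_z = (p - y) * p * (1 - p)       # chain rule for a squared-error loss
        w -= lr * (X.T @ grad_z) / len(X)    # average gradient over the batch
        b -= lr * grad_z.mean()

    print(np.round(sigmoid(X @ w + b), 3))   # approximately [0, 1, 1, 1]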
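
For the second chapter, the following self-contained sketch (again only an illustration) runs minimax with alpha-beta pruning over a hand-built game tree; inner nodes are lists of children and leaves are static evaluations from the maximizing player's point of view:

    # The tree below is purely illustrative.
    TREE = [
        [[3, 5], [6, 9]],
        [[1, 2], [0, -1]],
        [[8, 4], [7, 3]],
    ]

    def alphabeta(node, alpha, beta, maximizing):
        if isinstance(node, int):              # leaf: return its static evaluation
            return node
        if maximizing:
            value = float("-inf")
            for child in node:
                value = max(value, alphabeta(child, alpha, beta, False))
                alpha = max(alpha, value)
                if alpha >= beta:              # beta cutoff: prune remaining children
                    break
            return value
        else:
            value = float("inf")
            for child in node:
                value = min(value, alphabeta(child, alpha, beta, True))
                beta = min(beta, value)
                if beta <= alpha:              # alpha cutoff
                    break
            return value

    print(alphabeta(TREE, float("-inf"), float("inf"), True))   # 7 for this tree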
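
The "efficiently updatable" idea behind NNUE, discussed in the third chapter, is that the first-layer accumulator is maintained incrementally: a move toggles only a few input features, so the engine adds and subtracts the corresponding weight rows instead of recomputing the whole layer. The sketch below uses made-up dimensions and feature sets (it is not the network used by Stockfish) and simply checks that the incremental update matches a full recomputation:

    import numpy as np

    NUM_FEATURES = 1000    # made-up number of input features
    ACC_SIZE = 256         # made-up accumulator width

    rng = np.random.default_rng(0)
    W = rng.normal(size=(NUM_FEATURES, ACC_SIZE))   # first-layer weights
    b = rng.normal(size=ACC_SIZE)                   # first-layer bias

    def full_accumulator(active_features):
        # Recompute from scratch: bias plus the sum of all active weight rows.
        return b + W[sorted(active_features)].sum(axis=0)

    before = {3, 17, 256, 511}          # features active before the move
    acc = full_accumulator(before)

    removed, added = {17}, {42}         # the move toggles two features
    after = (before - removed) | added

    for f in removed:                   # efficient update: touch only two rows
        acc -= W[f]
    for f in added:
        acc += W[f]

    assert np.allclose(acc, full_accumulator(after))
    print("incremental update matches full recomputation")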
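
Both training schemes in the fourth chapter, the supervised one and the AlphaZero-like self-play loop, optimize a network with a policy head and a value head. The Keras sketch below shows the basic shape of such a model and its combined policy/value loss; the input planes and move-encoding size are hypothetical values chosen for 3x3 Hexapawn, not the book's exact architecture:

    from tensorflow.keras import layers, models

    BOARD_SHAPE = (3, 3, 2)   # hypothetical: 3x3 board, one plane per side's pawns
    NUM_MOVES = 28            # hypothetical size of the move encoding

    inputs = layers.Input(shape=BOARD_SHAPE)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    x = layers.Dense(64, activation="relu")(x)

    # Policy head: a probability distribution over all encoded moves.
    policy = layers.Dense(NUM_MOVES, activation="softmax", name="policy")(x)
    # Value head: expected game outcome from the side to move, in [-1, 1].
    value = layers.Dense(1, activation="tanh", name="value")(x)

    model = models.Model(inputs=inputs, outputs=[policy, value])
    model.compile(
        optimizer="adam",
        loss={"policy": "categorical_crossentropy", "value": "mean_squared_error"},
    )
    model.summary()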

Source Code

Just clone this repository or browse the files directly. It contains all of the source code for the examples in the book.

About

During COVID, I worked a lot from home and saved approximately 1.5 hours of commuting time each day. I decided to use that time to do something useful (?) and wrote a book about computer chess. In the end I decided to release the book for free.

Profits

To be completely transparent, here is what I make from every paper copy sold on Amazon. The book retails for $16.95 (about 15 euros):

  • printing costs $4.04
  • Amazon takes $6.78
  • my royalties are $6.13

Errata

If you find mistakes, please report them here - your help is appreciated!

Comments
  • 'Board' object has no attribute 'outcome'

    I just executed python mcts.py and received an error message:

        34 0
        Traceback (most recent call last):
          File "mcts.py", line 134, in
            payout = simulate(node)
          File "mcts.py", line 63, in simulate
            while(board.outcome(claim_draw = True) == None):
        AttributeError: 'Board' object has no attribute 'outcome'

    opened by barvinog 5
  • Invalid Reduction Key auto.

    Thank you for the source code of Chapter 5. I executed python mnx_generateTrainingData.py - OK. Then python sup_network.py - OK.

    Then I executed python sup_eval.py and got the error:

        Traceback (most recent call last):
          File "sup_eval.py", line 6, in
            model = keras.models.load_model("supervised_model.keras")
          File "/home/barvinog/anaconda3/lib/python3.7/site-packages/keras/engine/saving.py", line 492, in load_wrapper
            return load_function(*args, **kwargs)
          File "/home/barvinog/anaconda3/lib/python3.7/site-packages/keras/engine/saving.py", line 584, in load_model
            model = _deserialize_model(h5dict, custom_objects, compile)
          File "/home/barvinog/anaconda3/lib/python3.7/site-packages/keras/engine/saving.py", line 369, in _deserialize_model
            sample_weight_mode=sample_weight_mode)
          File "/home/barvinog/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py", line 75, in symbolic_fn_wrapper
            return func(*args, **kwargs)
          File "/home/barvinog/anaconda3/lib/python3.7/site-packages/keras/engine/training.py", line 229, in compile
            self.total_loss = self._prepare_total_loss(masks)
          File "/home/barvinog/anaconda3/lib/python3.7/site-packages/keras/engine/training.py", line 692, in _prepare_total_loss
            y_true, y_pred, sample_weight=sample_weight)
          File "/home/barvinog/anaconda3/lib/python3.7/site-packages/keras/losses.py", line 73, in call
            losses, sample_weight, reduction=self.reduction)
          File "/home/barvinog/anaconda3/lib/python3.7/site-packages/keras/utils/losses_utils.py", line 156, in compute_weighted_loss
            Reduction.validate(reduction)
          File "/home/barvinog/anaconda3/lib/python3.7/site-packages/keras/utils/losses_utils.py", line 35, in validate
            raise ValueError('Invalid Reduction Key %s.' % key)
        ValueError: Invalid Reduction Key auto.

    opened by barvinog 2
  • Chapter 2 convolution.py

    Hello Dominik, I'm a Python novice, but an experienced chess player and, long ago, a developer of software for infinite-dimensional optimization. I've installed the latest Python on a 64-core Ryzen Threadripper with two NVIDIA 3090 graphics cards. I'm studying your very helpful overview of modern chess engine programming and started with Chapter 2, where all examples except convolution.py work fine. I have installed the module scikit-image, as skimage didn't load correctly. Then (without changing the source of convolution.py) I get the following warning:

        PS C:\Users\diete\Downloads\neural_network_chess-1.3\chapter_02> python.exe .\convolution.py
        (640, 480)
        Lossy conversion from float64 to uint8. Range [-377.0, 433.0]. Convert image to uint8 prior to saving to suppress this warning.
        PS C:\Users\diete\Downloads\neural_network_chess-1.3\chapter_02>

    and after a few seconds Python exits without any further output. Help with this problem is kindly appreciated. Dieter

    opened by d-kraft 1
Code for "Learning Graph Cellular Automata"

Learning Graph Cellular Automata This code implements the experiments from the NeurIPS 2021 paper: "Learning Graph Cellular Automata" Daniele Grattaro

Daniele Grattarola 37 Oct 26, 2022
Official Pytorch implementation of C3-GAN

Official pytorch implemenation of C3-GAN Contrastive Fine-grained Class Clustering via Generative Adversarial Networks [Paper] Authors: Yunji Kim, Jun

NAVER AI 114 Dec 02, 2022
[NeurIPS 2021] Low-Rank Subspaces in GANs

Low-Rank Subspaces in GANs Figure: Image editing results using LowRankGAN on StyleGAN2 (first three columns) and BigGAN (last column). Low-Rank Subspa

112 Dec 28, 2022
Implementation for paper: Self-Regulation for Semantic Segmentation

Self-Regulation for Semantic Segmentation This is the PyTorch implementation for paper Self-Regulation for Semantic Segmentation, ICCV 2021. Citing SR

Dong ZHANG 30 Nov 21, 2022
PyContinual (An Easy and Extendible Framework for Continual Learning)

PyContinual (An Easy and Extendible Framework for Continual Learning) Easy to Use You can sumply change the baseline, backbone and task, and then read

176 Jan 05, 2023
PN-Net a neural field-based framework for depth estimation from single-view RGB images.

PN-Net We present a neural field-based framework for depth estimation from single-view RGB images. Rather than representing a 2D depth map as a single

1 Oct 02, 2021
This repository contains the code used in the paper "Prompt-Based Multi-Modal Image Segmentation".

Prompt-Based Multi-Modal Image Segmentation This repository contains the code used in the paper "Prompt-Based Multi-Modal Image Segmentation". The sys

Timo Lüddecke 305 Dec 30, 2022
Code for EMNLP 2021 paper Contrastive Out-of-Distribution Detection for Pretrained Transformers.

Contra-OOD Code for EMNLP 2021 paper Contrastive Out-of-Distribution Detection for Pretrained Transformers. Requirements PyTorch Transformers datasets

Wenxuan Zhou 27 Oct 28, 2022
Pytorch implementation for our ICCV 2021 paper "TRAR: Routing the Attention Spans in Transformers for Visual Question Answering".

TRAnsformer Routing Networks (TRAR) This is an official implementation for ICCV 2021 paper "TRAR: Routing the Attention Spans in Transformers for Visu

Ren Tianhe 49 Nov 10, 2022
Lightweight Salient Object Detection in Optical Remote Sensing Images via Feature Correlation

CorrNet This project provides the code and results for 'Lightweight Salient Object Detection in Optical Remote Sensing Images via Feature Correlation'

Gongyang Li 13 Nov 03, 2022
Code for this paper The Lottery Ticket Hypothesis for Pre-trained BERT Networks.

The Lottery Ticket Hypothesis for Pre-trained BERT Networks Code for this paper The Lottery Ticket Hypothesis for Pre-trained BERT Networks. [NeurIPS

VITA 122 Dec 14, 2022
The official repo of the CVPR 2021 paper Group Collaborative Learning for Co-Salient Object Detection .

GCoNet The official repo of the CVPR 2021 paper Group Collaborative Learning for Co-Salient Object Detection . Trained model Download final_gconet.pth

Qi Fan 46 Nov 17, 2022
Fast Scattering Transform with CuPy/PyTorch

Announcement 11/18 This package is no longer supported. We have now released kymatio: http://www.kymat.io/ , https://github.com/kymatio/kymatio which

Edouard Oyallon 289 Dec 07, 2022
Indices Matter: Learning to Index for Deep Image Matting

IndexNet Matting This repository includes the official implementation of IndexNet Matting for deep image matting, presented in our paper: Indices Matt

Hao Lu 357 Nov 26, 2022
TDmatch is a Python library developed to perform matching tasks in three categories:

TDmatch TDmatch is a Python library developed to perform matching tasks in three categories: Text to Data which matches tuples of a table to text docu

Naser Ahmadi 5 Aug 11, 2022
Architecture Patterns with Python (TDD, DDD, EDM)

architecture-traning Architecture Patterns with Python (TDD, DDD, EDM) Chapter 5. 높은 기어비와 낮은 기어비의 TDD 5.2 도메인 계층 테스트를 서비스 계층으로 옮겨야 하는가? 도메인 계층 테스트 def

minsung sim 2 Mar 04, 2022
YOLOX-CondInst - Implement CondInst which is a instances segmentation method on YOLOX

YOLOX CondInst -- YOLOX 实例分割 前言 本项目是自己学习实例分割时,复现的代码. 通过自己编程,让自己对实例分割有更进一步的了解。 若想

DDGRCF 16 Nov 18, 2022
A Dataset for Direct Quotation Extraction and Attribution in News Articles.

DirectQuote - A Dataset for Direct Quotation Extraction and Attribution in News Articles DirectQuote is a corpus containing 19,760 paragraphs and 10,3

THUNLP-MT 9 Sep 23, 2022
A PyTorch Implementation of Single Shot Scale-invariant Face Detector.

S³FD: Single Shot Scale-invariant Face Detector A PyTorch Implementation of Single Shot Scale-invariant Face Detector. Eval python wider_eval_pytorch.

carwin 235 Jan 07, 2023
Pytorch implementation of NeurIPS 2021 paper: Geometry Processing with Neural Fields.

Geometry Processing with Neural Fields Pytorch implementation for the NeurIPS 2021 paper: Geometry Processing with Neural Fields Guandao Yang, Serge B

Guandao Yang 162 Dec 16, 2022