Code Repository for Machine Learning with PyTorch and Scikit-Learn

Overview

Machine Learning with PyTorch and Scikit-Learn Book

Code Repository

To be published January 2022

Paperback: TBD pages
Publisher: Packt Publishing
Language: English

ISBN-10: TBD
ISBN-13: 978-1801819312
Kindle ASIN: TBD

Links

Table of Contents and Code Notebooks

Helpful installation and setup instructions can be found in the README.md file of Chapter 1

Please note that these are just the code examples accompanying the book, which we uploaded for your convenience; be aware that these notebooks may not be useful without the formulae and descriptive text.

  1. Machine Learning - Giving Computers the Ability to Learn from Data [open dir]
  2. Training Machine Learning Algorithms for Classification [open dir]
  3. A Tour of Machine Learning Classifiers Using Scikit-Learn [open dir]
  4. Building Good Training Sets – Data Pre-Processing [open dir]
  5. Compressing Data via Dimensionality Reduction [open dir]
  6. Learning Best Practices for Model Evaluation and Hyperparameter Optimization [open dir]
  7. Combining Different Models for Ensemble Learning [open dir]
  8. Applying Machine Learning to Sentiment Analysis [open dir]
  9. Predicting Continuous Target Variables with Regression Analysis [open dir]
  10. Working with Unlabeled Data – Clustering Analysis [open dir]
  11. Implementing a Multi-layer Artificial Neural Network from Scratch [open dir]
  12. Parallelizing Neural Network Training with PyTorch [open dir]
  13. Going Deeper -- The Mechanics of PyTorch [open dir]
  14. Classifying Images with Deep Convolutional Neural Networks [open dir]
  15. Modeling Sequential Data Using Recurrent Neural Networks [open dir]
  16. Transformers -- Improving Natural Language Processing with Attention Mechanisms [open dir]
  17. Generative Adversarial Networks for Synthesizing New Data [open dir]
  18. Graph Neural Networks for Capturing Dependencies in Graph Structured Data [open dir]
  19. Reinforcement Learning for Decision Making in Complex Environments [open dir]



Sebastian Raschka, Yuxi (Hayden) Liu, and Vahid Mirjalili. Machine Learning with PyTorch and Scikit-Learn. Packt Publishing, 2022.

@book{mlbook2022,
  address   = {Birmingham, UK},
  author    = {Sebastian Raschka and Yuxi (Hayden) Liu and Vahid Mirjalili},
  isbn      = {978-1801819312},
  publisher = {Packt Publishing},
  title     = {{Machine Learning with PyTorch and Scikit-Learn}},
  year      = {2022}
}
Comments
  • Loss functions for classification - logits/probabilities (page 472)

    Hi Sebastian,

    In the figure on page 472, the same y_pred value (0.8) appears in both the probabilities (BCELoss) column and the logits (BCEWithLogitsLoss) column. Probably the value in the first column (BCELoss) should be 0.69, which is equal to sigmoid(0.8)?

    Thank you.

    opened by labdmitriy 5
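    A minimal sketch (reusing the hypothetical logit 0.8 from the issue) showing that BCEWithLogitsLoss on a raw logit matches BCELoss on its sigmoid, and that sigmoid(0.8) is indeed about 0.69:

    import torch
    import torch.nn as nn

    logit = torch.tensor([0.8])
    target = torch.tensor([1.0])

    prob = torch.sigmoid(logit)                           # ~0.69

    loss_probs = nn.BCELoss()(prob, target)               # expects probabilities
    loss_logits = nn.BCEWithLogitsLoss()(logit, target)   # expects raw logits

    print(prob.item(), loss_probs.item(), loss_logits.item())
    # the two losses agree (up to floating-point error)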
  • ch13 pag 438 no softmax needed

    Hello, kindly clarify this: in the nn.Sequential model there is no softmax at the end, presumably because we use the cross-entropy loss, which does not need one since it is equivalent to the combination of LogSoftmax and NLLLoss? Yet the text below says that the output layer is activated by the softmax.

    https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss

    opened by GianniGi 4
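    A minimal sketch (with made-up logits) of the point raised here: nn.CrossEntropyLoss applied to the raw outputs equals nn.NLLLoss applied to their log-softmax, so no explicit softmax layer is needed at the end of the nn.Sequential model:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    logits = torch.randn(4, 3)            # raw, unnormalized model outputs
    targets = torch.tensor([0, 2, 1, 2])

    ce = nn.CrossEntropyLoss()(logits, targets)
    nll = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), targets)

    print(ce.item(), nll.item())          # identical values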
  • LogSoftmax in the output but not in the description/code (page 532)

    Hi Sebastian,

    On page 532, the printed output of the created RNN model includes log softmax as the last layer:

    (softmax): LogSoftmax(dim=1)
    

    But based on the code of the model and on the following steps, we do not need this layer, because we use nn.CrossEntropyLoss(), whose input is expected to contain raw, unnormalized scores for each class. Is that correct?

    Thank you.

    opened by labdmitriy 4
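    A small check (again with made-up logits) of why the printed LogSoftmax layer is redundant rather than harmful: log-softmax is idempotent, so nn.CrossEntropyLoss gives effectively the same loss whether it receives raw logits or log-softmaxed outputs:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    logits = torch.randn(4, 3)
    targets = torch.tensor([2, 0, 1, 1])

    loss_raw = nn.CrossEntropyLoss()(logits, targets)
    loss_logsm = nn.CrossEntropyLoss()(nn.LogSoftmax(dim=1)(logits), targets)

    print(loss_raw.item(), loss_logsm.item())  # effectively the same value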
  • Different code between book and notebook for NN implementation

    ## code in notebook
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split 
    
    iris = load_iris()
    X = iris['data']
    y = iris['target']
     
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=1./3, random_state=1)
    
    from torch.utils.data import TensorDataset
    from torch.utils.data import DataLoader
    import numpy as np 
    import torch
    X_train_norm = (X_train - np.mean(X_train)) / np.std(X_train)
    X_train_norm = torch.from_numpy(X_train_norm).float()
    y_train = torch.from_numpy(y_train) 
    
    train_ds = TensorDataset(X_train_norm, y_train)
    
    torch.manual_seed(1)
    batch_size = 2
    train_dl = DataLoader(train_ds, batch_size, shuffle=True)
    
    import torch.nn as nn
    class Model(nn.Module):
        def __init__(self, input_size, hidden_size, output_size):
            super(Model, self).__init__()
            self.layer1 = nn.Linear(input_size, hidden_size)  
            self.layer2 = nn.Linear(hidden_size, output_size)  
    
        def forward(self, x):
            x = self.layer1(x)
            x = nn.Sigmoid()(x)
            x = self.layer2(x)
            x = nn.Softmax(dim=1)(x)
            return x
        
    input_size = X_train_norm.shape[1]
    hidden_size = 16
    output_size = 3
     
    model = Model(input_size, hidden_size, output_size)
    
    learning_rate = 0.001
    
    loss_fn = nn.CrossEntropyLoss()
     
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
    
    num_epochs = 100
    loss_hist = [0] * num_epochs
    accuracy_hist = [0] * num_epochs
    
    for epoch in range(num_epochs):
    
        for x_batch, y_batch in train_dl:
            pred = model(x_batch)
            loss = loss_fn(pred, y_batch)
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
        
            loss_hist[epoch] += loss.item()*y_batch.size(0)
            is_correct = (torch.argmax(pred, dim=1) == y_batch).float()
            accuracy_hist[epoch] += is_correct.sum()
            
        loss_hist[epoch] /= len(train_dl.dataset)
        accuracy_hist[epoch] /= len(train_dl.dataset)
    import matplotlib.pyplot as plt 
    fig = plt.figure(figsize=(12, 5))
    ax = fig.add_subplot(1, 2, 1)
    ax.plot(loss_hist, lw=3)
    ax.set_title('Training loss', size=15)
    ax.set_xlabel('Epoch', size=15)
    ax.tick_params(axis='both', which='major', labelsize=15)
    
    ax = fig.add_subplot(1, 2, 2)
    ax.plot(accuracy_hist, lw=3)
    ax.set_title('Training accuracy', size=15)
    ax.set_xlabel('Epoch', size=15)
    ax.tick_params(axis='both', which='major', labelsize=15)
    plt.tight_layout()
    
    #plt.savefig('figures/12_09.pdf')
     
    plt.show()
    
    ## code in book
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split 
    
    iris = load_iris()
    X = iris['data']
    y = iris['target']
     
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=1./3, random_state=1)
    
    
    from torch.utils.data import TensorDataset
    from torch.utils.data import DataLoader
    
    X_train_norm = (X_train - np.mean(X_train)) / np.std(X_train)
    X_train_norm = torch.from_numpy(X_train_norm).float()
    y_train = torch.from_numpy(y_train) 
    
    train_ds = TensorDataset(X_train_norm, y_train)
    
    torch.manual_seed(1)
    batch_size = 2
    train_dl = DataLoader(train_ds, batch_size, shuffle=True)
    
    class Model(nn.Module):
        def __init__(self, input_size, hidden_size, output_size):
            ## the book writes this without (Model, self); I added (Model, self) in the notebook version above
            super().__init__()
            self.layer1 = nn.Linear(input_size, hidden_size)  
            self.layer2 = nn.Linear(hidden_size, output_size)  
    
        def forward(self, x):
            x = self.layer1(x)
            x = nn.Sigmoid()(x)
            x = self.layer2(x)
            x = nn.Softmax(dim=1)(x)
            return x
        
    input_size = X_train_norm.shape[1]
    hidden_size = 16
    output_size = 3
     
    model = Model(input_size, hidden_size, output_size)
    
    learning_rate = 0.001
    
    loss_fn = nn.CrossEntropyLoss()
     
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
    
    num_epochs = 100
    loss_hist = [0] * num_epochs
    accuracy_hist = [0] * num_epochs
    ## got error here
    for epoch in range(num_epochs):
    
        for x_batch, y_batch in train_dl:
            pred = model(x_batch)
            loss = loss_fn(pred, y_batch)
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
        
            loss_hist[epoch] += loss.item()*y_batch.size(0)
            is_correct = (torch.argmax(pred, dim=1) == y_batch).float()
            ## changed mean() from the book's code to sum(), but it still doesn't work
            accuracy_hist[epoch] += is_correct.sum()
            
        loss_hist[epoch] /= len(train_dl.dataset)
        accuracy_hist[epoch] /= len(train_dl.dataset)
    
    
    fig = plt.figure(figsize=(12, 5))
    ax = fig.add_subplot(1, 2, 1)
    ax.plot(loss_hist, lw=3)
    ax.set_title('Training loss', size=15)
    ax.set_xlabel('Epoch', size=15)
    ax.tick_params(axis='both', which='major', labelsize=15)
    
    ax = fig.add_subplot(1, 2, 2)
    ax.plot(accuracy_hist, lw=3)
    ax.set_title('Training accuracy', size=15)
    ax.set_xlabel('Epoch', size=15)
    ax.tick_params(axis='both', which='major', labelsize=15)
    plt.tight_layout()
    
    #plt.savefig('figures/12_09.pdf')
     
    plt.show()
    
    ## Note: I wrote the code step by step in a local notebook but got this error. However, the code works when running the notebook on Google Colab. Is it due to the Python version?

    --------------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)
    <ipython-input-145-4bceac91f560> in <module>
          7     for x_batch, y_batch in train_dl:
          8         pred = model(x_batch)
    ----> 9         loss = loss_fn(pred, y_batch)
         10         loss.backward()
         11         optimizer.step()
    
    ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
       1108         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
       1109                 or _global_forward_hooks or _global_forward_pre_hooks):
    -> 1110             return forward_call(*input, **kwargs)
       1111         # Do not call functions when jit is used
       1112         full_backward_hooks, non_full_backward_hooks = [], []
    
    ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\torch\nn\modules\loss.py in forward(self, input, target)
       1161 
       1162     def forward(self, input: Tensor, target: Tensor) -> Tensor:
    -> 1163         return F.cross_entropy(input, target, weight=self.weight,
       1164                                ignore_index=self.ignore_index, reduction=self.reduction,
       1165                                label_smoothing=self.label_smoothing)
    
    ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\torch\nn\functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing)
       2994     if size_average is not None or reduce is not None:
       2995         reduction = _Reduction.legacy_get_string(size_average, reduce)
    -> 2996     return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
       2997 
       2998 
    
    RuntimeError: expected scalar type Long but found Int
    
    opened by naiborhujosua 3
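    The traceback above points to the target dtype rather than the Python version: on Windows, NumPy's default integer type is 32-bit, so torch.from_numpy(y_train) yields an int32 tensor, while nn.CrossEntropyLoss expects int64 (Long) class indices. A minimal sketch of the fix (with toy labels) is to cast the labels when converting:

    import numpy as np
    import torch

    y_train = np.array([0, 1, 2, 1])            # defaults to int32 on Windows
    y_train = torch.from_numpy(y_train).long()  # cast to int64 so CrossEntropyLoss accepts it
    print(y_train.dtype)                        # torch.int64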
  • CH 13: Changing the order of the cells gives different results. Pg 420 - 422

    In the Solving an XOR classification problem section, the author defined the model, then the loss function and the optimizer, then created the data loader, and finally defined the training function followed by plotting the results. If I follow this same sequence, I get a figure that is different from what is shown in the book!

    However, in the notebook, the author defined the data loader, then the model, then the loss function and the optimizer, followed by the training and plotting procedures. That is, he defined the data loader first rather than right before the training procedure.

    Can anyone please explain why changing the order of the cells causes such a difference?

    opened by OmarAlmighty 2
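    The difference is most likely random-number consumption order rather than an error: nn.Linear layers draw their initial weights from PyTorch's global RNG the moment they are created, so running the model cell before or after other cells that consume random numbers (relative to the torch.manual_seed call) yields different initial weights and therefore a different training curve. A minimal sketch of the effect (hypothetical layer sizes):

    import torch
    import torch.nn as nn

    torch.manual_seed(1)
    a = nn.Linear(2, 4)           # created right after seeding

    torch.manual_seed(1)
    _ = torch.rand(10)            # another cell consumes random numbers first
    b = nn.Linear(2, 4)           # same seed, later position in the RNG stream

    print(torch.allclose(a.weight, b.weight))  # False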
  • Visualizing Transformer based on your notebook

    Dear Prof. Sebastian Raschka, I published a blog post with an accompanying 3D interactive website, based on your published notebook, to visualize the inner workings of the Transformer. I hope you can check it out!

    opened by jackli777 2
  • Typo in page 80 - logical_or

    The last paragraph on page 80 says "Using the following code, we will create a simple dataset that has the form of an XOR gate using the logical_or function".

    It should be logical_xor, as we can deduce from the preceding explanation and from the code immediately below the text.

    opened by pablo-sampaio 2
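    For reference, a minimal sketch of building an XOR-style dataset with np.logical_xor, in the spirit of the code below that paragraph (the sizes and threshold here are illustrative, not copied from the book):

    import numpy as np

    np.random.seed(1)
    X = np.random.randn(200, 2)
    # label is 1 where exactly one of the two features is positive (XOR)
    y = np.logical_xor(X[:, 0] > 0, X[:, 1] > 0).astype(int)
    print(X.shape, y[:10])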
  • Possible error in ch14_part2.ipynb of GitHub

    There is the code 'get_smile = lambda attr: attr[18]' in the In[6] cell. It should be 'get_smile = lambda attr: attr[31]' according to the 'list_attr_celeba' text document in the celeba folder.

    opened by Unamu7simure 2
  • Downloading CelebA dataset from book's download link.

    On page 483, one way to download the CelebA dataset is via the book's download link. In the instructions, you mentioned that we must unzip the downloaded file. But one missing step is that we also have to unzip img_align_celeba.zip; otherwise, PyTorch will throw an error complaining that the dataset is corrupt, which is caused by this line of code:

    https://github.com/pytorch/vision/blob/22400011d6a498ecf77797a56dfe13bc94c426ca/torchvision/datasets/celeba.py#L142

    So, I think it's better to mention that explicitly too.

    P.S: Thanks for this excellent book!

    opened by Mahyar24 2
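    A small sketch of the missing step (the paths are assumptions; adjust them to wherever the book's download was unpacked):

    import zipfile

    # the inner archive must be extracted as well, otherwise torchvision's
    # CelebA loader reports the dataset as corrupt
    with zipfile.ZipFile('celeba/img_align_celeba.zip') as zf:
        zf.extractall('celeba')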
  • Missing text chunk (page 530)

    Hi Sebastian,

    There is a code snippet for text preprocessing for the language model on page 530:

    text_chunks = [text_encoded[i:i+chunk_size]
                   for i in range(len(text_encoded)-chunk_size)]
    

    Probably the last text chunk is not included, and to include all text chunks we need to use the following code:

    text_chunks = [text_encoded[i:i+chunk_size]
                   for i in range(len(text_encoded)-chunk_size+1)]
    

    Then for the last i value (len(text_encoded)-chunk_size) we will have the text chunk text_encoded[len(text_encoded)-chunk_size:len(text_encoded)], which has size chunk_size and, I suppose, can be included as an additional text chunk.

    Thank you.

    opened by labdmitriy 2
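    A quick check of the off-by-one with a toy sequence standing in for the encoded text (chunk_size=3):

    text_encoded = list(range(10))   # toy stand-in for the encoded text
    chunk_size = 3

    chunks_book = [text_encoded[i:i+chunk_size]
                   for i in range(len(text_encoded) - chunk_size)]
    chunks_fixed = [text_encoded[i:i+chunk_size]
                    for i in range(len(text_encoded) - chunk_size + 1)]

    print(len(chunks_book), len(chunks_fixed))  # 7 vs. 8
    print(chunks_fixed[-1])                     # [7, 8, 9], the chunk the original drops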
  • Embedding matrix dimension (page 519)

    Hi Sebastian,

    There is the following statement on page 519: "The output will have the dimensionality batchsize × input_length × embedding_dim, where embedding_dim is the size of the embedding features (here, set to 3). The other argument provided to the embedding layer, num_embeddings, corresponds to the unique integer values that the model will receive as input (for instance, n + 2, set here to 10). Therefore, the embedding matrix in this case has the size 10×6."

    Based on these conclusions, probably there is a typo in the last sentence and the embedding matrix dimension should be 10×3?

    Thank you.

    opened by labdmitriy 2
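    A one-line confirmation of the suspected typo: with the quoted settings, nn.Embedding(num_embeddings=10, embedding_dim=3) has a 10×3 weight matrix:

    import torch.nn as nn

    emb = nn.Embedding(num_embeddings=10, embedding_dim=3)
    print(emb.weight.shape)  # torch.Size([10, 3])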
  • Label `losses_` docstring as being log loss, not mean squared error

    The docstring for LogisticRegressionGD.losses_ specifies that it is composed of the mean squared error, when I think it's composed of the log loss. Changed the docstring to reflect that.


    Thanks for the book!

    opened by paw-lu 1
  • chapter 16, page 547

    "the columns in this attention matrix should sum to 1"

    Since it is the elements of each row that sum to 1, should it instead be:

    "Each row in this attention matrix should sum to 1"?

    Probably worth noting that all the diagonal values are the maximum in their row, since there are no repeated words. If, for example, the same word appeared twice, we would have two identical values in the corresponding row.

    opened by GianniGi 0
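    A small numerical check of the point (random scores standing in for the book's attention example): with softmax applied along the last dimension, each row of the attention matrix sums to 1, while the columns generally do not:

    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    scores = torch.randn(5, 5)          # toy attention scores
    attn = F.softmax(scores, dim=-1)    # normalize over each row

    print(attn.sum(dim=1))  # each row sums to 1
    print(attn.sum(dim=0))  # column sums are generally not 1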
  • chapter 14 pag.489 transforms

    Hello, I don't know if I was the only one not getting this the first time I read it, but I didn't notice that the transform_train transforms are applied to the full dataset each time it is iterated, once per epoch. Probably because the line where the dataset is iterated looks quite different from the usual, more legible form.

    I would change that line as follows, so the code becomes more legible and familiar:

    from torch.utils.data import DataLoader
    
    celeba_train_dataset = torchvision.datasets.CelebA(image_path, 
                                                       split='train', 
                                                       target_type='attr', 
                                                       download=False, 
                                                       transform=transform_train,
                                                       target_transform=get_smile)
    
    torch.manual_seed(1)
    data_loader = DataLoader(celeba_train_dataset, batch_size=2)
    
    fig = plt.figure(figsize=(15, 6))
    
    num_epochs = 5
    for j in range(num_epochs):
        for img_batch, label_batch in data_loader: # new line
            img = img_batch[0]
            ax = fig.add_subplot(2, 5, j + 1)
            ax.set_xticks([])
            ax.set_yticks([])
            ax.set_title(f'Epoch {j}:', size=15)
            ax.imshow(img.permute(1, 2, 0))
        
            img = img_batch[1]
            ax = fig.add_subplot(2, 5, j + 6)
            ax.set_xticks([])
            ax.set_yticks([])
            ax.imshow(img.permute(1, 2, 0))
            break  # new break: only the first batch is needed per epoch

    #plt.savefig('figures/14_16.png', dpi=300)
    plt.show()
    
    opened by GianniGi 2
  • CVE-2007-4559 Patch

    Patching CVE-2007-4559

    Hi, we are security researchers from the Advanced Research Center at Trellix. We have begun a campaign to patch a widespread bug named CVE-2007-4559. CVE-2007-4559 is a 15-year-old bug in the Python tarfile package. By using extract() or extractall() on a tarfile object without sanitizing input, a maliciously crafted .tar file could perform a directory path traversal attack. We found at least one unsanitized extractall() in your codebase and are providing a patch for you via pull request. The patch essentially checks whether all tarfile members will be extracted safely and throws an exception otherwise. We encourage you to use this patch or your own solution to secure against CVE-2007-4559. Further technical information about the vulnerability can be found in this blog.

    If you have further questions you may contact us through this project's lead researcher, Kasimir Schulz.

    opened by TrellixVulnTeam 0
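    For context, a minimal sketch of the kind of check such a patch adds (a generic safeguard, not the exact pull-request code): verify that every member of the archive would land inside the target directory before calling extractall:

    import os
    import tarfile

    def safe_extract(tar_path, dest='.'):
        # refuse to extract archives whose members would escape dest
        dest_abs = os.path.abspath(dest)
        with tarfile.open(tar_path) as tar:
            for member in tar.getmembers():
                member_path = os.path.abspath(os.path.join(dest_abs, member.name))
                if os.path.commonpath([dest_abs, member_path]) != dest_abs:
                    raise RuntimeError(f'blocked path traversal in tar member: {member.name}')
            tar.extractall(dest_abs)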
Releases(v1.1)
Owner
Sebastian Raschka
Machine Learning researcher & passionate open source contributor. Author of the "Python Machine Learning" book.