Intro-to-dl - Resources for the "Introduction to Deep Learning" course.

Overview

Introduction to Deep Learning course resources

https://www.coursera.org/learn/intro-to-deep-learning

Running on Google Colab (tested for all weeks)

Google has released its own flavour of Jupyter called Colab, which has free GPUs!

Here's how you can use it:

  1. Open https://colab.research.google.com, click Sign in in the upper right corner, use your Google credentials to sign in.
  2. Click GITHUB tab, paste https://github.com/hse-aml/intro-to-dl and press Enter
  3. Choose the notebook you want to open, e.g. week2/v2/mnist_with_keras.ipynb
  4. Click File -> Save a copy in Drive... to save your progress in Google Drive
  5. Click Runtime -> Change runtime type and select GPU in Hardware accelerator box
  6. Execute the following code in the first cell to download dependencies (uncomment the line for your week):
! shred -u setup_google_colab.py
! wget https://raw.githubusercontent.com/hse-aml/intro-to-dl/master/setup_google_colab.py -O setup_google_colab.py
import setup_google_colab
# please, uncomment the week you're working on
# setup_google_colab.setup_week1()
# setup_google_colab.setup_week2()
# setup_google_colab.setup_week2_honor()
# setup_google_colab.setup_week3()
# setup_google_colab.setup_week4()
# setup_google_colab.setup_week5()
# setup_google_colab.setup_week6()
  7. If you run many notebooks on Colab, they can keep eating up memory; you can kill them with ! pkill -9 python3 and check with ! nvidia-smi that GPU memory is freed.
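
To double-check that the GPU runtime from step 5 is actually active, you can run a quick check in a notebook cell (this uses the TF 1.x API the course targets; an empty string means no GPU is attached):

import tensorflow as tf

# Prints something like '/device:GPU:0' with a GPU runtime attached,
# or "none found" if the hardware accelerator wasn't switched to GPU.
device_name = tf.test.gpu_device_name()
print("GPU device:", device_name or "none found")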

Known issues:

  • Blinking animation with IPython.display.clear_output(). It's usable, but we're still looking for a workaround; one possible mitigation is sketched below.
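
A possible mitigation (not fully verified): update a persistent display handle instead of clearing the whole cell output. display(..., display_id=True) returns a handle whose update() redraws in place, which avoids the blink:

from IPython.display import display

# Create a persistent output area once...
handle = display("starting...", display_id=True)

# ...then redraw it in place instead of calling clear_output().
for epoch in range(10):
    handle.update("epoch %d done" % epoch)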

Offline instructions

The Coursera Jupyter Environment can be slow when many learners use it heavily. Our tasks are compute-heavy, so we recommend running them on your own hardware for optimal performance.

You will need a computer with at least 4GB of RAM.

There are two options to set up the Jupyter Notebooks locally: a Docker container or Anaconda.

Docker container option (best for Mac/Linux)

Follow the instructions at https://hub.docker.com/r/zimovnov/coursera-aml-docker/ to install a Docker container with all the necessary software installed.

After that you should see a Jupyter page in your browser.

Anaconda option (best for Windows)

We highly recommend installing the Docker environment, but if that's not an option, you can try to install the necessary Python modules with Anaconda.

First, install Anaconda with Python 3.5+ from here.

Download conda_requirements.txt from here.

Open terminal on Mac/Linux or "Anaconda Prompt" in Start Menu on Windows and run:

conda config --append channels conda-forge
conda config --append channels menpo
conda install --yes --file conda_requirements.txt
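
After the install finishes, a quick sanity check that the core packages import with the expected versions (assuming tensorflow and keras are among the packages in conda_requirements.txt):

python -c "import tensorflow as tf; print('TensorFlow:', tf.__version__)"
python -c "import keras; print('Keras:', keras.__version__)"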

To start Jupyter, run jupyter notebook on Mac/Linux or launch "Jupyter Notebook" from the Start Menu on Windows.

After that you should see a Jupyter page in your browser.

Prepare resources inside Jupyter Notebooks (for local setups only)

Click New -> Terminal and execute: git clone https://github.com/hse-aml/intro-to-dl.git. On Windows you might want to install Git first. You can also download all the resources as a zip archive from the GitHub page.

Close the terminal and refresh the Jupyter page; you will see the intro-to-dl folder. Go there: all the necessary notebooks are waiting for you.

First you need to download the necessary resources: open download_resources.ipynb and run the cells for Keras and for your week.

Now you can open a notebook for the corresponding week and work there just as in the Coursera Jupyter Environment.

Using GPU for offline setup (for advanced users)

Comments
  • cannot submit

    In the first submission for week 3, I couldn't submit. Here is the error: AttributeError: module 'grading_utils' has no attribute 'model_total_params'

    opened by AhmedFrikha 4
  • week4/lfw_dataset.py

    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    <ipython-input-4-856143fffc33> in <module>()
          8 #Those attributes will be required for the final part of the assignment (applying smiles), so please keep them in mind
          9 from lfw_dataset import load_lfw_dataset
    ---> 10 data,attrs = load_lfw_dataset(dimx=36,dimy=36)
         11 
         12 #preprocess faces
    
    ~/GitHub/intro-to-dl/week4/lfw_dataset.py in load_lfw_dataset(use_raw, dx, dy, dimx, dimy)
         52 
         53     # preserve photo_ids order!
    ---> 54     all_attrs = photo_ids.merge(df_attrs, on=('person', 'imagenum')).drop(["person", "imagenum"], axis=1)
         55 
         56     return all_photos, all_attrs
    
    ~/anaconda3/lib/python3.6/site-packages/pandas/core/frame.py in merge(self, right, how, on, left_on, right_on, left_index, right_index, sort, suffixes, copy, indicator, validate)
       6377                      right_on=right_on, left_index=left_index,
       6378                      right_index=right_index, sort=sort, suffixes=suffixes,
    -> 6379                      copy=copy, indicator=indicator, validate=validate)
       6380 
       6381     def round(self, decimals=0, *args, **kwargs):
    
    ~/anaconda3/lib/python3.6/site-packages/pandas/core/reshape/merge.py in merge(left, right, how, on, left_on, right_on, left_index, right_index, sort, suffixes, copy, indicator, validate)
         58                          right_index=right_index, sort=sort, suffixes=suffixes,
         59                          copy=copy, indicator=indicator,
    ---> 60                          validate=validate)
         61     return op.get_result()
         62 
    
    ~/anaconda3/lib/python3.6/site-packages/pandas/core/reshape/merge.py in __init__(self, left, right, how, on, left_on, right_on, axis, left_index, right_index, sort, suffixes, copy, indicator, validate)
        552         # validate the merge keys dtypes. We may need to coerce
        553         # to avoid incompat dtypes
    --> 554         self._maybe_coerce_merge_keys()
        555 
        556         # If argument passed to validate,
    
    ~/anaconda3/lib/python3.6/site-packages/pandas/core/reshape/merge.py in _maybe_coerce_merge_keys(self)
        976             # incompatible dtypes GH 9780, GH 15800
        977             elif is_numeric_dtype(lk) and not is_numeric_dtype(rk):
    --> 978                 raise ValueError(msg)
        979             elif not is_numeric_dtype(lk) and is_numeric_dtype(rk):
        980                 raise ValueError(msg)
    
    ValueError: You are trying to merge on int64 and object columns. If you wish to proceed you should use pd.concat
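
    The error says one merge key is int64 on one side and object on the other. A common fix for this kind of pandas merge failure (an assumption, not verified against this dataset) is to cast the key to a single dtype on both sides before the merge in lfw_dataset.py:

    # hypothetical patch before the merge in lfw_dataset.py
    photo_ids["imagenum"] = photo_ids["imagenum"].astype(int)
    df_attrs["imagenum"] = df_attrs["imagenum"].astype(int)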
    
    opened by zuenko 4
  • explanation of "download_utils.py"

    def link_all_keras_resources():
        link_all_files_from_dir("../readonly/keras/datasets/", os.path.expanduser("~/.keras/datasets"))
        link_all_files_from_dir("../readonly/keras/models/", os.path.expanduser("~/.keras/models"))
    

    Which data files belong to the datasets and models directories (with names)?

    def link_week_6_resources():
        link_all_files_from_dir("../readonly/week6/", ".")
    

    Which data files belong to the week6 directory (with names)?

    Please explain these two functions. I want to run the week 6 image captioning project in my local Jupyter notebook.

    Please help me. Thanks!
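
    For reference, here is a minimal sketch of what link_all_files_from_dir presumably does; we haven't checked the actual implementation in download_utils.py, so treat the body as an assumption. It links every file from the shared read-only directory into the target directory, so the notebooks see the course data without copying it:

    import os

    def link_all_files_from_dir(src_dir, dst_dir):
        # Hypothetical reconstruction: symlink each file from the shared
        # read-only directory into dst_dir so notebooks can find it locally.
        os.makedirs(dst_dir, exist_ok=True)
        for name in os.listdir(src_dir):
            src = os.path.abspath(os.path.join(src_dir, name))
            dst = os.path.join(dst_dir, name)
            if not os.path.exists(dst):
                os.symlink(src, dst)

    On a local setup there is no ../readonly directory, which is presumably why download_resources.ipynb downloads the files instead of linking them.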

    opened by rezwanh001 3
  • NumpyNN (honor).ipynb not able to import util.py

    Hi,

    It seems like

    from util import eval_numerical_gradient

    is not working (week 2 honor assignment).

    It can work by manually adding the eval_numerical_gradient function, but it would be better if util.py were properly linked.

    Cheers, Nan

    opened by xia0nan 1
  • The kernel dies after epoch 2 and the callbacks don't work, both in Colab & Jupyter notebooks. Please help!!

    The kernel dies after epoch 2 and the callbacks don't work, in both Colab and Jupyter notebooks. The result is always 6 out of 9 because progress halts after that. Please help me complete the work and submit the results.

    This is an earnest request to the mentors, tutors, and instructors to please consider the students facing such issues and provide assistance.

    In my case, it's the only project left to complete the entire specialization.

    I would be extremely grateful if peer review could be made accessible to all learners, whether they have been facing the same issue for a long time or not.

    Will be eagerly awaiting a response.

    Regards,

    Saheli Basu

    opened by MehaRima 0
  • Fixed a typo on line 285.

    Original: So far our model is staggeringly inefficient. There is something wring with it. Guess, what?

    Changed to: So far, our model is staggeringly inefficient. There is something wrong with it. Guess, what?

    opened by IAmSuyogJadhav 0
  • KeyError in keras_utils.py

    I tried running this on my local computer:

    model.fit(
        x_train2, y_train2,  # prepared data
        batch_size=BATCH_SIZE,
        epochs=EPOCHS,
        callbacks=[keras.callbacks.LearningRateScheduler(lr_scheduler),
                   LrHistory(),
                   keras_utils.TqdmProgressCallback(),
                   keras_utils.ModelSaveCallback(model_filename)],
        validation_data=(x_test2, y_test2),
        shuffle=True,
        verbose=0,
        initial_epoch=last_finished_epoch or 0
    )

    But it returned this error:

    ~\Documents\kkbq\Coursera\Intro to Deep Learning\intro-to-dl\keras_utils.py in _set_prog_bar_desc(self, logs)
         27
         28 def _set_prog_bar_desc(self, logs):
    ---> 29     for k in self.params['metrics']:
         30         if k in logs:
         31             self.log_values_by_metric[k].append(logs[k])

    KeyError: 'metrics'

    Does anyone know why this happened? Thanks.
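
    A guess at a fix (untested): newer Keras versions no longer put a 'metrics' key into the callback's params dict, so keras_utils.py could fall back to the model's metrics_names when the key is missing:

    def _set_prog_bar_desc(self, logs):
        # params['metrics'] is absent in newer Keras; fall back to the
        # model's metric names (self.model is set by Keras on every callback).
        metrics = self.params.get('metrics') or self.model.metrics_names
        for k in metrics:
            if k in logs:
                self.log_values_by_metric[k].append(logs[k])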

    opened by samtjong23 0
  • Week 3 - Task 2 issue

    In one of the last cells,

    model.compile(
        loss='categorical_crossentropy',  # we train 102-way classification
        optimizer=keras.optimizers.adamax(lr=1e-2),  # we can take big lr here because we fixed first layers
        metrics=['accuracy']  # report accuracy during training
    )
    

    AttributeError: module 'keras.optimizers' has no attribute 'adamax'

    This can be fixed by changing "adamax" to "Adamax". However, after that fix, the second cell after it:

    # fine tune for 2 epochs (full passes through all training data)
    # we make 2*8 epochs, where epoch is 1/8 of our training data to see progress more often
    model.fit_generator(
        train_generator(tr_files, tr_labels), 
        steps_per_epoch=len(tr_files) // BATCH_SIZE // 8,
        epochs=2 * 8,
        validation_data=train_generator(te_files, te_labels), 
        validation_steps=len(te_files) // BATCH_SIZE // 4,
        callbacks=[keras_utils.TqdmProgressCallback(), 
                   keras_utils.ModelSaveCallback(model_filename)],
        verbose=0,
        initial_epoch=last_finished_epoch or 0
    )
    

    throws the following error:

    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    <ipython-input-183-faf1b24645ff> in <module>()
         10                keras_utils.ModelSaveCallback(model_filename)],
         11     verbose=0,
    ---> 12     initial_epoch=last_finished_epoch or 0
         13 )
    
    2 frames
    /usr/local/lib/python3.6/dist-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
         85                 warnings.warn('Update your `' + object_name +
         86                               '` call to the Keras 2 API: ' + signature, stacklevel=2)
    ---> 87             return func(*args, **kwargs)
         88         wrapper._original_function = func
         89         return wrapper
    
    /usr/local/lib/python3.6/dist-packages/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, initial_epoch)
       1723 
       1724         do_validation = bool(validation_data)
    -> 1725         self._make_train_function()
       1726         if do_validation:
       1727             self._make_test_function()
    
    /usr/local/lib/python3.6/dist-packages/keras/engine/training.py in _make_train_function(self)
        935                 self._collected_trainable_weights,
        936                 self.constraints,
    --> 937                 self.total_loss)
        938             updates = self.updates + training_updates
        939             # Gets loss and metrics. Updates weights at each call.
    
    TypeError: get_updates() takes 3 positional arguments but 4 were given
    

    keras.optimizers.Adamax() inherits the get_updates() method from keras.optimizers.Optimizer(), and that method takes only three arguments (self, loss, params), but _make_train_function is trying to pass four arguments to it.

    As I understand it, the issue here is compatibility between TF 1.x and TF 2. I'm using Colab and running the %tensorflow_version 1.x line, as well as the setup cell with the week 3 setup uncommented at the start of the notebook.

    All checkpoints up to this point have been passed successfully.
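
    One workaround (an assumption based on the Keras version mentioned elsewhere in these issues, not a verified fix) is to pin Keras in the very first Colab cell, before the setup code:

    !pip install -q keras==2.0.6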

    opened by nietoo 1
  • conda issue

    Hi there, I face a lot of problems creating the environment. I want to use my GPU as I usually do, but to run your environment I hit a lot of package conflicts. I spent 4 hours trying to get tensorflow==1.2.1 & Keras==2.0.6 working (with Theano).

    (nvidia-docker does not work on my Debian, so I would rather use a stable conda environment.) Please update the Colab setup to TensorFlow 2+.
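
    One way to sidestep such conflicts (untested) is a fresh environment with pip-pinned versions instead of conda_requirements.txt:

    conda create -y -n intro-to-dl python=3.6
    conda activate intro-to-dl
    pip install tensorflow==1.2.1 keras==2.0.6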

    opened by kakooloukia 0
  • Google colab code addition

    The original code does not work correctly in Google Colab. Please add the line !pip install -q keras==2.0.6 before these lines of code:

    ! shred -u setup_google_colab.py
    ! wget https://raw.githubusercontent.com/hse-aml/intro-to-dl/master/setup_google_colab.py -O setup_google_colab.py
    import setup_google_colab
    # please, uncomment the week you're working on
    # setup_google_colab.setup_week1()
    # setup_google_colab.setup_week2()
    # setup_google_colab.setup_week2_honor()
    # setup_google_colab.setup_week3()
    # setup_google_colab.setup_week4()
    # setup_google_colab.setup_week5()
    # setup_google_colab.setup_week6()

    opened by ansh997 0
Owner

Advanced Machine Learning specialisation by HSE