Intro-to-dl - Resources for the "Introduction to Deep Learning" course.

Overview

Introduction to Deep Learning course resources

https://www.coursera.org/learn/intro-to-deep-learning

Running on Google Colab (tested for all weeks)

Google has released its own flavour of Jupyter called Colab, which has free GPUs!

Here's how you can use it:

  1. Open https://colab.research.google.com, click Sign in in the upper right corner, use your Google credentials to sign in.
  2. Click the GITHUB tab, paste https://github.com/hse-aml/intro-to-dl and press Enter.
  3. Choose the notebook you want to open, e.g. week2/v2/mnist_with_keras.ipynb.
  4. Click File -> Save a copy in Drive... to save your progress in Google Drive.
  5. Click Runtime -> Change runtime type and select GPU in the Hardware accelerator box.
  6. Execute the following code in the first cell to download dependencies (uncomment the line for the week you're working on):
! shred -u setup_google_colab.py
! wget https://raw.githubusercontent.com/hse-aml/intro-to-dl/master/setup_google_colab.py -O setup_google_colab.py
import setup_google_colab
# please, uncomment the week you're working on
# setup_google_colab.setup_week1()
# setup_google_colab.setup_week2()
# setup_google_colab.setup_week2_honor()
# setup_google_colab.setup_week3()
# setup_google_colab.setup_week4()
# setup_google_colab.setup_week5()
# setup_google_colab.setup_week6()
  7. If you run many notebooks on Colab, they can keep eating up GPU memory. You can kill them with ! pkill -9 python3 and check with ! nvidia-smi that the memory has been freed, as in the snippet below.
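For convenience, both commands from step 7 can go in a single Colab cell:

! pkill -9 python3   # kill leftover notebook kernels that are holding GPU memory
! nvidia-smi         # confirm that the GPU memory has been freed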

Known issues:

  • Blinking animation with IPython.display.clear_output(). It's usable, but we're still looking for a workaround; see the sketch below.
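A sketch of a possible mitigation: passing wait=True defers clearing until new output is ready, which reduces (but may not fully remove) the flicker:

import time
from IPython.display import clear_output

for epoch in range(5):
    clear_output(wait=True)  # clear only when the next output arrives -> less flicker
    print("epoch", epoch)
    time.sleep(1)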

Offline instructions

Coursera Jupyter Environment can be slow if many learners use it heavily. Our tasks are compute-heavy, so we recommend running them on your own hardware for optimal performance.

You will need a computer with at least 4GB of RAM.

There are two options to set up the Jupyter Notebooks locally: a Docker container or Anaconda.

Docker container option (best for Mac/Linux)

Follow the instructions at https://hub.docker.com/r/zimovnov/coursera-aml-docker/ to install a Docker container with all the necessary software.
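The launch command typically looks something like the sketch below; the port mapping and mount point here are assumptions, so follow the linked page for the exact invocation:

docker run -it --rm -p 8080:8080 -v "$PWD":/root/coursera zimovnov/coursera-aml-docker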

After that you should see a Jupyter page in your browser.

Anaconda option (best for Windows)

We highly recommend the Docker environment, but if that's not an option, you can try installing the necessary Python modules with Anaconda.

First, install Anaconda with Python 3.5+ from here.

Download conda_requirements.txt from here.

Open terminal on Mac/Linux or "Anaconda Prompt" in Start Menu on Windows and run:

conda config --append channels conda-forge
conda config --append channels menpo
conda install --yes --file conda_requirements.txt
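To sanity-check the installation afterwards, you can print the installed framework versions (the course materials target TensorFlow 1.x and Keras 2.0.x, as the comments below suggest):

python -c "import tensorflow, keras; print(tensorflow.__version__, keras.__version__)"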

To start Jupyter, run jupyter notebook on Mac/Linux or launch "Jupyter Notebook" from the Start Menu on Windows.

After that you should see a Jupyter page in your browser.

Prepare resources inside Jupyter Notebooks (for local setups only)

Click New -> Terminal and execute git clone https://github.com/hse-aml/intro-to-dl.git. On Windows you might want to install Git first. You can also download all the resources as a zip archive from the GitHub page.

Close the terminal and refresh the Jupyter page; you will see the intro-to-dl folder. Go there — all the necessary notebooks are waiting for you.

First, download the necessary resources: open download_resources.ipynb and run the cells for Keras and for your week.

Now you can open a notebook for the corresponding week and work there just like in Coursera Jupyter Environment.

Using GPU for offline setup (for advanced users)
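As a quick first check, assuming the TensorFlow 1.x stack used by the course, you can verify that TensorFlow sees your GPU:

from tensorflow.python.client import device_lib

# a '/device:GPU:0' entry in this list means TensorFlow can use the GPU
print([d.name for d in device_lib.list_local_devices()])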

Comments
  • cannot submit

    In the first submission for week 3, I couldn't submit. Here is the error: AttributeError: module 'grading_utils' has no attribute 'model_total_params'

    opened by AhmedFrikha 4
  • week4/lfw_dataset.py

    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    <ipython-input-4-856143fffc33> in <module>()
          8 #Those attributes will be required for the final part of the assignment (applying smiles), so please keep them in mind
          9 from lfw_dataset import load_lfw_dataset
    ---> 10 data,attrs = load_lfw_dataset(dimx=36,dimy=36)
         11 
         12 #preprocess faces
    
    ~/GitHub/intro-to-dl/week4/lfw_dataset.py in load_lfw_dataset(use_raw, dx, dy, dimx, dimy)
         52 
         53     # preserve photo_ids order!
    ---> 54     all_attrs = photo_ids.merge(df_attrs, on=('person', 'imagenum')).drop(["person", "imagenum"], axis=1)
         55 
         56     return all_photos, all_attrs
    
    ~/anaconda3/lib/python3.6/site-packages/pandas/core/frame.py in merge(self, right, how, on, left_on, right_on, left_index, right_index, sort, suffixes, copy, indicator, validate)
       6377                      right_on=right_on, left_index=left_index,
       6378                      right_index=right_index, sort=sort, suffixes=suffixes,
    -> 6379                      copy=copy, indicator=indicator, validate=validate)
       6380 
       6381     def round(self, decimals=0, *args, **kwargs):
    
    ~/anaconda3/lib/python3.6/site-packages/pandas/core/reshape/merge.py in merge(left, right, how, on, left_on, right_on, left_index, right_index, sort, suffixes, copy, indicator, validate)
         58                          right_index=right_index, sort=sort, suffixes=suffixes,
         59                          copy=copy, indicator=indicator,
    ---> 60                          validate=validate)
         61     return op.get_result()
         62 
    
    ~/anaconda3/lib/python3.6/site-packages/pandas/core/reshape/merge.py in __init__(self, left, right, how, on, left_on, right_on, axis, left_index, right_index, sort, suffixes, copy, indicator, validate)
        552         # validate the merge keys dtypes. We may need to coerce
        553         # to avoid incompat dtypes
    --> 554         self._maybe_coerce_merge_keys()
        555 
        556         # If argument passed to validate,
    
    ~/anaconda3/lib/python3.6/site-packages/pandas/core/reshape/merge.py in _maybe_coerce_merge_keys(self)
        976             # incompatible dtypes GH 9780, GH 15800
        977             elif is_numeric_dtype(lk) and not is_numeric_dtype(rk):
    --> 978                 raise ValueError(msg)
        979             elif not is_numeric_dtype(lk) and is_numeric_dtype(rk):
        980                 raise ValueError(msg)
    
    ValueError: You are trying to merge on int64 and object columns. If you wish to proceed you should use pd.concat
    
    opened by zuenko 4
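    A common fix for the ValueError above, assuming the imagenum key is numeric in one frame and string in the other (the cast below is an illustrative sketch, not the repo's official fix):

    import pandas as pd

    # hypothetical minimal reproduction of the dtype mismatch on the merge key
    photo_ids = pd.DataFrame({"person": ["A"], "imagenum": [1]})                  # imagenum: int64
    df_attrs = pd.DataFrame({"person": ["A"], "imagenum": ["1"], "smiling": [1]}) # imagenum: object

    # casting the key to a common dtype before merging avoids the ValueError
    df_attrs["imagenum"] = df_attrs["imagenum"].astype("int64")
    all_attrs = photo_ids.merge(df_attrs, on=["person", "imagenum"])
    print(all_attrs)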
  • explanation of "download_utils.py"

    def link_all_keras_resources():
        link_all_files_from_dir("../readonly/keras/datasets/", os.path.expanduser("~/.keras/datasets"))
        link_all_files_from_dir("../readonly/keras/models/", os.path.expanduser("~/.keras/models"))
    

    Which data files belong to the datasets and models directories (with names)?

    def link_week_6_resources():
        link_all_files_from_dir("../readonly/week6/", ".")
    

    Which data files belong to the week6 directory (with names)?

    Please explain these two functions. I want to run the week 6 image captioning project in my local Jupyter notebook.

    Please help me. Thanks!

    opened by rezwanh001 3
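    For reference, here is a plausible sketch of what that helper does, under the assumption (based on its name) that it symlinks every file from a read-only source directory into a target directory; the datasets/models directories hold pre-downloaded Keras dataset and model files, and ../readonly/week6/ holds the week 6 assignment resources:

    import os

    def link_all_files_from_dir(src_dir, dst_dir):
        # make the target directory, then symlink each file from src_dir into it
        os.makedirs(dst_dir, exist_ok=True)
        for name in os.listdir(src_dir):
            src = os.path.abspath(os.path.join(src_dir, name))
            dst = os.path.join(dst_dir, name)
            if not os.path.exists(dst):
                os.symlink(src, dst)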
  • NumpyNN (honor).ipynb not able to import util.py

    Hi,

    It seems like

    from util import eval_numerical_gradient

    is not working (week 2 honor assignment).

    It can work by manually adding the eval_numerical_gradient function, but it would be better if the file were linked.

    Cheers, Nan

    opened by xia0nan 1
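    A likely workaround is making the directory that contains util.py importable before the import; the relative path below is an assumption, so adjust it to wherever util.py actually lives:

    import os
    import sys

    sys.path.append(os.path.abspath(".."))  # assumed location of util.py; adjust as needed
    from util import eval_numerical_gradient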
  • The kernel dies after epoch 2 and the callbacks don't work, both in Colab & Jupyter notebooks. Please help!!

    The kernel dies after epoch 2 and the callbacks don't work, both in Colab & Jupyter notebooks. The result is always 6 out of 9 because progress halts after that. Please help me complete the work and submit the results.

    It's an earnest request to the mentors, tutors, and instructors to please consider the students facing such issues and provide assistance.

    In my case, it's the only project left to complete in the entire specialization.

    I would be extremely grateful if the peer review could be made accessible to all learners, whether they have been facing the same issue for a long time or otherwise.

    Will be eagerly awaiting a response.

    Regards,

    Saheli Basu

    opened by MehaRima 0
  • Fixed a typo on line 285.

    Original: So far our model is staggeringly inefficient. There is something wring with it. Guess, what?

    Changed to: So far, our model is staggeringly inefficient. There is something wrong with it. Guess, what?

    opened by IAmSuyogJadhav 0
  • KeyError in keras_utils.py

    I tried running on my local computer

    model.fit(
        x_train2, y_train2,  # prepared data
        batch_size=BATCH_SIZE,
        epochs=EPOCHS,
        callbacks=[keras.callbacks.LearningRateScheduler(lr_scheduler),
                   LrHistory(),
                   keras_utils.TqdmProgressCallback(),
                   keras_utils.ModelSaveCallback(model_filename)],
        validation_data=(x_test2, y_test2),
        shuffle=True,
        verbose=0,
        initial_epoch=last_finished_epoch or 0
    )

    But it returned me this error

    ~\Documents\kkbq\Coursera\Intro to Deep Learning\intro-to-dl\keras_utils.py in _set_prog_bar_desc(self, logs)
         27
         28     def _set_prog_bar_desc(self, logs):
    ---> 29         for k in self.params['metrics']:
         30             if k in logs:
         31                 self.log_values_by_metric[k].append(logs[k])

    KeyError: 'metrics'

    Does anyone know why this happened? Thanks.

    opened by samtjong23 0
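    A hedged explanation: newer Keras versions no longer populate self.params['metrics'] for callbacks. A defensive rewrite of the method could fall back to the metric names present in logs (a sketch, not an official fix):

    def _set_prog_bar_desc(self, logs):
        # fall back to the keys in `logs` when Keras does not provide params['metrics']
        for k in self.params.get('metrics') or list(logs.keys()):
            if k in logs:
                self.log_values_by_metric[k].append(logs[k])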
  • Week 3 - Task 2 issue

    In one of the last cells,

    model.compile(
        loss='categorical_crossentropy',  # we train 102-way classification
        optimizer=keras.optimizers.adamax(lr=1e-2),  # we can take big lr here because we fixed first layers
        metrics=['accuracy']  # report accuracy during training
    )
    

    AttributeError: module 'keras.optimizers' has no attribute 'adamax'

    This can be fixed by changing "adamax" to "Adamax". However, after that fix, the cell two cells further down:

    # fine tune for 2 epochs (full passes through all training data)
    # we make 2*8 epochs, where epoch is 1/8 of our training data to see progress more often
    model.fit_generator(
        train_generator(tr_files, tr_labels), 
        steps_per_epoch=len(tr_files) // BATCH_SIZE // 8,
        epochs=2 * 8,
        validation_data=train_generator(te_files, te_labels), 
        validation_steps=len(te_files) // BATCH_SIZE // 4,
        callbacks=[keras_utils.TqdmProgressCallback(), 
                   keras_utils.ModelSaveCallback(model_filename)],
        verbose=0,
        initial_epoch=last_finished_epoch or 0
    )
    

    throws the following error:

    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    <ipython-input-183-faf1b24645ff> in <module>()
         10                keras_utils.ModelSaveCallback(model_filename)],
         11     verbose=0,
    ---> 12     initial_epoch=last_finished_epoch or 0
         13 )
    
    2 frames
    /usr/local/lib/python3.6/dist-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
         85                 warnings.warn('Update your `' + object_name +
         86                               '` call to the Keras 2 API: ' + signature, stacklevel=2)
    ---> 87             return func(*args, **kwargs)
         88         wrapper._original_function = func
         89         return wrapper
    
    /usr/local/lib/python3.6/dist-packages/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, initial_epoch)
       1723 
       1724         do_validation = bool(validation_data)
    -> 1725         self._make_train_function()
       1726         if do_validation:
       1727             self._make_test_function()
    
    /usr/local/lib/python3.6/dist-packages/keras/engine/training.py in _make_train_function(self)
        935                 self._collected_trainable_weights,
        936                 self.constraints,
    --> 937                 self.total_loss)
        938             updates = self.updates + training_updates
        939             # Gets loss and metrics. Updates weights at each call.
    
    TypeError: get_updates() takes 3 positional arguments but 4 were given
    

    keras.optimizers.Adamax() inherits the get_updates() method from keras.optimizers.Optimizer(), and that method takes only three arguments (self, loss, params), but _make_train_function is trying to pass four arguments to it.

    As I understand it, the issue here is compatibility between tf 1.x and tf 2. I'm using colab and running the %tensorflow_version 1.x line, as well as the setup cell with week 3 setup uncommented at the start of the notebook.

    All checkpoints up to this point have been passed successfully.

    opened by nietoo 1
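    One workaround, consistent with the Colab code-addition comment below, is pinning Keras to the version the course targets before running the notebook, so the notebooks hit the legacy optimizer API they were written against:

    !pip install -q keras==2.0.6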
  • conda issue

    Hi there, I'm facing a lot of problems creating the environment. I want to use my GPU as I usually do, but to run your environment I hit a lot of package conflicts. I spent 4 hours trying to get tensorflow==1.2.1 and Keras==2.0.6 working (with Theano).

    (nvidia-docker does not work on my Debian, so I would rather use a stable conda environment.) Please update the Colab setup to TensorFlow 2+.

    opened by kakooloukia 0
  • Google colab code addition

    The original code does not work in Google Colab. Please add the following line:

    !pip install -q keras==2.0.6

    to these lines of code:

    ! shred -u setup_google_colab.py
    ! wget https://raw.githubusercontent.com/hse-aml/intro-to-dl/master/setup_google_colab.py -O setup_google_colab.py
    import setup_google_colab
    # please, uncomment the week you're working on
    # setup_google_colab.setup_week1()
    # setup_google_colab.setup_week2()
    # setup_google_colab.setup_week2_honor()
    # setup_google_colab.setup_week3()
    # setup_google_colab.setup_week4()
    # setup_google_colab.setup_week5()
    # setup_google_colab.setup_week6()

    opened by ansh997 0
Owner
Advanced Machine Learning specialisation by HSE