Intro-to-dl - Resources for "Introduction to Deep Learning" course.

Overview

Introduction to Deep Learning course resources

https://www.coursera.org/learn/intro-to-deep-learning

Running on Google Colab (tested for all weeks)

Google has released its own flavour of Jupyter called Colab, which has free GPUs!

Here's how you can use it:

  1. Open https://colab.research.google.com, click Sign in in the upper right corner, use your Google credentials to sign in.
  2. Click GITHUB tab, paste https://github.com/hse-aml/intro-to-dl and press Enter
  3. Choose the notebook you want to open, e.g. week2/v2/mnist_with_keras.ipynb
  4. Click File -> Save a copy in Drive... to save your progress in Google Drive
  5. Click Runtime -> Change runtime type and select GPU in Hardware accelerator box
  6. Execute the following code in the first cell to download the dependencies (uncomment the line for the week you are working on):
! shred -u setup_google_colab.py
! wget https://raw.githubusercontent.com/hse-aml/intro-to-dl/master/setup_google_colab.py -O setup_google_colab.py
import setup_google_colab
# please, uncomment the week you're working on
# setup_google_colab.setup_week1()
# setup_google_colab.setup_week2()
# setup_google_colab.setup_week2_honor()
# setup_google_colab.setup_week3()
# setup_google_colab.setup_week4()
# setup_google_colab.setup_week5()
# setup_google_colab.setup_week6()
  7. If you run many notebooks on Colab, they can continue to eat up memory. You can kill them with ! pkill -9 python3 and check with ! nvidia-smi that the GPU memory is freed.
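
To double-check that the GPU runtime from step 5 is actually active, you can run a cell like the following (a minimal sketch; it assumes the TensorFlow 1.x environment the course notebooks use, but the same call also exists in TensorFlow 2.x):

import tensorflow as tf
# tf.test.gpu_device_name() returns an empty string when no GPU is visible
device_name = tf.test.gpu_device_name()
print(device_name or "No GPU found - check Runtime -> Change runtime type")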

Known issues:

  • Blinking animation with IPython.display.clear_output(). It's usable, but we are still looking for a workaround (a partial mitigation is sketched below).
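
    One option (a sketch, not an official fix) is to pass wait=True, which postpones clearing until the next output is ready:

    from IPython.display import clear_output
    # wait=True delays the actual clearing until new output arrives,
    # which typically reduces (but may not fully eliminate) the blinking
    clear_output(wait=True)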

Offline instructions

The Coursera Jupyter Environment can be slow if many learners use it heavily. Our tasks are compute-heavy, and we recommend running them on your own hardware for optimal performance.

You will need a computer with at least 4GB of RAM.

There are two options to set up the Jupyter Notebooks locally: a Docker container or Anaconda.

Docker container option (best for Mac/Linux)

Follow the instructions on https://hub.docker.com/r/zimovnov/coursera-aml-docker/ to install a Docker container with all the necessary software.

After that you should see a Jupyter page in your browser.

Anaconda option (best for Windows)

We highly recommend installing the Docker environment, but if that's not an option, you can try to install the necessary Python modules with Anaconda.

First, install Anaconda with Python 3.5+ from here.

Download conda_requirements.txt from here.

Open terminal on Mac/Linux or "Anaconda Prompt" in Start Menu on Windows and run:

conda config --append channels conda-forge
conda config --append channels menpo
conda install --yes --file conda_requirements.txt
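
After the install finishes, a quick sanity check in Python confirms that the key packages are importable (a sketch; the exact versions depend on what conda_requirements.txt pins, and it assumes tensorflow and keras are among them):

import sys
import numpy, tensorflow, keras
# print the interpreter and package versions the environment actually picked up
print(sys.version)
print("numpy", numpy.__version__)
print("tensorflow", tensorflow.__version__)
print("keras", keras.__version__)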

To start Jupyter Notebooks, run jupyter notebook on Mac/Linux or launch "Jupyter Notebook" from the Start Menu on Windows.

After that you should see a Jupyter page in your browser.

Prepare resources inside Jupyter Notebooks (for local setups only)

Click New -> Terminal and execute: git clone https://github.com/hse-aml/intro-to-dl.git. On Windows you might want to install Git first. You can also download all the resources as a zip archive from the GitHub page.

Close the terminal and refresh the Jupyter page. You will see the intro-to-dl folder; go there and you will find all the necessary notebooks.

First you need to download the necessary resources: open download_resources.ipynb and run the cells for Keras and for your week.

Now you can open a notebook for the corresponding week and work there just like in Coursera Jupyter Environment.

Using GPU for offline setup (for advanced users)
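
If you have installed a GPU-enabled TensorFlow locally, a quick way to confirm that TensorFlow actually sees the GPU is to list the local devices (a sketch for the TensorFlow 1.x API used by the course; the 2.x API differs):

from tensorflow.python.client import device_lib
# prints every device TensorFlow can use; a GPU entry confirms the GPU setup works
for device in device_lib.list_local_devices():
    print(device.device_type, device.name)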

Comments
  • cannot submit

    In the first submission for week 3, I couldn't submit. Here is the error: AttributeError: module 'grading_utils' has no attribute 'model_total_params'

    opened by AhmedFrikha 4
  • week4/lfw_dataset.py

    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    <ipython-input-4-856143fffc33> in <module>()
          8 #Those attributes will be required for the final part of the assignment (applying smiles), so please keep them in mind
          9 from lfw_dataset import load_lfw_dataset
    ---> 10 data,attrs = load_lfw_dataset(dimx=36,dimy=36)
         11 
         12 #preprocess faces
    
    ~/GitHub/intro-to-dl/week4/lfw_dataset.py in load_lfw_dataset(use_raw, dx, dy, dimx, dimy)
         52 
         53     # preserve photo_ids order!
    ---> 54     all_attrs = photo_ids.merge(df_attrs, on=('person', 'imagenum')).drop(["person", "imagenum"], axis=1)
         55 
         56     return all_photos, all_attrs
    
    ~/anaconda3/lib/python3.6/site-packages/pandas/core/frame.py in merge(self, right, how, on, left_on, right_on, left_index, right_index, sort, suffixes, copy, indicator, validate)
       6377                      right_on=right_on, left_index=left_index,
       6378                      right_index=right_index, sort=sort, suffixes=suffixes,
    -> 6379                      copy=copy, indicator=indicator, validate=validate)
       6380 
       6381     def round(self, decimals=0, *args, **kwargs):
    
    ~/anaconda3/lib/python3.6/site-packages/pandas/core/reshape/merge.py in merge(left, right, how, on, left_on, right_on, left_index, right_index, sort, suffixes, copy, indicator, validate)
         58                          right_index=right_index, sort=sort, suffixes=suffixes,
         59                          copy=copy, indicator=indicator,
    ---> 60                          validate=validate)
         61     return op.get_result()
         62 
    
    ~/anaconda3/lib/python3.6/site-packages/pandas/core/reshape/merge.py in __init__(self, left, right, how, on, left_on, right_on, axis, left_index, right_index, sort, suffixes, copy, indicator, validate)
        552         # validate the merge keys dtypes. We may need to coerce
        553         # to avoid incompat dtypes
    --> 554         self._maybe_coerce_merge_keys()
        555 
        556         # If argument passed to validate,
    
    ~/anaconda3/lib/python3.6/site-packages/pandas/core/reshape/merge.py in _maybe_coerce_merge_keys(self)
        976             # incompatible dtypes GH 9780, GH 15800
        977             elif is_numeric_dtype(lk) and not is_numeric_dtype(rk):
    --> 978                 raise ValueError(msg)
        979             elif not is_numeric_dtype(lk) and is_numeric_dtype(rk):
        980                 raise ValueError(msg)
    
    ValueError: You are trying to merge on int64 and object columns. If you wish to proceed you should use pd.concat
    
    opened by zuenko 4
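
    The ValueError above means the merge keys have mismatched dtypes: 'imagenum' was parsed as int64 in one frame and as object (strings) in the other. One possible local workaround (a sketch, not an official patch to lfw_dataset.py) is to cast the key column to a common dtype before the merge:

    # hypothetical fix inside load_lfw_dataset: force both 'imagenum' columns
    # to the same dtype (int here; casting both to str would also work)
    df_attrs["imagenum"] = df_attrs["imagenum"].astype(int)
    photo_ids["imagenum"] = photo_ids["imagenum"].astype(int)
    all_attrs = photo_ids.merge(df_attrs, on=("person", "imagenum")).drop(["person", "imagenum"], axis=1)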
  • explanation of "download_utils.py"

    def link_all_keras_resources():
        link_all_files_from_dir("../readonly/keras/datasets/", os.path.expanduser("~/.keras/datasets"))
        link_all_files_from_dir("../readonly/keras/models/", os.path.expanduser("~/.keras/models"))
    

    Which data files belong to the datasets and models directories (with their names)?

    def link_week_6_resources():
        link_all_files_from_dir("../readonly/week6/", ".")
    

    Which data files belong to the week6 directory (with their names)?

    Please explain these two functions. I want to run the week 6 image captioning project in my local Jupyter notebook.

    Please help me. Thanks.

    opened by rezwanh001 3
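
    For context, ~/.keras/datasets and ~/.keras/models are the directories where Keras caches downloaded datasets and pretrained model weights, so linking the read-only copies there avoids re-downloading them. link_all_files_from_dir itself is a small helper in download_utils.py; a plausible equivalent (an illustrative sketch, not the repository's exact code) simply makes every file from the source directory available in the target directory:

    import os
    import shutil

    def link_all_files_from_dir(src_dir, dst_dir):
        # make every file in src_dir appear in dst_dir (symlink if possible, copy otherwise)
        os.makedirs(dst_dir, exist_ok=True)
        for name in os.listdir(src_dir):
            src = os.path.join(src_dir, name)
            dst = os.path.join(dst_dir, name)
            if not os.path.exists(dst):
                try:
                    os.symlink(os.path.abspath(src), dst)
                except OSError:
                    shutil.copy(src, dst)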
  • NumpyNN (honor).ipynb not able to import util.py

    Hi,

    It seems like

    from util import eval_numerical_gradient

    is not working (week 2 honor assignment).

    It can work by manually adding the eval_numerical_gradient function, but it would be better if it were linked.

    Cheers, Nan

    opened by xia0nan 1
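
    As a stop-gap until util.py is linked properly, a standard central-difference implementation can stand in for eval_numerical_gradient (a sketch following the common cs231n-style signature; the course's own util.py may differ in details):

    import numpy as np

    def eval_numerical_gradient(f, x, verbose=False, h=1e-5):
        # estimate the gradient of f at x (a float numpy array) with central differences
        grad = np.zeros_like(x)
        it = np.nditer(x, flags=["multi_index"], op_flags=["readwrite"])
        while not it.finished:
            ix = it.multi_index
            old_value = x[ix]
            x[ix] = old_value + h
            fxph = f(x)            # f(x + h)
            x[ix] = old_value - h
            fxmh = f(x)            # f(x - h)
            x[ix] = old_value      # restore the original value
            grad[ix] = (fxph - fxmh) / (2 * h)
            if verbose:
                print(ix, grad[ix])
            it.iternext()
        return grad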
  • The kernel dies after epoch 2 and the callbacks don't work, both in Colab & Jupyter notebooks. Please help!!

    The kernel dies after epoch 2 and the callbacks don't work, both in Colab & Jupyter notebooks. The result is always 6 out of 9 because progress halts after that. Please help me complete the work and submit the results.

    It's an earnest request to the mentors, tutors, and instructors to please consider the students facing such issues and provide assistance.

    In my case, it's the only project left to complete in the entire specialization.

    I will be extremely grateful if the peer review can be made accessible to all learners, whether they have been facing this issue for a long time or not.

    Will be eagerly awaiting a response.

    Regards,

    Saheli Basu

    opened by MehaRima 0
  • Fixed a typo on line 285.

    Original: So far our model is staggeringly inefficient. There is something wring with it. Guess, what?

    Changed to: So far, our model is staggeringly inefficient. There is something wrong with it. Guess, what?

    opened by IAmSuyogJadhav 0
  • KeyError in keras_utils.py

    I tried running on my local computer

    model.fit(
        x_train2, y_train2,  # prepared data
        batch_size=BATCH_SIZE,
        epochs=EPOCHS,
        callbacks=[keras.callbacks.LearningRateScheduler(lr_scheduler),
                   LrHistory(),
                   keras_utils.TqdmProgressCallback(),
                   keras_utils.ModelSaveCallback(model_filename)],
        validation_data=(x_test2, y_test2),
        shuffle=True,
        verbose=0,
        initial_epoch=last_finished_epoch or 0
    )

    But it returned this error:

    ~\Documents\kkbq\Coursera\Intro to Deep Learning\intro-to-dl\keras_utils.py in _set_prog_bar_desc(self, logs)
         27 
         28     def _set_prog_bar_desc(self, logs):
    ---> 29         for k in self.params['metrics']:
         30             if k in logs:
         31                 self.log_values_by_metric[k].append(logs[k])

    KeyError: 'metrics'

    Does anyone know why this happened? Thanks.

    opened by samtjong23 0
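
    Newer Keras versions no longer put a 'metrics' entry into the callback's self.params, which is why the lookup fails. Pinning keras==2.0.6 (the version mentioned in the other comments here) avoids the problem; alternatively, the method in keras_utils.py could be patched along these lines (a sketch, not an upstream fix):

    def _set_prog_bar_desc(self, logs):
        # fall back to the metric names that actually appear in logs
        # when 'metrics' is missing from self.params (newer Keras)
        for k in self.params.get('metrics', list(logs.keys())):
            if k in logs:
                self.log_values_by_metric[k].append(logs[k])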
  • Week 3 - Task 2 issue

    In one of the last cells,

    model.compile(
        loss='categorical_crossentropy',  # we train 102-way classification
        optimizer=keras.optimizers.adamax(lr=1e-2),  # we can take big lr here because we fixed first layers
        metrics=['accuracy']  # report accuracy during training
    )
    

    AttributeError: module 'keras.optimizers' has no attribute 'adamax'

    This can be fixed by changing "adamax" to "Adamax". However, after that, the cell two cells later:

    # fine tune for 2 epochs (full passes through all training data)
    # we make 2*8 epochs, where epoch is 1/8 of our training data to see progress more often
    model.fit_generator(
        train_generator(tr_files, tr_labels), 
        steps_per_epoch=len(tr_files) // BATCH_SIZE // 8,
        epochs=2 * 8,
        validation_data=train_generator(te_files, te_labels), 
        validation_steps=len(te_files) // BATCH_SIZE // 4,
        callbacks=[keras_utils.TqdmProgressCallback(), 
                   keras_utils.ModelSaveCallback(model_filename)],
        verbose=0,
        initial_epoch=last_finished_epoch or 0
    )
    

    throws the following error:

    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    <ipython-input-183-faf1b24645ff> in <module>()
         10                keras_utils.ModelSaveCallback(model_filename)],
         11     verbose=0,
    ---> 12     initial_epoch=last_finished_epoch or 0
         13 )
    
    2 frames
    /usr/local/lib/python3.6/dist-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
         85                 warnings.warn('Update your `' + object_name +
         86                               '` call to the Keras 2 API: ' + signature, stacklevel=2)
    ---> 87             return func(*args, **kwargs)
         88         wrapper._original_function = func
         89         return wrapper
    
    /usr/local/lib/python3.6/dist-packages/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, initial_epoch)
       1723 
       1724         do_validation = bool(validation_data)
    -> 1725         self._make_train_function()
       1726         if do_validation:
       1727             self._make_test_function()
    
    /usr/local/lib/python3.6/dist-packages/keras/engine/training.py in _make_train_function(self)
        935                 self._collected_trainable_weights,
        936                 self.constraints,
    --> 937                 self.total_loss)
        938             updates = self.updates + training_updates
        939             # Gets loss and metrics. Updates weights at each call.
    
    TypeError: get_updates() takes 3 positional arguments but 4 were given
    

    keras.optimizers.Adamax() inherits the get_updates() method from keras.optimizers.Optimizer(), and that method takes only three arguments (self, loss, params), but _make_train_function is trying to pass four arguments to it.

    As I understand it, the issue here is compatibility between tf 1.x and tf 2. I'm using colab and running the %tensorflow_version 1.x line, as well as the setup cell with week 3 setup uncommented at the start of the notebook.

    All checkpoints up to this point have been passed successfully.

    opened by nietoo 1
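
    Since the get_updates() signature changed between Keras releases, the most reliable workaround is to pin the Keras version the notebooks target before importing it (a sketch; keras==2.0.6 is the version referenced in the other comments here):

    ! pip install -q keras==2.0.6          # hypothetical pin; run before any keras import
    import keras
    optimizer = keras.optimizers.Adamax(lr=1e-2)   # with 2.0.6 the lowercase alias 'adamax' also works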
  • conda issue

    Hi there, I face a lot of problems creating the environment. I want to use my GPU as I usually do, but to run your environment I hit a lot of package conflicts. I spent 4 hours trying to make tensorflow==1.2.1 & Keras==2.0.6 work (with Theano).

    (nvidia-docker does not work on my Debian, so I would use a stable conda environment.) Please update the Colab setup to TensorFlow 2+.

    opened by kakooloukia 0
  • Google colab code addition

    The original code does not work in Google Colab. Please add the following line:

    !pip install q keras==2.0.6

    to these lines of code:

    ! shred -u setup_google_colab.py
    ! wget https://raw.githubusercontent.com/hse-aml/intro-to-dl/master/setup_google_colab.py -O setup_google_colab.py
    import setup_google_colab
    # please, uncomment the week you're working on
    # setup_google_colab.setup_week1()
    # setup_google_colab.setup_week2()
    # setup_google_colab.setup_week2_honor()
    # setup_google_colab.setup_week3()
    # setup_google_colab.setup_week4()
    # setup_google_colab.setup_week5()
    # setup_google_colab.setup_week6()

    opened by ansh997 0
Owner
Advanced Machine Learning specialisation by HSE