Hossein Askari

I am a PhD student at École Polytechnique de Montréal, working under the supervision of Prof. Jean-Pierre David. Previously, I was an ASIC verification engineer at Microsemi (now Microchip), where I worked on the next generation of OTN (Optical Transport Network) processors. Before that, I was a software engineer at TRU Simulation + Training.

My major areas of interest are:
  • Making Deep Neural Networks more computationally efficient
  • Deep Learning Acceleration

Email  /  Google Scholar




Projects

ICLR 2018 Reproducibility Challenge
Training and Inference with Integers in Deep Neural Networks
arxiv (original paper)


We reproduce Wu et al.'s ICLR 2018 submission Training and Inference with Integers in Deep Neural Networks. The proposed "WAGE" model reduces floating-point precision with only a slight reduction in accuracy. The paper introduces two novel approaches that allow the use of integer values by quantizing weights, activations, gradients, and errors in both training and inference. We reproduce the WAGE model trained on the CIFAR-10 dataset. The methodology demonstrated in this paper has applications for Application-Specific Integrated Circuits (ASICs).
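The core operation is mapping floating-point tensors onto a small fixed-point grid. A minimal sketch of such a linear k-bit quantizer is below; the function name and exact clipping behavior are simplifications of ours, not the paper's precise formulation.

```python
import torch

def quantize(x, bits):
    # Linear quantization sketch: snap values to a k-bit fixed-point grid
    # with step 2^(1-k), in the spirit of WAGE (simplified, our naming).
    scale = 2.0 ** (bits - 1)
    return torch.round(x * scale) / scale

# e.g. quantize(torch.tensor([0.26]), 4) snaps 0.26 to the nearest 1/8 step
```

In the full WAGE scheme, separate bit widths are chosen for weights, activations, gradients, and errors, which is what makes integer-only training feasible.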



Celebrity Face Generation using GAN
Deep Learning Course Assignment
report


We implemented a GAN with a latent dimension of 100 to generate novel faces from the CelebFaces Attributes (CelebA) dataset. The subset we downloaded contains nearly 10,000 images. A DCGAN architecture was used.
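A DCGAN generator upsamples a latent vector into an image through transposed convolutions. The sketch below illustrates the pattern with small, illustrative layer sizes; it is not the exact architecture from the assignment.

```python
import torch
import torch.nn as nn

# Minimal DCGAN-style generator sketch (layer sizes are illustrative,
# not the exact network used in our assignment).
class Generator(nn.Module):
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 128, 4, 1, 0),  # 1x1 -> 4x4
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1),          # 4x4 -> 8x8
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1),            # 8x8 -> 16x16
            nn.Tanh(),                                     # pixels in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

z = torch.randn(8, 100, 1, 1)   # a batch of 8 latent vectors
imgs = Generator()(z)           # 8 generated 3-channel 16x16 images
```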



Neural Turing Machine implementation
Deep Learning Course Assignment
report


We implemented the copy task for the Neural Turing Machine proposed by DeepMind. The original paper can be found here. We implemented NTM variants with MLP and LSTM controllers (MLP_NTM and LSTM_NTM) and compared the results against a plain LSTM baseline.
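In the copy task, the model is fed a random binary sequence followed by a delimiter and must reproduce the sequence from memory. A sketch of a batch generator for this task (function and argument names are ours):

```python
import torch

def copy_task_batch(batch, seq_len, width):
    # Copy-task data sketch: random binary sequences with an extra
    # delimiter channel; the target is the input sequence itself.
    seq = torch.randint(0, 2, (batch, seq_len, width)).float()
    inp = torch.zeros(batch, seq_len + 1, width + 1)
    inp[:, :seq_len, :width] = seq
    inp[:, seq_len, width] = 1.0   # end-of-sequence delimiter bit
    return inp, seq
```

The delimiter channel signals the model to stop reading and start writing its reconstruction, which is what stresses the NTM's external memory versus an LSTM's hidden state.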



Cats VS. Dogs (Kaggle challenge)
Deep Learning Course Assignment
report


In this assignment, we implemented a CNN model to classify whether an image contains a dog or a cat. We also had to present some misclassified images and propose methods to prevent such errors. Finally, we implemented a focus-detection algorithm to show which features in the original image are most important for the network to classify it correctly.
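One simple form of focus detection is a gradient-based saliency map: the magnitude of the class score's gradient with respect to each input pixel. A hedged sketch (the helper name and the max-over-channels reduction are our choices, not necessarily the assignment's exact method):

```python
import torch
import torch.nn as nn

def saliency(model, image, target_class):
    # Gradient-based saliency sketch: backpropagate the target class
    # score to the input and take the per-pixel gradient magnitude.
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]
    score.backward()
    # Max over channels of |grad| gives a 2-D map per image.
    return image.grad.abs().max(dim=1)[0]

# Usage with a toy classifier (illustrative only):
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 2))
smap = saliency(model, torch.randn(1, 3, 8, 8), target_class=0)
```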



Implementation of BinaryConnect
paper / code (theano) / code (pytorch)


BinaryConnect introduced a quantization method that constrains the parameters of a neural network to the two values -1 and +1. This dramatically reduces the memory footprint needed to store the parameters. For most models, this can be achieved with a negligible drop in accuracy.
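The deterministic variant of BinaryConnect binarizes a weight by its sign. A minimal sketch (mapping the rare exact zero to +1 is our convention):

```python
import torch

def binarize(w):
    # Deterministic BinaryConnect binarization sketch: full-precision
    # weights are kept for the update; sign(w) in {-1, +1} is used in
    # the forward and backward passes.
    wb = torch.sign(w)
    wb[wb == 0] = 1.0   # map the rare exact zero to +1 (our convention)
    return wb
```

In training, gradients flow through the binarization as if it were the identity (a straight-through estimator), and updates are applied to the stored full-precision weights.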