Hossein Askari

I am a PhD student at École Polytechnique de Montréal, working under the supervision of Prof. Jean-Pierre David. Previously, I was an ASIC verification engineer at Microsemi (now Microchip), where I worked on the next generation of OTN (Optical Transport Network) processors. Before that, I was a software engineer at TRU Simulation + Training.

My major areas of interest are:
  • Making Deep Neural Networks more computationally efficient
  • Deep Learning Acceleration

Email  /  Google Scholar




Papers


Quantization of UNET Models for Medical Image Segmentation
Submitted to the MIDL 2019 conference

In this work, we present a quantization method for the U-Net architecture, a popular model in medical image segmentation. We apply our quantization algorithm to the Spinal Cord Gray Matter Segmentation dataset and demonstrate experimentally that, even with only a few bits, we obtain almost the same accuracy as a full-precision U-Net model.
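To illustrate the general idea of low-bit quantization, here is a minimal sketch of uniform k-bit quantization of weights in [-1, 1]. The exact scheme used in the paper may differ; `quantize_uniform` is an illustrative name, not from the paper.

```python
import numpy as np

def quantize_uniform(w, bits):
    """Uniformly quantize an array to 2**bits levels spanning [-1, 1].

    A generic illustration of few-bit weight quantization; the exact
    scheme in the U-Net quantization paper may differ.
    """
    levels = 2 ** bits - 1
    w = np.clip(w, -1.0, 1.0)
    # Map [-1, 1] -> [0, levels], round to the nearest level, map back.
    q = np.round((w + 1.0) / 2.0 * levels)
    return q / levels * 2.0 - 1.0

w = np.array([-0.9, -0.2, 0.05, 0.7])
wq = quantize_uniform(w, bits=2)  # only 4 representable values remain
```

With 2 bits, every weight collapses onto one of four values (-1, -1/3, 1/3, 1), which is what makes the stored model so much smaller.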




Projects


Project Logic Brain: A Binary Neural Network Accelerator on FPGA
Rapid Prototyping Course Project
report / source code

In this project, we designed a Binary Neural Network accelerator targeting an Intel Cyclone V FPGA. With no modifications, the accelerator can accelerate MLP networks; with some modifications, it can also accelerate CNNs. We have shown that, compared to a software model running on a NIOS II processor at 100 MHz, our implementation (also at 100 MHz) runs up to 14,000 times faster.
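The core operation behind such accelerators is the XNOR-popcount dot product: when activations and weights are constrained to {-1, +1} and packed as bits, a multiply-accumulate becomes an XNOR followed by a bit count. A minimal software sketch (the hardware replaces this with LUTs and popcount logic):

```python
import numpy as np

def binary_dot(a_bits, w_bits):
    """Dot product of two {-1, +1} vectors stored as {0, 1} bits.

    XNOR + popcount: count the positions where the bits match, then
    recover the signed result as 2 * matches - n. This is the trick
    that lets a binary accelerator avoid multipliers entirely.
    """
    n = len(a_bits)
    matches = int(np.sum(a_bits == w_bits))  # XNOR, then popcount
    return 2 * matches - n

a = np.array([1, 0, 1, 1])  # encodes [+1, -1, +1, +1]
w = np.array([1, 1, 0, 1])  # encodes [+1, +1, -1, +1]
# signed dot product: (+1)(+1) + (-1)(+1) + (+1)(-1) + (+1)(+1) = 0
```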



ICLR 2018 Reproducibility Challenge
Training and Inference with Integers in Deep Neural Networks
arxiv (original paper)

We reproduce Wu et al.'s ICLR 2018 submission, Training and Inference with Integers in Deep Neural Networks. The proposed "WAGE" model reduces floating-point precision with only a slight reduction in accuracy. The paper introduces two novel approaches that allow the use of integer values by quantizing weights, activations, gradients, and errors in both training and inference. We reproduce the WAGE model trained on the CIFAR-10 dataset. The methodology demonstrated in the paper has applications for Application-Specific Integrated Circuits (ASICs).
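The quantization function at the heart of WAGE, as we understand it from the paper, maps a value onto a k-bit linear grid and clips it to the representable range. A minimal sketch:

```python
import numpy as np

def wage_quantize(x, k):
    """k-bit linear quantization in the style of the WAGE paper:

        Q(x, k) = clip(sigma * round(x / sigma), -1 + sigma, 1 - sigma),

    with sigma = 2 ** (1 - k). The same operator is applied to weights,
    activations, gradients, and errors (each with its own bit width).
    This is our reading of the paper, not the authors' exact code.
    """
    sigma = 2.0 ** (1 - k)
    return np.clip(sigma * np.round(x / sigma), -1 + sigma, 1 - sigma)

x = np.array([0.3, 0.8, -0.9, 0.1])
xq = wage_quantize(x, k=2)  # every value snaps to {-0.5, 0.0, 0.5}
```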



Celebrity Face Generation using GAN
Deep Learning Course Assignment
report

We implemented a GAN with a latent dimension of 100 to generate novel faces from the CelebFaces Attributes (CelebA) dataset. The subset we downloaded contains nearly 10,000 images. A DCGAN architecture was used.



Neural Turing Machine implementation
Deep Learning Course Assignment
report

We implemented the copy task of the Neural Turing Machine proposed by DeepMind. The original paper can be found here. We implemented MLP_NTM and LSTM_NTM variants and compared the results with a plain LSTM model.



Cats vs. Dogs (Kaggle challenge)
Deep Learning Course Assignment
report

In this assignment, we implemented a CNN model to detect whether an image contains a dog or a cat. We also had to provide some misclassified images and propose methods to prevent such misclassifications. Finally, we implemented a focus-detection algorithm to show which features in the original image are most important for the network to classify it correctly.



Implementation of BinaryConnect
paper / code (theano) / code (pytorch)

BinaryConnect introduced a quantization method that uses -1 and +1 as the only values for the parameters of a neural network. This dramatically reduces the memory footprint needed to store the parameters. For most models, this can be achieved with a negligible drop in accuracy.
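The deterministic binarization at the core of BinaryConnect is just the sign of the real-valued weights (with 0 mapped to +1). A minimal sketch; during training the full-precision weights are kept and updated, while the binarized copy is used in the forward and backward passes:

```python
import numpy as np

def binarize(w):
    """Deterministic BinaryConnect binarization: sign of the
    real-valued weights, mapping 0 to +1."""
    return np.where(w >= 0, 1.0, -1.0)

# Training keeps full-precision weights; gradients computed with the
# binarized copy update the real-valued weights (the binarization is
# bypassed in the backward pass, straight-through style).
w_real = np.array([0.3, -0.7, 0.0, -0.1])
w_bin = binarize(w_real)  # -> [ 1., -1.,  1., -1.]
```

Since each parameter needs only one bit instead of 32, the storage reduction is immediate, which is the footprint saving described above.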