Previous Research: Model Learning and Compression

Principal Investigator(s): Stella Yu

Traditional deep neural network learning seeks an optimal model in a large model space. The resulting model, however, typically carries substantial redundancy, which must then be removed by model compression. We instead seek an optimal model directly in a reduced model space, without jeopardizing optimality. We study several techniques toward this goal, including tied block convolution (TBC), a low-cost orthogonality regularizer for convolution (OCNN), and the recurrent parameter generator (RPG). In each case a smaller, leaner model is optimized from the start and can be deployed directly, with improved robustness and generalizability.
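
For concreteness, here is a minimal PyTorch sketch of the tied-block-convolution idea: a single filter bank is shared across `blocks` equal channel groups, cutting the weight count by roughly a factor of blocks² relative to a standard convolution (or blocks relative to a group convolution with the same number of groups). The class name `TiedBlockConv2d` and the chunk-and-concatenate formulation are our own illustration, not the published implementation.

```python
import torch
import torch.nn as nn

class TiedBlockConv2d(nn.Module):
    """Sketch of tied block convolution (TBC): one filter bank is
    shared across `blocks` equal channel groups (illustrative)."""

    def __init__(self, in_channels, out_channels, kernel_size,
                 blocks=2, **conv_kwargs):
        super().__init__()
        assert in_channels % blocks == 0 and out_channels % blocks == 0
        self.blocks = blocks
        # One shared convolution sized for a single channel block;
        # its weight tensor is blocks**2 times smaller than that of a
        # standard convolution with the same total channel counts.
        self.conv = nn.Conv2d(in_channels // blocks,
                              out_channels // blocks,
                              kernel_size, **conv_kwargs)

    def forward(self, x):
        # Apply the same filter bank to every channel block, then
        # concatenate the outputs along the channel axis.
        chunks = torch.chunk(x, self.blocks, dim=1)
        return torch.cat([self.conv(c) for c in chunks], dim=1)
```

As a drop-in check, `TiedBlockConv2d(64, 64, 3, blocks=4, padding=1)` produces the same output shape as `nn.Conv2d(64, 64, 3, padding=1)` while holding about one sixteenth of its weights.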
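
The orthogonality regularizer can be sketched in the same spirit: a convolution layer is orthogonal when its kernel, correlated with itself over all spatial shifts, yields an identity at the zero shift and zeros elsewhere. The stride-1 simplification, the function name, and the squared-Frobenius penalty below are our assumptions for illustration, not the released OCNN code.

```python
import torch
import torch.nn.functional as F

def conv_orthogonality_penalty(kernel):
    """Sketch of a kernel-orthogonality penalty in the spirit of
    OCNN (stride-1 case only; illustrative)."""
    out_ch, _, k, _ = kernel.shape
    # Correlate the filter bank with itself over all spatial shifts;
    # the result has shape (out_ch, out_ch, 2k - 1, 2k - 1).
    z = F.conv2d(kernel, kernel, padding=k - 1)
    # Orthogonality target: identity at the zero shift, zero elsewhere.
    target = torch.zeros_like(z)
    idx = torch.arange(out_ch)
    target[idx, idx, k - 1, k - 1] = 1.0
    return (z - target).pow(2).sum()
```

The penalty is cheap because it works on the kernel tensor directly rather than materializing the full doubly block-Toeplitz matrix of the layer; it would be added to the task loss with a small weight, e.g. `loss = task_loss + 1e-2 * conv_orthogonality_penalty(conv.weight)`, where `1e-2` is a placeholder.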
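
Finally, a recurrent parameter generator can be sketched as layers that draw their weights from one shared parameter bank through fixed random index and sign maps, so the number of learned parameters is set by the bank size rather than by network depth. The linear-layer variant, names, and sizes below are hypothetical; they illustrate the mechanism rather than reproduce the published RPG, which targets convolutional networks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RPGLinear(nn.Module):
    """Sketch of a recurrent-parameter-generator layer: weights are
    gathered from a shared parameter bank through a fixed random
    index/sign map (hypothetical linear variant, for illustration)."""

    def __init__(self, bank, in_features, out_features, seed=0):
        super().__init__()
        self.bank = bank  # shared nn.Parameter vector, reused by many layers
        g = torch.Generator().manual_seed(seed)
        n = out_features * in_features
        # Fixed (non-learned) index and sign maps, distinct per layer,
        # so tied layers do not collapse into identical weights.
        self.register_buffer("idx",
                             torch.randint(len(bank), (n,), generator=g))
        self.register_buffer("sign",
                             (torch.randint(0, 2, (n,), generator=g) * 2 - 1).float())
        self.shape = (out_features, in_features)

    def forward(self, x):
        # Generate this layer's weight on the fly from the shared bank.
        w = (self.bank[self.idx] * self.sign).view(self.shape)
        return F.linear(x, w)

# Two layers sharing one small bank of learned parameters:
# bank = nn.Parameter(torch.randn(1024) * 0.02)
# layer1 = RPGLinear(bank, 128, 128, seed=1)
# layer2 = RPGLinear(bank, 128, 128, seed=2)
```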