Scalable Bayesian Matrix Completion with Stochastic Optimization and Coordinate Updates – In this paper, we propose a novel unsupervised model based on a multi-level Gaussian process to infer the structure of data generated by a neural network. Unlike previous unsupervised methods, our model performs well even on very sparse data. Extensive experiments on several real-world datasets demonstrate that our model outperforms existing unsupervised methods in terms of the average precision of its predictions.
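The title names stochastic optimization and coordinate updates, but the abstract gives no algorithmic detail, so the following is only a minimal sketch of what those two update styles typically look like for matrix completion: per-entry SGD steps on latent factors, and a closed-form ridge (ALS-style) coordinate solve per row factor. The rank, learning rate, and regularization strength are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def sgd_epoch(obs, U, V, lr=0.01, lam=0.1):
    """One stochastic pass over observed (row, col, value) triples."""
    for i, j, r in obs:
        ui = U[i].copy()                        # snapshot so both updates use old values
        err = r - ui @ V[j]                     # residual on a single observed entry
        U[i] += lr * (err * V[j] - lam * ui)    # stochastic step on the row factor
        V[j] += lr * (err * ui - lam * V[j])    # stochastic step on the column factor

def row_coordinate_update(obs_by_row, U, V, lam=0.1):
    """Coordinate update: an exact ridge solve for each row factor in turn."""
    k = U.shape[1]
    for i, entries in obs_by_row.items():       # entries: list of (col, value) pairs
        cols = np.array([j for j, _ in entries])
        vals = np.array([r for _, r in entries])
        Vi = V[cols]                            # factors of the columns observed in row i
        U[i] = np.linalg.solve(Vi.T @ Vi + lam * np.eye(k), Vi.T @ vals)

# Toy usage on a hypothetical 100x80 matrix with 400 observed entries.
rng = np.random.default_rng(0)
U = 0.1 * rng.standard_normal((100, 8))
V = 0.1 * rng.standard_normal((80, 8))
obs = [(int(rng.integers(100)), int(rng.integers(80)), float(rng.standard_normal()))
       for _ in range(400)]
for _ in range(10):
    sgd_epoch(obs, U, V)
```

Scalability here comes from touching only observed entries: each SGD step costs O(rank), and each coordinate solve costs O(rank²) per observed entry plus one O(rank³) solve per row.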
In this paper, we present a novel framework for training Convolutional Neural Networks (CNNs) with a random forest to solve high-dimensional subspace optimization problems. The objective of this work is to combine local and global knowledge of the underlying sparse representation of the subspace learned via the CNN. We develop an effective training strategy by first learning a sparse representation from the data structure and then iteratively training a CNN to find the subspace. We show that the learned representation minimizes the loss of the underlying sparse representation on the training set and improves the performance of the whole model on the test set. Experiments on synthetic data and on the real MNIST and CIFAR-10 datasets show that our proposed framework achieves significantly more robust prediction performance than state-of-the-art CNNs.
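The abstract does not specify how the sparse representation, the CNN, and the random forest fit together, so the following is only a minimal PyTorch sketch of the alternating idea it describes: a sparse code is fixed for each input in advance (for example by dictionary learning), and the CNN is trained with an auxiliary head that regresses that code alongside the usual classification loss. The architecture, `code_dim`, and loss weight `alpha` are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SparseGuidedCNN(nn.Module):
    """CNN with a classification head plus a head that regresses a sparse code."""
    def __init__(self, in_channels=1, n_classes=10, code_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(32, n_classes)  # main prediction head
        self.code_head = nn.Linear(32, code_dim)    # predicts the precomputed sparse code

    def forward(self, x):
        h = self.features(x)
        return self.classifier(h), self.code_head(h)

def train_step(model, opt, x, y, sparse_code, alpha=0.5):
    """One joint update: classification loss plus sparse-representation loss."""
    logits, pred_code = model(x)
    loss = (nn.functional.cross_entropy(logits, y)
            + alpha * nn.functional.mse_loss(pred_code, sparse_code))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

In the abstract's iterative scheme one would presumably re-fit the sparse codes after some epochs and continue training; the role of the random forest is left unspecified, so it is omitted here.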
Super-Dense: Robust Deep Convolutional Neural Network Embedding via Self-Adaptive Regularization
Learning from Humans: Deep Face Recognition for Early Visual History and Motion Recognition
3D Scanning Network for Segmentation of Medical Images
A Deep Learning Approach for Optimizing Sparsity in Generative Adversarial Networks