Mixed-Membership Stochastic Blockmodular Learning – We propose a new approach, called Mixed-Membership Stochastic Blockmodular Learning, for modeling multiple user behaviors by learning the block structure of interaction data. The network uses a deep feature representation of user interactions, so that each interaction is characterized by the user's preferences and interests. It learns this representation through a novel hierarchical latent classifier that estimates the latent class matrix of user behavior. Because the proposed model represents individual user behaviors in a unified form, it can learn multiple user behaviors simultaneously. We validate our method on multiple publicly available datasets, including the COCO dataset and the Yahoo COCO dataset.
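The abstract does not spell out the generative process behind the latent class matrix. As a point of reference, the sketch below gives a minimal Python version of the classic mixed-membership stochastic blockmodel (Airoldi et al., 2008) that the name suggests: each user holds a mixed-membership vector over latent blocks, and an interaction is sampled from a block-interaction matrix indexed by the roles the two users adopt for that pair. The function name, hyperparameters, and the diagonal-heavy block matrix B are illustrative assumptions, not the authors' method.

```python
import numpy as np

def sample_mmsb_graph(n_users=50, n_blocks=4, alpha=0.1, seed=0):
    """Generative sketch of a mixed-membership stochastic blockmodel.

    Each user i has a mixed-membership vector pi[i] over latent blocks;
    an interaction i -> j is drawn from the block-interaction ("latent
    class") matrix B, indexed by the roles the two users adopt.
    """
    rng = np.random.default_rng(seed)
    # Per-user mixed-membership vectors (Dirichlet prior).
    pi = rng.dirichlet(alpha * np.ones(n_blocks), size=n_users)
    # Assumed block-interaction matrix: heavier on the diagonal, so
    # users acting in the same latent role interact more often.
    B = 0.05 * np.ones((n_blocks, n_blocks)) + 0.8 * np.eye(n_blocks)
    Y = np.zeros((n_users, n_users), dtype=int)
    for i in range(n_users):
        for j in range(n_users):
            if i == j:
                continue
            z_send = rng.choice(n_blocks, p=pi[i])  # role i adopts toward j
            z_recv = rng.choice(n_blocks, p=pi[j])  # role j adopts toward i
            Y[i, j] = rng.binomial(1, B[z_send, z_recv])
    return pi, B, Y

pi, B, Y = sample_mmsb_graph()
print(Y.sum(), "interactions sampled")
```

Inference in the paper's deep, hierarchical variant would replace the fixed B and Dirichlet draws with learned components; the sketch only fixes the shape of the latent-class machinery the abstract refers to.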
We propose a novel formulation for learning artificial languages based on learning to read. Our model, dubbed The Natural Language Model, combines a learned language model with a domain-specific knowledge base to learn a semantic representation of a language from a limited but well-founded set of data samples. The model is proposed as an alternative to a priori learning methods. We show that it outperforms such methods, owing to the number of sample pairs it exploits and to its robustness to the learner's ability to mimic the model's description of the language in an unsupervised manner. In addition, our model outperforms previous state-of-the-art approaches on both human and machine learning tasks.
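The abstract leaves the coupling between the language model and the knowledge base unspecified. The sketch below shows one simple way such a combination could be scored: a tiny bigram language model trained on a few samples, interpolated with a knowledge-base grounding term. All names here (BigramLM, score, lam) and the interpolation itself are hypothetical illustrations under assumed design choices, not the paper's Natural Language Model.

```python
from collections import Counter, defaultdict

class BigramLM:
    """Tiny bigram language model trained from a small sample set."""
    def __init__(self, sentences):
        self.unigrams = Counter()
        self.bigrams = defaultdict(Counter)
        for s in sentences:
            tokens = ["<s>"] + s.split() + ["</s>"]
            for a, b in zip(tokens, tokens[1:]):
                self.unigrams[a] += 1
                self.bigrams[a][b] += 1

    def prob(self, prev, word, k=0.5):
        # Add-k smoothing keeps unseen pairs from zeroing the score.
        vocab = len(self.unigrams) + 1
        return (self.bigrams[prev][word] + k) / (self.unigrams[prev] + k * vocab)

def score(sentence, lm, kb, lam=0.7):
    """Interpolate LM fluency with a knowledge-base grounding term."""
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    lm_score = 1.0
    for a, b in zip(tokens, tokens[1:]):
        lm_score *= lm.prob(a, b)
    kb_hits = sum(1 for t in sentence.split() if t in kb)
    kb_score = kb_hits / max(len(sentence.split()), 1)
    return lam * lm_score + (1 - lam) * kb_score

samples = ["the cat sat", "the dog ran", "a cat ran"]
lm = BigramLM(samples)
kb = {"cat", "dog"}
print(score("the cat ran", lm, kb))
```

The interpolation weight lam trades off fluency under the learned language model against agreement with the domain-specific knowledge base, which is the qualitative behavior the abstract describes.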
Toward More Efficient Training of Visual Inspection Cameras
Efficient Learning-Invariant Signals and Sparse Approximation Algorithms
An Online Matching System for Multilingual Answering
Evolving Learning about Humans by Using Language