Any-Shot Learning From Multimodal Observations (ALMO)
In this paper, we propose a framework, ALMO, for any-shot learning from multimodal observations. Using training data containing both objects (inputs) and class attributes (side information) from multiple modalities, ALMO embeds the high-dimensional data into a common stochastic latent space using modality-specific encoders.
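To make the embedding step concrete, the following is a minimal sketch of modality-specific encoders mapping into a shared stochastic (Gaussian) latent space. All class names, dimensions, and the use of a VAE-style reparameterization are illustrative assumptions, not details specified by the paper.

```python
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Maps one modality's high-dimensional input to the parameters
    (mean, log-variance) of a Gaussian in the shared latent space."""
    def __init__(self, input_dim: int, latent_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
        )
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)

    def forward(self, x: torch.Tensor):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

def sample_latent(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I),
    # so the latent embedding is stochastic but differentiable.
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

# Hypothetical dimensions: 2048-d object features (inputs), 312-d class
# attributes (side information), and a 64-d common stochastic latent space.
object_encoder = ModalityEncoder(input_dim=2048, latent_dim=64)
attribute_encoder = ModalityEncoder(input_dim=312, latent_dim=64)

x_obj = torch.randn(8, 2048)   # batch of object features
x_attr = torch.randn(8, 312)   # batch of class attributes

z_obj = sample_latent(*object_encoder(x_obj))
z_attr = sample_latent(*attribute_encoder(x_attr))
print(z_obj.shape, z_attr.shape)  # both land in the same 64-d latent space
```

Because each modality gets its own encoder but all encoders target the same latent dimensionality, objects and class attributes become directly comparable in the shared space, which is what enables matching unseen classes to their attribute descriptions in the any-shot setting.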