Multimodal learning

Information in the real world usually comes in different modalities. For example, images are often accompanied by tags and text captions, and text may include images to express its main idea more clearly. Different modalities are characterized by different statistical properties: images are usually represented as pixel intensities or as the outputs of feature extractors, while texts are represented as discrete word-count vectors. Because of these distinct statistical properties, it is important to discover the relationship between different modalities. Multimodal learning models the joint representation of the different modalities, and such a model is also capable of supplying a missing modality based on the observed ones. The multimodal learning model considered here combines two deep Boltzmann machines, each corresponding to one modality, with an additional hidden layer placed on top of the two machines to produce the joint representation.
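A minimal sketch of how the shared top layer can combine the two modality-specific representations, assuming binary units and a NumPy implementation; the layer sizes, the weight matrices W_image and W_text, and the bias vector c are hypothetical placeholders rather than parameters of any trained model. The activation probability of each joint unit is a logistic function of weighted contributions from the top hidden layers of the image and text pathways.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Hypothetical sizes: top hidden layer of the image-specific DBM,
    # top hidden layer of the text-specific DBM, and the shared joint layer.
    n_image, n_text, n_joint = 8, 5, 6
    rng = np.random.default_rng(0)
    W_image = 0.1 * rng.standard_normal((n_image, n_joint))  # image pathway -> joint layer
    W_text = 0.1 * rng.standard_normal((n_text, n_joint))    # text pathway -> joint layer
    c = np.zeros(n_joint)                                    # joint-layer biases

    def joint_layer(h_image, h_text):
        """Activation probabilities of the shared joint layer given the top
        hidden layers of the two modality-specific pathways."""
        return sigmoid(h_image @ W_image + h_text @ W_text + c)

    # Example: binary states of the two modality-specific top layers.
    h_image = rng.integers(0, 2, size=n_image).astype(float)
    h_text = rng.integers(0, 2, size=n_text).astype(float)
    print(joint_layer(h_image, h_text))

In the full undirected model this expression is only the conditional distribution of the top layer given the layers below it; inference and learning for the whole machine are more involved.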


Many models and algorithms have been implemented to retrieve and classify a single type of data, e.g. images or text. However, data usually comes in different modalities that carry different information. For example, it is very common to caption an image in order to convey information that is not presented by the image itself; similarly, it is sometimes more straightforward to use an image to describe information that may not be obvious from text. As a result, if different words appear in similar images, these words are likely to describe the same thing, and conversely, if some words are used with different images, these images may represent the same object. It is therefore important to devise a model that can jointly represent the information, so that the correlation structure between the different modalities is captured. Moreover, the model should also be able to recover missing modalities given observed ones, e.g. predicting a plausible image from a text description. The multimodal deep Boltzmann machine model satisfies both of these purposes.

A Boltzmann machine is a type of stochastic neural network invented by Geoffrey Hinton and Terry Sejnowski in 1985. Boltzmann machines can be seen as the stochastic, generative counterpart of Hopfield networks. They are named after the Boltzmann distribution in statistical mechanics. The units in Boltzmann machines are divided into two groups: visible units and hidden units. A general Boltzmann machine allows connections between any pair of units. However, learning with general Boltzmann machines is impractical because the computational time grows exponentially with the size of the machine. A more efficient architecture, the restricted Boltzmann machine, only allows connections between hidden units and visible units; it is described in the next section.

A restricted Boltzmann machine[1] is an undirected graphical model with stochastic visible variables and stochastic hidden variables. Each visible variable is connected to each hidden variable. The energy function of the model is defined as

E(\mathbf{v}, \mathbf{h}; \theta) = -\sum_{i=1}^{D} \sum_{j=1}^{F} W_{ij} v_i h_j - \sum_{i=1}^{D} b_i v_i - \sum_{j=1}^{F} a_j h_j

where \theta = \{\mathbf{W}, \mathbf{a}, \mathbf{b}\} are the model parameters: W_{ij} represents the symmetric interaction term between visible unit i and hidden unit j, while b_i and a_j are bias terms. The joint distribution of the system is defined as

P(\mathbf{v}; \theta) = \frac{1}{\mathcal{Z}(\theta)} \sum_{\mathbf{h}} \exp(-E(\mathbf{v}, \mathbf{h}; \theta))

where \mathcal{Z}(\theta) is a normalizing constant. The conditional distributions over the hidden variables \mathbf{h} and the visible variables \mathbf{v} can be derived as logistic functions of the model parameters:

P(\mathbf{h} \mid \mathbf{v}; \theta) = \prod_{j=1}^{F} p(h_j \mid \mathbf{v}), \quad \text{with} \quad p(h_j = 1 \mid \mathbf{v}) = g\left(\sum_{i=1}^{D} W_{ij} v_i + a_j\right)

P(\mathbf{v} \mid \mathbf{h}; \theta) = \prod_{i=1}^{D} p(v_i \mid \mathbf{h}), \quad \text{with} \quad p(v_i = 1 \mid \mathbf{h}) = g\left(\sum_{j=1}^{F} W_{ij} h_j + b_i\right)

where g(x) = \frac{1}{1 + \exp(-x)} is the logistic function.
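A minimal sketch of these definitions, assuming a tiny binary restricted Boltzmann machine implemented with NumPy; the sizes D = 4 and F = 3 and the randomly initialized parameters are illustrative only. The sketch evaluates the energy function, the factorised conditionals p(h_j = 1 | v) and p(v_i = 1 | h), and, because the machine is tiny, the normalizing constant Z(θ) by brute-force enumeration, so that P(v; θ) can be computed exactly.

    import itertools
    import numpy as np

    rng = np.random.default_rng(0)

    def g(x):
        """Logistic function g(x) = 1 / (1 + exp(-x))."""
        return 1.0 / (1.0 + np.exp(-x))

    # Tiny RBM: D binary visible units, F binary hidden units (illustrative sizes).
    D, F = 4, 3
    W = 0.1 * rng.standard_normal((D, F))  # symmetric interaction terms W_ij
    b = np.zeros(D)                        # visible biases b_i
    a = np.zeros(F)                        # hidden biases a_j

    def energy(v, h):
        """E(v, h; theta) = -sum_ij W_ij v_i h_j - sum_i b_i v_i - sum_j a_j h_j."""
        return -(v @ W @ h) - b @ v - a @ h

    def p_h_given_v(v):
        """p(h_j = 1 | v) = g(sum_i W_ij v_i + a_j); factorises over j."""
        return g(v @ W + a)

    def p_v_given_h(h):
        """p(v_i = 1 | h) = g(sum_j W_ij h_j + b_i); factorises over i."""
        return g(W @ h + b)

    # Normalizing constant Z(theta) by brute force (feasible only for tiny D, F).
    states_v = [np.array(s, dtype=float) for s in itertools.product([0, 1], repeat=D)]
    states_h = [np.array(s, dtype=float) for s in itertools.product([0, 1], repeat=F)]
    Z = sum(np.exp(-energy(v, h)) for v in states_v for h in states_h)

    def P_v(v):
        """P(v; theta) = (1/Z) * sum_h exp(-E(v, h; theta))."""
        return sum(np.exp(-energy(v, h)) for h in states_h) / Z

    v = np.array([1.0, 0.0, 1.0, 1.0])
    print(p_h_given_v(v), P_v(v))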

The derivative of the log-likelihood with respect to the model parameters decomposes as the difference between the data-dependent expectation and the model's expectation.
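For the weights, this is the difference between the expectation of v_i h_j under the data and under the model distribution. The model expectation is intractable for all but very small machines, so in practice it is commonly approximated, for example by contrastive divergence. A minimal sketch of a single-Gibbs-step estimate (CD-1, a standard approximation rather than the exact gradient), repeating the tiny-RBM setup used above:

    import numpy as np

    rng = np.random.default_rng(1)

    def g(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Same tiny-RBM setup as in the previous sketch (illustrative parameters).
    D, F = 4, 3
    W = 0.1 * rng.standard_normal((D, F))
    b = np.zeros(D)
    a = np.zeros(F)

    def cd1_gradient(v_data):
        """CD-1 estimate of the log-likelihood gradient: the data-dependent
        expectation minus a one-Gibbs-step approximation of the model's
        expectation."""
        # Data-dependent term: v_i * p(h_j = 1 | v), evaluated at the data vector.
        ph_data = g(v_data @ W + a)
        positive = np.outer(v_data, ph_data)

        # Model term: sample h, reconstruct v, re-infer h (one Gibbs step).
        h_sample = (rng.random(F) < ph_data).astype(float)
        pv_model = g(W @ h_sample + b)
        ph_model = g(pv_model @ W + a)
        negative = np.outer(pv_model, ph_model)

        dW = positive - negative  # gradient estimate for W_ij
        db = v_data - pv_model    # gradient estimate for b_i
        da = ph_data - ph_model   # gradient estimate for a_j
        return dW, db, da

    v_data = np.array([1.0, 0.0, 1.0, 1.0])
    dW, db, da = cd1_gradient(v_data)
    print(dW.shape, db.shape, da.shape)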
