3. Train the second layer as an RBM, using the transformed data
(samples or mean activations) as training examples for the
visible layer of that RBM.
4. Repeat steps 2 and 3 for the desired number of layers, each
time propagating upward either samples or mean activations.
5. Fine-tune all the parameters of the deep architecture with respect
to a proxy for the DBN log-likelihood, or with respect to a super-
vised training criterion after adding extra learning machinery,
e.g., a linear classifier, to convert the learned representation into
explicit supervised predictions. The architecture of a DBN is
illustrated in Fig. 3.5; a code sketch of this greedy layer-wise
procedure follows the list.
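The pretraining loop of steps 1-4 can be made concrete with a short sketch. The following is a minimal NumPy implementation assuming binary units and 1-step contrastive divergence (CD-1); the function names (train_rbm, pretrain_dbn), the layer sizes, learning rate, and epoch count are illustrative assumptions, not prescriptions from this chapter.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=10, lr=0.1):
    """Train one RBM with 1-step contrastive divergence (CD-1)."""
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b = np.zeros(n_visible)   # visible biases
    c = np.zeros(n_hidden)    # hidden biases
    for _ in range(epochs):
        # Positive phase: hidden activations given the data.
        h_prob = sigmoid(data @ W + c)
        h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)
        # Negative phase: one step of alternating Gibbs sampling.
        v_prob = sigmoid(h_sample @ W.T + b)
        h_prob_neg = sigmoid(v_prob @ W + c)
        # CD-1 gradient estimate: data statistics minus model statistics.
        W += lr * (data.T @ h_prob - v_prob.T @ h_prob_neg) / len(data)
        b += lr * (data - v_prob).mean(axis=0)
        c += lr * (h_prob - h_prob_neg).mean(axis=0)
    return W, b, c

def pretrain_dbn(data, layer_sizes):
    """Steps 1-4: train each RBM, then propagate mean activations upward."""
    layers, x = [], data
    for n_hidden in layer_sizes:
        W, b, c = train_rbm(x, n_hidden)
        layers.append((W, b, c))
        x = sigmoid(x @ W + c)  # mean activations become the next RBM's input
    return layers

# Usage: pretrain a 784-500-200 DBN on random binary stand-in data.
data = (rng.random((64, 784)) < 0.5).astype(float)
dbn = pretrain_dbn(data, [500, 200])

For step 5, one would typically attach a linear classifier (e.g., logistic regression) to the top-layer mean activations and fine-tune the whole stack with backpropagation; that supervised stage is omitted here for brevity.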
3.1.1 Architecture of deep belief network
A DBN is a stack of RBMs or autoencoders. The top two layers of
a DBN are joined by undirected, symmetric connections and form
an associative memory. The connections between all lower layers
are directed, with the arrows pointing toward the layer closest to
the data. These lower layers therefore form a directed acyclic
graph that converts the states of the associative memory into
observable variables. Only the lowest layer contains visible units,
which receive the input data; the input can be binary or
real-valued [14]. A sketch of this top-down generative pass is
given below.
Figure 3.5 Architecture of DBN. DBN, deep belief network.
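To make the generative reading of Fig. 3.5 concrete, the sketch below continues the previous listing (reusing its rng and sigmoid helpers and the dbn layer list): it alternates Gibbs sampling in the top-level associative memory, then performs a single directed top-down pass toward the visible units. The function name sample_from_dbn and the number of Gibbs steps are assumptions for illustration.

def sample_from_dbn(layers, n_gibbs=100):
    """Gibbs-sample in the top (undirected) RBM, then make a directed
    top-down pass toward the visible units."""
    W_top, b_top, c_top = layers[-1]
    # Alternating Gibbs sampling in the top-level associative memory.
    v = (rng.random(W_top.shape[0]) < 0.5).astype(float)
    for _ in range(n_gibbs):
        h = (rng.random(W_top.shape[1]) < sigmoid(v @ W_top + c_top)).astype(float)
        v = (rng.random(W_top.shape[0]) < sigmoid(h @ W_top.T + b_top)).astype(float)
    # Directed, acyclic down-pass through the remaining layers.
    for W, b, _ in reversed(layers[:-1]):
        v = sigmoid(v @ W.T + b)  # mean-field activation of the layer below
    return v  # activation probabilities of the visible units

# Usage: draw one sample from the DBN pretrained above.
sample = sample_from_dbn(dbn)

The down-pass uses mean-field probabilities rather than binary samples, a common simplification when only the expected visible activations are needed.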