time steps and calculate the error that determines how the weights
of the model are shifted.
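To make this concrete, the following minimal sketch performs one such update for a vanilla tanh RNN trained with a squared-error loss (the function and variable names here are illustrative assumptions, not taken from the chapter):

```python
import numpy as np

def bptt_step(W_xh, W_hh, W_hy, xs, ys, lr=0.01):
    """One backpropagation-through-time update for a vanilla tanh RNN."""
    T = len(xs)
    hs = {-1: np.zeros((W_hh.shape[0], 1))}
    outs = {}
    dW_xh, dW_hh, dW_hy = (np.zeros_like(W) for W in (W_xh, W_hh, W_hy))
    # Forward pass: unroll the recurrence over all time steps.
    for t in range(T):
        hs[t] = np.tanh(W_xh @ xs[t] + W_hh @ hs[t - 1])
        outs[t] = W_hy @ hs[t]
    # Backward pass: propagate the error back through the unrolled steps.
    dh_next = np.zeros_like(hs[0])
    for t in reversed(range(T)):
        dy = outs[t] - ys[t]            # squared-error gradient at step t
        dW_hy += dy @ hs[t].T
        dh = W_hy.T @ dy + dh_next
        draw = (1 - hs[t] ** 2) * dh    # backprop through the tanh nonlinearity
        dW_xh += draw @ xs[t].T
        dW_hh += draw @ hs[t - 1].T
        dh_next = W_hh.T @ draw
    # The accumulated gradients define the shift applied to the weights.
    for W, dW in ((W_xh, dW_xh), (W_hh, dW_hh), (W_hy, dW_hy)):
        W -= lr * dW
```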
Reinforcement learning, which is sometimes grouped with unsu-
pervised learning, provides no teacher to give feedback to the
nodes. Instead, a fitness or reward function is occasionally used
to evaluate the performance of the RNN model, which influences
the input stream through output units connected to the model's
actuators.
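As a hedged sketch of this idea (the toy actuator trace, reward function, and perturbation scheme below are assumptions for illustration, not the chapter's method), reward-driven training can be as simple as perturbing the recurrent weights and keeping only those perturbations that raise the reward:

```python
import numpy as np

def run_rnn(W_hh, inputs):
    """Roll a toy recurrent state forward and return its output trace."""
    h = np.zeros(W_hh.shape[0])
    trace = []
    for x in inputs:
        h = np.tanh(W_hh @ h + x)
        trace.append(h.sum())            # stand-in for an actuator command
    return np.array(trace)

def reward(W_hh, inputs, target=1.0):
    """Illustrative fitness: actuator output should stay near a set point."""
    return -np.mean((run_rnn(W_hh, inputs) - target) ** 2)

def hill_climb(W_hh, inputs, sigma=0.05, iters=200):
    """No teacher signal: keep random weight perturbations that raise the reward."""
    best = reward(W_hh, inputs)
    for _ in range(iters):
        cand = W_hh + sigma * np.random.randn(*W_hh.shape)
        score = reward(cand, inputs)
        if score > best:                 # fitness improved: accept the new weights
            W_hh, best = cand, score
    return W_hh
```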
However, the RNN model also has a limitation: for long sentences,
the unrolled network grows with the sequence length and the
gradient decays exponentially as it propagates back through the
layers. Long short-term memory (LSTM) is the solution to this
gradient decay.
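The decay is easy to see numerically: when the recurrent Jacobian has spectral norm below 1, the gradient contribution of early time steps shrinks geometrically with distance (a minimal sketch, with an assumed norm of 0.9):

```python
import numpy as np

# Backpropagated gradients are rescaled by the recurrent Jacobian at every
# step; a spectral norm of 0.9 (assumed here) decays them geometrically.
jacobian_norm = 0.9
for t in [1, 10, 50, 100]:
    print(f"steps back = {t:>3}: gradient scale ~ {jacobian_norm ** t:.2e}")
```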
4.3 Long short-term memory
LSTM units explicitly avoid this RNN problem by regulating
the information held in a cell state through input and output
gates. The LSTM was originally introduced in 1997 [86], and
revised versions were later released to improve it [87,88]. In this
chapter, we discuss the architecture of [87], which equips nodes
with the ability to remember information across long input
sequences, accumulating it during operation and using feedback
to recall previous network states so that it can contribute to the
final results. In short, the LSTM retains the crucial information
and thereby positively affects network performance.
As depicted in Fig. 5.9, the hidden layer is composed of the sig-
moid function σ, pointwise multiplication, and the hyperbolic
tangent function tanh. The sigmoid function gives an output
between 0 and 1.
Figure 5.9 LSTM hidden layer structure. LSTM, long short-term memory.
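The gating computation sketched in Fig. 5.9 can be written out as a short single-step LSTM cell. This is a minimal sketch assuming fused weight matrices W and U and a bias b; the names are illustrative, not the chapter's notation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step: sigmoid gates (outputs between 0 and 1) scale the cell
    state via pointwise multiplication; tanh squashes candidates and output."""
    z = W @ x + U @ h_prev + b          # fused pre-activations, four gates stacked
    H = h_prev.shape[0]
    f = sigmoid(z[0:H])                 # forget gate: how much old state to keep
    i = sigmoid(z[H:2 * H])             # input gate: how much new info to write
    o = sigmoid(z[2 * H:3 * H])         # output gate: how much state to expose
    g = np.tanh(z[3 * H:4 * H])         # candidate cell update
    c = f * c_prev + i * g              # pointwise-multiplied cell state update
    h = o * np.tanh(c)                  # new hidden state
    return h, c
```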