A Survey on Deep Learning Architectures and Its Applications
Authors' Contribution:
- SCSVMV University, Kanchipuram, Tamil Nadu, India
Background and aim of study:
Deep learning is a set of algorithms in machine
learning that attempt to learn in multiple levels,
corresponding to different levels of abstraction. It
typically uses artificial neural networks. The levels in
these learned statistical models correspond to distinct
levels of concepts, where higher-level concepts are
defined from lower-level ones, and the same lower-level concepts can help to define many higher-level concepts.
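
As a minimal sketch of this layered idea (assuming PyTorch; the layer sizes and the ten-class output are illustrative choices, not taken from the survey), a small stack of fully connected layers shows how each level builds its representation from the output of the level below:

    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(784, 256),  # lower level: simple features from the raw input
        nn.ReLU(),
        nn.Linear(256, 64),   # middle level: combinations of simple features
        nn.ReLU(),
        nn.Linear(64, 10),    # higher level: task-specific concepts (e.g., 10 classes)
    )

    x = torch.randn(1, 784)   # a dummy raw input vector
    print(model(x).shape)     # torch.Size([1, 10])
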
Deep learning, a new era in machine learning, can be defined as a cascade of layers that learn progressively more abstract representations. Unlike conventional machine-learning
and data mining techniques, deep learning is able to
generate very high-level data representations from
massive volumes of raw data. Therefore, it has
provided a solution to many real-world applications.
Deep learning technology works on artificial neural networks (ANNs). These ANNs continually apply learning algorithms, and by continuously increasing the amount of data, the efficiency of the training process can be improved; this efficiency depends on large data volumes. The training process is called deep because the number of levels in the neural network increases over time. The deep learning process rests on two phases, called the training phase and the inference phase. The training phase involves labelling large amounts of data and determining their matching characteristics, while the inference phase draws conclusions and labels new, unseen data using the knowledge gained during training.
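
A minimal sketch of these two phases, assuming PyTorch and using synthetic labelled data in place of the large labelled datasets described above, might look as follows:

    import torch
    import torch.nn as nn

    # Synthetic labelled data stand in for the "large amounts of labelled data".
    inputs = torch.randn(100, 20)
    labels = torch.randint(0, 2, (100,))

    # A small classifier; the layer sizes here are arbitrary.
    model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    # Training phase: fit the model to labelled examples.
    for _ in range(10):
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), labels)
        loss.backward()
        optimizer.step()

    # Inference phase: label new, previously unseen data with the trained model.
    model.eval()
    with torch.no_grad():
        new_sample = torch.randn(1, 20)
        predicted_label = model(new_sample).argmax(dim=1)
        print(predicted_label.item())
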
Researchers have developed many deep learning models that learn effectively from representations of large-scale unlabelled data. Popular deep learning architectures such as Convolutional Neural Networks (CNN), Deep Neural Networks (DNN), Deep Belief Networks (DBN) and Recurrent Neural Networks (RNN) are applied as predictive models in the domains of computer vision and predictive analytics in order to find insights from data.
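
As an illustration of one of the architectures named above, the following is a minimal CNN sketch for an image-classification task, assuming PyTorch; the input size, layer choices and ten output classes are assumptions made only for the example:

    import torch
    import torch.nn as nn

    cnn = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1),  # learn local image features
        nn.ReLU(),
        nn.MaxPool2d(2),                            # downsample 28x28 -> 14x14
        nn.Flatten(),
        nn.Linear(8 * 14 * 14, 10),                 # predict one of 10 classes
    )

    image = torch.randn(1, 1, 28, 28)  # dummy grayscale image
    print(cnn(image).shape)            # torch.Size([1, 10])
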
Research methods:
In this study, a survey is performed of research papers on various architectures of deep learning and their applications. Research papers from 2012-2020 were selected for the study. This study presents a brief survey on
the advances that have occurred in the area of Deep
Learning (DL), starting with the Deep Neural Network (DNN). The survey goes on to cover Convolutional
Neural Network (CNN), Recurrent Neural Network
(RNN), including Long Short-Term Memory (LSTM)
and Gated Recurrent Units (GRU), Auto-Encoder
(AE), Deep Belief Network (DBN), Generative
Adversarial Network (GAN), and Deep Reinforcement
Learning (DRL). The paper also explains the major differences between deep learning, classical machine learning and conventional learning approaches, as well as the major challenges. The objective of this paper is to provide a comprehensive survey of the major applications of deep learning covering a variety of areas, to study the techniques and architectures used, and to examine the contribution of each application in the real world.
Results:
The majority of existing deep learning implementations use supervised as well as unsupervised learning. There are several supervised learning approaches for deep learning, including Deep Neural Networks (DNN), Convolutional Neural Networks (CNN), and Recurrent Neural Networks (RNN), including Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU). This study explained in detail the different supervised deep learning techniques, including DNN, CNN, and RNN. The unsupervised deep learning techniques, including AE, RBM, and GAN, were also reviewed in detail. This survey also shows that deep learning architectures such as
deep neural networks, deep belief networks, recurrent
neural networks and convolutional neural networks
have been applied to fields including computer vision,
machine vision, speech recognition, natural language
processing, audio recognition, social network filtering,
machine translation, bioinformatics, drug design,
medical image analysis, material inspection and board
game programs, where they have produced results
comparable to and in some cases surpassing human
expert performance.
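
To make the supervised/unsupervised distinction concrete, the following is a minimal sketch of one unsupervised technique reviewed in the survey, an auto-encoder, assuming PyTorch; it is trained to reconstruct its own input, so no labels are required (the data here is synthetic):

    import torch
    import torch.nn as nn

    # An auto-encoder compresses the input to a small code and reconstructs it;
    # the reconstruction error is the training signal, so no labels are needed.
    encoder = nn.Sequential(nn.Linear(784, 32), nn.ReLU())
    decoder = nn.Sequential(nn.Linear(32, 784))
    autoencoder = nn.Sequential(encoder, decoder)

    optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    data = torch.randn(64, 784)  # a batch of unlabelled (synthetic) data
    for _ in range(5):
        optimizer.zero_grad()
        loss = loss_fn(autoencoder(data), data)  # reconstruction loss
        loss.backward()
        optimizer.step()
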
Conclusion:
This study surveys the state-of-the-art techniques and
architectures in deep learning. It starts with a history of
artificial neural networks since the 1940s and moves to
recent deep learning algorithms and major
breakthroughs in different applications. Then, the key
algorithms and frameworks in this area, as well as
popular techniques in deep learning, are presented. In
this paper, we have provided an in-depth review of
deep learning and its applications over the past few
years.
DOI and UDC:
UDC: 37.022:37.04
DOI: 10.26697/ijes.2020.4.2
Information about the authors:
Abhishek Pandey – PhD Research Scholar, Sri
Chandrasekharendra Saraswathi Viswa Mahavidyalaya
University, Kanchipuram, Tamil Nadu, India.
Research interests: data mining, machine learning,
deep learning; https://orcid.org/0000-0001-7381-7909
Ramesh Vamanan – Assistant Professor, Sri
Chandrasekharendra Saraswathi Viswa Mahavidyalaya
University, Kanchipuram, Tamil Nadu, India.
Research interests: data mining and data warehousing, database management systems, software engineering;
https://orcid.org/0000-0001-5323-866X