Architectures for deep learning with convolutional networks
GPU computation is becoming popular, primarily within the scientific computing and machine learning communities. By exploiting the inherent parallelism of numerous threads operating on an array of dedicated special-purpose processors, many deep learning computations can be substantially accelerated, including training a convolutional neural network and finding the optimal topology of a Kohonen map. HELICON 2018 aspires to illustrate the technological potential and the financial benefits of adopting GPU technology for deep learning through a series of high-quality papers.
Neural networks, coming in numerous topologies and configurations, constitute an integral part of deep learning. Moreover, they contribute to ongoing brain research by modeling complex neurophysiological functions at an abstract level. A large number of significant problems related to neural networks are computationally challenging because of the training procedure or the volume of data to be processed. Examples include the backpropagation training of feedforward neural networks and the input size of tensor stack networks.
One way to address these issues is to assign critical parts of the computation to the GPU in order to take advantage of its parallelism. In contrast to distributed systems such as Apache Spark, GPU computations take place within a single component of a single node, reducing communication time to a bare minimum while maintaining a high degree of reliability. Thus, key computations such as evaluating a single kernel or multiple kernels at multiple points, deriving the Mahalanobis distance for a large number of vector pairs, estimating the coefficient matrix in a least squares problem, or computing the Fourier transform of a long vector can be executed efficiently, provided the hardware capabilities are carefully taken into account algorithmically.
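As an illustration (not part of the call itself), a computation like the batched Mahalanobis distance mentioned above can be expressed entirely with array operations, which is precisely the kind of data-parallel pattern a GPU accelerates well. The following is a minimal NumPy sketch, assuming the vector pairs are stacked row-wise; the function name and data are hypothetical:

```python
import numpy as np

def batched_mahalanobis(X, Y, cov):
    """Mahalanobis distance for each of the N row pairs (X[i], Y[i])."""
    diff = X - Y                          # (N, d) pairwise differences
    sol = np.linalg.solve(cov, diff.T).T  # solve cov @ s = diff, avoiding an explicit inverse
    return np.sqrt(np.einsum('ij,ij->i', diff, sol))  # row-wise quadratic forms

# Illustrative data: 1000 pairs of 3-dimensional vectors
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 3))
Y = rng.standard_normal((1000, 3))
cov = np.eye(3)  # identity covariance reduces to the Euclidean distance
d = batched_mahalanobis(X, Y, cov)
```

Because the code is expressed as whole-array operations rather than an explicit loop over pairs, the same pattern runs on a GPU by swapping NumPy for a NumPy-compatible GPU array library such as CuPy.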
The primary contribution of HELICON is threefold:
- Consolidate the theory and practice of GPU computing for deep learning.
- Provide a meeting point for forging professional bonds between practitioners.
- Increase the visibility of the field by serving as a central venue for high-quality work.
The main thematic area of HELICON is deep learning with GPU computation. Thus, although the following list is by no means exhaustive, major topics of HELICON are expected to be:
- Deep learning
- GPU programming for deep learning
- GPU architectures for efficient kernel computation
- Ill-conditioned and inverse problems
- Regularization methods, Tikhonov theory, and Sobolev spaces
- Reproducing kernel Hilbert spaces
- Recurrent and convolutional networks
- Kohonen networks and self-organizing maps
- Tensor stack networks
- Big data, large set, and tensor analytics
- Data structures for deep learning
- Image processing and applications
- Higher order moments and cumulants
- Pattern mining in multilayer graphs
- Multilinear discriminant analysis
- Adaptive signal processing in nonstationary environments
- Efficient computation of multivariate Taylor, Volterra, and Wiener series
- Adaptive nonlinear system identification
- TensorFlow, Theano, and Keras applications
Workshop Organizers & Chairs
- Georgios Drakopoulos, Senior Researcher and Founding CEO, Cloudminers Inc.
- Andreas Kanavos, Adjunct Professor, Hellenic Open University and TEI of Western Greece
- Spyros Sioutas, Associate Professor, Ionian University
- Phivos-Apostolos Mylonas, Assistant Professor, Ionian University
- Ioannis E. Venetis, Postdoctoral Researcher, University of Patras
Each submitted paper is expected to abide by the ACM SIG Proceedings guidelines: http://www.acm.org/publications/article-templates/proceedings-template.html
Peer review will focus on novelty of content and potential contribution, as well as on clarity of exposition and expressive power. Mathematically rigorous papers and works supported by an original or outstanding implementation, preferably posted on GitHub, are especially welcome, as are maverick papers with thought-provoking ideas.
All paper submissions will be handled electronically via EasyChair, separately from but under the supervision of the main event. Please note that submitted papers must not have been submitted to, accepted by, or published in another conference or journal. The submission site is:
HELICON papers can be either short (2-4 pages) or full (6-10 pages). Both paper types will be included in the SETN proceedings and, consequently, will be published in the ACM Digital Library.
Finally, if a paper is accepted, it must be registered before the deadline set by SETN. In addition, at least one author must present the paper at HELICON.
Paper submission deadline: February 28, 2018
Author Notification: March 18, 2018
Camera Ready Paper Submission Deadline: April 3, 2018
Special Session Dates: TBA
Conference Dates: July 9-13, 2018
The proceedings of the special session will be included in the conference proceedings, which will be published by ACM (International Conference Proceedings Series) and will be available through the ACM Digital Library.