Plenary Talk 1
Spiking Neural Networks for Deep Machine Learning and Predictive Data Modelling on Temporal and Spatio-Temporal Data
The current development of the third generation of artificial neural networks, the spiking neural networks (SNN), along with the technological development of highly parallel neuromorphic hardware systems with millions of artificial spiking neurons as processing elements, makes it possible to model complex data streams in a more efficient, brain-like way [1,2].
The talk first presents some principles of deep learning implemented in a recently proposed evolving SNN (eSNN) architecture called NeuCube. NeuCube was first proposed for brain data modelling [3,4] and was further developed as a general-purpose SNN development system for the creation and testing of spatio/spectro-temporal data machines (STDM) to address challenging data analysis and modelling problems. A version of the NeuCube development system is available free of charge from http://www.kedri.aut.ac.nz/neucube/, along with papers and case study data.
The talk introduces a methodology for the design and implementation of SNN systems, called spatio-temporal data machines (STDM), for deep learning and for predictive data modelling of temporal or spatio/spectro-temporal data. A STDM has modules for preliminary data analysis, data encoding into spike sequences, unsupervised learning of temporal or spatio-temporal patterns, classification, regression, prediction, optimisation, visualisation and knowledge discovery. A STDM can predict events and outcomes early and accurately through the ability of SNN to be trained to spike early, when only a part of a new pattern is presented as input data. The methodology is illustrated on benchmark data with different characteristics, such as financial data streams; brain data for brain-computer interfaces; personalised and climate data for individual stroke occurrence prediction; and ecological and environmental disaster prediction, such as earthquakes. The talk discusses implementation on highly parallel neuromorphic hardware platforms such as the Manchester SpiNNaker and the ETH Zurich chip [8,9]. These STDM are not only significantly more accurate and faster than traditional machine learning methods and systems, but they also lead to a significantly better understanding of the data and the processes that generated them. New directions for the development of SNN and STDM point towards a further integration of principles from the science areas of computational intelligence, bioinformatics and neuroinformatics, and towards new applications across domain areas [10,11].
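As a rough illustration of the data-encoding step mentioned above, the sketch below converts a real-valued temporal signal into positive/negative spike trains using a simple threshold-based (delta) encoding, one of the schemes commonly used to feed continuous data into SNN. This is a minimal, hypothetical example, not the actual NeuCube implementation:

```python
import numpy as np

def threshold_encode(signal, threshold=0.5):
    """Emit a spike whenever the signal has changed by more than
    `threshold` since the last spike: +1 for an upward (ON) change,
    -1 for a downward (OFF) change, 0 otherwise."""
    spikes = np.zeros(len(signal), dtype=int)
    last = signal[0]                      # value at the last emitted spike
    for t in range(1, len(signal)):
        delta = signal[t] - last
        if delta > threshold:
            spikes[t] = 1                 # positive (ON) spike
            last = signal[t]
        elif delta < -threshold:
            spikes[t] = -1                # negative (OFF) spike
            last = signal[t]
    return spikes

sig = np.array([0.0, 0.2, 0.9, 1.0, 0.3, 0.1])
print(threshold_encode(sig, threshold=0.5))  # → [ 0  0  1  0 -1  0]
```

Only the changes in the signal, not its absolute values, are transmitted, which is what allows a trained SNN to start responding before a whole input pattern has been observed.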
1. EU Marie Curie EvoSpike Project (Kasabov, Indiveri): http://ncs.ethz.ch/projects/EvoSpike/
2. Schliebs, S., Kasabov, N. (2013) Evolving spiking neural networks: a survey. Evolving Systems, 4(2), 87-98.
3. Kasabov, N. (2014) NeuCube: a spiking neural network architecture for mapping, learning and understanding of spatio-temporal brain data. Neural Networks, 52, 62-76.
4. Kasabov, N., Dhoble, K., Nuntalid, N., Indiveri, G. (2013) Dynamic evolving spiking neural networks for on-line spatio- and spectro-temporal pattern recognition. Neural Networks, 41, 188-201.
5. Kasabov, N. et al. (2015) A SNN methodology for the design of evolving spatio-temporal data machines. Neural Networks, in press.
6. Kasabov, N. et al. (2014) Evolving spiking neural networks for personalised modelling of spatio-temporal data and early prediction of events: a case study on stroke. Neurocomputing.
7. Furber, S. et al. (2012) Overview of the SpiNNaker system architecture. IEEE Transactions on Computers, 99.
8. Indiveri, G., Horiuchi, T.K. (2011) Frontiers in neuromorphic engineering. Frontiers in Neuroscience, 5.
9. Scott, N., Kasabov, N., Indiveri, G. (2013) NeuCube neuromorphic framework for spatio-temporal brain data and its Python implementation. Proc. ICONIP 2013, Springer LNCS, 8228, 78-84.
10. Kasabov, N. (ed.) (2014) The Springer Handbook of Bio- and Neuroinformatics. Springer.
11. Kasabov, N. (2016) Spiking Neural Networks for Deep Machine Learning and Predictive Data Modelling on Temporal and Spatio/Spectro-Temporal Data. Springer.
Professor Nikola Kasabov is a Fellow of the IEEE, a Fellow of the Royal Society of New Zealand and a Distinguished Visiting Fellow of the Royal Academy of Engineering, UK. He is the Director of the Knowledge Engineering and Discovery Research Institute (KEDRI), Auckland, and holds a Chair of Knowledge Engineering at the School of Computing and Mathematical Sciences at Auckland University of Technology. Kasabov is a Past President and a member of the Board of Governors of the International Neural Network Society (INNS) and of the Asia Pacific Neural Network Society (APNNS). He is a member of several technical committees of the IEEE Computational Intelligence Society and a Distinguished Lecturer of the IEEE CIS (2012-2014). He is Co-Editor-in-Chief of the Springer journal Evolving Systems and serves as Associate Editor of Neural Networks, IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Fuzzy Systems, Information Sciences, Applied Soft Computing and other journals. Kasabov holds MSc and PhD degrees from the Technical University of Sofia, Bulgaria. His main research interests are in the areas of neural networks, intelligent information systems, soft computing, bioinformatics and neuroinformatics. He has published more than 600 works, including 15 books, 180 journal papers, 80 book chapters, 28 patents and numerous conference papers. He has extensive experience at various academic and research organisations in Europe and Asia, including the TU Sofia, the University of Essex and the University of Otago, and has served as Advisory Professor at Shanghai Jiao Tong University and Visiting Professor at ETH/University of Zurich. Prof. Kasabov has received the APNNA ‘Outstanding Achievements Award’, the INNS Gabor Award for ‘Outstanding contributions to engineering applications of neural networks’, an EU Marie Curie Fellowship, the Bayer Science Innovation Award, the APNNA Excellent Service Award, the RSNZ Science and Technology Medal, and others. He has supervised 38 PhD students to completion. More information about Prof. Kasabov can be found on the KEDRI web site: http://www.kedri.aut.ac.nz.
Plenary Talk 2
Data Mining with Tensor Methods: Introduction and Recent Advances
One of the goals of data mining is the extraction of patterns and knowledge from large amounts of data. Frequently, the processed data depend on many factors, and the well-known vector-based classification methods become unsatisfactory, since they do not exploit the full information contained in the data and their structure. Recently developed tensor-based methods allow data representation and analysis that directly account for data multidimensionality. Examples can be found in many applications, such as face recognition, image synthesis, video analysis, surveillance systems, sensor networks, marketing and medical data analysis, to name a few. This talk focuses on the basic ideas, as well as recent achievements, in the domain of data mining with tensor methods. We will present a systematic overview of tensor data representation, tensor decompositions, and pattern recognition with single and ensembles of tensor-based classifiers. Practical aspects and implementation issues related to data processing with tensors will also be presented.
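To give a flavour of the tensor decompositions mentioned above, the sketch below computes a truncated higher-order SVD (a basic form of the Tucker decomposition) of a 3-way data tensor using only NumPy. It is an illustrative example written for this abstract, not code from the talk; the function names and ranks are chosen arbitrarily:

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: arrange the mode-`mode` fibres of the tensor
    as the columns of a matrix of shape (shape[mode], -1)."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd(tensor, ranks):
    """Truncated higher-order SVD (Tucker): one orthonormal factor
    matrix per mode plus a small core tensor."""
    factors = []
    for mode, r in enumerate(ranks):
        # Leading r left singular vectors of each mode unfolding
        U, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = tensor
    for mode, U in enumerate(factors):
        # Project mode `mode` of the core onto the factor subspace
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# Compress a random 4x5x6 tensor to a (2, 2, 2) core
X = np.random.rand(4, 5, 6)
core, factors = hosvd(X, (2, 2, 2))
print(core.shape)  # → (2, 2, 2)
```

The small core tensor and the per-mode factors can then serve as a compact, structure-preserving feature representation for the tensor-based classifiers discussed in the talk.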
Bogusław Cyganek received his M.Sc. degree in electronics in 1993 and his M.Sc. in computer science in 1996 from the AGH University of Science and Technology, Krakow, Poland. He obtained his Ph.D. degree cum laude in 2001 with a thesis on the correlation of stereo images, and his D.Sc. degree in 2011 with a thesis on methods and algorithms of object recognition in digital images.
In recent years Dr. Bogusław Cyganek has cooperated with many scientific and industrial partners, such as Glasgow University, Scotland, UK; DLR, Germany; and Surrey University, UK; as well as Nisus Writer, USA; Compression Techniques, USA; Pandora Int., UK; and The Polished Group, Poland. He is an associate professor at the Department of Electronics of the AGH University of Science and Technology, Poland, currently serving as a visiting professor at the Wroclaw Technical University in the ENGINE project. His research interests include computer vision, pattern recognition, data mining, and the development of embedded systems. He is an author or co-author of over a hundred conference and journal papers, as well as several books, the latest being “Object Detection and Recognition in Digital Images: Theory and Practice”, published by Wiley in 2013. Dr. Cyganek is a member of the IEEE, IAPR and SPIE.
To be added.