Selected publications:
-
M. Cernansky:
Training Recurrent Neural Network Using Multistream Extended Kalman Filter on Multicore Processor and Cuda Enabled Graphic Processor Unit.
In: Lecture Notes in Computer Science. Artificial Neural Networks - ICANN 2009, Limassol, Cyprus, 2009, accepted.
-
M. Cernansky, M. Makula and L. Benuskova:
Improving the state space organization of untrained recurrent networks.
In: Lecture Notes in Computer Science. Neural Information Processing - ICONIP 2008, Auckland, New Zealand, in press.
-
M. Cernansky and L. Benuskova:
Training recurrent connectionist models on symbolic time series.
In: Lecture Notes in Computer Science. Neural Information Processing - ICONIP 2008, Auckland, New Zealand, in press.
-
M. Cernansky and P. Tino:
Predictive modeling with echo state networks.
In: Lecture Notes in Computer Science. Artificial Neural Networks - ICANN 2008, Prague, Czech Republic, 2008, pages 778-787.
-
M. Cernansky and P. Tino:
Processing Symbolic Sequences using Echo State Networks.
In: From associations to rules: Proceedings of the 10th Neural Computation and Psychology Workshop (NCPW10), Dijon, France, pages 153-164, 2008.
-
S. Frank and M. Cernansky:
Generalization and Systematicity in Echo State Networks.
In: Proceedings of the 30th Cognitive Science Conference, Washington DC, USA, pages 733–738, 2008.
-
M. Cernansky, M. Makula and L. Benuskova:
Organization of the state space of a simple recurrent neural network before and after training on recursive linguistic structures.
Neural Networks, 20(2), pages 236-244, 2007.
-
M. Cernansky and P. Tino:
Comparison of Echo State Networks with Simple Recurrent Networks and Variable-Length Markov Models on Symbolic Sequences.
In: Lecture Notes in Computer Science, Vol. 4668. Artificial Neural Networks - ICANN 2007, Porto, Portugal, 2007, pages 618-627.
-
M. Cernansky, M. Makula, P. Lacko and P. Trebatický:
Text Correction Using Approaches Based on Markovian Architectural Bias.
In: EANN 2007, Proceedings of the 10th International Conference on Engineering Applications of Neural Networks, Thessaloniki, Greece, 2007, ISBN 978-960-287-093-8, pages 221-228.
-
M. Cernansky and M. Makula.
Feed-forward Echo-state Networks.
In Proceedings of International Joint Conference on Neural Networks IJCNN 2005, Montreal, Canada, pages 1479-1482, 2005.
-
P. Tino, M. Cernansky and L. Benuskova.
Markovian architectural bias of recurrent neural networks.
In: IEEE Transactions on Neural Networks, 15(1), pages 6-15, 2004.
-
M. Cernansky, M. Makula and L. Benuskova.
Processing symbolic sequences by recurrent neural networks trained by Kalman filter based algorithms.
In: SOFSEM 2004: Theory and Practice of Computer Science, Merin, Czech Republic, pages 58-65. 2004.
-
M. Cernansky and L. Benuskova.
Simple recurrent network trained by RTRL and extended Kalman filter algorithms.
In: Neural Network World, 13(3), pp. 223-234, 2003.
-
P. Tino, M. Cernansky and L. Benuskova.
Markovian architectural bias of recurrent neural networks.
In: P. Sincak, J. Vascak, V. Kvasnicka and J. Pospichal, editors, Intelligent Technologies - Theory and Applications. Frontiers in AI and Applications, vol. 76, pages 17-23. IOS Press, Amsterdam, 2002.
-
M. Cernansky and L. Benuskova.
Finite-state Reber automaton and the recurrent neural networks trained in supervised and unsupervised manner.
In: Lecture Notes in Computer Science, Artificial Neural Networks - ICANN 2001, Vienna, Austria, pages 737-742. 2001.
-
M. Cernansky.
Comparison of Recurrent Neural Networks with Markov Models on Complex Symbolic Sequences.
PhD Thesis, supervised by L. Benuskova, Faculty of Informatics and Information Technologies, Slovak University of Technology, Bratislava, 2007.
All publications: