Plenary Sessions

Models for Communication, Understanding, Search and Analysis

Bernhard Thalheim (Department of Computer Science, Christian-Albrechts-University at Kiel)

Short Bio

Bernhard Thalheim is a full professor and chair of Information Systems Engineering (ISE) at the Department of Computer Science, Christian-Albrechts-University at Kiel. He received his PhD in discrete mathematics in 1979 from the Lomonosov Moscow State University, and his Habilitation in theoretical computer science in 1985 from the Dresden University of Technology. From 1986 to 1989 Bernhard was an Associate Professor at the Dresden University of Technology. In 1989 he moved to the University of Rostock, where he was Professor until 1993. From 1993 to 2003 he was Dean and Full Professor at the Brandenburg Technical University, and since 2003 he has been Professor at the Christian-Albrechts-Universität of Kiel. He has been a Visiting Professor at Kuwait University; at the University of Klagenfurt, Austria; at the Alfréd Rényi Institute of Mathematics, Hungarian Academy of Sciences; and at Massey University in Palmerston North, New Zealand.

Bernhard Thalheim has received a number of awards for his achievements in information systems engineering. In 2005 he was honored as Kolmogorov Professor h.c. at the Lomonosov Moscow State University. In 2008 he received Elsevier's Peter P. Chen Award for entity-relationship model research. He has been PC co-chair of more than a dozen international conferences (e.g. WISE, EJC, NLDB, SDKB, FoIKS, ASM, ADBIS), a member of the Program Committees of several hundred conferences (e.g. ACM SIGMOD, ADBC, ADBIS, AI, BTW, CAiSE, CoopIS, EDBT, EJIMK, Emisa, ER, ICDT, INFO, IDBW, MFDBS, NGITS, NLDB, ORM, SCCC, TFCS, WITS), and a member of the Steering Committees of more than a dozen international conferences (e.g. ADBIS, ASM, ER, FoIKS, NLDB).

His professional interests include models in computer science, computer engineering, and other science and engineering disciplines; the entity-relationship model; database and information systems theory and tools; interactive information services for internet and cable networks; systematic development of information-intensive e-learning, e-government, e-community and information websites; and content management systems.

Abstract

Models are one of the universal instruments of humans. They are as important as languages. Models often use languages for their representation; other models are conscious, sub-conscious or pre-conscious and have no proper language representation. Their wide use in all kinds of human activities makes it possible to distinguish different kinds of models depending on their utilisation scenarios. In this keynote we consider only four specific utilisation scenarios for models. We show that these scenarios can be properly supported by a number of model construction conceptions. The development of proper and well-applicable models can be governed by various methodologies, depending on the specific objectives and aims of model utilisation.

How to Put Algorithms into Neural Networks?

Anton Osokin (National Research University Higher School of Economics)

Short Bio

Anton Osokin is a leading research fellow and deputy head of the Centre of Deep Learning and Bayesian Methods at NRU HSE. He received his PhD in Discrete Mathematics and Mathematical Cybernetics from Lomonosov Moscow State University in 2014. From 2014 to 2017 he was a postdoctoral researcher at École Normale Supérieure and INRIA, Paris. His professional interests include machine learning and deep learning, computer vision, discrete optimization, and structured prediction.

Abstract

Recently, neural networks have achieved remarkable success in many fields, and many practical systems for fundamental tasks are built with them. In computer vision, examples are image classification, object detection and image segmentation; in natural language processing, language modeling and automatic translation; in audio processing, both speech recognition and synthesis. Many approaches have become an industrial standard, and companies around the world are building products based on this technology.

Successful algorithms for various tasks are very different from each other and have required years of research to arrive at the current level of performance. Constructing a good algorithm for a new task is often a non-trivial challenge. It also turns out that networks cannot simply learn from data without exploiting some domain knowledge.

This knowledge is usually encoded at least in the architecture itself. For example, convolutional neural networks exploit the intuition that translating an object does not change the object itself, i.e., a cat does not stop being a cat if moved left.
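This built-in assumption can be checked directly. Below is a minimal sketch (PyTorch assumed; not from the talk) verifying that a convolutional layer is translation-equivariant: shifting the input image and then applying the convolution gives the same result as applying the convolution first and shifting its output.

```python
# Translation equivariance of a convolution: conv(shift(x)) == shift(conv(x)).
# Circular padding makes the equality exact, avoiding boundary effects.
import torch
import torch.nn as nn

conv = nn.Conv2d(1, 4, kernel_size=3, padding=1, padding_mode='circular')
x = torch.randn(1, 1, 16, 16)  # a random single-channel "image"

def shift(t):
    return torch.roll(t, shifts=(3, 5), dims=(-2, -1))

out_of_shifted = conv(shift(x))   # move the cat, then look
shifted_output = shift(conv(x))   # look, then move the answer

print(torch.allclose(out_of_shifted, shifted_output, atol=1e-5))  # True
```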

At the same time, in many domains we already have powerful algorithms that do a decent job. It is a very natural idea to exploit those to construct better networks. We can look at this from two sides: from one side, it means constructing new layers or blocks of layers for networks; from the other, it means making algorithms trainable. In any case, the attempt is to take the best of both worlds. This direction has been around since the 1990s, but for a long time it did not get significant attention (together with neural networks). In this talk, we will review three ways to combine algorithms and networks:

  1. structured pooling: an algorithm is used to select active features (similarly to max pooling);
  2. unrolling iterations into layers: an algorithm simply becomes a part of the network (see the sketch after this list);
  3. analytical derivative w.r.t. the algorithm input, i.e., building a layer with a special backward operator.
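As a concrete illustration of the second way, here is a hedged sketch (PyTorch assumed; the module name and the quadratic objective are illustrative choices, not the talk's) that unrolls K iterations of gradient descent on ||Ax - b||² into K layers, with one trainable step size per iteration, so that backpropagation flows through every unrolled step.

```python
# Unrolling iterations into layers: each gradient-descent step becomes a
# "layer", and its step size becomes an ordinary trainable parameter.
import torch
import torch.nn as nn

class UnrolledGD(nn.Module):
    def __init__(self, num_iters=10):
        super().__init__()
        # one learnable step size per unrolled iteration
        self.steps = nn.Parameter(torch.full((num_iters,), 0.1))

    def forward(self, A, b):
        x = torch.zeros(A.shape[1])
        for t in self.steps:                 # each iteration is a layer
            grad = A.T @ (A @ x - b)         # gradient of 0.5 * ||Ax - b||^2
            x = x - t * grad
        return x

A, b = torch.randn(8, 4), torch.randn(8)
solver = UnrolledGD()
x_hat = solver(A, b)
loss = ((A @ x_hat - b) ** 2).sum()
loss.backward()                              # gradients reach the step sizes
```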

To illustrate all the approaches, we will use a running example of a simplified handwriting recognition task: recognize a word given a sequence of images, where each image shows exactly one letter.
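For this running example, the classic algorithmic component is Viterbi dynamic programming over per-letter scores and letter-transition scores. The sketch below (PyTorch assumed; names hypothetical, and the models in the talk may differ) decodes the most likely word from such scores; combining it with a network is exactly the kind of construction the three items above describe.

```python
# Viterbi decoding of a word: unary scores come from a per-letter classifier,
# pairwise scores encode which letter bigrams are plausible.
import torch

def viterbi(unary, pairwise):
    """unary: (T, K) per-image letter scores; pairwise: (K, K) transitions."""
    T, K = unary.shape
    score = unary[0]
    back = []
    for t in range(1, T):
        cand = score[:, None] + pairwise      # (K, K): previous x current
        best, idx = cand.max(dim=0)           # best previous for each current
        score = best + unary[t]
        back.append(idx)
    letters = [int(score.argmax())]           # backtrack from the best ending
    for idx in reversed(back):
        letters.append(int(idx[letters[-1]]))
    return letters[::-1]

# toy run: 5 letter images, 26-letter alphabet, random scores
unary = torch.randn(5, 26)      # would come from a per-letter CNN
pairwise = torch.randn(26, 26)  # learned letter-bigram compatibilities
print(viterbi(unary, pairwise)) # indices of the decoded word's letters
```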

Formal Education in Data Science – Recent Experiences from Faculty of Technical Sciences of University of Novi Sad

Ivan Luković (Faculty of Technical Sciences, University of Novi Sad)

Short Bio

Ivan Luković received his diploma degree in Informatics from the Faculty of Military and Technical Sciences in Zagreb in 1990. He completed his M.Sc. degree at the University of Belgrade, Faculty of Electrical Engineering, in 1993, and his Ph.D. at the University of Novi Sad, Faculty of Technical Sciences, in 1996. Currently, he works as a Full Professor at the Faculty of Technical Sciences of the University of Novi Sad, where he lectures in several Computer Science and Informatics courses. His research interests are related to Database Systems, Business Intelligence Systems, and Software Engineering. He created a new set of B.Sc. and M.Sc. study programs in Information Engineering, i.e., Data Science, at the Faculty of Technical Sciences. The programs were accredited in 2015.

Abstract

In recent years, Data Science has become an emerging education and research discipline all over the world, and the software industry shows an increasing, even quite intensive, interest in academic education in this area. In this talk, we present the main motivating factors for creating a new study program in Data Science at the Faculty of Technical Sciences of the University of Novi Sad, and explain why it is important to nurture a culture of interdisciplinary orientation in such a program from the very beginning of B.Sc. studies. We also describe how we structured the new study program and how we addressed the main issues arising from evident industry requirements. The program, comprising both B.Sc. and M.Sc. studies, was initiated in 2017, and we are now collecting the first experiences.

Big Data Processing and Analytics Inside DBMS

Mikhail Zymbler (South Ural State University)

Short Bio

Mikhail Zymbler is head of the Department for Data Mining and Virtualization at South Ural State University. He is an associate professor and the executive secretary of the journal Bulletin of the South Ural State University. His professional interests cover data mining, parallel algorithms for time series analysis, and parallel database management systems.

Abstract

In the era of Big Data, humans face two basic challenges: how to effectively manipulate and how to analyze huge amounts of data. Currently, relational DBMSs remain the most popular tool for processing large tables in various data-intensive domains, despite the widespread use of numerous NoSQL systems. At the same time, most modern tools for mining large data sets are non-DBMS tools based on the MapReduce paradigm. If we consider a DBMS only as a fast and reliable data repository, we incur significant overhead for exporting large data volumes out of the DBMS, changing the data format, and importing the results of the analysis back into the DBMS. That is why the integration of data mining methods with relational DBMSs is a topical issue.
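The in-DBMS alternative to the export/import round trip is easy to picture. Below is a minimal sketch (Python's built-in sqlite3 module, with a toy schema of my own; not a system from the talk) where a user-defined function lets an analysis step, here a per-group variance, run entirely inside the database, with no data leaving it.

```python
# Keeping analytics inside the DBMS: a scalar UDF runs next to the data,
# so nothing is exported, reformatted, or re-imported.
import sqlite3, math

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor INTEGER, value REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)",
                 [(i % 4, math.sin(i) * 10) for i in range(1000)])

# register a UDF; the DBMS calls it row by row while executing the query
conn.create_function("squared_error", 2, lambda v, m: (v - m) ** 2)

# per-sensor variance computed entirely inside the database
for row in conn.execute("""
    SELECT r.sensor, AVG(squared_error(r.value, s.mean)) AS variance
    FROM readings r
    JOIN (SELECT sensor, AVG(value) AS mean
          FROM readings GROUP BY sensor) s
      ON r.sensor = s.sensor
    GROUP BY r.sensor"""):
    print(row)
```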

There exist parallel DBMSs that can efficiently process transactions and SQL queries on very large databases. Such DBMSs could be a subject for the integration of data mining methods, but they are expensive and oriented to custom hardware that is difficult to expand. Open-source DBMSs are now a reliable alternative to commercial DBMSs, but there is a lack of open-source parallel DBMSs, since the development of such software is rather expensive and takes a lot of time.

In the talk, we will consider an approach to deal with the problems described above. A parallel DBMS can be developed not from scratch but by small-scale modifications of the source code of an open-source serial DBMS to encapsulate parallelism. Large- and small-scale data mining problems can then be solved inside such a parallel DBMS.
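The encapsulation idea can be caricatured in a few lines: the serial operator stays untouched, and parallelism is wrapped around it as partitioning plus a merge. The sketch below (plain Python with multiprocessing; the partitioning scheme and the operator are hypothetical stand-ins, not actual DBMS internals) shows that shape.

```python
# Encapsulated parallelism: run the same serial operator, unchanged, on each
# horizontal partition of a table, then merge the partial results.
from multiprocessing import Pool

def serial_sum(partition):
    # stands in for an unmodified operator of a serial DBMS
    return sum(partition)

if __name__ == "__main__":
    table = list(range(1_000_000))                 # one big "table"
    n = 4                                          # degree of parallelism
    partitions = [table[i::n] for i in range(n)]   # horizontal partitioning
    with Pool(n) as pool:
        partial = pool.map(serial_sum, partitions) # parallel serial operators
    print(sum(partial))                            # merge of partial results
```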