The open list of topics proposed for submission is organized in the form of the tracks presented below. It is important to note that this classification is valid only during the period of submission, reviewing, and selection of papers. The structure of the conference program is defined by the Program Committee based on the selected papers and does not necessarily reflect the submission track structure. All tracks are open for submission of all categories of contribution, including regular papers, demos, and tutorials.
Tracks for data analysis, problem solving, and experiment organization
Problem statement and solving: urgent problems or phenomena requiring study in a specific domain or in a generalized way, thorough insight based on the nature and characteristics of the phenomenon and the data available, approaches to the organization of problem solving and the selection of methods, problem classification in various domains, the process of problem solving and the tools applied.
Organization of experiments: survey of approaches to the organization of experimental research, justification of scientific theories, experiment simulation, research cycles, robotization, infrastructures for experiment organization, reproduction of results, workflow metadefinition and reuse, verification of results, comparison of new results with those obtained earlier.
Hypotheses and models as constituents of research experiments: methods and facilities for hypothesis generation and testing, construction of computerized models, models as a means of theory and hypothesis verification, the cognitive modeling paradigm, experience in creating predictive models in research.
Advanced data-intensive analysis methods and procedures: the state of the art in methods of statistics, data mining, and machine learning, evaluation of method generality and specialization, orientation of methods toward specific domains and kinds of data, classification of methods, systematization of experience in applying methods to problem solving, cognitive analytics for data-driven decision making, information visualization and exploratory analysis, meta-analysis methods, Big Data analytics (efficiency and scalability), development of new data analysis methods.
Conceptual modeling: formalization of the semantics of subject domains, conceptual specification of problems and evolution of ontologies in specific domains, experience in applying various models and tools for ontology support, semantic annotation for concept formation, progress in ontological modeling, use of ontological models for database schema specification, independence of the conceptual specification from the data, abstract specification of algorithms and workflows in conceptual models, semantic interoperability of programs.
Research support in data infrastructures, data-intensive use cases: functions and architectures of facilities for research support (virtual laboratories/observatories, data centers), cross-infrastructure interoperability and data sharing in interdisciplinary research, data-intensive use cases for research data infrastructures, experience of use case implementation.
Tracks for data management
Methods, tools and infrastructures for data acquisition and storage: advanced projects, experience of data acquisition and storage in long-living projects, comparative analysis of projects, project surveys, facilities and approaches for data collection and storage, specificity of the semantics, structure and characteristics of data (including streaming data), data representation, metadata organization, data quality, data provenance (including provenance extracted from the literature), data cleansing, problems of Big Data storage.
Data integration: methods and tools for entity resolution and fusion in Big Data infrastructures, unification of various data models (such as NoSQL, graph-based, RDF-based, and array-based models), canonical data models and their synthesis, schema and ontology matching and mapping, methods and tools for virtual data integration, application-driven subject mediators, semantic integration of data, data warehouses, ETL process support, multidimensional data models, data integration in hybrid infrastructures supporting structured, semi-structured and unstructured data, infrastructures of data integration systems, application of data integration facilities in specific domains.
Information extraction from texts: identification and extraction of structured information from texts, declarative languages and methods for information extraction, linguistic methods, NLP, multilingual textual data, instruments for textual analysis.
Research data infrastructures and their applications: various data infrastructures based on data- and compute-intensive platforms (such as clouds and grids, distributed clusters, supercomputers, parallel database machines, etc.), new models for data-intensive programming in such infrastructures and Big Data platforms, metadata and modeling in data infrastructures, virtualization-based technologies, evaluation of the performance of data infrastructures, scalability issues.
Semantic Web: languages, tools, and methodologies for representing and managing data, semantics and reasoning on the Web, semantic interoperability and cross-identification of Semantic Web resources, spatio-temporal Semantic Web data and ontologies, harvesting of Semantic Web data from diverse data collections, Web data quality and provenance, multidialect architectures for declarative conceptual specification and problem solving over heterogeneous data collections, application of Semantic Web facilities for problem solving, linked open data.