Wil van der Aalst
September 3, 2014

Process Mining as the Superglue Between Data Science
and Enterprise Computing

Speaker Biography

Wil van der Aalst is a full professor of Information Systems at the Technische Universiteit Eindhoven (TU/e). He is also the Academic Supervisor of the International Laboratory of Process-Aware Information Systems of the National Research University, Higher School of Economics in Moscow. Moreover, he has held a part-time appointment at Queensland University of Technology (QUT) since 2003. At TU/e he is the scientific director of the Data Science Center Eindhoven (DSC/e). Many of his papers are highly cited (he has an H-index of more than 107 according to Google Scholar), and his ideas have influenced researchers, software developers, and standardization committees working on process support. In 2012, he received the degree of doctor honoris causa from Hasselt University. In 2013, he was appointed Distinguished University Professor at TU/e and was awarded an honorary guest professorship at Tsinghua University. He is also a member of the Royal Holland Society of Sciences and Humanities (Koninklijke Hollandsche Maatschappij der Wetenschappen) and the Academy of Europe (Academia Europaea).

Keynote Abstract

Process mining provides new ways to utilize the abundance of data in enterprises. Many organizations now realize that they cannot survive without exploiting the available data intelligently. A new profession is emerging: the data scientist. Just as computer science emerged as a new discipline from mathematics when computers became abundantly available, we now see the birth of data science as a new discipline driven by the torrents of data available today. Process mining will be an integral part of the data scientist's toolbox. Enterprise computing, too, will need to focus on process innovation through the intelligent use of event data. In his talk, Wil van der Aalst will focus on challenges related to "process mining in the large", i.e., dealing with many processes, many actors, many data sources, and huge amounts of data at the same time. By adequately addressing these challenges (e.g., using process cubes), we obtain a new kind of superglue that will impact the future of enterprise computing.

Keynote Recording


Heiko Ludwig
September 4, 2014

Managing Big Data Effectively - A Cloud Provider and a Cloud Consumer Perspective

Speaker Biography

Heiko Ludwig is a Research Staff Member at IBM's Almaden Research Center in San Jose, CA, where he leads the Cloud Management Services team in the Computing-as-a-Service organization, working on issues of service and storage management for cloud environments. His prior work addressed various issues of service and process management and the corresponding platforms, mostly relating to large scale, the crossing of organizational boundaries, and the interrelationship of business and IT, such as work on WSLA, WS-Agreement, and CrossFlow. Heiko has published more than 100 refereed articles, conference papers, and book chapters, as well as technical reports. He is a managing editor of the International Journal of Cooperative Information Systems, has served on about 150 program committees, has co-organized workshops, and has served as PC Co-Chair and General Co-Chair of a number of conferences. He has also given a number of keynote speeches at conferences and workshops in the field. He represented IBM in the OGF GRAAP working group, which published the WS-Agreement standard. Prior to joining the Almaden Research Center, Heiko held different positions at IBM around the world.

Keynote Abstract

Instrumentation of processes and of an organization's environment provides vast amounts of data that can be used to drive decisions. Next to setting up data collection, supervising data quality, and applying proper methods of analysis, organizations face the challenge of setting up an infrastructure and architecture to do so efficiently and cost-effectively. Virtualized platforms such as private or public clouds are the method of choice for deployment, in particular for data analyses that do not run constantly. A cloud provider, whether a commercial cloud company or an IT organization within an enterprise, wants to set up a cloud platform on which clients can run big data workloads effectively. Cloud customers, in turn, would like to set up big data applications on that platform in a cost-effective and performant way. This keynote will walk through a few real-life big data analysis scenarios from different industries and discuss the challenges and trade-offs cloud providers face. Understanding these challenges and solutions helps cloud users choose the right match between their algorithm, big data system, and cloud platform.

Keynote Recording


Barbara Weber
September 5, 2014

Investigating the Process of Process Modeling: Towards an In-depth Understanding of How Process Models are Created

Speaker Biography

Barbara Weber is an associate professor at the Department of Computer Science at the University of Innsbruck (Austria), where she leads the research cluster on business processes and workflows. Barbara holds a Habilitation degree in Computer Science and a Ph.D. in Economics from the University of Innsbruck. She has published more than 90 refereed papers, for example, in Data & Knowledge Engineering, Computers in Industry, Enterprise Information Systems, Information and Software Technology, and Software and Systems Modeling; has served on the editorial boards of the Information Systems journal and the Computing journal; and has been organizing the successful BPI workshop series. Moreover, she is co-author of the recently published Springer book "Enabling Flexibility in Process-Aware Information Systems". Barbara's research interests include process model understandability, the process of process modeling, integrated process life cycle support, change patterns, process flexibility, user support in flexible process-aware systems, and recommendations to optimize process execution.

Keynote Abstract

Business process models have gained significant importance due to their critical role in managing business processes. Still, process models display a wide range of quality problems; for example, the literature reports error rates between 10% and 20% in industrial process model collections. Most research on quality issues in process models puts a strong emphasis on the product or outcome of the process modeling act (i.e., the resulting process models), while the process followed to create these models is considered only to a limited extent.

The creation of process models involves the elicitation of requirements from the domain as well as the formalization of these requirements as process models. This presentation will focus on the formalization step, which can be considered a process by itself: the process of process modeling (PPM). In particular, the presentation will discuss how the PPM can be captured and analyzed. For this, it will present a specialized modeling environment that logs all interactions of the process modeler with the environment, thus providing the infrastructure to investigate the PPM. The presentation will also shed light on the way process models are created, present different behavioral patterns that can be observed, and discuss factors that influence the PPM, e.g., modeler-specific factors such as domain knowledge or process modeling competence, and task-specific factors. In addition, the presentation will outline how methods such as eye movement analysis, think-aloud protocols, or the analysis of bio-feedback (e.g., heart rate variability) might enable even deeper insights into the PPM.

Keynote Recording