Monika Steidl

PhD in Software Engineering at the University of Innsbruck
Main focus: Software Quality & MLOps

Website/blog: https://www.linkedin.com/in/monika-steidl-uibk/

Languages: German, English

State: Tyrol

Country: Austria

Topics: requirements engineering, mlops, algorithms and data structures, continuous development of artificial intelligence, anomaly detection, data engineering and analytics, risk analysis of anomalies during runtime, runtime verification

Services: Talk, Workshop management, Interview

  Willing to travel for an event.

  Willing to talk for nonprofit.

Examples of previous talks / appearances:

MLOps: the continuous pipeline from data to a machine learning model in production

Machine learning (ML) based applications increasingly surround us, driven by better technology and a growing amount of data. Automating the continuous development and deployment of such ML models is therefore necessary. For "traditional" software, DevOps and CI/CD are well established. The development of ML-based applications, however, differs fundamentally: we must handle not only code but also data and the ML model itself, on top of considerable system-level complexity. So, how can we adapt the DevOps principles to ML?

To answer this question, we investigate:
- a potential pipeline for the continuous development and deployment of ML (including data handling, model learning, software development, and system operations).
- possible triggers for reiterating this pipeline.
- challenges in setting up this continuous pipeline.
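The pipeline stages above can be sketched as a minimal script. All stage functions, the toy "model", and the drift trigger are illustrative assumptions, not a prescribed implementation:

```python
# Minimal sketch of a continuous ML pipeline: data handling -> model
# learning -> quality gate -> deployment decision, plus one example
# trigger (data drift) for reiterating the pipeline. Toy in-memory
# stages; names and thresholds are illustrative.

from statistics import mean

def handle_data(raw):
    """Data handling: drop obviously invalid records."""
    return [x for x in raw if x is not None]

def learn_model(train):
    """Model learning: a trivial 'model' predicting the training mean."""
    mu = mean(train)
    return lambda _x: mu

def validate(model, holdout, max_error=1.0):
    """Quality gate: deploy only if mean absolute error is acceptable."""
    mae = mean(abs(model(x) - x) for x in holdout)
    return mae <= max_error

def drift_detected(reference, live, threshold=0.5):
    """Trigger: reiterate the pipeline when live data shifts away
    from the data the model was trained on."""
    return abs(mean(reference) - mean(live)) > threshold

def run_pipeline(raw):
    data = handle_data(raw)
    model = learn_model(data)
    deployed = validate(model, data)
    return model, deployed

model, deployed = run_pipeline([1.0, 1.2, None, 0.9])
print(deployed)                                 # True: gate passed
print(drift_detected([1.0, 1.1], [3.0, 3.2]))   # True: re-trigger pipeline
```

In a real setup each stage would be a separate, versioned pipeline step (and the triggers would also include new code or a retrained model), but the control flow is the same.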

This talk is in: English

Requirements for Anomaly Detection Techniques for Microservices

Version configurations of third-party software are essential to ensure a reliable and executable microservice architecture. Although minor version changes seem straightforward, since the functionality does not need to be adapted, unexpected behaviour emerges due to the complex infrastructure and many dependencies. Anomaly detection techniques identify these unexpected behaviour changes during runtime. However, the requirements that anomaly detection algorithms need to fulfil are largely unexplored. This case study therefore collects experiences from practitioners and monitoring datasets from a well-known benchmark system (Train Ticket) to identify five requirements: (1) early detectability, (2) reliability, (3) risk analysis, (4) adaptability, and (5) root cause analysis. Using these identified requirements and the extracted monitoring data, we additionally evaluate the practical applicability of three anomaly detection techniques.
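To illustrate the general idea of runtime anomaly detection on monitoring data, here is a minimal rolling z-score detector on a latency metric. This is a generic textbook technique chosen for illustration, not one of the three techniques evaluated in the case study, and the metric values and thresholds are made up:

```python
# Illustrative runtime anomaly detector: flag a monitoring value when it
# deviates strongly (> threshold standard deviations) from a rolling
# window of recent observations. Window size and threshold are
# assumptions, not values from the study.

from collections import deque
from statistics import mean, stdev

class ZScoreDetector:
    def __init__(self, window=20, threshold=3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        """Return True if `value` deviates strongly from recent behaviour."""
        if len(self.history) >= 2:
            mu, sigma = mean(self.history), stdev(self.history)
            anomalous = sigma > 0 and abs(value - mu) / sigma > self.threshold
        else:
            anomalous = False  # too little history to judge
        self.history.append(value)
        return anomalous

detector = ZScoreDetector(window=10, threshold=3.0)
latencies_ms = [12, 11, 13, 12, 11, 12, 13, 11, 12, 250]  # spike at the end
flags = [detector.observe(v) for v in latencies_ms]
print(flags[-1])  # True: the latency spike is flagged
```

Such a detector addresses early detectability but, on its own, none of the other requirements (reliability under noise, risk analysis, adaptability, root cause analysis), which is exactly why the requirements-based evaluation in the talk is needed.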