Klara Krieg

AI Program Manager @ Bosch

Main focus: Artificial Intelligence & bias

Website/blog: https://www.linkedin.com/in/klara-krieg/

Languages: German, English

City: Munich

State: Bavaria

Country: Germany

Topics: machine learning, chatbots, ai, diversity in tech, algorithmic fairness, fair ai, unconscious bias in ai, gender biased algorithms, unfair ai, chatgpt, ai fairness, biased ai, diversity in ai

Services: Talk, Moderation, Workshop management, Consulting, Interview

  Willing to travel for an event.

  I don't want to give talks for nonprofit organizations.

Personal note:

In my keynotes, I present the issue of unfair AI to diverse audiences, whether HR- or tech-oriented. I adapt flexibly to the audience and present new scientific findings from sociology and computer science. My speaking engagements focus on raising awareness and shaking up the perception of AI and machine learning, as they can perpetuate discrimination. By exposing how our own biases find their way into these technologies, I emphasize the importance of recognizing and rectifying these shortcomings.
My ultimate goal is to promote a more inclusive and equitable approach to AI development and implementation, fostering a future where these technologies empower and benefit everyone without exacerbating societal prejudices.

As a dedicated hobby researcher, I have contributed to the field by presenting and publishing three impactful papers, with a particular focus on addressing gender bias in AI.


Who is Klara Krieg?
Klara Krieg combines the worlds of engineering and AI not only through her academic career, but also through her current position as AI Program Manager for generative AI at Bosch. In her most recent project, she worked on AI and Large Language Models in Silicon Valley.
Klara's passion lies in making the complex world of AI algorithms tangible and understandable. She is convinced that technological understanding and the practical application of new technologies must go hand in hand in order to create real added value. Her work goes far beyond technical know-how; it aims to foster a deep understanding of the nuances of human-AI interaction and to demonstrate the enormous potential for everyone.
In addition to her professional work, Klara is involved in research on topics such as discriminatory AI and Fair AI. Her findings are not only incorporated into her projects, but also into her presentations, in which she makes complex issues accessible. Klara speaks both internally at Bosch and on external stages and aims to inspire audiences with her clear, application-oriented view of AI and provide a hands-on perspective of the complex subject area.
As an active member of various tech networks, Klara is passionate about women in tech and diversity. Her goal is to promote an inclusive future and working world where technology serves as a tool to create equality and fair opportunities for all. Klara Krieg stands at the intersection of technology and society and not only imparts knowledge about AI, but also inspires people to reflect on the ethical and social aspects of technology use.

What can you expect?
I currently offer two different keynotes:
1) Human - machine - AI: How do we interact with artificial intelligence?
In a world where Artificial Intelligence (AI) is increasingly permeating our everyday lives, Generative AI is opening up completely new horizons for companies, developers and end users. This keynote will take you into the fascinating world of Generative Artificial Intelligence, a technology that not only analyzes data, but also independently creates content such as text, images and music. We will dive into the mechanisms that drive Generative AI, from the basics of machine learning to the latest developments in deep learning and neural networks. At the same time, it addresses the challenges associated with this technology, particularly in the area of ethics and potential discrimination through internalized stereotypes. The aim is to raise awareness of the responsible use of AI and to discuss how unfair results can be avoided.

2) Unfair AI & discriminatory algorithms
Machine learning processes have now become an integral part of our society: Whether in applicant selection, medical treatment recommendations or credit scoring, whether Alexa, ChatGPT or Google - the boundaries between us and the technology around us are often blurred. Despite all the benefits, more and more reports are being published about unfair or discriminatory algorithms. Certain groups of people are often disproportionately affected and discriminated against on the basis of sensitive characteristics (e.g. race, gender). I will give you an insight into how our internalized stereotypes and prejudices can be reflected in AI-based systems. The talk is suitable for non-techies, techies and anyone interested!

Examples of previous talks / appearances:

EnBW (2024): Lunch & Learn: Diskriminierende Algorithmen und unfaire Künstliche Intelligenz

As part of the Lunch & Learns at EnBW, I had the opportunity to talk about gender bias in AI and discuss it with the participants.

This talk is in: German
Wirtschaftsforum Singen (2024): Mensch – Maschine – KI: Wie interagieren wir mit Künstlicher Intelligenz?

The kick-off explores the dynamic interaction between humans and machines through the use of generative AI, which independently creates content such as texts, images and music. Klara sheds light on how generative AI opens up new opportunities for businesses, developers and end users by enabling innovative solutions in various fields such as art, research and medicine. The presentation dives into the technical foundations and latest developments in generative AI, from machine learning to deep learning. At the same time, it addresses the challenges associated with this technology, particularly in the area of ethics and potential discrimination through internalized stereotypes. The aim is to raise awareness of responsible use of AI and to discuss how unfair results can be avoided.

Afterwards, Klara was part of the panel discussion with Südkurier editor-in-chief Stefan Lutz, Dr. Philip Häusser and Peter Keck. (https://www.suedkurier.de/region/kreis-konstanz/singen/denken-muessen-wir-immer-noch-selber-beim-wirtschaftsforum-geht-es-um-das-zukunftsthema-ki;art372458,11986689)

This talk is in: German
Google (2024): International Women's Day (IWD) #ImpactTheFuture 2024, Munich

As a panelist, I was invited to a session on the topic "Impact the Future of Technology and Society for the Better!" together with panelists from Microsoft and AI startups.
As part of the panel discussion, I shared insights from my ethical AI research and ethical AI practices across industries.

This talk is in: English
Bosch Connected World (2024): AI - technology for life?

Keynote speaker & panelist at Bosch ConnectedWorld, the biggest Bosch event worldwide, held once a year in Berlin.

AI - technology for life?
Is that true? Tap into the experiences of our two AI experts, Klara and Carissa, on how the reproduction of real-life unconscious biases in AI endangers this vision. Together with DEI expert Stefanie, they will explore opportunities to make AI bias-free and to bring this vision to life - for everyone!

This talk is in: English
Women in tech e.V. (2024): Gender Bias & unfair AI - wieso Algorithmen diskriminieren

March 19, 2024, 12:00 – 13:00
Location: digital
Organizer: Women in Tech e.V.
Category: #PowerLunch

Bundesministerium für Digitales und Verkehr (2024): Bias in Bias out - Einblicke in diskriminierende KI & unfaire Algorithmen

As an AI mentor and speaker, I accompanied the Digital Future Challenge 2024, an initiative of the Federal Ministry for Digital and Transport & Deloitte.

This talk is in: German
Institut für Medienpädagogik und Kommunikation Hessen e.V. (2023): Wenn der Algorithmus dich nicht für fähig hält - ein Einblick in diskriminierende Algorithmen und unfaire Künstliche Intelligenz

Machine learning methods have now established a firm place in our society: whether in the selection of job applicants, medical treatment recommendations, or creditworthiness checks, whether it's Alexa, ChatGPT, or Google. Despite all the advantages, algorithms often act unfairly or discriminate. Certain groups of people are disproportionately affected due to sensitive characteristics (such as race or gender). Klara Krieg shows how our internalized stereotypes and prejudices can find their way into AI-based systems and what this can mean for work with girls in Hesse. The talk is suitable for non-techies, techies, and anyone interested!

This talk is in: German
Bosch Level Up Event (2023): Ethics in Chatbots

Unfair AI - ethics in chatbots?
ChatGPT as a hyped topic is currently omnipresent in the media. But we must not forget that, in addition to technical innovation, the ethical implications must also be examined and taken seriously.
In my talk, I discuss Transformer models, how Google's 2017 paper ("Attention Is All You Need") introduced them, and how, as a technical foundation, such models made solutions like ChatGPT possible in the first place.
I show the causes and consequences of the vicious cycle of bias, and how we need to pay more attention to breaking stereotypes in order to prevent social grievances from being reflected in algorithms.

Deloitte WIN Session (2023): Unfaire Künstliche Intelligenz & diskriminierende Algorithmen

Presentation on unfair algorithms and gender-discriminatory AI at Deloitte's WIN Lunch.

DemoZ Ludwigsburg (2023): Unfaire KI in Chatbots - wie ChatGPT diskriminierend sein kann

As part of a DemoZ event in Ludwigsburg, I spoke about unfair algorithms in Google and in ChatGPT in particular, and discussed the vicious circle of algorithmic biases. Whether for non-techies or IT specialists, the topic is presented in an accessible, simple way, so that in the end everything from discussions of Transformer architecture to sociology and stereotype theory is covered.

Frauen machen MINT Podcast (2022): Klara Krieg über Gender Bias in Algorithmen

In the 20th episode of our #FrauenMachenMINT podcast, we talk to Klara Krieg about her research on #GenderBias in algorithms. Klara is a trainee in the Junior Managers Program AI (Artificial Intelligence) & IoT (Internet of Things) at Bosch, completed her master's degree in business informatics, and wrote her master's thesis on gender stereotypes in search engines.

Gender bias in algorithms means systematic, unfair discrimination in information systems, and is thus part of our digitalized society. Klara engages intensively, both academically and privately, with how algorithms, and search engines specifically, can be unfair, which stereotypes of women and men are thereby reinforced, and what we as users can do about it. She describes why gender-inclusive language has very concrete effects on information systems and algorithms, and why our reality influences the image results in search engines and vice versa.

Klara openly argues that we, as part of society, should sharpen our awareness of how we use our language with regard to stereotypes, and of how an information system that "thinks" in ones and zeros has no semantic leeway to implicitly include marginalized groups. It is a fascinating and relevant conversation about a topic at the intersection of technology, society, and sociology.

Listen in:

Spotify: https://lnkd.in/dSQRPpU

Apple Podcast: https://lnkd.in/ddqmeJx

This talk is in: German
Bosch LEVEL UP (2022) Future Topics Speaking Event: Unfair AI & Racial Biases

Racial biases and prejudices unfortunately accompany us unconsciously - not only in the real world, but also in the digital one. I took the audience through the harrowing world of unfair AI, focusing on social media and racial biases. From saliency mapping in neural networks to discrimination against people on Twitter: the audience was shaken up and at times left speechless at how this can happen.

Femtec Alumnae Event (2022): Wenn der Algorithmus dich nicht für fähig hält – Ein Einblick in diskriminierende Algorithmen und fair AI

Machine learning methods have now established a firm place in our society: whether in applicant selection, medical treatment recommendations, or creditworthiness checks, they often support or even replace human decision-makers. While the advantages of such automation are undeniable, there are increasing reports of algorithms that discriminate against certain groups of people because of so-called sensitive attributes, such as gender or skin color. As women, we are particularly often affected. Can algorithms learn to decide "fairly"? Can "fairness" even be measured? We give you an insight into the research field of fair machine learning, which addresses these questions. Discuss with us the challenges that automated decision-making systems pose and what we can do about them!

This talk is in: German
European Conference on Information Retrieval (2022): Do Perceived Gender Biases in Retrieval Results affect Users' Relevance Judgements?

Paper presentation and discussion of my research.

This talk is in: English