Main focus: Artificial Intelligence & bias
Websites/Blogs/Social Media Accounts:
https://www.linkedin.com/in/klara-krieg/
https://premium-speakers.com/referent-moderator/klara-krieg/
Languages: English, German
City: Munich
State: Bavaria
Country: Germany
Topics: machine learning, chatbots, ai, diversity in tech, algorithmic fairness, fair ai, unconscious bias in ai, gender biased algorithms, unfair ai, chatgpt, ai fairness, biased ai, diversity in ai
Services: Talk, Moderation, Workshop management, Consulting, Interview
Willing to travel for an event.
Willing to talk for nonprofit.
My keynotes address the issue of unfair AI for diverse audiences, whether HR- or tech-oriented. I adapt flexibly to the audience and present recent scientific findings from sociology and computer science. My talks focus on raising awareness and challenging the perception of AI and machine learning by showing how these technologies can perpetuate discrimination. By exposing how our own biases find their way into them, I emphasize the importance of recognizing and rectifying these shortcomings.
My ultimate goal is to promote a more inclusive and equitable approach to AI development and implementation, fostering a future where these technologies empower and benefit everyone without exacerbating societal prejudices.
As a dedicated hobby researcher, I have contributed to the field by presenting and publishing three impactful papers, with a particular focus on addressing gender bias in AI.
Who is Klara Krieg?
Klara Krieg combines the worlds of engineering and AI not only through her academic career, but also through her current position as AI Program Manager for generative AI at Bosch. In her last project, she worked on AI and Large Language Models in Silicon Valley.
Klara's passion lies in making the complex world of AI algorithms tangible and understandable. She is convinced that technological understanding and the practical application of new technologies must go hand in hand in order to create real added value. Her work goes far beyond technical know-how; it aims to foster a deep understanding of the nuances of human-AI interaction and to demonstrate the enormous potential for everyone.
In addition to her professional work, Klara is involved in research on topics such as discriminatory AI and Fair AI. Klara speaks both internally at Bosch and on external stages and aims to inspire audiences with her clear, application-oriented view of AI and provide a hands-on perspective of the complex subject area.
As an active member of various tech networks, Klara is passionate about women in tech and diversity. Her goal is to promote an inclusive future and working world where technology serves as a tool to create equality & fair opportunities for all and to inspire people to reflect on the ethical and social aspects of technology use.
What can you expect?
I currently offer 3 different keynotes:
1) Human - machine - AI: How do we interact with artificial intelligence?
In a world where Artificial Intelligence (AI) is increasingly permeating our everyday lives, Generative AI is opening up completely new horizons for companies, developers and end users. This keynote will take you into the fascinating world of Generative Artificial Intelligence, a technology that not only analyzes data, but also independently creates content such as text, images and music. We will dive into the mechanisms that drive Generative AI, from the basics of machine learning to the latest developments in deep learning and neural networks. At the same time, it addresses the challenges associated with this technology, particularly in the area of ethics and potential discrimination through internalized stereotypes. The aim is to raise awareness of the responsible use of AI and to discuss how unfair results can be avoided.
2) Unfair AI & discriminatory algorithms
Machine learning processes have now become an integral part of our society: Whether in applicant selection, medical treatment recommendations or credit scoring, whether Alexa, ChatGPT or Google - the boundaries between us and the technology around us are often blurred. Despite all the benefits, more and more reports are being published about unfair or discriminatory algorithms. Certain groups of people are often disproportionately affected and discriminated against on the basis of sensitive characteristics (e.g. race, gender). I will give you an insight into how our internalized stereotypes and prejudices can be reflected in AI-based systems. The talk is suitable for non-techies, techies and anyone interested!
3) AI and the new way of networking: How technology is redefining the way we connect
Together with the audience, we will explore the recently proclaimed era of the “Relationship Economy” and why genuine human connections and traits such as empathy and critical thinking are becoming increasingly important in an AI-driven world. The keynote aims to create a better understanding of how we can effectively utilize the synergy between humans and AI to thrive in both our professional and personal lives.
Examples of previous talks / appearances:
Alter Bundestag, Bonn: Talking about AI in front of 500 people!
I had the huge pleasure of giving a keynote for Buderus Deutschland at the historic former German parliament.
Explaining complex tech theories in an easy-to-digest, entertaining way is my favorite form of storytelling.
And for sure: discussing AI in a place so rich in history & tradition was a special experience, making me incredibly happy (I think you can tell from the 1st picture).
The masterclass highlights current trends and strategies for implementing GenAI applications, illustrated with hands-on examples from projects such as Bosch Digital. It addresses typical pain points and the central challenge of bringing GenAI to production readiness. Drawing on examples and projects from leading companies such as Toyota and Aleph Alpha, it presents success factors and methods for reducing risk, with a particular focus on value creation and added value through GenAI.
Klara Krieg
AI Program Manager – Bosch
Hans Ramsl
Success Machine Learning Engineer – Weights & Biases
Keynote for women in cyber security event in November 2024
This talk is in: English and German
As part of She Is Tech, a UN Women event, I gave a keynote on gender bias in AI and how to mitigate the risk of systematic discrimination in tech.
This talk is in: English
Together with the audience, we will explore the recently proclaimed era of the "Relationship Economy" and find out why genuine human connections and traits such as empathy and critical thinking are becoming increasingly important in an AI-driven world. The keynote aims to create a better understanding of how we can effectively utilize the synergy between humans and AI to grow in both our professional and personal lives.
Feedback from Barbara Bosch, speaking coach:
"Experiencing Klara on stage during her keynote was a delight. Her keynote was on point, inspiring and rhetorically brilliant (and she wasn't even one of my coaching clients 😊)."
_________________________
AI and the new way of networking: How technology is redefining the way we connect
Together with the audience, we will explore the recently proclaimed era of the “Relationship Economy” and why genuine human connections and traits such as empathy and critical thinking are becoming increasingly important in an AI-driven world. The keynote aims to create a better understanding of how we can effectively utilize the synergy between humans and AI to thrive in both our professional and personal lives.
This talk is in: German
Imagine the following scene: ‘A doctor arrives at the hotel, checks in at reception and then runs into the cleaner on the way to his room.’
How did you imagine this scene? Was the doctor a middle-aged white man? Was the cleaner a woman busily tidying up? Did you also imagine a young person at the hotel reception?
Such images are often firmly anchored in our minds - and also in the algorithms of artificial intelligence. How do such stereotypes influence AI systems like ChatGPT? And how can we ensure that AI algorithms develop in a non-discriminatory direction?
We will address these and many other questions together with Klara Krieg, expert for fair AI.
Summer is drawing to a close and the GenAI hype of 2023 is slowly fading – has the bubble burst, and are we heading into a new "AI winter"? In the Off the Record session "Business Value through GenAI", we offer exclusive insights into the current status quo in business and industry. How do Germany and Europe fit into global AI developments? We present practical approaches for successful integration, sustainable scaling and optimal data value creation. Look forward to inspiring impulses and lively discussions!
Moderation: Barbara Burk
Daniela Rittmeier
Head of Data and AI Center of Excellence – Capgemini
Klara Krieg
AI Program Manager – Bosch Digital
Dr. Philipp Hartmann
Director of AI Strategy – appliedAI
Dr. Graciana Petersen
Head of Strategy & Transformation – ZF Group Friedrichshafen
Barbara Burk
Director Business Development & Projektmanagement – Handelsblatt Research Institute
Panel Talk at the Women of Tech event in Stuttgart
How can we create fair AI?
together with
Klara Krieg, Robert Bosch GmbH
Walid Mehanna, Merck Group
Naz van Norel, ETHOS AI, Women AI Academy
Heike Schenk, Mercedes-Benz Group AG
As part of the Lunch & Learns at EnBW, I had the opportunity to talk about gender bias in AI and discuss it with the participants.
This talk is in: German
The kick-off explores the dynamic interaction between humans and machines through the use of generative AI, which independently creates content such as texts, images and music. Klara sheds light on how generative AI opens up new opportunities for businesses, developers and end users by enabling innovative solutions in various fields such as art, research and medicine. The presentation dives into the technical foundations and latest developments in generative AI, from machine learning to deep learning. At the same time, it addresses the challenges associated with this technology, particularly in the area of ethics and potential discrimination through internalized stereotypes. The aim is to raise awareness of responsible use of AI and to discuss how unfair results can be avoided.
Afterwards, Klara was part of the panel discussion with Südkurier editor-in-chief Stefan Lutz, Dr. Philip Häusser and Peter Keck. (https://www.suedkurier.de/region/kreis-konstanz/singen/denken-muessen-wir-immer-noch-selber-beim-wirtschaftsforum-geht-es-um-das-zukunftsthema-ki;art372458,11986689)
This talk is in: German
As a panelist, I was invited to a session on the topic "Impact the Future of Technology and Society for the Better!" together with panelists from Microsoft and AI startups.
As part of the panel discussion, I contributed insights from my ethical AI research and on ethical AI practices across industries.
Keynote Speaker & Panelist at the Bosch ConnectedWorld, the biggest Bosch event worldwide, happening once a year in Berlin.
AI - technology for life?
Is that true? Tap into the experiences of our two AI experts, Klara and Carissa, on how the reproduction of real-life unconscious biases in AI endangers this vision. Together with DEI expert Stefanie, they will explore opportunities to make AI bias-free and to bring this vision to life – for everyone!
March 19, 2024, 12:00 PM – 1:00 PM
Location: digital
Organizer: Women in Tech e.V.
Category: #PowerLunch
As an AI mentor and speaker, I was able to accompany the Digital Future Challenge 2024, an initiative of the Federal Ministry & Deloitte.
This talk is in: German
Machine learning methods have now established a firm place in our society: whether in the selection of job applicants, medical treatment recommendations, or creditworthiness checks, whether it's Alexa, ChatGPT, or Google. Despite all the advantages, algorithms often act unfairly or discriminate. Certain groups of people are disproportionately affected due to sensitive characteristics (such as race or gender). Klara Krieg shows how our internalized stereotypes and prejudices can surface in AI-based systems and what this can mean for work with girls in Hesse. The lecture is suitable for non-techies, techies, and all interested parties!
This talk is in: German
Unfair AI - ethics in chatbots?
ChatGPT as a hyped topic is currently omnipresent in the media. But we must not forget that, in addition to technical innovation, the ethical implications must also be examined and taken seriously.
In my talk, I will discuss Transformer models: how the 2017 Google paper made them possible and how, as a technical foundation, such models enabled solutions like ChatGPT in the first place.
I will show the causes and consequences of the vicious cycle of bias, and why we need to pay more attention to breaking stereotypes in order to prevent social inequities from being reflected in algorithms.
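To make the idea of bias reflected in algorithms concrete, here is a minimal, purely illustrative sketch (not from the talk itself): a WEAT-style association test on hand-made word vectors, where the second dimension stands in for a "gender" direction that a biased training corpus might have produced. All words and vectors are hypothetical.

```python
# Toy illustration: measuring gender association in word embeddings,
# in the spirit of WEAT-style bias tests. Vectors are made up.
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 3-d embeddings; the second dimension loosely encodes
# a "gender" direction, as a biased corpus might induce.
emb = {
    "he":     [0.9,  0.8, 0.1],
    "she":    [0.9, -0.8, 0.1],
    "doctor": [0.7,  0.5, 0.4],
    "nurse":  [0.7, -0.5, 0.4],
}

def gender_lean(word):
    """Positive = embedded closer to 'he', negative = closer to 'she'."""
    return cosine(emb[word], emb["he"]) - cosine(emb[word], emb["she"])

for w in ("doctor", "nurse"):
    print(w, round(gender_lean(w), 3))
```

In real embeddings trained on web text, exactly this kind of asymmetry has been observed for occupation words, which is what keeps the stereotype cycle turning.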
Presentation on unfair algorithms and gender-discriminatory AI at Deloitte's WIN Lunch.
As part of a DemoZ event in Ludwigsburg, I spoke about unfair algorithms, in Google and ChatGPT in particular, and discussed the vicious circle of algorithmic biases. Whether for non-techies or IT specialists, the topic is presented in an accessible, simple way, so that in the end everything from discussions of Transformer architecture to sociology and stereotype theory is covered.
In episode 20 of our #FrauenMachenMINT podcast, we talk to Klara Krieg about her research field of #GenderBias in algorithms. Klara is a trainee in the Junior Managers Program AI (Artificial Intelligence) & IoT (Internet of Things) at Bosch, completed her master's degree in information systems, and wrote her master's thesis on gender stereotypes in search engines.
Gender bias in algorithms means systematic, unfair discrimination in information systems, and is thus part of our digitalized society. Klara engages intensively, both academically and privately, with how algorithms, and search engines in particular, can be unfair, which stereotypes of women and men they reinforce, and what we as users can do about it. She describes why gender-inclusive language has very concrete effects on information systems and algorithms, and why our reality influences the image results in search engines and vice versa.
Klara openly advocates that we, as part of society, should sharpen our awareness of how we use language with regard to stereotypes, and of how an information system that "thinks" in ones and zeros has no semantic leeway to implicitly include marginalized groups. It is a fascinating and relevant conversation about a topic at the intersection of technology, society and sociology.
Listen in:
Spotify: https://lnkd.in/dSQRPpU
Apple Podcast: https://lnkd.in/ddqmeJx
This talk is in: German
Racial biases and prejudices unfortunately accompany us unconsciously - not only in the real world, but also in the digital world. I took the audience through the harrowing world of unfair AI, specifically social media and racial biases. From saliency mapping in neural networks to discrimination against people on Twitter: the audience was shaken up and at times left speechless as to how this can happen.
Machine learning methods have established a firm place in our society: whether in applicant selection, medical treatment recommendations or creditworthiness checks, they often support or even replace human decision-makers. While the benefits of such automation are undeniable, there are increasing reports of algorithms that discriminate against certain groups of people on the basis of so-called sensitive attributes, such as gender or skin color. As women, we are particularly often affected. Can algorithms learn to decide "fairly"? Can "fairness" even be measured? We give you an insight into the research field of fair machine learning, which addresses these questions. Discuss with us the challenges that automated decision-making systems pose and what we can do about them!
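The question "can fairness even be measured?" has concrete answers in fair-ML research. As one illustrative example (a toy sketch with made-up data, not material from the talk), the demographic parity difference compares positive-outcome rates between two groups of a sensitive attribute:

```python
# Toy sketch: demographic parity difference, one common group-fairness
# metric. 1 = positive decision (e.g. loan approved), 0 = rejected.
# The decision lists below are hypothetical model outputs.

def positive_rate(decisions):
    """Fraction of positive decisions in a group."""
    return sum(decisions) / len(decisions)

group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # e.g. male applicants
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # e.g. female applicants

# A perfectly "demographically fair" model would have a gap of 0.
gap = positive_rate(group_a) - positive_rate(group_b)
print(f"demographic parity difference: {gap:.2f}")
```

Demographic parity is only one of several competing fairness definitions (equalized odds and calibration are others), and it is a known result that they cannot all be satisfied at once — which is exactly why the topic needs discussion beyond the math.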
This talk is in: German
Do Perceived Gender Biases in Retrieval Results affect Users’ Relevance Judgements?
Paper presentation and discussion about my research.