Discover the World of LLMs and AI:

Lecture Series on AI and Large Language Models

Large language models are driving the current AI revolution. HIDA's Lecture Series on AI and LLMs gives you insight into the many facets of the topic.

Since well before the launch of ChatGPT, the use, development, and implementation of AI and large language models have been of great interest to the scientific community.

For this reason, the Helmholtz Information & Data Science Academy (HIDA) is organizing a series of monthly lectures on these topics.

From basic technical knowledge about the systems and their impact on the scientific community to ethical issues that may arise from the use of AI and LLMs, the lectures cover a wide range of topics.

All speakers are highly qualified scientists from the Helmholtz Association and its partners.


Watch again: Past Lectures

The speaker: Lea Schönherr

Lea Schönherr is a tenure-track faculty member at CISPA Helmholtz Center for Information Security, where she works on information security with a focus on adversarial machine learning. She received her Ph.D. in 2021 from Ruhr University Bochum (RUB), Germany, where she was advised by Prof. Dr.-Ing. Dorothea Kolossa in the Cognitive Signal Processing group. She received two scholarships from UbiCrypt (DFG Research Training Group) and CASA (DFG Cluster of Excellence).

Challenges and Threats in Generative AI: Exploits and Misuse

Abstract

This talk will discuss the security challenges associated with generative AI in more detail. These fall into two categories: manipulated inputs on the one hand, and the misuse of computer-generated results on the other.

The speaker: Sahar Abdelnabi

Sahar Abdelnabi is an AI security researcher at the Microsoft Security Response Center (Cambridge, UK). Previously, she was a doctoral researcher at the CISPA Helmholtz Center for Information Security. Her research interests lie at the broad intersection of machine learning with security, safety, and sociopolitical aspects.

On New Security and Safety Challenges Posed by LLMs and How to Evaluate Them

Abstract

This online lecture explores the widespread integration of Large Language Models (LLMs) into real-world applications, highlighting both the vast opportunities for aiding diverse tasks and the significant challenges related to security and safety. Unlike earlier models, whose generation was static, LLMs are dynamic, multi-turn, and adaptable, which makes robust evaluation and control substantially more difficult. Join us as we examine the emerging risks associated with LLMs, delve into methodologies for rigorous evaluation, and tackle the complex challenges involved in implementing effective mitigations.

The speaker: Dr. Steffen Albrecht

Dr. Steffen Albrecht is a scientific staff member at the Office of Technology Assessment at the German Bundestag, where he advises members of parliament on scientific and technological developments. After completing his doctorate in sociology in Hamburg, he worked for several years at universities and companies in Hamburg, Berlin, and Dresden, researching the impact of digitalization on society and its various sectors, in particular education, politics, and science. He currently focuses on bio- and medical technologies, digitalization, and artificial intelligence. He is also involved in the further development of technology assessment methods.

Contextualizing LLMs – What are the social and ethical implications?

Abstract

This talk focuses on the social implications of the widespread use of large language models. Steffen Albrecht discusses how generative AI could change public debate, administration, and the arts, and how politics and society can steer these developments. After all, in order to recognize the potential and limitations of current AI systems and find ways to improve them, we need to look beyond their technological functions and consider the context of their development and use.

The speaker: Jörg Pohle

Jörg Pohle is a postdoc and head of the research programme “Data, actors, infrastructures: The governance of data-driven innovation and cyber security” at the Alexander von Humboldt Institute for Internet and Society in Berlin.

Ally or Adversary: Examining the Impact of Large Language Models on Academics and Academic Work

Abstract

This talk will analyse the impact of Large Language Models (LLMs) on science. Potential benefits and risks for the scientific system and the scientific profession will be discussed, and ethical issues and the role of LLMs in academic tasks will be highlighted using empirical data. In addition, the changing role of academics in the digital world will be examined, including the datafication and quantification of academics and their institutions, as well as the challenges ahead.

The speaker: Jan Ebert

Jan Ebert studied Cognitive Informatics and Intelligent Systems at Bielefeld University. Driven by a strong interest in deep learning and high-performance computing, he joined Jülich Supercomputing Centre as a software engineer and researcher in large-scale HPC machine and deep learning. There he supports researchers in various domains in applying artificial intelligence (AI) techniques to their research, and he co-founded LAION, an open community for open AI projects.

ChatGPT's Background: Exploring the World of Large Language Models

Abstract

In this talk, various key aspects of training LLMs will be explained in detail. This includes discussions on the architecture of such models, the selection and fine-tuning of training data, and the challenges and approaches to dealing with large datasets and computational resources. Current research trends and methodological innovations in the field of LLM training will also be discussed.

In addition, various exemplary applications and possible uses of LLMs will be presented during the lecture. This includes applications in natural language processing, automatic text generation, translation, sentiment analysis and many more. Practical case studies and success stories from industry and research will also be presented to illustrate the versatility and potential of LLMs.

An important part of the presentation is the current state of the technology and an outlook on the future. Current developments and trends in LLM research and application will be highlighted, and potential challenges and opportunities that could arise in the coming years will be discussed. Possible areas of application beyond the current range will also be considered in order to take a look at future developments and innovations in the field of language technology.

The speaker: Bert Heinrichs

Bert Heinrichs is a professor of ethics and applied ethics at the Institute for Science and Ethics (IWE) at the University of Bonn and head of the research group “Neuroethics and Ethics of AI” at the Institute of Neuroscience and Medicine: Brain and Behaviour (INM-7) at Forschungszentrum Jülich.

He studied philosophy, mathematics and education in Bonn and Grenoble. He received his MA in 2001, followed by a doctorate in 2007 and his habilitation in 2013.

Prior to his current position, he was Head of the Scientific Department of the German Reference Center for Ethics in the Life Sciences (DRZE). He works on topics of neuroethics, ethics of AI, research ethics and medical ethics. He is also interested in questions of contemporary metaethics.

Ethical Considerations on Hate Speech and AI

Abstract

In this presentation, the speaker will delve into ethical dilemmas surrounding the handling of hate speech. The primary emphasis will be on exploring how AI can play a role in curbing the dissemination of hate speech and the potential challenges it poses. The discussion will distinguish between two key realms: hate speech prevalent on social media platforms and that which emerges from Large Language Models (LLMs).

Within these realms, there are distinct concerns to address. On social media platforms, hasty intervention may inadvertently lead to censorship, while with LLM-generated hate speech, there's a looming threat of stifling innovation. Thus, the imperative lies in identifying ethically sound compromises and devising technical mechanisms for their effective implementation.
