Projects

Below, we provide basic information about the funded research projects in which the CSS group participates:

ArgSchool: Computational Support for Learning Argumentative Writing in Digital School Education (2021–2024)

In this project, we aim to study how to support German school students in learning to write argumentative texts through computational methods that provide developmental feedback. These methods will assess and explain which aspects of a text are good, which need to be improved, and how to improve them, adapted to the student’s learning stage. We seek to provide answers to three main research questions: (1) How to robustly mine the structure of German argumentative learner texts? (2) How to effectively assess the learning stage of a student based on a given argumentative text? (3) How to provide developmental feedback on an argumentative text adapted to the learning stage?
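
To make research question (1) a little more tangible, here is a deliberately simple sketch (the German discourse-marker lists, the labels, and the stage heuristic are assumptions made up for this illustration, not the project's models): it labels the sentences of a short learner text as claims or premises based on connectives and derives a very coarse proxy for the learning stage.

    # Toy sketch: heuristic argument mining for a German learner text.
    # The marker lists and the stage heuristic are illustrative assumptions,
    # not the project's actual models.
    import re

    PREMISE_MARKERS = ("weil", "denn", "zum beispiel")              # reason cues
    CLAIM_MARKERS = ("deshalb", "daher", "meiner meinung nach",
                     "ich finde", "ich denke")                      # stance cues

    def label_sentences(text):
        """Split a text into sentences and assign a coarse argumentative label."""
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        labeled = []
        for s in sentences:
            lower = s.lower()
            if any(m in lower for m in PREMISE_MARKERS):
                label = "premise"
            elif any(m in lower for m in CLAIM_MARKERS):
                label = "claim"
            else:
                label = "other"
            labeled.append((label, s))
        return labeled

    def estimate_stage(labeled):
        """Very rough proxy for a learning stage: does the text state claims,
        and does it back them up with premises?"""
        claims = sum(1 for l, _ in labeled if l == "claim")
        premises = sum(1 for l, _ in labeled if l == "premise")
        if claims and premises:
            return "claims supported by reasons"
        if claims:
            return "claims without support"
        return "no explicit argumentation found"

    essay = ("Ich finde, dass Hausaufgaben wichtig sind. "
             "Weil man den Stoff dann noch einmal übt. "
             "Deshalb sollten sie nicht abgeschafft werden.")
    labeled = label_sentences(essay)
    for label, sentence in labeled:
        print(f"{label:8s} {sentence}")
    print("Estimated stage:", estimate_stage(labeled))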

The motivation behind this DFG-funded project is that digital technology is increasingly transforming our culture and forms of learning. While vigorous efforts are made to implement digital technologies in school education, software for teaching German is so far limited to simple multiple-choice tests and the like, not providing any formative, let alone individualized, feedback. Argumentative writing is one of the most standard tasks in school education, taught incrementally at different ages. Due to its importance across school subjects, it defines a suitable starting point for more “intelligent” computational learning support. We focus on the structural composition of argumentative texts, leaving their content and its relation to underlying sources to future work.
 

Towards a Framework for Assessing Explanation Quality

We take part with two subprojects in the transregional Collaborative Research Center TRR 318 "Constructing Explainability". In Subproject INF, we study the pragmatic goal of all explaining processes: to be successful — that is, for the explanation to achieve the intended form of understanding (enabling, comprehension) of the given explanandum on the explainee's side.

In particular, we aim to investigate what characteristics successful explaining processes share in general and what is specific to a given context or setting. To this end, we will first establish and define a common vocabulary of the different elements of an explaining process. We will then explore what quality dimensions can be assessed for explaining processes. Modeling these processes based on the elements represented in the vocabulary, we will develop and evaluate new computational methods that analyze the content, style, and structure of explanations in terms of linguistic features, interaction aspects, and available context parameters. Our goal is to establish and empirically underpin a first theory of explanation quality based on the vocabulary, thereby laying a common ground for the whole TRR to understand how success in explaining processes is achieved. This is a challenge in light of our assumptions that any explanation is dynamic and co-constructed and that the quality and success of explanations and explaining processes may be seen differently from different viewpoints.
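
As a rough illustration of what modeling an explaining process over such a vocabulary could look like, the sketch below (the element names, the dialogue, and the single overlap measure are assumptions invented for this example, not the TRR vocabulary or our methods) represents a dialogue as a sequence of moves and computes the lexical overlap between explainer and explainee turns as one toy proxy for a quality dimension.

    # Minimal sketch of representing an explaining process and scoring one
    # toy quality dimension. Element names and the overlap measure are
    # illustrative assumptions, not the TRR 318 vocabulary or methods.
    from dataclasses import dataclass

    @dataclass
    class Move:
        role: str          # "explainer" or "explainee"
        act: str           # e.g., "explain", "ask", "confirm"
        text: str

    @dataclass
    class ExplainingProcess:
        explanandum: str
        moves: list

    def lexical_alignment(process):
        """Toy quality proxy: vocabulary overlap between the two roles,
        hinting at whether the partners converge on shared wording."""
        words = {"explainer": set(), "explainee": set()}
        for move in process.moves:
            words[move.role].update(move.text.lower().split())
        shared = words["explainer"] & words["explainee"]
        total = words["explainer"] | words["explainee"]
        return len(shared) / len(total) if total else 0.0

    dialogue = ExplainingProcess(
        explanandum="recursion",
        moves=[
            Move("explainer", "explain", "a function that calls itself is recursive"),
            Move("explainee", "ask", "so the function calls itself again and again"),
            Move("explainer", "confirm", "yes until a base case stops the calls"),
        ],
    )
    print(f"lexical alignment: {lexical_alignment(dialogue):.2f}")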

More information can be found on the TRR web page of Subproject INF.
 

Metaphors as an Explanation Tool (2021–2025)

We take part with two subprojects in the transregional Collaborative Research Center TRR 318 "Constructing Explainability". In Subproject C04, we study how explainers and explainees focus attention, through their choice of metaphors, on some aspects of the explanandum and draw attention away from others.

In particular, this project focuses on the metaphorical space established by different metaphors for one and the same concept. We seek to understand how metaphors foster (and impede) understanding through highlighting and hiding. Moreover, we aim to establish knowledge about when and how metaphors are used and adapted in explanatory dialogues; as well as how explainee, explainer, and the topical domain of the explanandum contribute to this process. By providing an understanding of how metaphorical explanations function and of how metaphor use responds to and changes contextual factors, we will contribute to the development of co-constructive explaining AI systems.
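
To illustrate the idea of a metaphorical space as a simple data structure, the following sketch (the explanandum, the metaphors, and the aspect lists are invented for this example, not project data) records which aspects each metaphor highlights and hides and compares two metaphors for the same concept.

    # Toy sketch of a "metaphorical space" for one explanandum.
    # The metaphors and aspect lists below are invented examples,
    # not data or models from Subproject C04.

    metaphorical_space = {
        "explanandum": "computer virus",
        "metaphors": {
            "illness": {
                "highlights": {"spreading", "infection", "protection"},
                "hides": {"intentional design", "specific targets"},
            },
            "burglar": {
                "highlights": {"intentional design", "breaking in", "specific targets"},
                "hides": {"spreading", "self-replication"},
            },
        },
    }

    def compare(space, a, b):
        """Show which aspects one metaphor highlights that the other hides."""
        ma, mb = space["metaphors"][a], space["metaphors"][b]
        return {
            f"only '{a}' highlights": ma["highlights"] & mb["hides"],
            f"only '{b}' highlights": mb["highlights"] & ma["hides"],
        }

    for key, aspects in compare(metaphorical_space, "illness", "burglar").items():
        print(key, "->", sorted(aspects))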

More information can be found on the TRR web page of Subproject C04.
 

OASiS: Objective Argument Summarization in Search (2021–2024)

Conceptually, an argument logically combines a claim with a set of reasons. In real-world text, however, arguments may be spread over several sentences, often intertwining multiple claims and reasons with context information and rhetorical devices, and they are inherently subjective. This project aims to study how to computationally obtain an objective summary of the gist of an argumentative text. In particular, we aim to establish foundations of natural language processing methods that (1) analyze the gist of an argument's reasoning, (2) generate a text snippet that summarizes the gist concisely, and (3) neutralize potential subjective bias in the summary as far as possible.
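
As a deliberately simple illustration of these three steps, the sketch below (the centrality heuristic, the snippet length, and the small subjectivity lexicon are assumptions for this example, not the project's methods) selects a central sentence as the gist, truncates it into a snippet, and removes a few subjective intensifiers.

    # Toy three-step pipeline: (1) gist selection, (2) snippet generation,
    # (3) neutralization of subjective wording. All heuristics and the
    # intensifier list are illustrative assumptions, not OASiS methods.
    import re

    SUBJECTIVE = {"clearly", "obviously", "ridiculous", "outrageous", "absolutely"}

    def select_gist(argument):
        """Step 1: pick the sentence with the highest word overlap with the
        rest of the text as a crude proxy for the argument's gist."""
        sentences = re.split(r"(?<=[.!?])\s+", argument.strip())
        tokens = [set(w.strip(",.!?").lower() for w in s.split()) for s in sentences]
        def centrality(i):
            return sum(len(tokens[i] & t) for j, t in enumerate(tokens) if j != i)
        return max(range(len(sentences)), key=centrality), sentences

    def make_snippet(sentence, max_words=20):
        """Step 2: generate a short snippet (here: simple truncation)."""
        words = sentence.split()
        return " ".join(words[:max_words]) + ("..." if len(words) > max_words else "")

    def neutralize(snippet):
        """Step 3: drop subjective intensifiers as a minimal form of debiasing."""
        kept = [w for w in snippet.split() if w.lower().strip(",.!?") not in SUBJECTIVE]
        return " ".join(kept)

    argument = ("Social distancing is clearly necessary. "
                "It reduces contacts and therefore slows infections. "
                "Slowing infections keeps hospitals from being overwhelmed.")
    idx, sentences = select_gist(argument)
    print(neutralize(make_snippet(sentences[idx])))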

The rationale of the DFG-funded project is that argumentation machines, as envisioned by the RATIO priority program (SPP 1999), are meant to present the different positions people may have towards controversial issues, such as abortion or social distancing. One prototypical machine is our argument search engine, args.me, which juxtaposes pro and con arguments from the web in response to user queries in order to support self-determined opinion formation. A key aspect of args.me and comparable machines is to generate argument snippets, which give the user an efficient overview of the usually manifold arguments. Standard snippet generation has turned out to be insufficient for this purpose. We hypothesize that the best argument snippet summarizes the argument's gist objectively.

More information about the priority program can be found on the RATIO website.

 

Bias in AI Models (2020–2022)

"Bias in AI models" is a joint project by the Paderborn University and Bielefeld University in support of the Joint Artificial Intelligence Institute (JAII).

The term "bias" is describing the phenomenon that AI models reflect correlations in data instead of actual causalities when making decisions, even if it is based on non-justifiable and rather historically caused relations. Popular examples are the prediction of the probability of committing a crime based on the ethnicity of a person or the recommendation to employ a person or not based on genders. Since AI models will inherently be more ubiquitous in all fields of society, economy and science in the future, such biases have a large potential impact on marginalized groups and society at large.

Within this project, we analyze the impact of data on the learning process of AI models, with a focus on language and its influence on different aspects such as opinion formation.
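
As a minimal illustration of how such correlations can surface in language data, the following sketch (the toy corpus and word lists are invented; this is not the project's analysis pipeline) counts how often occupation words co-occur with gendered pronouns in a handful of sentences.

    # Toy sketch: probing gender-occupation correlations in a small corpus.
    # The corpus and word lists are invented for illustration; this is not
    # the project's analysis pipeline.
    from collections import Counter

    corpus = [
        "the nurse said she would help the patient",
        "the engineer said he fixed the bridge",
        "the nurse and her colleague worked the night shift",
        "the engineer presented his new design",
    ]
    FEMALE = {"she", "her"}
    MALE = {"he", "his"}

    def gender_counts(corpus, occupation):
        """Count how often an occupation co-occurs with female vs. male words."""
        counts = Counter()
        for sentence in corpus:
            words = sentence.split()
            if occupation in words:
                counts["female"] += sum(w in FEMALE for w in words)
                counts["male"] += sum(w in MALE for w in words)
        return counts

    for occupation in ("nurse", "engineer"):
        c = gender_counts(corpus, occupation)
        print(f"{occupation}: female={c['female']}, male={c['male']}")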

 

Parameterized Service Specifications (2019–2023)

We take part in the DFG-funded Collaborative Research Center (CRC) 901 "On-the-Fly Computing" of Paderborn University. Our subproject B1 deals with different types of requirement specifications, which enable the successful search, composition, and analysis of services. 

In particular, we work on generating explanations of the configured services. So far, users do not know which of their requirements a created service fulfills and which it does not. The configured services should therefore be explained and adequately presented. To this end, we will generate natural language explanations that describe the configuration of the created service in comparison to the service specification at different levels of granularity (from facts to reasons). For generation, we will explore combinations of classic grammar-based and discourse planning methods with state-of-the-art neural sequence-to-sequence models. A key feature of our approach is adapting the explanation style to the language of the user.
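
A very reduced sketch of what such explanations might look like at two granularity levels is given below; the specification format, the example requirements, and the plain templates are assumptions made up for this illustration, whereas the project itself combines grammar-based and neural generation rather than fixed templates.

    # Toy sketch: template-based explanation of a configured service against a
    # user's specification, at two granularity levels ("facts" vs. "reasons").
    # Specification format, example data, and templates are illustrative
    # assumptions, not the B1 approach.

    specification = {"max_price": 50, "encryption": True, "max_response_ms": 200}
    configured = {"max_price": 40, "encryption": True, "max_response_ms": 350}

    def explain(spec, config, level="facts"):
        lines = []
        for requirement, wanted in spec.items():
            actual = config.get(requirement)
            if requirement.startswith("max_"):
                fulfilled = actual is not None and actual <= wanted
            else:
                fulfilled = actual == wanted
            verdict = "fulfilled" if fulfilled else "not fulfilled"
            if level == "facts":
                lines.append(f"Requirement '{requirement}' is {verdict}.")
            else:  # "reasons": add the values behind the verdict
                lines.append(f"Requirement '{requirement}' is {verdict}: "
                             f"you asked for {wanted}, the service provides {actual}.")
        return "\n".join(lines)

    print(explain(specification, configured, level="facts"))
    print()
    print(explain(specification, configured, level="reasons"))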

More information can be found on the web page of Subproject B1.