
Projects

Below, we provide basic information about the funded research projects in which the CSS group participates.

 

OASiS: Objective Argument Summarization in Search (2021–2024)

Conceptually, an argument logically combines a claim with a set of reasons. In real-world text, however, arguments may be spread over several sentences, often intertwine multiple claims and reasons along with context information and rhetorical devices, and are inherently subjective. This project aims to study how to computationally obtain an objective summary of the gist of an argumentative text. In particular, we aim to establish the foundations of natural language processing methods that (1) analyze the gist of an argument's reasoning, (2) generate a text snippet that summarizes the gist concisely, and (3) neutralize potential subjective bias in the summary as far as possible.

The rationale of the DFG-funded project is that argumentation machines, as envisioned by the RATIO priority program (SPP 1999), are meant to present the different positions people may have towards controversial issues, such as abortion or social distancing. One prototypical machine is our argument search engine, args.me, which opposes pro and con arguments from the web in response to user queries, in order to support self-determined opinion formation. A key aspect of args.me and comparable machines is to generate argument snippets, which give the user an efficient overview of the usually manifold arguments. Standard snippet generation has turned out to be insufficient for this purpose. We hypothesize that the best argument snippet summarizes the argument's gist objectively.
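To make the conceptual picture concrete, the following is a minimal, purely illustrative Python sketch; the Argument class and the naive_snippet baseline are our own assumptions for illustration and not part of the project or of args.me:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Argument:
        # Conceptual view from above: a claim supported by a set of reasons (premises).
        claim: str
        premises: List[str] = field(default_factory=list)

    def naive_snippet(arg: Argument, max_chars: int = 160) -> str:
        # Hypothetical standard baseline: truncate the claim plus the first premise.
        # A gist-oriented summarizer would instead select the core reasoning
        # and neutralize subjective wording, as targeted by the project.
        text = arg.claim + " " + (arg.premises[0] if arg.premises else "")
        return text[:max_chars].rstrip()

    arg = Argument(
        claim="Social distancing is justified during a pandemic.",
        premises=["It demonstrably slows the spread of the virus.",
                  "Hospitals would otherwise be overwhelmed."],
    )
    print(naive_snippet(arg))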

More information about the priority program can be found on the RATIO website.

 

Bias in AI Models (2020–2022)

"Bias in AI models" is a joint project by the Paderborn University and Bielefeld University in support of the Joint Artificial Intelligence Institute (JAII).

The term "bias" is describing the phenomenon that AI models reflect correlations in data instead of actual causalities when making decisions, even if it is based on non-justifiable and rather historically caused relations. Popular examples are the prediction of the probability of committing a crime based on the ethnicity of a person or the recommendation to employ a person or not based on genders. Since AI models will inherently be more ubiquitous in all fields of society, economy and science in the future, such biases have a large potential impact on marginalized groups and society at large.

Within this project, we analyze the impact of data on the learning process of AI models, with a focus on language and its influence on different aspects, e.g., opinion formation.

 

On-the-Fly Computing, Subproject B1 (2019–2023)

We take part in the DFG-funded Collaborative Research Center (CRC) 901 of Paderborn University. Our subproject deals with different types of requirement specifications, which enable the successful search, composition, and analysis of services. 

In particular, we work on generating explanations of the configured services. So far, users do not know which of their requirements have been fulfilled in a created service and which have not; the configured services should therefore be explained and adequately presented. To this end, we will generate natural language explanations that describe the configuration of the created service in comparison to the service specification at different levels of granularity (from facts to reasons). For generation, we will explore combinations of classic grammar-based and discourse planning methods with state-of-the-art neural sequence-to-sequence models. A key feature of our approach is the adaptation of the explanation style to the language of the user.
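As a rough, non-authoritative sketch of how template-based realization could be combined with a neural sequence-to-sequence model (the function names, the example requirements, and the choice of a generic pretrained model are all our own assumptions, not the project's actual pipeline):

    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    def template_explanation(fulfilled, missing):
        # Grammar/template-style realization of the configuration facts.
        parts = []
        if fulfilled:
            parts.append("The configured service fulfills: " + ", ".join(fulfilled) + ".")
        if missing:
            parts.append("Not covered: " + ", ".join(missing) + ".")
        return " ".join(parts)

    def neural_rewrite(text, model_name="google/flan-t5-small"):
        # Neural seq2seq step, e.g., to adapt the explanation style to the user.
        tok = AutoTokenizer.from_pretrained(model_name)
        model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
        inputs = tok("Paraphrase: " + text, return_tensors="pt")
        out = model.generate(**inputs, max_new_tokens=60)
        return tok.decode(out[0], skip_special_tokens=True)

    facts = template_explanation(["image resizing", "PDF export"], ["OCR"])
    print(facts)                       # fact-level, template-based explanation
    # print(neural_rewrite(facts))     # optional neural style adaptation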

More information can be found on the web page of Subproject B1.
