Reading Group: Natural Language Processing

Description

This page describes the internal reading group of the NLP Group. Participation is currently invitation-only. In the reading group, recent research papers from the field of natural language processing (NLP) are discussed, with the theme varying from term to term.

Topics

  • Winter 2024/25. Selected state-of-the-art NLP techniques in depth

  • Summer 2024. Selected papers from recent NLP research

  • Winter 2023/24. State-of-the-art techniques in NLP

  • Summer 2023. NLP papers outside the NLP community

  • Winter 2022/23. Best papers from major NLP conferences

  • Summer 2022. Selected papers from our main research areas

  • Winter 2021/22. Self-assessment of recent papers by the NLP Group

  • Summer 2021. Selected papers from our main research areas

  • Winter 2020/21. Self-assessment of recent papers by the NLP Group

Winter 2024/25

Summer 2024

Winter 2023/24

Summer 2023

Winter 2022/23

Summer 2022

  • Bianchi and Hovy (2022). On the Gap between Adoption and Understanding in NLP

  • Ke et al. (2022). CTRLEval: An Unsupervised Reference-Free Metric for Evaluating Controlled Text Generation

  • Barzilay and Lapata (2005). Modeling Local Coherence: An Entity-based Approach

  • Ye et al. (2021). CROSSFIT: A Few-shot Learning Challenge for Cross-task Generalization in NLP

  • Jin et al. (2022). Logical Fallacy Detection

  • Sun et al. (2021). Do Long-Range Language Models Actually Use Long-Range Context?

  • Hanawa et al. (2021). Exploring Methods for Generating Feedback Comments for Writing Learning

  • Schick and Schütze (2021). Generating Datasets with Pretrained Language Models

  • Paul and Frank (2020). Social Commonsense Reasoning with Multi-Head Knowledge Attention

Winter 2021/22

  • Barrow et al. (2021). Syntopical Graphs for Computational Argumentation Tasks

  • Chen et al. (2021). Controlled Neural Sentence-Level Reframing of News Articles

  • Alshomary et al. (2021). Argument Undermining: Counter-Argument Generation by Attacking Weak Premises

  • Wachsmuth et al. (2014). Modeling Review Argumentation for Robust Sentiment Analysis

  • Spliethöver and Wachsmuth (2021). Bias Silhouette Analysis: Towards Assessing the Quality of Bias Metrics for Word Embedding Models

  • Gurcke et al. (2021). Assessing the Sufficiency of Arguments through Conclusion Generation

Summer 2021

  • Prabhumoye et al. (2021). Case Study: Deontological Ethics in NLP

  • Chen et al. (2018). Application of Sentiment Analysis to Language Learning

  • Blodgett et al. (2020). Language (Technology) is Power: A Critical Survey of “Bias” in NLP

  • Orbach and Goldberg (2020). Facts2Story: Controlling Text Generation by Key Facts

  • Lertvittayakumjorn et al. (2020). FIND: Human-in-the-Loop Debugging Deep Text Classifiers

  • Hegel et al. (2020). Substance over Style: Document-Level Targeted Content Transfer

  • Tan et al. (2020). Summarizing Text on Any Aspects: A Knowledge-Informed Weakly-Supervised Approach

Winter 2020/21

  • Spliethöver and Wachsmuth (2020). Argument from Old Man’s View: Assessing Social Bias in Argumentation

  • Alshomary et al. (2020). Belief-based Generation of Argumentative Claims

  • Chen et al. (2020). Detecting Media Bias in News Articles using Gaussian Bias Distributions

  • Wachsmuth and Werner (2020). Intrinsic Quality Assessment of Arguments

  • Stein et al. (2014). Generating Acrostics via Paraphrasing and Heuristic Search

  • Wachsmuth (2015). Towards Ad-hoc Large-Scale Text Mining, Chapter 1 (Introduction)

  • Alshomary et al. (2019). Wikipedia Text Reuse: Within and Without

  • Spliethöver et al. (2019). Is It Worth the Attention? A Comparative Evaluation of Attention Layers for Argument Unit Segmentation

  • Wachsmuth et al. (2018). Retrieval of the Best Counterargument without Prior Topic Knowledge

  • Chen et al. (2018). Learning to Flip the Bias of News Headlines