Description
This is the page of the internal reading group of the NLP Group. Participation is currently by invitation only. In the reading group, we discuss recent research papers from the field of natural language processing (NLP); the theme varies from semester to semester.
Topics
- Summer 2024. Selected papers from recent NLP research
- Winter 2023/24. State-of-the-art techniques in NLP
- Summer 2023. NLP papers outside the NLP community
- Winter 2022/23. Best papers from major NLP conferences
- Summer 2022. Selected papers from our main research areas
- Winter 2021/22. Self-assessment of recent papers by the NLP Group
- Summer 2021. Selected papers from our main research areas
- Winter 2020/21. Self-assessment of recent papers by the NLP Group
Summer 2024
- Zhang et al. (2024). Fair Abstractive Summarization of Diverse Perspectives, https://aclanthology.org/2024.naacl-long.187/ (July 16, 2024)
- Zhou et al. (2024). Diffusion-NAT: Self-Prompting Discrete Diffusion for Non-Autoregressive Text Generation, https://aclanthology.org/2024.eacl-long.86 (June 18, 2024)
- Konen et al. (2024). Style Vectors for Steering Generative Large Language Models, https://aclanthology.org/2024.findings-eacl.52 (June 4, 2024)
- Kim et al. (2024). Prometheus: Inducing Fine-grained Evaluation Capability in Language Models, https://arxiv.org/abs/2310.08491 (May 7, 2024)
- Qiao et al. (2023). Reasoning with Language Model Prompting: A Survey, https://aclanthology.org/2023.acl-long.294/ (April 23, 2024)
- Filippova (2020). Controlled Hallucinations: Learning to Generate Faithfully from Noisy Data, https://aclanthology.org/2020.findings-emnlp.76/ (April 9, 2024)
Winter 2023/24
- Lewis et al. (2020). Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks, https://proceedings.neurips.cc/paper/2020/file/6b493230205f780e1bc26945df7481e5-Paper.pdf (February 6, 2024)
- Hewitt et al. (2023). Backpack Language Models, https://aclanthology.org/2023.acl-long.506.pdf (January 23, 2024)
- Li et al. (2022). Diffusion-LM Improves Controllable Text Generation, https://proceedings.neurips.cc/paper_files/paper/2022/hash/1be5bc25d50895ee656b8c2d9eb89d6a-Abstract-Conference.html (January 9, 2024)
- Wei et al. (2022). Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, https://proceedings.neurips.cc/paper_files/paper/2022/hash/9d5609613524ecf4f15af0f7b31abca4-Abstract-Conference.html (November 28, 2023)
- Pfeiffer et al. (2021). AdapterFusion: Non-Destructive Task Composition for Transfer Learning, https://aclanthology.org/2021.eacl-main.39.pdf (November 14, 2023)
- Wang et al. (2023). Self-Instruct: Aligning Language Models with Self-Generated Instructions, https://aclanthology.org/2023.acl-long.754.pdf (October 17, 2023)
Summer 2023
- Islam et al. (2023). PATRON: Perspective-Aware Multitask Model for Referring Expression Grounding Using Embodied Multimodal Cues. https://ojs.aaai.org/index.php/AAAI/article/view/25177 (July 13, 2023, area: multimodal)
- Li et al. (2022). SimStu-Transformer: A Transformer-Based Approach to Simulating Student Behaviour. https://link.springer.com/chapter/10.1007/978-3-031-11647-6_67 (June 30, 2023, area: education)
- Niu et al. (2022). AttExplainer: Explain Transformer via Attention by Reinforcement Learning. https://www.ijcai.org/proceedings/2022/0102 (June 16, 2023, area: AI)
- Nakashima et al. (2020). Virus database annotations assist in tracing information on patients infected with emerging pathogens. https://www.sciencedirect.com/science/article/pii/S2352914820306067 (June 2, 2023, area: medicine)
- Jakesch et al. (2023). Co-Writing with Opinionated Language Models Affects Users’ Views, https://dl.acm.org/doi/10.1145/3544548.3581196 (May 11, 2023, area: HCI)
- Agarwal et al. (2022). GraphNLI: A Graph-based Natural Language Inference Model for Polarity Prediction in Online Debates, https://dl.acm.org/doi/pdf/10.1145/3485447.3512144 (April 27, 2023, area: web)
- Lin et al. (2022). What Makes the Story Forward? Inferring Commonsense Explanations as Prompts for Future Event Generation, https://dl.acm.org/doi/10.1145/3477495.3532080 (April 13, 2023, area: information retrieval)
Winter 2022/23
- Wen et al. (2015). Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems, https://aclanthology.org/D15-1199 (February 28, 2023)
- Anonymized. Unpublished paper (February 14, 2023)
- Andreas et al. (2016). Learning to Compose Neural Networks for Question Answering, https://aclanthology.org/N16-1181 (January 31, 2023)
- Kottur et al. (2017). Natural Language Does Not Emerge ‘Naturally’ in Multi-Agent Dialog, https://aclanthology.org/D17-1321 (January 17, 2023)
- Peters et al. (2018). Deep contextualized word representations, https://aclanthology.org/N18-1202 (January 3, 2023)
- Moon et al. (2019). OpenDialKG: Explainable Conversational Reasoning with Attention-based Walks over Knowledge Graphs, https://aclanthology.org/P19-1081 (December 20, 2022)
- Bender and Koller (2020). Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data, https://aclanthology.org/2020.acl-main.463 (December 6, 2022)
- Qin et al. (2021). Learning How to Ask: Querying LMs with Mixtures of Soft Prompts, https://aclanthology.org/2021.naacl-main.410 (November 22, 2022)
- Lu et al. (2022). NEUROLOGIC A*esque Decoding: Constrained Text Generation with Lookahead Heuristics, https://aclanthology.org/2022.naacl-main.57 (November 8, 2022)
Summer 2022
- Bianchi and Hovy (2022). On the Gap between Adoption and Understanding in NLP
- Ke et al. (2022). CTRLEval: An Unsupervised Reference-Free Metric for Evaluating Controlled Text Generation
- Barzilay and Lapata (2005). Modeling Local Coherence: An Entity-based Approach
- Ye et al. (2021). CROSSFIT: A Few-shot Learning Challenge for Cross-task Generalization in NLP
- Jin et al. (2022). Logical Fallacy Detection
- Sun et al. (2021). Do Long-Range Language Models Actually Use Long-Range Context?
- Hanawa et al. (2021). Exploring Methods for Generating Feedback Comments for Writing Learning
- Schick and Schütze (2021). Generating Datasets with Pretrained Language Models
- Paul and Frank (2020). Social Commonsense Reasoning with Multi-Head Knowledge Attention
Winter 2021/22
- Barrow et al. (2021). Syntopical Graphs for Computational Argumentation Tasks
- Chen et al. (2021). Controlled Neural Sentence-Level Reframing of News Articles
- Alshomary et al. (2021). Argument Undermining: Counter-Argument Generation by Attacking Weak Premises
- Wachsmuth et al. (2014). Modeling Review Argumentation for Robust Sentiment Analysis
- Spliethöver and Wachsmuth (2021). Bias Silhouette Analysis: Towards Assessing the Quality of Bias Metrics for Word Embedding Models
- Gurcke et al. (2021). Assessing the Sufficiency of Arguments through Conclusion Generation
Summer 2021
- Prabhumoye et al. (2021). Case Study: Deontological Ethics in NLP
- Chen et al. (2018). Application of Sentiment Analysis to Language Learning
- Blodgett et al. (2020). Language (Technology) is Power: A Critical Survey of “Bias” in NLP
- Orbach and Goldberg (2020). Facts2Story: Controlling Text Generation by Key Facts
- Lertvittayakumjorn et al. (2020). FIND: Human-in-the-Loop Debugging Deep Text Classifiers
- Hegel et al. (2020). Substance over Style: Document-Level Targeted Content Transfer
- Tan et al. (2020). Summarizing Text on Any Aspects: A Knowledge-Informed Weakly-Supervised Approach
Winter 2020/21
- Spliethöver and Wachsmuth (2020). Argument from Old Man’s View: Assessing Social Bias in Argumentation
- Alshomary et al. (2020). Belief-based Generation of Argumentative Claims
- Chen et al. (2020). Detecting Media Bias in News Articles using Gaussian Bias Distributions
- Wachsmuth and Werner (2020). Intrinsic Quality Assessment of Arguments
- Stein et al. (2014). Generating Acrostics via Paraphrasing and Heuristic Search
- Wachsmuth (2015). Towards Ad-hoc Large-Scale Text Mining, Chapter 1 (Introduction)
- Alshomary et al. (2019). Wikipedia Text Reuse: Within and Without
- Spliethöver et al. (2019). Is It Worth the Attention? A Comparative Evaluation of Attention Layers for Argument Unit Segmentation
- Wachsmuth et al. (2018). Retrieval of the Best Counterargument without Prior Topic Knowledge
- Chen et al. (2018). Learning to Flip the Bias of News Headlines