Publications

Definitions matter: Guiding GPT for multi-label classification

Published at EMNLP 2023 (Conference on Empirical Methods in Natural Language Processing)

Large language models have recently risen in popularity due to their ability to perform many natural language tasks without requiring any fine-tuning. In this work, we focus on two novel ideas: (1) generating definitions from examples and using them for zero-shot classification, and (2) investigating how an LLM makes use of the definitions. We thoroughly analyze the performance of the GPT-3 model for fine-grained multi-label conspiracy theory classification of tweets using zero-shot labeling. In doing so, we assess how to improve the labeling by providing minimal but meaningful context in the form of the definitions of the labels. We compare descriptive noun phrases and human-crafted definitions, introduce a new method to help the model generate definitions from examples, and propose a method to evaluate GPT-3’s understanding of the definitions. We demonstrate that improving the definitions of class labels has a direct effect on the downstream classification results.
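As an illustrative sketch only (not the paper’s actual prompts or label set), the snippet below shows how label definitions might be injected into a zero-shot multi-label classification prompt; the label names, their definitions, and the `query_llm` helper are hypothetical placeholders.

```python
# Hypothetical sketch: labels, definitions, and query_llm are placeholders,
# not the prompts or conspiracy-theory categories used in the paper.

LABEL_DEFINITIONS = {
    "suppressed_cures": "Claims that effective treatments are deliberately hidden from the public.",
    "harmful_agenda": "Claims that powerful actors intend to deliberately harm the population.",
    "fake_pandemic": "Claims that the pandemic does not exist or is grossly exaggerated.",
}

def build_prompt(tweet: str) -> str:
    """Compose a zero-shot multi-label prompt that includes a definition for each label."""
    definitions = "\n".join(
        f"- {label}: {definition}" for label, definition in LABEL_DEFINITIONS.items()
    )
    return (
        "You are given definitions of conspiracy-theory categories:\n"
        f"{definitions}\n\n"
        f"Tweet: {tweet}\n"
        "List every category that applies to the tweet, or 'none' if none apply."
    )

def classify(tweet: str, query_llm) -> list[str]:
    """Send the prompt to an LLM; query_llm is any callable returning the model's text answer."""
    answer = query_llm(build_prompt(tweet))
    return [label for label in LABEL_DEFINITIONS if label in answer.lower()]
```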

Recommended citation: Peskine, Y., Korenčić, D., Grubišić, I., Papotti, P., Troncy, R., & Rosso, P. (2023). Definitions Matter: Guiding GPT for Multi-label Classification. EMNLP 2023.