Target-Phrase Zero-Shot Stance Detection: Where Do We Stand?

Article

Authors: Dawid Motyka and Maciej Piasecki

Computational Science – ICCS 2024: 24th International Conference, Malaga, Spain, July 2–4, 2024, Proceedings, Part II

July 2024

Pages 34–49

Published: 02 July 2024


    Abstract

Stance detection, i.e., the recognition of utterances as in favor of, against, or neutral towards some target, is important for text analysis. However, different approaches have been tested on different datasets, often interpreted in different ways. We propose a unified overview of state-of-the-art stance detection methods in which targets are expressed by short phrases. Special attention is given to zero-shot learning settings. An overview of the available multi-target datasets is presented that reveals several problems with the sets and their proper interpretation. Wherever possible, methods were re-run or even re-implemented to facilitate reliable comparison. A novel modification of a prompt-based approach to training encoder transformers for stance detection is proposed. It achieved results comparable to those obtained with large language models while using an order of magnitude fewer parameters. Our work tries to reliably show where we stand in stance detection and where we should go, especially in terms of datasets and experimental settings.
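The abstract does not spell out the proposed prompt-based modification, so the following is only a rough sketch, under stated assumptions, of the general cloze-style prompt approach for encoder transformers that it builds on: the input is wrapped in a template containing a mask token, and stance labels are read off the masked-language-model scores of a few verbalizer tokens. The model name, template, and verbalizer below are hypothetical, and in practice the encoder would first be fine-tuned on stance data for source targets before zero-shot inference on unseen ones.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL = "roberta-base"  # illustrative choice of encoder
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL)
model.eval()

# Verbalizer: map each stance label to one answer word (assumed here to
# be a single token in the model's vocabulary).
VERBALIZER = {"favor": " supports", "against": " opposes", "neutral": " ignores"}
LABEL_IDS = {lab: tokenizer.encode(word, add_special_tokens=False)[0]
             for lab, word in VERBALIZER.items()}

def predict_stance(text: str, target: str) -> str:
    # Hypothetical cloze template; the model fills the mask slot.
    prompt = f"{text} This text {tokenizer.mask_token} {target}."
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    # Score only the three verbalizer tokens and pick the best label.
    return max(LABEL_IDS, key=lambda lab: logits[LABEL_IDS[lab]].item())

print(predict_stance("Wind farms ruin the landscape.", "renewable energy"))

Because classification reduces to scoring a handful of vocabulary tokens, the pre-trained masked-language-model head is reused directly, which is what lets comparatively small encoders compete with much larger generative models in this setting.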



    Published In


    Computational Science – ICCS 2024: 24th International Conference, Malaga, Spain, July 2–4, 2024, Proceedings, Part II

    Jul 2024

    421 pages

ISBN: 978-3-031-63753-7

DOI: 10.1007/978-3-031-63751-3

• Editors:
• Leonardo Franco (University of Malaga, Malaga, Spain)
• Clélia de Mulatier (University of Amsterdam, Amsterdam, The Netherlands)
• Maciej Paszynski (AGH University of Science and Technology, Krakow, Poland)
• Valeria V. Krzhizhanovskaya (University of Amsterdam, Amsterdam, The Netherlands)
• Jack J. Dongarra (University of Tennessee, Knoxville, TN, USA)
• Peter M. A. Sloot (University of Amsterdam, Amsterdam, The Netherlands)

    © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.

    Publisher

    Springer-Verlag

    Berlin, Heidelberg


    Author Tags

    1. stance detection
    2. zero-shot learning
3. prompt-based learning for transformers



    FAQs

What is zero-shot stance detection?

Zero-shot and few-shot stance detection identify the polarity of a text with regard to a given target when only limited or no training resources are available for that target. Previous work generally formulates the problem as a classification task, ignoring the potential use of label text.
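As a hedged illustration of the classification formulation mentioned above, the sketch below encodes the text and the target phrase as a sentence pair and predicts one of three stance labels; the model name and label order are assumptions, and the classification head is random until fine-tuned on stance data.

from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
clf = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)  # e.g. favor / against / neutral

# The target phrase is passed as the second segment of the sentence pair.
enc = tok("Wind farms ruin the landscape.", "renewable energy",
          return_tensors="pt")
logits = clf(**enc).logits  # shape (1, 3); argmax picks the stance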

What is a zero-shot prediction?

Zero-shot learning is a technique that enables pre-trained models to predict labels for previously unseen classes, i.e., classes for which no samples were present in the training data.
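One common off-the-shelf way to obtain such predictions, shown below as an assumption-laden sketch rather than the paper's method, is an NLI-based zero-shot classification pipeline in which each candidate label is scored as a hypothesis about the text; the model and hypothesis template are illustrative.

from transformers import pipeline

# NLI-based zero-shot classifier; the model choice is illustrative.
zsc = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
out = zsc(
    "Wind farms ruin the landscape.",
    candidate_labels=["favor", "against", "neutral"],
    hypothesis_template="The stance towards renewable energy is {}.",
)
print(out["labels"][0])  # highest-scoring label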

What are zero-shot prompts?

Zero-shot prompting asks a model to solve a problem or perform a task without any solved examples of that task in the prompt. It is like being asked to do something you have never done before: you get an instruction, but no worked examples to follow.
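For illustration only, here is what such a prompt can look like for stance detection; the wording is hypothetical, and a few-shot prompt would differ only by prepending a handful of solved examples.

# A zero-shot prompt: task description and input, no demonstrations.
zero_shot_prompt = (
    "Classify the stance of the text towards the target "
    "as favor, against, or neutral.\n"
    "Target: renewable energy\n"
    "Text: Wind farms ruin the landscape.\n"
    "Stance:"
)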

What is zero-shot object detection?

Zero-shot object detection is a computer vision task that detects objects and their classes in images without any training examples for those classes.

What is zero-shot action recognition?

Zero-shot action recognition, which recognizes actions in videos without having received any training examples, is gaining wide attention since it can save labor costs and training time. Nevertheless, the performance of zero-shot learning is still unsatisfactory, which limits its practical application.

What does zero-shot performance mean?

Zero-shot performance refers to how well a model completes a task without having received any training examples for it. Consider recognizing a category of object in images without ever having seen a photo of that type of object.

What is the difference between zero-shot and unsupervised?

Unsupervised domain adaptation assumes there is no labelled data from the target domain, whereas zero-shot learning assumes some labelled examples from the target domain exist, but only for a subset of the full target label space.

What is zero-shot image retrieval?

To avoid difficult-to-obtain labeled triplet training data, zero-shot composed image retrieval (ZS-CIR) has been introduced; it aims to retrieve the target image by learning from image-text pairs (self-supervised triplets), without the need for human-labeled triplets.
