Titles | Abstracts | Years | Categories |
---|---|---|---|
KorMedMCQA: Multi-Choice Question Answering Benchmark for Korean
Healthcare Professional Licensing Examinations | We introduce KorMedMCQA, the first Korean multiple-choice question answering
(MCQA) benchmark derived from Korean healthcare professional licensing
examinations, covering the years 2012 to 2023. This dataset consists
of a selection of questions from the license examinations for doctors, nurses,
and pharmacists, featuring a diverse array of subjects. We conduct baseline
experiments on various large language models, including
proprietary/open-source, multilingual/Korean-additional pretrained, and
clinical context pretrained models, highlighting the potential for further
enhancements. We make our data publicly available on HuggingFace
(https://huggingface.co/datasets/sean0042/KorMedMCQA) and provide a evaluation
script via LM-Harness, inviting further exploration and advancement in Korean
healthcare environments.
| 2024 | Computation and Language |
Align-to-Distill: Trainable Attention Alignment for Knowledge
Distillation in Neural Machine Translation | The advent of scalable deep models and large datasets has improved the
performance of Neural Machine Translation. Knowledge Distillation (KD) enhances
efficiency by transferring knowledge from a teacher model to a more compact
student model. However, KD approaches to Transformer architecture often rely on
heuristics, particularly when deciding which teacher layers to distill from. In
this paper, we introduce the 'Align-to-Distill' (A2D) strategy, designed to
address the feature mapping problem by adaptively aligning student attention
heads with their teacher counterparts during training. The Attention Alignment
Module in A2D performs a dense head-by-head comparison between student and
teacher attention heads across layers, turning the combinatorial mapping
heuristics into a learning problem. Our experiments show the efficacy of A2D,
demonstrating gains of up to +3.61 and +0.63 BLEU points for WMT-2022 De->Dsb
and WMT-2014 En->De, respectively, compared to Transformer baselines.
| 2024 | Computation and Language |
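The A2D entry above describes an Attention Alignment Module that performs a dense head-by-head comparison between student and teacher attention maps, replacing hand-picked layer mappings with a trainable alignment. The paper's exact formulation is not reproduced here; the following is a minimal PyTorch sketch of one way such an alignment could look, where the tensor shapes, the softmax mixing over teacher heads, and the KL-based comparison are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionAlignmentSketch(nn.Module):
    """Illustrative head-by-head alignment between student and teacher attention.

    Assumed shapes: attention maps are [batch, heads, seq, seq]. A learnable
    logit matrix mixes teacher heads for each student head, turning a discrete
    head-mapping heuristic into a soft, trainable assignment.
    """

    def __init__(self, n_student_heads: int, n_teacher_heads: int):
        super().__init__()
        # One mixing distribution over teacher heads per student head.
        self.align_logits = nn.Parameter(torch.zeros(n_student_heads, n_teacher_heads))

    def forward(self, student_attn: torch.Tensor, teacher_attn: torch.Tensor) -> torch.Tensor:
        align = self.align_logits.softmax(dim=-1)           # [Hs, Ht]
        # Mix teacher attention maps: [B, Ht, L, L] -> [B, Hs, L, L].
        mixed_teacher = torch.einsum("st,btij->bsij", align, teacher_attn)
        # KL divergence between each student head and its mixed teacher target.
        return F.kl_div(
            (student_attn + 1e-9).log(), mixed_teacher + 1e-9, reduction="batchmean"
        )

# Tiny smoke test with random attention-like tensors (rows sum to 1).
if __name__ == "__main__":
    B, Hs, Ht, L = 2, 4, 8, 16
    student = torch.rand(B, Hs, L, L).softmax(dim=-1)
    teacher = torch.rand(B, Ht, L, L).softmax(dim=-1)
    module = AttentionAlignmentSketch(Hs, Ht)
    print(module(student, teacher))  # scalar alignment loss
```

In an actual A2D setup this alignment loss would be combined with the usual translation and distillation objectives; the epsilon smoothing and loss choice above are placeholders.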
Infusing Knowledge into Large Language Models with Contextual Prompts | Knowledge infusion is a promising method for enhancing Large Language Models
for domain-specific NLP tasks rather than pre-training models over large data
from scratch. These augmented LLMs typically depend on additional pre-training
or knowledge prompts from an existing knowledge graph, which is impractical in
many applications. In contrast, knowledge infusion directly from relevant
documents is more generalisable and alleviates the need for structured
knowledge graphs while also being useful for entities that are usually not
found in any knowledge graph. With this motivation, we propose a simple yet
generalisable approach for knowledge infusion by generating prompts from the
context in the input text. Our experiments show the effectiveness of our
approach which we evaluate by probing the fine-tuned LLMs.
| 2024 | Computation and Language |
Fantastic Semantics and Where to Find Them: Investigating Which Layers
of Generative LLMs Reflect Lexical Semantics | Large language models have achieved remarkable success in general language
understanding tasks. However, as a family of generative methods with the
objective of next token prediction, the semantic evolution across the depth of
these models is not fully explored, unlike their predecessors, such as
BERT-like architectures. In this paper, we specifically investigate the
bottom-up evolution of lexical semantics for a popular LLM, namely Llama2, by
probing its hidden states at the end of each layer using a contextualized word
identification task. Our experiments show that the representations in lower
layers encode lexical semantics, while the higher layers, with weaker semantic
induction, are responsible for prediction. This is in contrast to models with
discriminative objectives, such as masked language modeling, where the higher
layers obtain better lexical semantics. This conclusion is further supported by
the monotonic increase in performance when probing the hidden states of the last
meaningless symbols, such as punctuation, in the prompting strategy.
| 2024 | Computation and Language |
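The entry above probes hidden states at the end of each layer of a generative LLM. As a hedged illustration of the general recipe (not the paper's exact contextualized word identification setup), the sketch below extracts per-layer representations of a target token with Hugging Face `transformers` and fits a simple per-layer probe; `gpt2` stands in for Llama2 purely to keep the example small, and the toy word-sense labels are invented for demonstration.

```python
# Hedged sketch: per-layer token representations for probing lexical semantics.
# Assumes `transformers`, `torch` and `scikit-learn` are installed; gpt2 is a
# small stand-in for Llama2, and the toy data below is illustrative only.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")
model.eval()

def layerwise_reps(sentence: str, target_word: str):
    """Return one hidden-state vector per layer for the target word's sub-token."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, output_hidden_states=True)
    # hidden_states: tuple of (n_layers + 1) tensors, each [1, seq, dim]
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    # Crude string match to locate the target word's (last) sub-token.
    idx = max(i for i, t in enumerate(tokens) if target_word[:3].lower() in t.lower())
    return [h[0, idx].numpy() for h in out.hidden_states]

# Toy probe: do layer representations separate two senses of "bank"?
examples = [
    ("She deposited cash at the bank", "bank", 0),
    ("The bank approved the loan yesterday", "bank", 0),
    ("They walked along the river bank", "bank", 1),
    ("Reeds grew on the muddy bank of the stream", "bank", 1),
]
reps = [layerwise_reps(s, w) for s, w, _ in examples]
labels = [y for _, _, y in examples]

for layer in range(len(reps[0])):
    X = [r[layer] for r in reps]
    probe = LogisticRegression(max_iter=1000).fit(X, labels)
    print(f"layer {layer:2d} train accuracy: {probe.score(X, labels):.2f}")
```

A real probing study would use a held-out evaluation set and a task-specific probe; this only shows where the per-layer representations come from.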
Revisiting Dynamic Evaluation: Online Adaptation for Large Language
Models | We consider the problem of online fine tuning the parameters of a language
model at test time, also known as dynamic evaluation. While it is generally
known that this approach improves the overall predictive performance,
especially when considering distributional shift between training and
evaluation data, we here emphasize the perspective that online adaptation turns
parameters into temporally changing states and provides a form of
context-length extension with memory in weights, more in line with the concept
of memory in neuroscience. We pay particular attention to the speed of
adaptation (in terms of sample efficiency), sensitivity to the overall
distributional drift, and the computational overhead for performing gradient
computations and parameter updates. Our empirical study provides insights on
when online adaptation is particularly interesting. We highlight that with
online adaptation the conceptual distinction between in-context learning and
fine tuning blurs: both are methods to condition the model on previously
observed tokens.
| 2024 | Computation and Language |
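The entry above frames test-time fine-tuning as turning parameters into temporally changing state. The sketch below is a minimal, assumption-laden illustration of dynamic evaluation: after scoring each chunk of the evaluation stream, the model takes one gradient step on that chunk before seeing the next. The chunk size, optimizer, learning rate, and the use of `gpt2` are illustrative choices, not the paper's configuration.

```python
# Minimal dynamic-evaluation sketch: adapt the model online while scoring a stream.
# Assumes `transformers` and `torch`; gpt2 and all hyperparameters are illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)

text = "Long evaluation document ... " * 200          # stand-in evaluation stream
ids = tokenizer(text, return_tensors="pt").input_ids.to(device)

chunk = 128
total_loss, n_chunks = 0.0, 0
for start in range(0, ids.size(1) - 1, chunk):
    piece = ids[:, start : start + chunk + 1]
    if piece.size(1) < 2:
        break
    # 1) Score the chunk with the *current* (already adapted) parameters.
    model.eval()
    with torch.no_grad():
        total_loss += model(piece, labels=piece).loss.item()
        n_chunks += 1
    # 2) Then take one gradient step on the same chunk (online adaptation), so
    #    the weights act as a memory of everything seen so far in the stream.
    model.train()
    optimizer.zero_grad()
    model(piece, labels=piece).loss.backward()
    optimizer.step()

print(f"online-adapted mean loss over {n_chunks} chunks: {total_loss / n_chunks:.3f}")
```

The score-then-update order matters: each chunk is evaluated before the model has adapted to it, which keeps the reported loss honest while still letting the weights accumulate context.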
Leveraging Biomolecule and Natural Language through Multi-Modal
Learning: A Survey | The integration of biomolecular modeling with natural language (BL) has
emerged as a promising interdisciplinary area at the intersection of artificial
intelligence, chemistry and biology. This approach leverages the rich,
multifaceted descriptions of biomolecules contained within textual data sources
to enhance our fundamental understanding and enable downstream computational
tasks such as biomolecule property prediction. The fusion of the nuanced
narratives expressed through natural language with the structural and
functional specifics of biomolecules described via various molecular modeling
techniques opens new avenues for comprehensively representing and analyzing
biomolecules. By incorporating the contextual language data that surrounds
biomolecules into their modeling, BL aims to capture a holistic view
encompassing both the symbolic qualities conveyed through language as well as
quantitative structural characteristics. In this review, we provide an
extensive analysis of recent advancements achieved through cross modeling of
biomolecules and natural language. (1) We begin by outlining the technical
representations of biomolecules employed, including sequences, 2D graphs, and
3D structures. (2) We then examine in depth the rationale and key objectives
underlying effective multi-modal integration of language and molecular data
sources. (3) We subsequently survey the practical applications enabled to date
in this developing research area. (4) We also compile and summarize the
available resources and datasets to facilitate future work. (5) Looking ahead,
we identify several promising research directions worthy of further exploration
and investment to continue advancing the field. The related resources and
contents are continuously updated at
\url{https://github.com/QizhiPei/Awesome-Biomolecule-Language-Cross-Modeling}.
| 2024 | Computation and Language |
In-Context Sharpness as Alerts: An Inner Representation Perspective for
Hallucination Mitigation | Large language models (LLMs) frequently hallucinate and produce factual
errors, yet our understanding of why they make these errors remains limited. In
this study, we delve into the underlying mechanisms of LLM hallucinations from
the perspective of inner representations, and discover a salient pattern
associated with hallucinations: correct generations tend to have sharper
context activations in the hidden states of the in-context tokens, compared to
the incorrect ones. Leveraging this insight, we propose an entropy-based metric
to quantify the ``sharpness'' among the in-context hidden states and
incorporate it into the decoding process to formulate a constrained decoding
approach. Experiments on various knowledge-seeking and hallucination benchmarks
demonstrate our approach's consistent effectiveness, for example, achieving up
to an 8.6 point improvement on TruthfulQA. We believe this study can improve
our understanding of hallucinations and serve as a practical solution for
hallucination mitigation.
| 2024 | Computation and Language |
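The entry above proposes an entropy-based "sharpness" measure over in-context hidden states and folds it into decoding. The paper's exact activation definition is not reproduced here; the sketch below only illustrates the general shape of such a constrained decoding rule, combining a candidate token's log-probability with a penalty proportional to the entropy of an attention-style distribution over the in-context tokens. The weighting and the use of attention-like weights as the "context activation" are assumptions.

```python
# Hedged sketch of entropy-penalised decoding: prefer continuations whose
# context activations are sharp (low entropy). Plain NumPy, self-contained;
# the penalty weight and the activation source are illustrative assumptions.
import numpy as np

def entropy(p: np.ndarray) -> float:
    """Shannon entropy of a probability distribution (natural log)."""
    p = np.clip(p, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def adjusted_scores(logits: np.ndarray,
                    context_activations: np.ndarray,
                    alpha: float = 1.0) -> np.ndarray:
    """Combine next-token log-probs with a sharpness (entropy) penalty.

    logits: [vocab] raw scores for candidate next tokens.
    context_activations: [vocab, n_context] -- for each candidate, a
        distribution over in-context tokens (e.g. attention weights).
    alpha: strength of the sharpness constraint (illustrative).
    """
    logprobs = logits - np.log(np.exp(logits - logits.max()).sum()) - logits.max()
    penalties = np.array([entropy(row) for row in context_activations])
    return logprobs - alpha * penalties   # sharper context -> smaller penalty

# Toy example: 3 candidate tokens, 5 in-context tokens.
rng = np.random.default_rng(0)
logits = np.array([2.0, 1.9, 0.5])
acts = rng.dirichlet(alpha=[0.3] * 5, size=3)   # some rows sharp, some flat
scores = adjusted_scores(logits, acts)
print("chosen candidate:", int(scores.argmax()))
```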
SERVAL: Synergy Learning between Vertical Models and LLMs towards
Oracle-Level Zero-shot Medical Prediction | Recent development of large language models (LLMs) has exhibited impressive
zero-shot proficiency on generic and common sense questions. However, LLMs'
application on domain-specific vertical questions still lags behind, primarily
due to hallucination problems and deficiencies in vertical knowledge.
Furthermore, the vertical data annotation process often requires
labor-intensive expert involvement, thereby presenting an additional challenge
in enhancing the model's vertical capabilities. In this paper, we propose
SERVAL, a synergy learning pipeline designed for unsupervised development of
vertical capabilities in both LLMs and small models by mutual enhancement.
Specifically, SERVAL utilizes the LLM's zero-shot outputs as annotations,
leveraging its confidence to teach a robust vertical model from scratch.
In turn, the trained vertical model guides the LLM fine-tuning to enhance its
zero-shot capability, progressively improving both models through an iterative
process. In the medical domain, known for complex vertical knowledge and costly
annotations, comprehensive experiments show that, without access to any gold
labels, SERVAL with the synergy learning of OpenAI GPT-3.5 and a simple model
attains fully-supervised competitive performance across ten widely used medical
datasets. These datasets represent vertically specialized medical diagnostic
scenarios (e.g., diabetes, heart diseases, COVID-19), highlighting the
potential of SERVAL in refining the vertical capabilities of LLMs and training
vertical models from scratch, all achieved without the need for annotations.
| 2024 | Computation and Language |
Enhancing Neural Machine Translation of Low-Resource Languages: Corpus
Development, Human Evaluation and Explainable AI Architectures | In the current machine translation (MT) landscape, the Transformer
architecture stands out as the gold standard, especially for high-resource
language pairs. This research delves into its efficacy for low-resource
language pairs including both the English$\leftrightarrow$Irish and
English$\leftrightarrow$Marathi language pairs. Notably, the study identifies
the optimal hyperparameters and subword model type to significantly improve the
translation quality of Transformer models for low-resource language pairs.
The scarcity of parallel datasets for low-resource languages can hinder MT
development. To address this, gaHealth was developed, the first bilingual
corpus of health data for the Irish language. Focusing on the health domain,
models developed using this in-domain dataset exhibited very significant
improvements in BLEU score when compared with models from the LoResMT2021
Shared Task. A subsequent human evaluation using the multidimensional quality
metrics error taxonomy showcased the superior performance of the Transformer
system in reducing both accuracy and fluency errors compared to an RNN-based
counterpart.
Furthermore, this thesis introduces adaptNMT and adaptMLLM, two open-source
applications streamlined for the development, fine-tuning, and deployment of
neural machine translation models. These tools considerably simplify the setup
and evaluation process, making MT more accessible to both developers and
translators. Notably, adaptNMT, grounded in the OpenNMT ecosystem, promotes
eco-friendly natural language processing research by highlighting the
environmental footprint of model development. Fine-tuning of MLLMs by adaptMLLM
demonstrated advancements in translation performance for two low-resource
language pairs: English$\leftrightarrow$Irish and
English$\leftrightarrow$Marathi, compared to baselines from the LoResMT2021
Shared Task.
| 2024 | Computation and Language |
Towards Comprehensive Vietnamese Retrieval-Augmented Generation and
Large Language Models | This paper presents our contributions towards advancing the state of
Vietnamese language understanding and generation through the development and
dissemination of open datasets and pre-trained models for Vietnamese
Retrieval-Augmented Generation (RAG) and Large Language Models (LLMs).
| 2024 | Computation and Language |
Multi-level Product Category Prediction through Text Classification | This article investigates applying advanced machine learning models,
specifically LSTM and BERT, for text classification to predict multiple
categories in the retail sector. The study demonstrates how applying data
augmentation techniques and the focal loss function can significantly enhance
accuracy in classifying products into multiple categories using a robust
Brazilian retail dataset. The LSTM model, enriched with Brazilian word
embedding, and BERT, known for its effectiveness in understanding complex
contexts, were adapted and optimized for this specific task. The results showed
that the BERT model, with an F1 Macro Score of up to $99\%$ for segments,
$96\%$ for categories and subcategories, and $93\%$ for product names,
outperformed LSTM in more detailed categories. However, LSTM also achieved high
performance, especially after applying data augmentation and focal loss
techniques. These results underscore the effectiveness of NLP techniques in
retail and highlight the importance of the careful selection of modelling and
preprocessing strategies. This work contributes significantly to the field of
NLP in retail, providing valuable insights for future research and practical
applications.
| 2024 | Computation and Language |
Hypertext Entity Extraction in Webpage | Webpage entity extraction is a fundamental natural language processing task
in both research and applications. Nowadays, the majority of webpage entity
extraction models are trained on structured datasets which strive to retain
textual content and its structure information. However, existing datasets all
overlook the rich hypertext features (e.g., font color, font size) that have
been shown to be effective in previous works. To this end, we first collect a
\textbf{H}ypertext \textbf{E}ntity \textbf{E}xtraction \textbf{D}ataset
(\textit{HEED}) from the e-commerce domain, scraping both the text and the
corresponding explicit hypertext features with high-quality manual entity
annotations. Furthermore, we present the \textbf{Mo}E-based \textbf{E}ntity
\textbf{E}xtraction \textbf{F}ramework (\textit{MoEEF}), which efficiently
integrates multiple features to enhance model performance by Mixture of Experts
and outperforms strong baselines, including the state-of-the-art small-scale
models and GPT-3.5-turbo. Moreover, the effectiveness of hypertext features in
\textit{HEED} and several model components in \textit{MoEEF} are analyzed.
| 2024 | Computation and Language |
Brilla AI: AI Contestant for the National Science and Maths Quiz | The African continent lacks enough qualified teachers, which hampers the
provision of adequate learning support. An AI could potentially augment the
efforts of the limited number of teachers, leading to better learning outcomes.
Towards that end, this work describes and evaluates the first key output for
the NSMQ AI Grand Challenge, which proposes a robust, real-world benchmark for
such an AI: "Build an AI to compete live in Ghana's National Science and Maths
Quiz (NSMQ) competition and win - performing better than the best contestants
in all rounds and stages of the competition". The NSMQ is an annual live
science and mathematics competition for senior secondary school students in
Ghana in which 3 teams of 2 students compete by answering questions across
biology, chemistry, physics, and math in 5 rounds over 5 progressive stages
until a winning team is crowned for that year. In this work, we built Brilla
AI, an AI contestant that we deployed to unofficially compete remotely and live
in the Riddles round of the 2023 NSMQ Grand Finale, the first of its kind in
the 30-year history of the competition. Brilla AI is currently available as a
web app that livestreams the Riddles round of the contest, and runs 4 machine
learning systems: (1) speech to text (2) question extraction (3) question
answering and (4) text to speech that work together in real-time to quickly and
accurately provide an answer, and then say it with a Ghanaian accent. In its
debut, our AI answered one of the 4 riddles ahead of the 3 human contesting
teams, unofficially placing second (tied). Improvements and extensions of this
AI could potentially be deployed to offer science tutoring to students and
eventually enable millions across Africa to have one-on-one learning
interactions, democratizing science education.
| 2024 | Computation and Language |
Decode Neural signal as Speech | Decoding language from brain dynamics is an important open direction in the
realm of brain-computer interface (BCI), especially considering the rapid
growth of large language models. Compared to invasive signals, which require
electrode implantation surgery, non-invasive neural signals (e.g., EEG,
MEG) have attracted increasing attention considering their safety and
generality. However, the exploration remains inadequate in three aspects: 1)
previous methods mainly focus on EEG, and none of them address this problem on
MEG, which offers better signal quality; 2) prior works have predominantly used
``teacher-forcing'' during generative decoding, which is impractical; 3) prior
works are mostly ``BART-based'' rather than fully auto-regressive, although the
latter performs better in other sequence tasks. In this paper, we explore the
brain-to-text translation of MEG signals in a speech-decoding formulation. Here
we are the first to investigate a cross-attention-based ``whisper'' model for
generating text directly from MEG signals without teacher forcing. Our model
achieves impressive BLEU-1 scores of 60.30 and 52.89 without pretraining \&
teacher-forcing on two major datasets (\textit{GWilliams} and
\textit{Schoffelen}). This paper conducts a comprehensive review to understand
how the speech-decoding formulation performs on neural decoding tasks, including
pretraining initialization, training \& evaluation set splitting, augmentation,
and scaling law.
| 2024 | Computation and Language |
Differentially Private Synthetic Data via Foundation Model APIs 2: Text | Text data has become extremely valuable due to the emergence of machine
learning algorithms that learn from it. A lot of high-quality text data
generated in the real world is private and therefore cannot be shared or used
freely due to privacy concerns. Generating synthetic replicas of private text
data with a formal privacy guarantee, i.e., differential privacy (DP), offers a
promising and scalable solution. However, existing methods necessitate DP
finetuning of large language models (LLMs) on private data to generate DP
synthetic data. This approach is not viable for proprietary LLMs (e.g.,
GPT-3.5) and also demands considerable computational resources for open-source
LLMs. Lin et al. (2024) recently introduced the Private Evolution (PE)
algorithm to generate DP synthetic images with only API access to diffusion
models. In this work, we propose an augmented PE algorithm, named Aug-PE, that
applies to the complex setting of text. We use API access to an LLM and
generate DP synthetic text without any model training. We conduct comprehensive
experiments on three benchmark datasets. Our results demonstrate that Aug-PE
produces DP synthetic text that yields competitive utility with the SOTA DP
finetuning baselines. This underscores the feasibility of relying solely on API
access of LLMs to produce high-quality DP synthetic texts, thereby facilitating
more accessible routes to privacy-preserving LLM applications. Our code and
data are available at https://github.com/AI-secure/aug-pe.
| 2024 | Computation and Language |
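The Aug-PE entry above builds on the Private Evolution idea: candidate synthetic texts are generated via LLM API calls, the private data "votes" for its nearest candidates under differential privacy, and the most-voted candidates seed the next round. The sketch below shows only a noisy nearest-neighbour voting step in that spirit; the embedding function, noise scale, and selection rule are stand-ins and do not reproduce the Aug-PE algorithm or its privacy accounting.

```python
# Hedged sketch of a DP nearest-neighbour "voting" step in the PE/Aug-PE spirit.
# The bag-of-words embedding, Gaussian noise scale and top-k selection are
# illustrative, not a privacy-audited implementation.
import numpy as np

rng = np.random.default_rng(0)

def embed(texts):
    """Stand-in embedding: hashed bag-of-words; a real system would use a
    sentence embedding model here."""
    dim = 64
    vecs = np.zeros((len(texts), dim))
    for i, t in enumerate(texts):
        for w in t.lower().split():
            vecs[i, hash(w) % dim] += 1.0
    return vecs / (np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-9)

def dp_vote_and_select(private_texts, candidate_texts, noise_sigma=1.0, k=2):
    """Each private record votes for its nearest candidate; Gaussian noise is
    added to the vote histogram before selecting the top-k candidates."""
    P, C = embed(private_texts), embed(candidate_texts)
    nearest = (P @ C.T).argmax(axis=1)                      # nearest candidate per record
    hist = np.bincount(nearest, minlength=len(candidate_texts)).astype(float)
    hist += rng.normal(0.0, noise_sigma, size=hist.shape)   # DP noise (illustrative)
    return [candidate_texts[i] for i in np.argsort(-hist)[:k]]

private = ["patient reports mild headache", "headache after long screen time"]
candidates = ["the weather is sunny today",
              "user mentions a mild headache in the evening",
              "stock prices rose sharply"]
print(dp_vote_and_select(private, candidates))
```

In the full algorithm the selected candidates would be re-expanded with further API calls and the vote repeated for several iterations, with the noise calibrated to a formal privacy budget.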
Derivative-Free Optimization for Low-Rank Adaptation in Large Language
Models | Parameter-efficient tuning methods such as LoRA can achieve performance
comparable to full-model tuning by updating only a small portion of the parameters.
However, substantial computational resources are still required, as this
process involves calculating gradients and performing back-propagation
throughout the model. Much effort has recently been devoted to utilizing the
derivative-free optimization method to eschew the computation of gradients and
showcase an augmented level of robustness in few-shot settings. In this paper,
we prepend the low-rank modules into each self-attention layer of the model and
employ two derivative-free optimization methods to optimize these low-rank
modules at each layer alternately. Extensive results on various tasks and
language models demonstrate that our proposed method achieves substantial
improvement and exhibits clear advantages in memory usage and convergence speed
compared to existing gradient-based parameter-efficient tuning and
derivative-free optimization methods in few-shot settings.
| 2024 | Computation and Language |
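The entry above optimises low-rank modules without gradients. As a loose illustration of the idea (not the paper's specific optimisers, which the abstract leaves unnamed), the sketch below runs a simple (1+1) evolution-strategy loop over the flattened parameters of one low-rank module, treating task loss as a black box; the module shapes, step size, and toy objective are assumptions.

```python
# Hedged sketch: derivative-free (1+1)-ES optimisation of a low-rank adapter.
# The black-box loss, module shapes and step sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d_model, rank = 32, 4

# Low-rank adapter factors to optimise (as would be inserted into an attention layer).
A = rng.normal(0, 0.02, size=(d_model, rank))
B = np.zeros((rank, d_model))

# Stand-in objective: a real setup would run the frozen LM with the adapter
# inserted and return a few-shot validation loss; here it is a toy matrix target.
target = np.random.default_rng(1).normal(size=(d_model, d_model)) * 0.1
def black_box_loss(A, B):
    return float(np.mean((A @ B - target) ** 2))

theta = np.concatenate([A.ravel(), B.ravel()])
best = black_box_loss(A, B)
sigma = 0.05
for step in range(500):
    candidate = theta + rng.normal(0, sigma, size=theta.shape)  # random perturbation
    cA = candidate[:A.size].reshape(A.shape)
    cB = candidate[A.size:].reshape(B.shape)
    loss = black_box_loss(cA, cB)
    if loss < best:            # (1+1)-ES: keep the perturbation only if it helps
        theta, best = candidate, loss
    if step % 100 == 0:
        print(f"step {step:3d}  best loss {best:.4f}")
```

Because only forward evaluations are needed, no gradients or back-propagation pass through the frozen model, which is the point of the derivative-free setting.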
KeNet: Knowledge-enhanced Doc-Label Attention Network for Multi-label
text classification | Multi-Label Text Classification (MLTC) is a fundamental task in the field of
Natural Language Processing (NLP) that involves the assignment of multiple
labels to a given text. MLTC has gained significant importance and has been
widely applied in various domains such as topic recognition, recommendation
systems, sentiment analysis, and information retrieval. However, traditional
machine learning and deep neural network methods have not yet addressed certain
issues, such as the fact that some documents are brief but have a large number
of labels, and how to establish relationships between the labels. It is also
important to acknowledge that external knowledge has proven valuable for MLTC.
To address these issues, we propose a novel approach known
as Knowledge-enhanced Doc-Label Attention Network (KeNet). Specifically, we
design an Attention Network that incorporates external knowledge, label
embedding, and a comprehensive attention mechanism. In contrast to conventional
methods, we use a comprehensive representation of documents, knowledge and
labels to predict all labels for each text. Our approach has been validated by
comprehensive research conducted on three multi-label datasets. Experimental
results demonstrate that our method outperforms state-of-the-art MLTC methods.
Additionally, a case study is undertaken to illustrate the practical
implementation of KeNet.
| 2024 | Computation and Language |
WebCiteS: Attributed Query-Focused Summarization on Chinese Web Search
Results with Citations | Enhancing the attribution in large language models (LLMs) is a crucial task.
One feasible approach is to enable LLMs to cite external sources that support
their generations. However, existing datasets and evaluation methods in this
domain still exhibit notable limitations. In this work, we formulate the task
of attributed query-focused summarization (AQFS) and present WebCiteS, a
Chinese dataset featuring 7k human-annotated summaries with citations. WebCiteS
derives from real-world user queries and web search results, offering a
valuable resource for model training and evaluation. Prior works in attribution
evaluation do not differentiate between groundedness errors and citation
errors. They also fall short in automatically verifying sentences that draw
partial support from multiple sources. We tackle these issues by developing
detailed metrics and enabling the automatic evaluator to decompose the
sentences into sub-claims for fine-grained verification. Our comprehensive
evaluation of both open-source and proprietary models on WebCiteS highlights
the challenge LLMs face in correctly citing sources, underscoring the necessity
for further improvement. The dataset and code will be open-sourced to
facilitate further research in this crucial field.
| 2024 | Computation and Language |
NPHardEval4V: A Dynamic Reasoning Benchmark of Multimodal Large Language
Models | Understanding the reasoning capabilities of Multimodal Large Language Models
(MLLMs) is an important area of research. In this study, we introduce a dynamic
benchmark, NPHardEval4V, aimed at addressing the existing gaps in evaluating
the pure reasoning abilities of MLLMs. Our benchmark aims to provide a venue to
disentangle the effect of various factors such as image recognition and
instruction following, from the overall performance of the models, allowing us
to focus solely on evaluating their reasoning abilities. It is built by
converting textual descriptions of questions from NPHardEval into image
representations. Our findings reveal significant discrepancies in reasoning
abilities across different models and highlight the relatively weak performance
of MLLMs compared to LLMs in terms of reasoning. We also investigate the impact
of different prompting styles, including visual, text, and combined visual and
text prompts, on the reasoning abilities of MLLMs, demonstrating the different
impacts of multimodal inputs on model performance. Unlike traditional
benchmarks, which focus primarily on static evaluations, our benchmark will be
updated monthly to prevent overfitting and ensure a more authentic and
fine-grained evaluation of the models. We believe that this benchmark can aid
in understanding and guide the further development of reasoning abilities in
MLLMs. The benchmark dataset and code are available at
https://github.com/lizhouf/NPHardEval4V
| 2024 | Computation and Language |
Enhancing Multi-Domain Automatic Short Answer Grading through an
Explainable Neuro-Symbolic Pipeline | Grading short answer questions automatically with interpretable reasoning
behind the grading decision is a challenging goal for current transformer
approaches. Justification cue detection, in combination with logical reasoners,
has shown a promising direction for neuro-symbolic architectures in ASAG. However,
one of the main challenges is the requirement of annotated justification cues
in the students' responses, which only exist for a few ASAG datasets. To
overcome this challenge, we contribute (1) a weakly supervised annotation
procedure for justification cues in ASAG datasets, and (2) a neuro-symbolic
model for explainable ASAG based on justification cues. Our approach improves
upon the RMSE by 0.24 to 0.3 compared to the state-of-the-art on the Short
Answer Feedback dataset in a bilingual, multi-domain, and multi-question
training setup. This result shows that our approach provides a promising
direction for generating high-quality grades and accompanying explanations for
future research in ASAG and educational NLP.
| 2024 | Computation and Language |
NusaBERT: Teaching IndoBERT to be Multilingual and Multicultural | Indonesia's linguistic landscape is remarkably diverse, encompassing over 700
languages and dialects, making it one of the world's most linguistically rich
nations. This diversity, coupled with the widespread practice of code-switching
and the presence of low-resource regional languages, presents unique challenges
for modern pre-trained language models. In response to these challenges, we
developed NusaBERT, building upon IndoBERT by incorporating vocabulary
expansion and leveraging a diverse multilingual corpus that includes regional
languages and dialects. Through rigorous evaluation across a range of
benchmarks, NusaBERT demonstrates state-of-the-art performance in tasks
involving multiple languages of Indonesia, paving the way for future natural
language understanding research for under-represented languages.
| 2024 | Computation and Language |
Making Pre-trained Language Models Great on Tabular Prediction | The transferability of deep neural networks (DNNs) has made significant
progress in image and language processing. However, due to the heterogeneity
among tables, such DNN benefits are still far from being well exploited on tabular
data prediction (e.g., regression or classification tasks). Condensing
knowledge from diverse domains, language models (LMs) possess the capability to
comprehend feature names from various tables, potentially serving as versatile
learners in transferring knowledge across distinct tables and diverse
prediction tasks, but their discrete text representation space is inherently
incompatible with numerical feature values in tables. In this paper, we present
TP-BERTa, a specifically pre-trained LM model for tabular data prediction.
Concretely, a novel relative magnitude tokenization converts scalar numerical
feature values to finely discrete, high-dimensional tokens, and an
intra-feature attention approach integrates feature values with the
corresponding feature names. Comprehensive experiments demonstrate that our
pre-trained TP-BERTa achieves leading performance among tabular DNNs and is
competitive with Gradient Boosted Decision Tree models in the typical tabular
data regime.
| 2024 | Computation and Language |
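The TP-BERTa entry above converts scalar feature values into discrete magnitude tokens so a language model can consume them alongside feature names. The exact tokenisation is not reproduced here; the sketch below shows a quantile-binning variant purely to convey the idea, with the bin count and token naming being assumptions.

```python
# Hedged sketch of "relative magnitude" style tokenisation of numeric features:
# map each scalar to a discrete magnitude bin, then to a token string an LM can
# read. Quantile bins and token names are illustrative, not TP-BERTa's scheme.
import numpy as np

def fit_bins(values: np.ndarray, n_bins: int = 16) -> np.ndarray:
    """Compute quantile bin edges from training-split feature values."""
    qs = np.linspace(0.0, 1.0, n_bins + 1)[1:-1]
    return np.quantile(values, qs)

def to_magnitude_token(value: float, edges: np.ndarray, feature: str) -> str:
    """Discretise one value and render it as a feature-aware magnitude token."""
    bin_id = int(np.searchsorted(edges, value))
    return f"[{feature}|MAG_{bin_id}]"

# Toy feature column (e.g. "age") and a few rows to tokenise.
train_ages = np.random.default_rng(0).normal(45, 12, size=1000)
edges = fit_bins(train_ages, n_bins=8)

for age in [23.0, 44.5, 71.0]:
    print(age, "->", to_magnitude_token(age, edges, "age"))
```

Pairing each magnitude token with its feature name mirrors the intra-feature attention idea: the model sees which column a magnitude belongs to rather than a bare number.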
CET2: Modelling Topic Transitions for Coherent and Engaging
Knowledge-Grounded Conversations | Knowledge-grounded dialogue systems aim to generate coherent and engaging
responses based on the dialogue contexts and selected external knowledge.
Previous knowledge selection methods tend to rely too heavily on the dialogue
contexts or over-emphasize the new information in the selected knowledge,
resulting in the selection of repetitious or incongruous knowledge and further
generating repetitive or incoherent responses, as the generation of the
response depends on the chosen knowledge. To address these shortcomings, we
introduce a Coherent and Engaging Topic Transition (CET2) framework to model
topic transitions for selecting knowledge that is coherent to the context of
the conversations while providing adequate knowledge diversity for topic
development. Our CET2 framework considers multiple factors for knowledge
selection, including valid transition logic from dialogue contexts to the
following topics and systematic comparisons between available knowledge
candidates. Extensive experiments on two public benchmarks demonstrate the
superiority and the better generalization ability of CET2 on knowledge
selection. This is due to our well-designed transition features and comparative
knowledge selection strategy, which are more transferable to conversations
about unseen topics. Analysis of fine-grained knowledge selection accuracy also
shows that CET2 can better balance topic entailment (contextual coherence) and
development (knowledge diversity) in dialogue than existing approaches.
| 2024 | Computation and Language |
Rethinking LLM Language Adaptation: A Case Study on Chinese Mixtral | Mixtral, a representative sparse mixture of experts (SMoE) language model,
has received significant attention due to its unique model design and superior
performance. Based on Mixtral-8x7B-v0.1, in this paper, we propose
Chinese-Mixtral and Chinese-Mixtral-Instruct with improved Chinese language
abilities by adopting further pre-training and instruction fine-tuning.
Experimental results show that our Chinese-Mixtral and Chinese-Mixtral-Instruct
successfully improve Chinese understanding and generation performance while
retaining the original English abilities. Then, we discuss several key
questions when performing language adaptation on large language models,
including the necessity of extending the language-specific vocabulary and the
choice of the initialization model (foundation model vs. instruction model),
by providing empirical results and analysis. We also present the visualizations
of each expert to examine their importance on downstream tasks. Our resources
are publicly available through \url{https://github.com/ymcui/Chinese-Mixtral}.
| 2024 | Computation and Language |
An Improved Traditional Chinese Evaluation Suite for Foundation Model | We present TMMLU+, a comprehensive dataset designed for Traditional
Chinese massive multitask language understanding. TMMLU+ is a
multiple-choice question-answering dataset with 66 subjects from elementary to
professional level. Compared to its predecessor, TMMLU, TMMLU+ is six times
larger and boasts a more balanced subject distribution. We included benchmark
results in TMMLU+ from closed-source models and 24 open-weight Chinese large
language models of parameters ranging from 1.8B to 72B. Our findings reveal
that Traditional Chinese models still trail behind their Simplified Chinese
counterparts. Additionally, current large language models have yet to
surpass human performance in average scores. We publicly release our dataset
and the corresponding benchmark source code.
| 2024 | Computation and Language |
FCDS: Fusing Constituency and Dependency Syntax into Document-Level
Relation Extraction | Document-level Relation Extraction (DocRE) aims to identify relation labels
between entities within a single document. It requires handling several
sentences and reasoning over them. State-of-the-art DocRE methods use a graph
structure to connect entities across the document to capture dependency syntax
information. However, this is insufficient to fully exploit the rich syntax
information in the document. In this work, we propose to fuse constituency and
dependency syntax into DocRE. Our method uses constituency syntax to aggregate
whole-sentence information and select instructive sentences for the target
entity pairs. It exploits dependency syntax in a graph structure with
constituency syntax enhancement and chooses the path between entity pairs based
on the dependency graph. The experimental results on datasets from various
domains demonstrate the effectiveness of the proposed method. The code is
publicly available at this url.
| 2024 | Computation and Language |
Fostering the Ecosystem of Open Neural Encoders for Portuguese with
Albertina PT* Family | To foster the neural encoding of Portuguese, this paper contributes
foundation encoder models that represent an expansion of the still very scarce
ecosystem of large language models specifically developed for this language
that are fully open, in the sense that they are open source and openly
distributed for free under an open license for any purpose, thus including
research and commercial usages. Like most languages other than English,
Portuguese is low-resourced in terms of these foundational language resources,
there being only the inaugural 900 million parameter Albertina and the 335
million parameter Bertimbau. Taking these two models as a starting set, we
present the
extension of the ecosystem of state-of-the-art open encoders for Portuguese
with a larger, top performance-driven model with 1.5 billion parameters, and a
smaller, efficiency-driven model with 100 million parameters. While achieving
this primary goal, further results that are relevant for this ecosystem were
obtained as well, namely new datasets for Portuguese based on the SuperGLUE
benchmark, which we also distribute openly.
| 2024 | Computation and Language |
Arabic Text Sentiment Analysis: Reinforcing Human-Performed Surveys with
Wider Topic Analysis | Sentiment analysis (SA) has been, and is still, a thriving research area.
However, the task of Arabic sentiment analysis (ASA) is still underrepresented
in the body of research. This study offers the first in-depth and in-breadth
analysis of existing ASA studies of textual content and identifies their common
themes, domains of application, methods, approaches, technologies and
algorithms used. The in-depth study manually analyses 133 ASA papers published
in the English language between 2002 and 2020 from four academic databases
(SAGE, IEEE, Springer, WILEY) and from Google Scholar. The in-breadth study
uses modern, automatic machine learning techniques, such as topic modelling and
temporal analysis, on Open Access resources, to reinforce themes and trends
identified by the prior study, on 2297 ASA publications between 2010-2020. The
main findings show the different approaches used for ASA: machine learning,
lexicon-based and hybrid approaches. Other findings include ASA 'winning'
algorithms (SVM, NB, hybrid methods). Deep learning methods, such as LSTM, can
provide higher accuracy, but for ASA sometimes the corpora are not large enough
to support them. Additionally, whilst there are some ASA corpora and lexicons,
more are required. Specifically, Arabic tweets corpora and datasets are
currently only moderately sized. Moreover, Arabic lexicons that have high
coverage contain only Modern Standard Arabic (MSA) words, and those with Arabic
dialects are quite small. Thus, new corpora need to be created. On the other
hand, ASA tools are stringently lacking. There is a need to develop ASA tools
that can be used in industry, as well as in academia, for Arabic text SA.
Hence, our study offers insights into the challenges associated with ASA
research and provides suggestions for ways to move the field forward, such as
addressing the lack of dialectal Arabic resources, Arabic tweets, corpora and
datasets for SA.
| 2024 | Computation and Language |
To Generate or to Retrieve? On the Effectiveness of Artificial Contexts
for Medical Open-Domain Question Answering | Medical open-domain question answering demands substantial access to
specialized knowledge. Recent efforts have sought to decouple knowledge from
model parameters, counteracting architectural scaling and allowing for training
on common low-resource hardware. The retrieve-then-read paradigm has become
ubiquitous, with model predictions grounded on relevant knowledge pieces from
external repositories such as PubMed, textbooks, and UMLS. An alternative path,
still under-explored but made possible by the advent of domain-specific large
language models, entails constructing artificial contexts through prompting. As
a result, "to generate or to retrieve" is the modern equivalent of Hamlet's
dilemma. This paper presents MedGENIE, the first generate-then-read framework
for multiple-choice question answering in medicine. We conduct extensive
experiments on MedQA-USMLE, MedMCQA, and MMLU, incorporating a practical
perspective by assuming a maximum of 24GB VRAM. MedGENIE sets a new
state-of-the-art (SOTA) in the open-book setting of each testbed, even allowing
a small-scale reader to outcompete zero-shot closed-book 175B baselines while
using up to 706$\times$ fewer parameters. Overall, our findings reveal that
generated passages are more effective than retrieved counterparts in attaining
higher accuracy.
| 2024 | Computation and Language |
IndicVoices: Towards building an Inclusive Multilingual Speech Dataset
for Indian Languages | We present INDICVOICES, a dataset of natural and spontaneous speech
containing a total of 7348 hours of read (9%), extempore (74%) and
conversational (17%) audio from 16237 speakers covering 145 Indian districts
and 22 languages. Of these 7348 hours, 1639 hours have already been
transcribed, with a median of 73 hours per language. Through this paper, we
share our journey of capturing the cultural, linguistic and demographic
diversity of India to create a one-of-its-kind inclusive and representative
dataset. More specifically, we share an open-source blueprint for data
collection at scale comprising standardised protocols, centralised tools, a
repository of engaging questions, prompts and conversation scenarios spanning
multiple domains and topics of interest, quality control mechanisms,
comprehensive transcription guidelines and transcription tools. We hope that
this open source blueprint will serve as a comprehensive starter kit for data
collection efforts in other multilingual regions of the world. Using
INDICVOICES, we build IndicASR, the first ASR model to support all the 22
languages listed in the 8th schedule of the Constitution of India. All the
data, tools, guidelines, models and other materials developed as a part of this
work will be made publicly available.
| 2024 | Computation and Language |
Analyzing and Adapting Large Language Models for Few-Shot Multilingual
NLU: Are We There Yet? | Supervised fine-tuning (SFT), supervised instruction tuning (SIT) and
in-context learning (ICL) are three alternative, de facto standard approaches
to few-shot learning. ICL has gained popularity recently with the advent of
LLMs due to its simplicity and sample efficiency. Prior research has conducted
only limited investigation into how these approaches work for multilingual
few-shot learning, and the focus so far has been mostly on their performance.
In this work, we present an extensive and systematic comparison of the three
approaches, testing them on 6 high- and low-resource languages, three different
NLU tasks, and a myriad of language and domain setups. Importantly, performance
is only one aspect of the comparison; we also analyse the approaches
through the lens of their computational, inference and financial costs. Our
observations show that supervised instruction tuning has the best trade-off
between performance and resource requirements. As another contribution, we
analyse the impact of target language adaptation of pretrained LLMs and find
that the standard adaptation approaches can (superficially) improve target
language generation capabilities, but language understanding elicited through
ICL does not improve and remains limited, with low scores especially for
low-resource languages.
| 2024 | Computation and Language |
VariErr NLI: Separating Annotation Error from Human Label Variation | Human label variation arises when annotators assign different labels to the
same item for valid reasons, while annotation errors occur when labels are
assigned for invalid reasons. These two issues are prevalent in NLP benchmarks,
yet existing research has studied them in isolation. To the best of our
knowledge, there exists no prior work that focuses on teasing apart error from
signal, especially in cases where signal is beyond black-and-white. To fill
this gap, we introduce a systematic methodology and a new dataset, VariErr
(variation versus error), focusing on the NLI task in English. We propose a
2-round annotation scheme with annotators explaining each label and
subsequently judging the validity of label-explanation pairs. VariErr contains
7,574 validity judgments on 1,933 explanations for 500 re-annotated NLI items.
We assess the effectiveness of various automatic error detection (AED) methods
and GPTs in uncovering errors versus human label variation. We find that
state-of-the-art AED methods significantly underperform compared to GPTs and
humans. While GPT-4 is the best system, it still falls short of human
performance. Our methodology is applicable beyond NLI, offering fertile ground
for future research on error versus plausible variation, which in turn can
yield better and more trustworthy NLP systems.
| 2024 | Computation and Language |
DECIDER: A Rule-Controllable Decoding Strategy for Language Generation
by Imitating Dual-System Cognitive Theory | Lexicon-based constrained decoding approaches aim to control the meaning or
style of the generated text through certain target concepts. Existing
approaches over-focus on the targets themselves, leading to a lack of high-level
reasoning about how to achieve them. However, humans usually tackle tasks by
following certain rules that focus not only on the targets but also on
semantically relevant concepts that induce the occurrence of those targets. In this
work, we present DECIDER, a rule-controllable decoding strategy for constrained
language generation inspired by dual-system cognitive theory. Specifically, in
DECIDER, a pre-trained language model (PLM) is equipped with a logic reasoner
that takes high-level rules as input. DECIDER then allows rule signals to
flow into the PLM at each decoding step. Extensive experimental results
demonstrate that DECIDER can effectively follow given rules to guide generation
direction toward the targets in a more human-like manner.
| 2024 | Computation and Language |
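The DECIDER entry above lets rule signals from a logic reasoner flow into the language model at every decoding step. The sketch below shows one generic way such signals could be injected, by adding a bonus to the logits of tokens that a rule predicate marks as relevant; the toy vocabulary, predicate, and bonus weight are all placeholders rather than DECIDER's actual first-order-logic machinery.

```python
# Hedged sketch of rule-guided decoding: boost next-token logits for tokens a
# rule predicate deems relevant to the target concepts. Vocabulary, rule, and
# bonus weight are illustrative placeholders.
import numpy as np

vocab = ["the", "dog", "park", "ran", "quantum", "ball", "fetched"]
logits = np.array([2.1, 1.4, 0.2, 1.0, 0.9, 0.3, 0.8])

def rule_relevant(token: str, target_concepts: set) -> bool:
    """Toy stand-in for a logic reasoner: a token is relevant if it is a target
    concept or (crudely) shares a first letter with one."""
    return token in target_concepts or any(token[0] == c[0] for c in target_concepts)

def rule_guided_logits(logits: np.ndarray, vocab: list,
                       target_concepts: set, bonus: float = 1.5) -> np.ndarray:
    adjusted = logits.copy()
    for i, tok in enumerate(vocab):
        if rule_relevant(tok, target_concepts):
            adjusted[i] += bonus          # rule signal flowing into the decoder
    return adjusted

targets = {"dog", "park"}
adjusted = rule_guided_logits(logits, vocab, targets)
print("greedy token without rules:", vocab[int(logits.argmax())])
print("greedy token with rules:   ", vocab[int(adjusted.argmax())])
```

In a full system the predicate would come from a reasoner over high-level rules and the bonus would be applied over the model's actual vocabulary at each step of generation.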
AS-ES Learning: Towards Efficient CoT Learning in Small Models | Chain-of-Thought (CoT) serves as a critical emerging ability in LLMs,
especially when it comes to logical reasoning. Attempts have been made to
induce such ability in small models as well by distilling from the data with
CoT generated by Large Language Models (LLMs). However, existing methods often
simply generate and incorporate more data from LLMs and fail to note the
importance of efficiently utilizing existing CoT data. Here, we propose a new
training paradigm AS-ES (Abstractive Segments - Extractive Segments) learning,
which exploits the inherent information in CoT for iterative generation.
Experiments show that our methods surpass the direct seq2seq training on
CoT-extensive tasks like MWP and PET summarization, without data augmentation
or altering the model itself. Furthermore, we explore the reason behind the
inefficiency of small models in learning CoT and provide an explanation of why
AS-ES learning works, giving insights into the underlying mechanism of CoT.
| 2024 | Computation and Language |
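The AS-ES entry above splits chain-of-thought rationales into abstractive segments and extractive segments and exploits them for iterative generation. The abstract does not spell out the segmentation rule, so the sketch below uses a simple token-overlap heuristic purely to illustrate what labelling a rationale into AS/ES pieces could look like; the threshold and splitting rule are assumptions.

```python
# Hedged sketch: label chain-of-thought segments as Extractive (high lexical
# overlap with the problem statement) or Abstractive (mostly new content).
# The overlap heuristic and threshold are assumptions, not the AS-ES criterion.
def segment_labels(problem: str, rationale: str, threshold: float = 0.5):
    problem_tokens = set(problem.lower().split())
    labelled = []
    for segment in rationale.split(". "):
        seg_tokens = [w for w in segment.lower().split() if w]
        if not seg_tokens:
            continue
        overlap = sum(w in problem_tokens for w in seg_tokens) / len(seg_tokens)
        label = "ES" if overlap >= threshold else "AS"   # extractive vs abstractive
        labelled.append((label, segment.strip()))
    return labelled

problem = "Tom has 3 apples and buys 4 more apples"
rationale = ("Tom has 3 apples and buys 4 more apples. "
             "Adding the two quantities gives the total. "
             "3 plus 4 equals 7")
for label, seg in segment_labels(problem, rationale):
    print(label, "|", seg)
```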
Multi-perspective Improvement of Knowledge Graph Completion with Large
Language Models | Knowledge graph completion (KGC) is a widely used method to tackle
incompleteness in knowledge graphs (KGs) by making predictions for missing
links. Description-based KGC leverages pre-trained language models to learn
entity and relation representations with their names or descriptions, which
shows promising results. However, the performance of description-based KGC is
still limited by the quality of text and the incomplete structure, as it lacks
sufficient entity descriptions and relies solely on relation names, leading to
sub-optimal results. To address this issue, we propose MPIKGC, a general
framework to compensate for the deficiency of contextualized knowledge and
improve KGC by querying large language models (LLMs) from various perspectives,
which involves leveraging the reasoning, explanation, and summarization
capabilities of LLMs to expand entity descriptions, understand relations, and
extract structures, respectively. We conducted extensive evaluation of the
effectiveness and improvement of our framework based on four description-based
KGC models and four datasets, for both link prediction and triplet
classification tasks.
| 2024 | Computation and Language |
SciAssess: Benchmarking LLM Proficiency in Scientific Literature
Analysis | Recent breakthroughs in Large Language Models (LLMs) have revolutionized
natural language understanding and generation, igniting a surge of interest in
leveraging these technologies for the nuanced field of scientific literature
analysis. Existing benchmarks, however, inadequately evaluate the proficiency
of LLMs in the scientific domain, especially in scenarios involving complex
comprehension and multimodal data. In response, we introduced SciAssess, a
benchmark tailored for the in-depth analysis of scientific literature, crafted
to provide a thorough assessment of LLMs' efficacy. SciAssess focuses on
evaluating LLMs' abilities in memorization, comprehension, and analysis within
scientific contexts. It includes representative tasks from diverse scientific
fields, such as general chemistry, organic materials, and alloy materials.
Rigorous quality control measures ensure its reliability in terms of
correctness, anonymization, and copyright compliance. SciAssess evaluates
leading LLMs, including GPT-4, GPT-3.5-turbo, and Gemini, identifying their
strengths and areas for improvement and supporting the ongoing development of
LLM applications in scientific literature analysis. SciAssess and its resources
are made available at https://sci-assess.github.io, offering a valuable tool
for advancing LLM capabilities in scientific literature analysis.
| 2024 | Computation and Language |
Language and Speech Technology for Central Kurdish Varieties | Kurdish, an Indo-European language spoken by over 30 million speakers, is
considered a dialect continuum and known for its diversity in language
varieties. Previous studies addressing language and speech technology for
Kurdish handle it in a monolithic way as a macro-language, resulting in
disparities for dialects and varieties for which there are few resources and
tools available. In this paper, we take a step towards developing resources for
language and speech technology for varieties of Central Kurdish, creating a
corpus by transcribing movies and TV series as an alternative to fieldwork.
Additionally, we report the performance of machine translation, automatic
speech recognition, and language identification as downstream tasks evaluated
on Central Kurdish varieties. Data and models are publicly available under an
open license at https://github.com/sinaahmadi/CORDI.
| 2024 | Computation and Language |
Transformers for Low-Resource Languages: Is F\'eidir Linn! | The Transformer model is the state-of-the-art in Machine Translation.
However, in general, neural translation models often underperform on language
pairs with insufficient training data. As a consequence, relatively few
experiments have been carried out using this architecture on low-resource
language pairs. In this study, hyperparameter optimization of Transformer
models in translating the low-resource English-Irish language pair is
evaluated. We demonstrate that choosing appropriate parameters leads to
considerable performance improvements. Most importantly, the correct choice of
subword model is shown to be the biggest driver of translation performance.
SentencePiece models using both unigram and BPE approaches were appraised.
Variations on model architectures included modifying the number of layers,
testing various regularisation techniques and evaluating the optimal number of
heads for attention. A generic 55k DGT corpus and an in-domain 88k public admin
corpus were used for evaluation. A Transformer optimized model demonstrated a
BLEU score improvement of 7.8 points when compared with a baseline RNN model.
Improvements were observed across a range of metrics, including TER, indicating
a substantially reduced post editing effort for Transformer optimized models
with 16k BPE subword models. Benchmarked against Google Translate, our
translation engines demonstrated significant improvements. The question of
whether or not Transformers can be used effectively in a low-resource setting
of English-Irish translation has been addressed. Is f\'eidir linn - yes we can.
| 2021 | Computation and Language |
FakeNewsGPT4: Advancing Multimodal Fake News Detection through
Knowledge-Augmented LVLMs | The massive generation of multimodal fake news exhibits substantial
distribution discrepancies, prompting the need for generalized detectors.
However, the insulated nature of training within specific domains restricts the
capability of classical detectors to obtain open-world facts. In this paper, we
propose FakeNewsGPT4, a novel framework that augments Large Vision-Language
Models (LVLMs) with forgery-specific knowledge for manipulation reasoning while
inheriting extensive world knowledge as complementary. Knowledge augmentation
in FakeNewsGPT4 involves acquiring two types of forgery-specific knowledge,
i.e., semantic correlation and artifact trace, and merging them into LVLMs.
Specifically, we design a multi-level cross-modal reasoning module that
establishes interactions across modalities for extracting semantic
correlations. Concurrently, a dual-branch fine-grained verification module is
presented to comprehend localized details to encode artifact traces. The
generated knowledge is translated into refined embeddings compatible with
LVLMs. We also incorporate candidate answer heuristics and soft prompts to
enhance input informativeness. Extensive experiments on the public benchmark
demonstrate that FakeNewsGPT4 achieves superior cross-domain performance
compared to previous methods. Code will be available.
| 2024 | Computation and Language |
Vanilla Transformers are Transfer Capability Teachers | Recently, Mixture of Experts (MoE) Transformers have garnered increasing
attention due to their advantages in model capacity and computational
efficiency. However, studies have indicated that MoE Transformers underperform
vanilla Transformers in many downstream tasks, significantly diminishing the
practical value of MoE models. To explain this issue, we propose that the
pre-training performance and transfer capability of a model are joint
determinants of its downstream task performance. MoE models, in comparison to
vanilla models, have poorer transfer capability, leading to their subpar
performance in downstream tasks. To address this issue, we introduce the
concept of transfer capability distillation, positing that although vanilla
models have weaker performance, they are effective teachers of transfer
capability. The MoE models guided by vanilla models can achieve both strong
pre-training performance and transfer capability, ultimately enhancing their
performance in downstream tasks. We design a specific distillation method and
conduct experiments on the BERT architecture. Experimental results show a
significant improvement in the downstream performance of MoE models, and much
further evidence also strongly supports the concept of transfer capability
distillation. Finally, we attempt to interpret transfer capability distillation
and provide some insights from the perspective of model features.
| 2024 | Computation and Language |
LLM-Oriented Retrieval Tuner | Dense Retrieval (DR) is now considered as a promising tool to enhance the
memorization capacity of Large Language Models (LLMs) such as GPT-3 and GPT-4 by
incorporating external memories. However, due to the paradigm discrepancy
between text generation of LLM and DR, it is still an open challenge to
integrate the retrieval and generation tasks in a shared LLM. In this paper, we
propose an efficient LLM-Oriented Retrieval Tuner, namely LMORT, which
decouples DR capacity from base LLM and non-invasively coordinates the
optimally aligned and uniform layers of the LLM towards a unified DR space,
achieving an efficient and effective DR without tuning the LLM itself. The
extensive experiments on six BEIR datasets show that our approach could achieve
competitive zero-shot retrieval performance compared to a range of strong DR
models while maintaining the generation ability of LLM.
| 2024 | Computation and Language |
Topic Aware Probing: From Sentence Length Prediction to Idiom
Identification, how reliant are Neural Language Models on Topic? | Transformer-based Neural Language Models achieve state-of-the-art performance
on various natural language processing tasks. However, an open question is the
extent to which these models rely on word-order/syntactic or word
co-occurrence/topic-based information when processing natural language. This
work contributes to this debate by addressing the question of whether these
models primarily use topic as a signal, by exploring the relationship between
Transformer-based models' (BERT and RoBERTa's) performance on a range of
probing tasks in English, from simple lexical tasks such as sentence length
prediction to complex semantic tasks such as idiom token identification, and
the sensitivity of these tasks to the topic information. To this end, we
propose a novel probing method which we call topic-aware probing. Our initial
results indicate that Transformer-based models encode both topic and non-topic
information in their intermediate layers, but also that the facility of these
models to distinguish idiomatic usage is primarily based on their ability to
identify and encode topic. Furthermore, our analysis of these models'
performance on other standard probing tasks suggests that tasks that are
relatively insensitive to the topic information are also tasks that are
relatively difficult for these models.
| 2024 | Computation and Language |
Automated Generation of Multiple-Choice Cloze Questions for Assessing
English Vocabulary Using GPT-turbo 3.5 | A common way of assessing language learners' mastery of vocabulary is via
multiple-choice cloze (i.e., fill-in-the-blank) questions. But the creation of
test items can be laborious for individual teachers or in large-scale language
programs. In this paper, we evaluate a new method for automatically generating
these types of questions using large language models (LLM). The VocaTT
(vocabulary teaching and training) engine is written in Python and comprises
three basic steps: pre-processing target word lists, generating sentences and
candidate word options using GPT, and finally selecting suitable word options.
To test the efficiency of this system, 60 questions were generated targeting
academic words. The generated items were reviewed by expert reviewers who
judged the well-formedness of the sentences and word options, adding comments
to items judged not well-formed. Results showed a 75% rate of well-formedness
for sentences and 66.85% rate for suitable word options. This is a marked
improvement over the generator used earlier in our research, which did not take
advantage of GPT's capabilities. Post-hoc qualitative analysis reveals several
points for improvement in future work, including cross-referencing
part-of-speech tags, improving sentence validation, and refining GPT prompts.
| 2,023 | Computation and Language |
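A minimal sketch of the generation step described above, assuming an OpenAI-style chat API; the prompt wording, model name, and output format are illustrative and not the VocaTT engine's actual implementation.

```python
# Sketch of automatic cloze-item generation for a target word, assuming an
# OpenAI-style chat API; the prompt wording and model name are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_cloze_item(target_word: str) -> str:
    prompt = (
        f"Write one sentence using the academic word '{target_word}', then replace "
        f"the word with a blank (____). After the sentence, list the correct answer "
        f"and three plausible but incorrect distractor words of the same part of speech."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    return response.choices[0].message.content

print(generate_cloze_item("hypothesis"))
```

In a full pipeline, a post-processing step would then check the generated sentence and distractors (the "selecting suitable word options" stage) before items reach learners.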
Leveraging Weakly Annotated Data for Hate Speech Detection in Code-Mixed
Hinglish: A Feasibility-Driven Transfer Learning Approach with Large Language
Models | The advent of Large Language Models (LLMs) has advanced the benchmark in
various Natural Language Processing (NLP) tasks. However, large amounts of
labelled training data are required to train LLMs. Furthermore, data annotation
and training are computationally expensive and time-consuming. Zero and
few-shot learning have recently emerged as viable options for labelling data
using large pre-trained models. Hate speech detection in mix-code low-resource
languages is an active problem area where the use of LLMs has proven
beneficial. In this study, we have compiled a dataset of 100 YouTube comments,
and weakly labelled them for coarse and fine-grained misogyny classification in
mix-code Hinglish. Weak annotation was applied due to the labor-intensive
annotation process. Zero-shot, one-shot, and few-shot learning and prompting
approaches were then applied to assign labels to the comments, which were
compared with the human-assigned labels. Out of all the approaches, zero-shot
classification using the Bidirectional Auto-Regressive Transformers (BART)
large model and few-shot prompting using Generative Pre-trained Transformer-3
(ChatGPT-3) achieve the best results.
| 2,024 | Computation and Language |
Using LLMs for the Extraction and Normalization of Product Attribute
Values | Product offers on e-commerce websites often consist of a textual product
title and a textual product description. In order to provide features such as
faceted product filtering or content-based product recommendation, the websites
need to extract attribute-value pairs from the unstructured product
descriptions. This paper explores the potential of using large language models
(LLMs), such as OpenAI's GPT-3.5 and GPT-4, to extract and normalize attribute
values from product titles and product descriptions. For our experiments, we
introduce the WDC Product Attribute-Value Extraction (WDC PAVE) dataset. WDC
PAVE consists of product offers from 87 websites that provide schema$.$org
annotations. The offers belong to five different categories, each featuring a
specific set of attributes. The dataset provides manually verified
attribute-value pairs in two forms: (i) directly extracted values and (ii)
normalized attribute values. The normalization of the attribute values requires
systems to perform the following types of operations: name expansion,
generalization, unit of measurement normalization, and string wrangling. Our
experiments demonstrate that GPT-4 outperforms PLM-based extraction methods by
10%, achieving an F1-Score of 91%. For the extraction and normalization of
product attribute values, GPT-4 achieves a similar performance to the
extraction scenario, while being particularly strong at string wrangling and
name expansion.
| 2,024 | Computation and Language |
What has LeBenchmark Learnt about French Syntax? | The paper reports on a series of experiments aiming at probing LeBenchmark, a
pretrained acoustic model trained on 7k hours of spoken French, for syntactic
information. Pretrained acoustic models are increasingly used for downstream
speech tasks such as automatic speech recognition, speech translation, spoken
language understanding or speech parsing. They are trained on very low-level
information (the raw speech signal), and do not have explicit lexical
knowledge. Despite that, they obtain reasonable results on tasks that
require higher-level linguistic knowledge. As a result, an emerging question
is whether these models encode syntactic information. We probe each
representation layer of LeBenchmark for syntax, using the Orf\'eo treebank, and
observe that it has learnt some syntactic information. Our results show that
syntactic information is more easily extractable from the middle layers of the
network, after which a very sharp decrease is observed.
| 2,024 | Computation and Language |
EEE-QA: Exploring Effective and Efficient Question-Answer
Representations | Current approaches to question answering rely on pre-trained language models
(PLMs) like RoBERTa. This work challenges the existing question-answer encoding
convention and explores finer representations. We begin by testing various
pooling methods, comparing them with using the begin-of-sentence token as the
question representation, to obtain better-quality representations. Next, we explore opportunities to
simultaneously embed all answer candidates with the question. This enables
cross-reference between answer choices and improves inference throughput via
reduced memory usage. Despite their simplicity and effectiveness, these methods
have yet to be widely studied in current frameworks. We experiment with
different PLMs, and with and without the integration of knowledge graphs.
Results demonstrate the memory efficiency of the proposed techniques, with little
sacrifice in performance. Practically, our work improves throughput by 38-100%,
with 26-65% speedups on consumer-grade GPUs, by allowing for considerably larger
batch sizes. Our work sends a message to the community with promising
directions in both representation quality and efficiency for the
question-answering task in natural language processing.
| 2,024 | Computation and Language |
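The pooling comparison described above can be sketched as follows; the RoBERTa checkpoint and the toy question are assumptions for illustration, and the actual work evaluates these representations inside a full question-answering pipeline.

```python
# Sketch contrasting two question representations: the begin-of-sentence
# (<s>/[CLS]) token versus mean pooling over all tokens.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base").eval()

question = "What is the capital of France?"
enc = tokenizer(question, return_tensors="pt")

with torch.no_grad():
    hidden = model(**enc).last_hidden_state          # (1, seq_len, hidden_dim)

bos_repr = hidden[:, 0, :]                           # begin-of-sentence token only
mask = enc["attention_mask"].unsqueeze(-1)
mean_repr = (hidden * mask).sum(1) / mask.sum(1)     # mean pooling over real tokens

print(bos_repr.shape, mean_repr.shape)               # both: torch.Size([1, 768])
```

Either vector can then feed the answer-scoring head; mean pooling simply aggregates evidence from all tokens instead of relying on the single begin-of-sentence position.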
ProTrix: Building Models for Planning and Reasoning over Tables with
Sentence Context | Tables play a crucial role in conveying information in various domains,
serving as indispensable tools for organizing and presenting data in a
structured manner. We propose a Plan-then-Reason framework to answer different
types of user queries over tables with sentence context. The framework first
plans the reasoning paths over the context, then assigns each step to
program-based or textual reasoning to reach the final answer. We construct an
instruction tuning set TrixInstruct following the framework. Our dataset covers
queries that are program-unsolvable or require combining information from tables
and sentences, so that models can acquire planning and reasoning abilities. We present ProTrix by
finetuning Llama-2-7B on TrixInstruct. Our experiments show that ProTrix
generalizes to diverse tabular tasks and achieves comparable performance to
GPT-3.5-turbo. We further demonstrate that ProTrix can generate accurate and
faithful explanations to answer complex free-form questions. Our work
underscores the importance of planning and reasoning abilities for models
tackling tabular tasks with generalizability and interpretability. We will
release our dataset and model at https://github.com/WilliamZR/ProTrix.
| 2,024 | Computation and Language |
Masked Thought: Simply Masking Partial Reasoning Steps Can Improve
Mathematical Reasoning Learning of Language Models | In reasoning tasks, even a minor error can cascade into inaccurate results,
leading to suboptimal performance of large language models in such domains.
Earlier fine-tuning approaches sought to mitigate this by leveraging more
precise supervisory signals from human labeling, larger models, or
self-sampling, although at a high cost. Conversely, we develop a method that
avoids external resources, relying instead on introducing perturbations to the
input. Our training approach randomly masks certain tokens within the chain of
thought, a technique we found to be particularly effective for reasoning tasks.
When applied to fine-tuning with GSM8K, this method achieved a 5% improvement
in accuracy over standard supervised fine-tuning with only a few lines of code modified and
no additional labeling effort. Furthermore, it is complementary to existing
methods. When integrated with related data augmentation methods, it leads to an
average improvement of 3% in GSM8K accuracy and 1% in
MATH accuracy across five datasets of various quality and size, as well as two
base models. We further investigate the mechanisms behind this improvement
through case studies and quantitative analysis, suggesting that our approach
may provide superior support for the model in capturing long-distance
dependencies, especially those related to questions. This enhancement could
deepen understanding of premises in questions and prior steps. Our code is
available on GitHub.
| 2,024 | Computation and Language |
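A minimal sketch of the masking idea: randomly replace a fraction of tokens inside the chain-of-thought before computing the usual supervised fine-tuning loss. The GPT-2 tokenizer, the 20% mask ratio, and the toy example are assumptions, not the paper's exact configuration; in practice the loss is still computed against the original, unmasked targets.

```python
# Randomly mask partial reasoning steps inside a chain-of-thought example.
import random
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.add_special_tokens({"mask_token": "<mask>"})  # GPT-2 has no mask token by default

question = "Q: Tom has 3 apples and buys 2 more. How many apples does he have?\n"
chain_of_thought = "A: Tom starts with 3 apples. He buys 2 more, so 3 + 2 = 5."

def mask_reasoning(cot: str, mask_ratio: float = 0.2) -> str:
    ids = tokenizer.encode(cot)
    masked = [tokenizer.mask_token_id if random.random() < mask_ratio else t for t in ids]
    return tokenizer.decode(masked)

random.seed(0)
training_input = question + mask_reasoning(chain_of_thought)
print(training_input)  # the fine-tuning labels remain the original, unmasked tokens
```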
Not all Layers of LLMs are Necessary during Inference | The inference phase of Large Language Models (LLMs) is very expensive. An
ideal inference stage of LLMs could utilize fewer computational resources while
still maintaining its capabilities (e.g., generalization and in-context
learning ability). In this paper, we try to answer the question, "During LLM
inference, can we use shallow layers for easy instances and deep layers for
hard ones?" To answer this question, we first indicate that Not all Layers are
Necessary during Inference by statistically analyzing the activated layers
across tasks. Then, we propose a simple algorithm named AdaInfer to determine
the inference termination moment based on the input instance adaptively. More
importantly, AdaInfer does not alter LLM parameters and maintains
generalizability across tasks. Experiments on well-known LLMs (i.e., Llama2
series and OPT) show that AdaInfer saves an average of 14.8% of computational
resources, even up to 50% on sentiment tasks, while maintaining comparable
performance. Additionally, this method is orthogonal to other model
acceleration techniques, potentially boosting inference efficiency further.
| 2,024 | Computation and Language |
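The early-exit intuition can be sketched with a logit-lens-style loop that stops as soon as an intermediate layer's prediction is confident enough. The GPT-2 checkpoint, the confidence threshold, and the reuse of the final LM head at every depth are simplifying assumptions; AdaInfer's actual stopping criterion relies on statistical features of the activations rather than this naive check.

```python
# Illustrative early-exit sketch: decode from intermediate layers and stop
# once the next-token prediction is "confident enough".
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True).eval()

prompt = "The capital of France is"
enc = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**enc).hidden_states  # embeddings plus one tensor per layer
    for depth, h in enumerate(hidden_states[1:], start=1):
        # Assumes the GPT-2 architecture: final layer norm + shared LM head.
        logits = model.lm_head(model.transformer.ln_f(h[:, -1, :]))
        probs = torch.softmax(logits, dim=-1)
        confidence, token_id = probs.max(dim=-1)
        if confidence.item() > 0.3:  # exit as soon as the prediction is confident enough
            print(f"exited at layer {depth}: {tokenizer.decode(token_id)!r} "
                  f"(p={confidence.item():.2f})")
            break
    else:
        print("no layer reached the threshold; fall back to the full forward pass")
```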
PHAnToM: Personality Has An Effect on Theory-of-Mind Reasoning in Large
Language Models | Recent advances in large language models (LLMs) demonstrate that their
capabilities are comparable, or even superior, to humans in many tasks in
natural language processing. Despite this progress, LLMs are still inadequate
at social-cognitive reasoning, which humans are naturally good at. Drawing
inspiration from psychological research on the links between certain
personality traits and Theory-of-Mind (ToM) reasoning, and from prompt
engineering research on the hyper-sensitivity of prompts in affecting LLMs
capabilities, this study investigates how inducing personalities in LLMs using
prompts affects their ToM reasoning capabilities. Our findings show that
certain induced personalities can significantly affect the LLMs' reasoning
capabilities in three different ToM tasks. In particular, traits from the Dark
Triad have a larger variable effect on LLMs like GPT-3.5, Llama 2, and Mistral
across the different ToM tasks. We find that LLMs that exhibit a higher
variance across personality prompts in ToM also tend to be more controllable
in personality tests: personality traits in LLMs like GPT-3.5, Llama 2 and
Mistral can be controllably adjusted through our personality prompts. In
today's landscape where role-play is a common strategy when using LLMs, our
research highlights the need for caution, as models that adopt specific
personas with personalities potentially also alter their reasoning abilities in
an unexpected manner.
| 2,024 | Computation and Language |
Birbal: An efficient 7B instruct-model fine-tuned with curated datasets | LLMOps incur significant costs due to hardware requirements, hindering their
widespread accessibility. Additionally, a lack of transparency in model
training methods and data contributes to the majority of models being
non-reproducible. To tackle these challenges, the LLM Efficiency Challenge was
introduced at NeurIPS Workshop, aiming to adapt foundation models on a diverse
set of tasks via fine-tuning on a single GPU (RTX 4090 or A100 with 40GB)
within a 24-hour timeframe. In this system description paper, we introduce
Birbal, our Mistral-7B based winning model, fine-tuned on a single RTX 4090 for
16 hours. Birbal's success lies in curating high-quality instructions covering
diverse tasks, resulting in a 35% performance improvement over second-best
Qwen-14B based submission.
| 2,024 | Computation and Language |
Subjective $\textit{Isms}$? On the Danger of Conflating Hate and Offence
in Abusive Language Detection | Natural language processing research has begun to embrace the notion of
annotator subjectivity, motivated by variations in labelling. This approach
understands each annotator's view as valid, which can be highly suitable for
tasks that embed subjectivity, e.g., sentiment analysis. However, this
construction may be inappropriate for tasks such as hate speech detection, as
it affords equal validity to all positions on e.g., sexism or racism. We argue
that the conflation of hate and offence can invalidate findings on hate speech,
and call for future work to be situated in theory, disentangling hate from its
orthogonal concept, offence.
| 2,024 | Computation and Language |
FENICE: Factuality Evaluation of summarization based on Natural language
Inference and Claim Extraction | Recent advancements in text summarization, particularly with the advent of
Large Language Models (LLMs), have shown remarkable performance. However, a
notable challenge persists as a substantial number of automatically-generated
summaries exhibit factual inconsistencies, such as hallucinations. In response
to this issue, various approaches for the evaluation of consistency for
summarization have emerged. Yet, these newly-introduced metrics face several
limitations, including lack of interpretability, focus on short document
summaries (e.g., news articles), and computational impracticality, especially
for LLM-based metrics. To address these shortcomings, we propose Factuality
Evaluation of summarization based on Natural language Inference and Claim
Extraction (FENICE), a more interpretable and efficient factuality-oriented
metric. FENICE leverages an NLI-based alignment between information in the
source document and a set of atomic facts, referred to as claims, extracted
from the summary. Our metric sets a new state of the art on AGGREFACT, the
de-facto benchmark for factuality evaluation. Moreover, we extend our
evaluation to a more challenging setting by conducting a human annotation
process of long-form summarization.
| 2,024 | Computation and Language |
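The core alignment step, scoring each atomic claim from a summary against the source with an NLI model, can be sketched as below. The roberta-large-mnli checkpoint and the hand-written claims are illustrative assumptions; FENICE's own claim extraction and alignment procedure is more elaborate.

```python
# Score summary claims for entailment against the source document with NLI.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli").eval()

source = ("The city council approved the new budget on Tuesday, allocating an "
          "extra 2 million dollars to public transport.")
claims = [
    "The council approved the budget on Tuesday.",     # should be entailed
    "The budget cuts funding for public transport.",   # should be contradicted
]

for claim in claims:
    enc = tokenizer(source, claim, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(model(**enc).logits, dim=-1)[0]
    scores = {model.config.id2label[i]: round(p.item(), 2) for i, p in enumerate(probs)}
    print(claim, "->", scores)
```

Aggregating such per-claim entailment scores over the whole summary yields an interpretable factuality score, since each low-scoring claim points to a specific suspect statement.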
RIFF: Learning to Rephrase Inputs for Few-shot Fine-tuning of Language
Models | Pre-trained Language Models (PLMs) can be accurately fine-tuned for
downstream text processing tasks. Recently, researchers have introduced several
parameter-efficient fine-tuning methods that optimize input prompts or adjust a
small number of model parameters (e.g., LoRA). In this study, we explore the
impact of altering the input text of the original task in conjunction with
parameter-efficient fine-tuning methods. To most effectively rewrite the input
text, we train a few-shot paraphrase model with a Maximum-Marginal Likelihood
objective. Using six few-shot text classification datasets, we show that
enriching data with paraphrases at train and test time enhances the performance
beyond what can be achieved with parameter-efficient fine-tuning alone.
| 2,024 | Computation and Language |
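The data-enrichment idea, adding paraphrases of each input at train (and test) time while keeping the label fixed, can be sketched with an off-the-shelf paraphraser. The Pegasus checkpoint and the number of paraphrases per example are assumptions; the paper instead trains its own few-shot paraphraser with a Maximum-Marginal Likelihood objective.

```python
# Enrich a few-shot classification set with paraphrases of each input.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "tuner007/pegasus_paraphrase"  # illustrative off-the-shelf paraphraser
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name).eval()

def paraphrase(text: str, n: int = 2) -> list[str]:
    enc = tokenizer(text, return_tensors="pt", truncation=True)
    out = model.generate(**enc, num_beams=5, num_return_sequences=n, max_length=60)
    return tokenizer.batch_decode(out, skip_special_tokens=True)

train_examples = [("the film was an absolute delight", "positive")]
augmented = [(p, label) for text, label in train_examples
             for p in [text] + paraphrase(text)]
print(augmented)  # original plus paraphrased inputs share the same label
```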
Emotion Granularity from Text: An Aggregate-Level Indicator of Mental
Health | We are united in how emotions are central to shaping our experiences; and
yet, individuals differ greatly in how we each identify, categorize, and
express emotions. In psychology, variation in the ability of individuals to
differentiate between emotion concepts is called emotion granularity
(determined through self-reports of one's emotions). High emotion granularity
has been linked with better mental and physical health; whereas low emotion
granularity has been linked with maladaptive emotion regulation strategies and
poor health outcomes. In this work, we propose computational measures of
emotion granularity derived from temporally-ordered speaker utterances in
social media (in lieu of self-reports that suffer from various biases). We then
investigate the effectiveness of such text-derived measures of emotion
granularity in functioning as markers of various mental health conditions
(MHCs). We establish baseline measures of emotion granularity derived from
textual utterances, and show that, at an aggregate level, emotion granularities
are significantly lower for people self-reporting as having an MHC than for the
control population. This paves the way towards a better understanding of the
MHCs, and specifically the role emotions play in our well-being.
| 2,024 | Computation and Language |
Detection of Non-recorded Word Senses in English and Swedish | This study addresses the task of Unknown Sense Detection in English and
Swedish. The primary objective of this task is to determine whether the meaning
of a particular word usage is documented in a dictionary or not. For this
purpose, sense entries are compared with word usages from modern and historical
corpora using a pre-trained Word-in-Context embedder that allows us to model
this task in a few-shot scenario. Additionally, we use human annotations to
adapt and evaluate our models. Compared to a random sample from a corpus, our
model is able to considerably increase the detected number of word usages with
non-recorded senses.
| 2,024 | Computation and Language |
Key-Point-Driven Data Synthesis with its Enhancement on Mathematical
Reasoning | Large language models (LLMs) have shown great potential in complex reasoning
tasks, yet their performance is often hampered by the scarcity of high-quality,
reasoning-focused training datasets. Addressing this challenge, we propose
Key-Point-Driven Data Synthesis (KPDDS), a novel data synthesis framework that
synthesizes question-answer pairs by leveraging key points and exemplar pairs
from authentic data sources. KPDDS ensures the generation of novel questions
with rigorous quality control and substantial scalability. As a result, we
present KPMath, the most extensive synthetic dataset tailored for mathematical
reasoning to date, comprising over one million question-answer pairs. Utilizing
KPMath and augmenting it with additional reasoning-intensive corpora, we create
the comprehensive KPMath-Plus dataset. Fine-tuning the Mistral-7B model on
KPMath-Plus yields a zero-shot PASS@1 accuracy of 39.3% on the MATH test set, a
performance that not only outpaces other finetuned 7B models but also exceeds
that of certain 34B models. Our ablation studies further confirm the
substantial enhancement in mathematical reasoning across various subtopics,
marking a significant stride in LLMs' reasoning capabilities.
| 2,024 | Computation and Language |
Human Evaluation of English--Irish Transformer-Based NMT | In this study, a human evaluation is carried out on how hyperparameter
settings impact the quality of Transformer-based Neural Machine Translation
(NMT) for the low-resourced English--Irish pair. SentencePiece models using
both Byte Pair Encoding (BPE) and unigram approaches were appraised. Variations
in model architectures included modifying the number of layers, evaluating the
optimal number of heads for attention and testing various regularisation
techniques. The greatest performance improvement was recorded for a
Transformer-optimized model with a 16k BPE subword model. Compared with a
baseline Recurrent Neural Network (RNN) model, a Transformer-optimized model
demonstrated a BLEU score improvement of 7.8 points. When benchmarked against
Google Translate, our translation engines demonstrated significant
improvements. Furthermore, a quantitative fine-grained manual evaluation was
conducted which compared the performance of machine translation systems. Using
the Multidimensional Quality Metrics (MQM) error taxonomy, a human evaluation
of the error types generated by an RNN-based system and a Transformer-based
system was explored. Our findings show the best-performing Transformer system
significantly reduces both accuracy and fluency errors when compared with an
RNN-based model.
| 2,022 | Computation and Language |
adaptNMT: an open-source, language-agnostic development environment for
Neural Machine Translation | adaptNMT streamlines all processes involved in the development and deployment
of RNN and Transformer neural translation models. As an open-source
application, it is designed for both technical and non-technical users who work
in the field of machine translation. Built upon the widely-adopted OpenNMT
ecosystem, the application is particularly useful for new entrants to the field
since the setup of the development environment and creation of train,
validation and test splits is greatly simplified. Graphing, embedded within the
application, illustrates the progress of model training, and SentencePiece is
used for creating subword segmentation models. Hyperparameter customization is
facilitated through an intuitive user interface, and a single-click model
development approach has been implemented. Models developed by adaptNMT can be
evaluated using a range of metrics, and deployed as a translation service
within the application. To support eco-friendly research in the NLP space, a
green report also flags the power consumption and kgCO$_{2}$ emissions
generated during model development. The application is freely available.
| 2,023 | Computation and Language |
adaptMLLM: Fine-Tuning Multilingual Language Models on Low-Resource
Languages with Integrated LLM Playgrounds | The advent of Multilingual Language Models (MLLMs) and Large Language Models
has spawned innovation in many areas of natural language processing. Despite
the exciting potential of this technology, its impact on developing
high-quality Machine Translation (MT) outputs for low-resource languages
remains relatively under-explored. Furthermore, an open-source application,
dedicated to both fine-tuning MLLMs and managing the complete MT workflow for
low-resource languages, remains unavailable. We aim to address these
imbalances through the development of adaptMLLM, which streamlines all
processes involved in the fine-tuning of MLLMs for MT. This open-source
application is tailored for developers, translators, and users who are engaged
in MT. An intuitive interface allows for easy customisation of hyperparameters,
and the application offers a range of metrics for model evaluation and the
capability to deploy models as a translation service directly within the
application. As a multilingual tool, we used adaptMLLM to fine-tune models for
two low-resource language pairs: English to Irish (EN$\leftrightarrow$GA) and
English to Marathi (EN$\leftrightarrow$MR). Compared with baselines from the
LoResMT2021 Shared Task, the adaptMLLM system demonstrated significant
improvements. In the EN$\rightarrow$GA direction, an improvement of 5.2 BLEU
points was observed and an increase of 40.5 BLEU points was recorded in the
GA$\rightarrow$EN direction. Significant improvements in the translation
performance of the EN$\leftrightarrow$MR pair were also observed, notably in the
MR$\rightarrow$EN direction, with an increase of 21.3 BLEU points. Finally, a
fine-grained human evaluation of the MLLM output on the EN$\rightarrow$GA pair
was conducted using the Multidimensional Quality Metrics and Scalar Quality
Metrics error taxonomies. The application and models are freely available.
| 2,023 | Computation and Language |
How does Architecture Influence the Base Capabilities of Pre-trained
Language Models? A Case Study Based on FFN-Wider Transformer Models | Pre-trained language models have been proven to possess strong base
capabilities, which not only excel in in-distribution language modeling but
also show powerful abilities in out-of-distribution language modeling, transfer
learning and few-shot learning. Unlike existing work focusing on the influence
of scale on base capabilities, our work examines the influence of architecture
on these base capabilities. Specifically, our concern is: How does architecture influence the
base capabilities of pre-trained language models? In this work, we attempt to
explain and reverse the decline in base capabilities caused by the architecture
of FFN-Wider Transformers, seeking to provide some insights. Through analysis,
we found that the contribution ratio of Multi-Head Attention (a combination
function) to pre-trained language modeling is a key factor affecting base
capabilities. FFN-Wider Transformers reduce the contribution ratio of this
combination function, leading to a decline in base capabilities. We confirmed
this by experiments and proposed Combination Enhancement Architecture (CEA) to
address the decline in base capabilities of such models. Significantly, we
extended our explanation and CEA to Mixture of Experts (MoE) architecture
Transformers, which also alleviated their decline in base capabilities to some
extent, proving our work can offer useful guidance for architecture analysis,
architecture improvement and architecture design.
| 2,024 | Computation and Language |
Views Are My Own, But Also Yours: Benchmarking Theory of Mind using
Common Ground | Evaluating the theory of mind (ToM) capabilities of language models (LMs) has
recently received much attention. However, many existing benchmarks rely on
synthetic data which risks misaligning the resulting experiments with human
behavior. We introduce the first ToM dataset based on naturally occurring
spoken dialogs, Common-ToM, and show that LMs struggle to demonstrate ToM. We
then show that integrating a simple, explicit representation of beliefs
improves LM performance on Common-ToM.
| 2,024 | Computation and Language |
OffLanDat: A Community Based Implicit Offensive Language Dataset
Generated by Large Language Model Through Prompt Engineering | The widespread presence of offensive languages on social media has resulted
in adverse effects on societal well-being. As a result, it has become very
important to address this issue with high priority. Offensive languages exist
in both explicit and implicit forms, with the latter being more challenging to
detect. Current research in this domain encounters several challenges. Firstly,
the existing datasets primarily rely on the collection of texts containing
explicit offensive keywords, making it challenging to capture implicitly
offensive contents that are devoid of these keywords. Secondly, usual
methodologies tend to focus solely on textual analysis, neglecting the valuable
insights that community information can provide. In this research paper, we
introduce a novel dataset OffLanDat, a community based implicit offensive
language dataset generated by ChatGPT containing data for 38 different target
groups. Despite limitations in generating offensive texts using ChatGPT due to
ethical constraints, we present a prompt-based approach that effectively
generates implicit offensive languages. To ensure data quality, we evaluate our
data with human annotators. Additionally, we employ a prompt-based Zero-Shot method with
ChatGPT and compare the detection results between human annotation and ChatGPT
annotation. We utilize existing state-of-the-art models to see how effective
they are in detecting such languages. We will make our code and dataset public
for other researchers.
| 2,024 | Computation and Language |
The Emotion Dynamics of Literary Novels | Stories are rich in the emotions they exhibit in their narratives and evoke
in the readers. The emotional journeys of the various characters within a story
are central to their appeal. Computational analysis of the emotions of novels,
however, has rarely examined the variation in the emotional trajectories of the
different characters within them, instead considering the entire novel to
represent a single story arc. In this work, we use character dialogue to
distinguish between the emotion arcs of the narration and the various
characters. We analyze the emotion arcs of the various characters in a dataset
of English literary novels using the framework of Utterance Emotion Dynamics.
Our findings show that the narration and the dialogue largely express disparate
emotions through the course of a novel, and that the commonalities or
differences in the emotional arcs of stories are more accurately captured by
those associated with individual characters.
| 2,024 | Computation and Language |
Choose Your Own Adventure: Interactive E-Books to Improve Word Knowledge
and Comprehension Skills | The purpose of this feasibility study was to examine the potential impact of
reading digital interactive e-books on essential skills that support reading
comprehension with third-fifth grade students. Students read two e-Books that
taught word learning and comprehension monitoring strategies in the service of
learning difficult vocabulary and targeted science concepts about hurricanes.
We investigated whether specific comprehension strategies including word
learning and strategies that supported general reading comprehension,
summarization, and question generation, show promise of effectiveness in
building vocabulary knowledge and comprehension skills in the e-Books. Students
were assigned to read one of three versions of each of the e-Books, each
version implemented one strategy. The books employed a choose-your-adventure
format with embedded comprehension questions that provided students with
immediate feedback on their responses. Paired samples t-tests were run to
examine pre-to-post differences in learning the targeted vocabulary and science
concepts taught in both e-Books. For both e-Books, students demonstrated
significant gains in word learning and on the targeted hurricane concepts.
Additionally, Hierarchical Linear Modeling (HLM) revealed that no one strategy
was more associated with larger gains than the other. Performance on the
embedded questions in the books was also associated with greater posttest
outcomes for both e-Books. This work discusses important considerations for
implementation and future development of e-books that can enhance student
engagement and improve reading comprehension.
| 2,024 | Computation and Language |
Trial and Error: Exploration-Based Trajectory Optimization for LLM
Agents | Large Language Models (LLMs) have become integral components in various
autonomous agent systems. In this study, we present an exploration-based
trajectory optimization approach, referred to as ETO. This learning method is
designed to enhance the performance of open LLM agents. Contrary to previous
studies that exclusively train on successful expert trajectories, our method
allows agents to learn from their exploration failures. This leads to improved
performance through an iterative optimization framework. During the exploration
phase, the agent interacts with the environment while completing given tasks,
gathering failure trajectories to create contrastive trajectory pairs. In the
subsequent training phase, the agent utilizes these trajectory preference pairs
to update its policy using contrastive learning methods like DPO. This
iterative cycle of exploration and training fosters continued improvement in
the agents. Our experiments on three complex tasks demonstrate that ETO
consistently surpasses baseline performance by a large margin. Furthermore, an
examination of task-solving efficiency and potential in scenarios lacking
expert trajectory underscores the effectiveness of our approach.
| 2,024 | Computation and Language |
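The exploration-then-training loop hinges on turning successes and failures into contrastive trajectory pairs that a DPO-style trainer can consume. The record format and pairing rule below are illustrative assumptions, not ETO's exact pipeline.

```python
# Turn exploration outcomes into contrastive trajectory pairs for DPO-style training.
from dataclasses import dataclass

@dataclass
class Trajectory:
    task: str
    actions: list[str]
    success: bool

def build_preference_pairs(trajectories: list[Trajectory]) -> list[dict]:
    """Pair a successful trajectory with a failed exploration trajectory for the same task."""
    pairs = []
    for task in {t.task for t in trajectories}:
        wins = [t for t in trajectories if t.task == task and t.success]
        losses = [t for t in trajectories if t.task == task and not t.success]
        for w in wins:
            for l in losses:
                pairs.append({
                    "prompt": task,
                    "chosen": "\n".join(w.actions),    # preferred trajectory
                    "rejected": "\n".join(l.actions),  # dispreferred trajectory
                })
    return pairs

trajs = [
    Trajectory("buy milk online", ["search milk", "add to cart", "checkout"], True),
    Trajectory("buy milk online", ["search cheese", "give up"], False),
]
print(build_preference_pairs(trajs))  # such records can feed a preference trainer, e.g. trl's DPOTrainer
```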
A Tutorial on the Pretrain-Finetune Paradigm for Natural Language
Processing | The pretrain-finetune paradigm represents a transformative approach in
natural language processing (NLP). This paradigm distinguishes itself through
the use of large pretrained language models, demonstrating remarkable
efficiency in finetuning tasks, even with limited training data. This
efficiency is especially beneficial for research in social sciences, where the
number of annotated samples is often quite limited. Our tutorial offers a
comprehensive introduction to the pretrain-finetune paradigm. We first delve
into the fundamental concepts of pretraining and finetuning, followed by
practical exercises using real-world applications. We demonstrate the
application of the paradigm across various tasks, including multi-class
classification and regression. Emphasizing its efficacy and user-friendliness,
the tutorial aims to encourage broader adoption of this paradigm. To this end,
we have provided open access to all our code and datasets. The tutorial is
particularly valuable for quantitative researchers in psychology, offering them
an insightful guide into this innovative approach.
| 2,024 | Computation and Language |
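A compact sketch of the finetuning half of the paradigm, assuming a DistilBERT checkpoint, a toy sentiment dataset, and default Trainer settings; none of these choices come from the tutorial itself.

```python
# Minimal pretrain-finetune sketch: adapt a pretrained encoder to a small
# labeled classification dataset with the Hugging Face Trainer.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import Dataset

texts = ["I loved this movie", "Terrible plot and acting"] * 16
labels = [1, 0] * 16

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased",
                                                           num_labels=2)

ds = Dataset.from_dict({"text": texts, "label": labels})
ds = ds.map(lambda x: tokenizer(x["text"], truncation=True,
                                padding="max_length", max_length=32))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1,
                           per_device_train_batch_size=8, logging_steps=4),
    train_dataset=ds,
)
trainer.train()  # a few dozen annotated examples suffice to illustrate the workflow
```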
SPUQ: Perturbation-Based Uncertainty Quantification for Large Language
Models | In recent years, large language models (LLMs) have become increasingly
prevalent, offering remarkable text generation capabilities. However, a
pressing challenge is their tendency to make confidently wrong predictions,
highlighting the critical need for uncertainty quantification (UQ) in LLMs.
While previous works have mainly focused on addressing aleatoric uncertainty,
the full spectrum of uncertainties, including epistemic, remains inadequately
explored. Motivated by this gap, we introduce a novel UQ method, sampling with
perturbation for UQ (SPUQ), designed to tackle both aleatoric and epistemic
uncertainties. The method entails generating a set of perturbations for LLM
inputs, sampling outputs for each perturbation, and incorporating an
aggregation module that generalizes the sampling uncertainty approach for text
generation tasks. Through extensive experiments on various datasets, we
investigated different perturbation and aggregation techniques. Our findings
show a substantial improvement in model uncertainty calibration, with a
reduction in Expected Calibration Error (ECE) by 50\% on average. Our findings
suggest that our proposed UQ method offers promising steps toward enhancing the
reliability and trustworthiness of LLMs.
| 2,024 | Computation and Language |
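The sampling-with-perturbation idea can be sketched as follows: perturb the prompt (here via hand-written paraphrases), sample one answer per perturbation, and use agreement across samples as a confidence proxy. The model name, the paraphrases, and the exact-match aggregation are illustrative assumptions and are much simpler than SPUQ's aggregation module.

```python
# Perturb the input, sample an answer per perturbation, and aggregate agreement.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    out = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
    )
    return out.choices[0].message.content.strip()

perturbations = [
    "What is the capital of Australia? Answer with one word.",
    "Name the capital city of Australia in a single word.",
    "In one word, which city is Australia's capital?",
]

answers = [ask(p) for p in perturbations]
most_common, count = Counter(answers).most_common(1)[0]
confidence = count / len(answers)  # agreement across perturbed inputs
print(answers, "-> answer:", most_common, "confidence:", confidence)
```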
Balancing Enhancement, Harmlessness, and General Capabilities: Enhancing
Conversational LLMs with Direct RLHF | In recent advancements in Conversational Large Language Models (LLMs), a
concerning trend has emerged, showing that many new base LLMs experience a
knowledge reduction in their foundational capabilities following Supervised
Fine-Tuning (SFT). This process often leads to issues such as forgetting or a
decrease in the base model's abilities. Moreover, fine-tuned models struggle to
align with user preferences, inadvertently increasing the generation of toxic
outputs when specifically prompted. To overcome these challenges, we adopted an
innovative approach by completely bypassing SFT and directly implementing
Harmless Reinforcement Learning from Human Feedback (RLHF). Our method not only
preserves the base model's general capabilities but also significantly enhances
its conversational abilities, while notably reducing the generation of toxic
outputs. Our approach holds significant implications for fields that demand a
nuanced understanding and generation of responses, such as customer service. We
applied this methodology to Mistral, the most popular base model, thereby
creating Mistral-Plus. Our validation across 11 general tasks demonstrates that
Mistral-Plus outperforms similarly sized open-source base models and their
corresponding instruct versions. Importantly, the conversational abilities of
Mistral-Plus were significantly improved, indicating a substantial advancement
over traditional SFT models in both safety and user preference alignment.
| 2,024 | Computation and Language |
DACO: Towards Application-Driven and Comprehensive Data Analysis via
Code Generation | Data analysis is a crucial analytical process to generate in-depth studies
and conclusive insights to comprehensively answer a given user query for
tabular data. In this work, we aim to propose new resources and benchmarks to
inspire future research on this crucial yet challenging and under-explored
task. However, collecting data analysis annotations curated by experts can be
prohibitively expensive. We propose to automatically generate high-quality
answer annotations leveraging the code-generation capabilities of LLMs with a
multi-turn prompting technique. We construct the DACO dataset, containing (1)
440 databases (of tabular data) collected from real-world scenarios, (2) ~2k
query-answer pairs that can serve as weak supervision for model training, and
(3) a concentrated but high-quality test set with human refined annotations
that serves as our main evaluation benchmark. We train a 6B supervised
fine-tuning (SFT) model on DACO dataset, and find that the SFT model learns
reasonable data analysis capabilities. To further align the models with human
preference, we use reinforcement learning to encourage generating analysis
perceived by human as helpful, and design a set of dense rewards to propagate
the sparse human preference reward to intermediate code generation steps. Our
DACO-RL algorithm is evaluated by human annotators to produce more helpful
answers than SFT model in 57.72% cases, validating the effectiveness of our
proposed algorithm. Data and code are released at
https://github.com/shirley-wu/daco
| 2,024 | Computation and Language |
Updating the Minimum Information about CLinical Artificial Intelligence
(MI-CLAIM) checklist for generative modeling research | Recent advances in generative models, including large language models (LLMs),
vision language models (VLMs), and diffusion models, have accelerated the field
of natural language and image processing in medicine and marked a significant
paradigm shift in how biomedical models can be developed and deployed. While
these models are highly adaptable to new tasks, scaling and evaluating their
usage presents new challenges not addressed in previous frameworks. In
particular, the ability of these models to produce useful outputs with little
to no specialized training data ("zero-" or "few-shot" approaches), as well as
the open-ended nature of their outputs, necessitate the development of updated
guidelines in using and evaluating these models. In response to gaps in
standards and best practices for the development of clinical AI tools
identified by US Executive Order 14110 and several emerging national networks
for clinical AI evaluation, we begin to formalize some of these guidelines by
building on the "Minimum information about clinical artificial intelligence
modeling" (MI-CLAIM) checklist. The MI-CLAIM checklist, originally developed in
2020, provided a set of six steps with guidelines on the minimum information
necessary to encourage transparent, reproducible research for artificial
intelligence (AI) in medicine. Here, we propose modifications to the original
checklist that highlight differences in training, evaluation, interpretability,
and reproducibility of generative models compared to traditional AI models for
clinical research. This updated checklist also seeks to clarify cohort
selection reporting and adds additional items on alignment with ethical
standards.
| 2,024 | Computation and Language |
Eliciting Better Multilingual Structured Reasoning from LLMs through
Code | The development of large language models (LLMs) has shown progress on reasoning,
though studies have been limited to English or simple reasoning tasks. We thus
introduce a multilingual structured reasoning and explanation dataset, termed
xSTREET, that covers four tasks across six languages. xSTREET exposes a gap in
base LLM performance between English and non-English reasoning tasks. We then
propose two methods to remedy this gap, building on the insight that LLMs
trained on code are better reasoners. First, at training time, we augment a
code dataset with multi-lingual comments using machine translation while
keeping program code as-is. Second, at inference time, we bridge the gap
between training and inference by employing a prompt structure that
incorporates step-by-step code primitives to derive new facts and find a
solution. Our methods show improved multilingual performance on xSTREET, most
notably on the scientific commonsense reasoning subtask. Furthermore, the
models show no regression on non-reasoning tasks, thus showing our techniques
maintain general-purpose abilities.
| 2,024 | Computation and Language |
Improving Event Definition Following For Zero-Shot Event Detection | Existing approaches on zero-shot event detection usually train models on
datasets annotated with known event types, and prompt them with unseen event
definitions. These approaches yield sporadic successes, yet generally fall
short of expectations. In this work, we aim to improve zero-shot event
detection by training models to better follow event definitions. We hypothesize
that a diverse set of event types and definitions are the key for models to
learn to follow event definitions while existing event extraction datasets
focus on annotating many high-quality examples for a few event types. To verify
our hypothesis, we construct an automatically generated Diverse Event
Definition (DivED) dataset and conduct comparative studies. Our experiments
reveal that a large number of event types (200) and diverse event definitions
can significantly boost event extraction performance; on the other hand, the
performance does not scale with over ten examples per event type. Beyond
scaling, we incorporate event ontology information and hard-negative samples
during training, further boosting the performance. Based on these findings, we
fine-tuned a LLaMA-2-7B model on our DivED dataset, yielding performance that
surpasses SOTA large language models like GPT-3.5 across three open benchmarks
on zero-shot event detection.
| 2,024 | Computation and Language |
Exploring the Limitations of Large Language Models in Compositional
Relation Reasoning | We present a comprehensive evaluation of large language models' (LLMs) ability
to reason about composition relations through a benchmark encompassing 1,500
test cases in English, designed to cover six distinct types of composition
relations: Positional, Comparative, Personal, Mathematical, Identity, and
Other. Acknowledging the significance of multilingual capabilities, we expanded
our assessment to include translations of these cases into Chinese, Japanese,
French, and Korean. Our Multilingual Composition Relation (MCR) benchmark aims
at investigating the robustness and adaptability of LLMs in handling
composition relation reasoning across diverse linguistic contexts.
| 2,024 | Computation and Language |
FinReport: Explainable Stock Earnings Forecasting via News Factor
Analyzing Model | The task of stock earnings forecasting has received considerable attention
due to the demand from investors in real-world scenarios. However, compared with
financial institutions, it is not easy for ordinary investors to mine factors
and analyze news. On the other hand, although large language models in the
financial field can serve users in the form of dialogue robots, it still
requires users to have financial knowledge to ask reasonable questions. To
serve the user experience, we aim to build an automatic system, FinReport, for
ordinary investors to collect information, analyze it, and generate reports
after summarizing.
Specifically, our FinReport is based on financial news announcements and a
multi-factor model to ensure the professionalism of the report. The FinReport
consists of three modules: a news factorization module, a return forecasting
module, and a risk assessment module. The news factorization module involves
understanding news information and combining it with stock factors, the return
forecasting module aims to analyze the impact of news on market sentiment, and
the risk assessment module is adopted to control investment risk. Extensive
experiments on real-world datasets have well verified the effectiveness and
explainability of our proposed FinReport. Our codes and datasets are available
at https://github.com/frinkleko/FinReport.
| 2,024 | Computation and Language |
Revisiting Meta-evaluation for Grammatical Error Correction | Metrics are the foundation for automatic evaluation in grammatical error
correction (GEC), with their evaluation of the metrics (meta-evaluation)
relying on their correlation with human judgments. However, conventional
meta-evaluations in English GEC encounter several challenges including biases
caused by inconsistencies in evaluation granularity, and an outdated setup
using classical systems. These problems can lead to misinterpretation of
metrics and potentially hinder the applicability of GEC techniques. To address
these issues, this paper proposes SEEDA, a new dataset for GEC meta-evaluation.
SEEDA consists of corrections with human ratings along two different
granularities: edit-based and sentence-based, covering 12 state-of-the-art
systems including large language models (LLMs), and two human corrections with
different focuses. The improved correlations obtained by aligning the
granularity in the sentence-level meta-evaluation suggest that edit-based
metrics may have been underestimated in existing studies. Furthermore,
correlations of most metrics decrease when changing from classical to neural
systems, indicating that traditional metrics are relatively poor at evaluating
fluently corrected sentences with many edits.
| 2,024 | Computation and Language |
InjecAgent: Benchmarking Indirect Prompt Injections in Tool-Integrated
Large Language Model Agents | Recent work has embodied LLMs as agents, allowing them to access tools,
perform actions, and interact with external content (e.g., emails or websites).
However, external content introduces the risk of indirect prompt injection
(IPI) attacks, where malicious instructions are embedded within the content
processed by LLMs, aiming to manipulate these agents into executing detrimental
actions against users. Given the potentially severe consequences of such
attacks, establishing benchmarks to assess and mitigate these risks is
imperative.
In this work, we introduce InjecAgent, a benchmark designed to assess the
vulnerability of tool-integrated LLM agents to IPI attacks. InjecAgent
comprises 1,054 test cases covering 17 different user tools and 62 attacker
tools. We categorize attack intentions into two primary types: direct harm to
users and exfiltration of private data. We evaluate 30 different LLM agents and
show that agents are vulnerable to IPI attacks, with ReAct-prompted GPT-4
vulnerable to attacks 24% of the time. Further investigation into an enhanced
setting, where the attacker instructions are reinforced with a hacking prompt,
shows additional increases in success rates, nearly doubling the attack success
rate on the ReAct-prompted GPT-4. Our findings raise questions about the
widespread deployment of LLM Agents. Our benchmark is available at
https://github.com/uiuc-kang-lab/InjecAgent.
| 2,024 | Computation and Language |
Causal Walk: Debiasing Multi-Hop Fact Verification with Front-Door
Adjustment | Conventional multi-hop fact verification models are prone to rely on spurious
correlations from the annotation artifacts, leading to an obvious performance
decline on unbiased datasets. Among the various debiasing works, the causal
inference-based methods become popular by performing theoretically guaranteed
debiasing such as causal intervention or counterfactual reasoning. However,
existing causal inference-based debiasing methods, which mainly formulate fact
verification as a single-hop reasoning task to tackle shallow bias patterns,
cannot deal with the complicated bias patterns hidden in multiple hops of
evidence. To address the challenge, we propose Causal Walk, a novel method for
debiasing multi-hop fact verification from a causal perspective with front-door
adjustment. Specifically, in the structural causal model, the reasoning path
between the treatment (the input claim-evidence graph) and the outcome (the
veracity label) is introduced as the mediator to block the confounder. With the
front-door adjustment, the causal effect between the treatment and the outcome
is decomposed into the causal effect between the treatment and the mediator,
which is estimated by applying the idea of random walk, and the causal effect
between the mediator and the outcome, which is estimated with normalized
weighted geometric mean approximation. To investigate the effectiveness of the
proposed method, an adversarial multi-hop fact verification dataset and a
symmetric multi-hop fact verification dataset are proposed with the help of the
large language model. Experimental results show that Causal Walk outperforms
some previous debiasing methods on both existing datasets and the newly
constructed datasets. Code and data will be released at
https://github.com/zcccccz/CausalWalk.
| 2,024 | Computation and Language |
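For reference, the textbook front-door adjustment underlying this decomposition can be written as below, with $X$ the claim-evidence graph (treatment), $M$ the reasoning path (mediator), and $Y$ the veracity label (outcome); the paper's specific estimators for the two factors are the random walk and the normalized weighted geometric mean mentioned above.

```latex
P\bigl(Y \mid do(X{=}x)\bigr)
  \;=\; \sum_{m} P(M{=}m \mid X{=}x)\,
        \sum_{x'} P\bigl(Y \mid X{=}x',\, M{=}m\bigr)\, P(X{=}x')
```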
Breeze-7B Technical Report | Breeze-7B is an open-source language model based on Mistral-7B, designed to
address the need for improved language comprehension and chatbot-oriented
capabilities in Traditional Chinese. This technical report provides an overview
of the additional pretraining, finetuning, and evaluation stages for the
Breeze-7B model. The Breeze-7B family of base and chat models exhibits good
performance on language comprehension and chatbot-oriented tasks, reaching the
top of several benchmarks among models in its complexity class.
| 2,024 | Computation and Language |
Android in the Zoo: Chain-of-Action-Thought for GUI Agents | Large language models (LLMs) have led to a surge of autonomous GUI agents for
smartphones, which complete tasks triggered by natural language by
predicting a sequence of API actions. Even though the task relies heavily on
past actions and visual observations, existing studies typically consider little
of the semantic information carried by intermediate screenshots and screen
operations. To address this, this work presents Chain-of-Action-Thought (dubbed
CoAT), which takes into account the description of previous actions, the current screen,
and, more importantly, the reasoning about which actions should be performed
and the outcomes of the chosen action. We demonstrate that, in a zero-shot
setting with an off-the-shelf LLM, CoAT significantly improves the goal
progress compared to standard context modeling. To further facilitate the
research in this line, we construct a benchmark Android-In-The-Zoo (AitZ),
which contains 18,643 screen-action pairs together with chain-of-action-thought
annotations. Experiments show that fine-tuning a 200M model on our AitZ dataset
achieves on par performance with CogAgent-Chat-18B.
| 2,024 | Computation and Language |
Crossing Linguistic Horizons: Finetuning and Comprehensive Evaluation of
Vietnamese Large Language Models | Recent advancements in large language models (LLMs) have underscored their
importance in the evolution of artificial intelligence. However, despite
extensive pretraining on multilingual datasets, available open-sourced LLMs
exhibit limited effectiveness in processing Vietnamese. The challenge is
exacerbated by the absence of systematic benchmark datasets and metrics
tailored for Vietnamese LLM evaluation. To mitigate these issues, we have
finetuned LLMs specifically for Vietnamese and developed a comprehensive
evaluation framework encompassing 10 common tasks and 31 metrics. Our
evaluation results reveal that the fine-tuned LLMs exhibit enhanced
comprehension and generative capabilities in Vietnamese. Moreover, our analysis
indicates that models with more parameters can introduce more biases and
uncalibrated outputs, and that the key factor influencing LLM performance is the
quality of the training or fine-tuning datasets. These insights underscore the
significance of meticulous fine-tuning with high-quality datasets in enhancing
LLM performance.
| 2,024 | Computation and Language |
DP-CRE: Continual Relation Extraction via Decoupled Contrastive Learning
and Memory Structure Preservation | Continual Relation Extraction (CRE) aims to incrementally learn relation
knowledge from a non-stationary stream of data. Since the introduction of new
relational tasks can overshadow previously learned information, catastrophic
forgetting becomes a significant challenge in this domain. Current replay-based
training paradigms prioritize all data uniformly and train memory samples
through multiple rounds, which would result in overfitting old tasks and
pronounced bias towards new tasks because of the imbalances of the replay set.
To handle the problem, we introduce the DecouPled CRE (DP-CRE) framework that
decouples the process of prior information preservation and new knowledge
acquisition. This framework examines alterations in the embedding space as new
relation classes emerge, distinctly managing the preservation and acquisition
of knowledge. Extensive experiments show that DP-CRE significantly outperforms
other CRE baselines across two datasets.
| 2,024 | Computation and Language |
HARGPT: Are LLMs Zero-Shot Human Activity Recognizers? | There is an ongoing debate regarding the potential of Large Language Models
(LLMs) as foundational models seamlessly integrated with Cyber-Physical Systems
(CPS) for interpreting the physical world. In this paper, we carry out a case
study to answer the following question: Are LLMs capable of zero-shot human
activity recognition (HAR)? Our study, HARGPT, presents an affirmative answer
by demonstrating that LLMs can comprehend raw IMU data and perform HAR tasks in
a zero-shot manner, with only appropriate prompts. HARGPT inputs raw IMU data
into LLMs and utilizes the role-play and think step-by-step strategies for
prompting. We benchmark HARGPT on GPT4 using two public datasets of different
inter-class similarities and compare various baselines both based on
traditional machine learning and state-of-the-art deep classification models.
Remarkably, LLMs successfully recognize human activities from raw IMU data and
consistently outperform all the baselines on both datasets. Our findings
indicate that by effective prompting, LLMs can interpret raw IMU data based on
their knowledge base, possessing a promising potential to analyze raw sensor
data of the physical world effectively.
| 2,024 | Computation and Language |
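The prompting recipe, feeding raw IMU readings to the LLM together with a role-play instruction and a step-by-step cue, can be sketched as follows. The sensor snippet, sampling rate, and activity labels are illustrative assumptions, not the benchmark's actual data.

```python
# Build a role-play, think-step-by-step prompt from raw accelerometer readings.
imu_window = [  # (ax, ay, az) in g, assumed 10 Hz snippet
    (0.01, -0.02, 0.98), (0.15, 0.10, 1.20), (-0.20, 0.05, 0.70),
    (0.25, -0.12, 1.35), (-0.18, 0.08, 0.65), (0.02, -0.01, 0.99),
]

readings = "\n".join(f"{ax:+.2f}, {ay:+.2f}, {az:+.2f}" for ax, ay, az in imu_window)
prompt = (
    "You are an expert in analyzing inertial sensor data from a smartphone.\n"
    "Below are tri-axial accelerometer readings (x, y, z in g) sampled at 10 Hz:\n"
    f"{readings}\n"
    "Think step by step about the signal's magnitude and periodicity, then answer: "
    "is the user most likely walking, running, sitting, or climbing stairs?"
)
print(prompt)  # send this string to the LLM of your choice, e.g. a GPT-4 chat call
```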
Causal Prompting: Debiasing Large Language Model Prompting based on
Front-Door Adjustment | Despite the significant achievements of existing prompting methods such as
in-context learning and chain-of-thought for large language models (LLMs), they
still face challenges of various biases. Traditional debiasing methods
primarily focus on the model training stage, including data augmentation-based
and reweight-based approaches, with the limitations of addressing the complex
biases of LLMs. To address such limitations, the causal relationship behind the
prompting methods is uncovered using a structural causal model, and a novel
causal prompting method based on front-door adjustment is proposed to
effectively mitigate the bias of LLMs. Specifically, causal intervention is
implemented by designing the prompts without accessing the parameters and
logits of LLMs. The chains of thought generated by LLMs are employed as the
mediator variable, and the causal effect between the input prompt and the output
answers is calculated through front-door adjustment to mitigate model biases.
Moreover, to obtain the representation of the samples precisely and estimate
the causal effect more accurately, contrastive learning is used to fine-tune
the encoder of the samples by aligning the space of the encoder with the LLM.
Experimental results show that the proposed causal prompting approach achieves
excellent performance on 3 natural language processing datasets on both
open-source and closed-source LLMs.
| 2,024 | Computation and Language |
Towards Training A Chinese Large Language Model for Anesthesiology | Medical large language models (LLMs) have gained popularity recently due to
their significant practical utility. However, most existing research focuses on
general medicine, and there is a need for in-depth study of LLMs in specific
fields like anesthesiology. To fill the gap, we introduce Hypnos, a Chinese
Anesthesia model built upon existing LLMs, e.g., Llama. Hypnos makes
contributions in three aspects: 1) The data acquired from current LLMs, e.g.,
via Self-Instruct, likely includes inaccuracies. Hypnos implements a
cross-filtering strategy to improve the data quality, using one LLM to assess
the quality of the data generated by another LLM and filtering out low-quality
data. 2) Hypnos employs a general-to-specific training
strategy that starts by fine-tuning LLMs using the general medicine data and
subsequently improving the fine-tuned LLMs using data specifically from
Anesthesiology. The general medical data supplement the medical expertise in
Anesthesiology and enhance the effectiveness of Hypnos' generation. 3) We
introduce a standardized benchmark for evaluating medical LLMs in
anesthesiology. Our benchmark includes both publicly available instances from
the Internet and privately obtained cases from the hospital. On this benchmark,
Hypnos outperforms other medical LLMs in anesthesiology in terms of automatic
metrics, GPT-4 evaluation, and human evaluation.
| 2,024 | Computation and Language |
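The cross-filtering strategy described for Hypnos, one LLM grading another LLM's generated data and discarding low-quality items, could be sketched roughly as follows; the scoring prompt, the judge callable, and the threshold are illustrative assumptions rather than the authors' pipeline.

```python
# Rough sketch of cross-filtering self-instruct-style data with a second LLM as judge.
# The scoring prompt, judge callable, and threshold are illustrative assumptions.

def cross_filter(samples, judge, threshold=7):
    """Keep only samples that a separate judge LLM rates at or above `threshold` (1-10)."""
    kept = []
    for sample in samples:
        prompt = (
            "Rate the factual accuracy and usefulness of this anesthesiology "
            "instruction-response pair on a scale of 1 to 10. Reply with a number only.\n\n"
            f"Instruction: {sample['instruction']}\nResponse: {sample['response']}"
        )
        reply = judge(prompt)
        try:
            score = int(reply.strip().split()[0])
        except (ValueError, IndexError):
            continue  # unparseable rating: drop the sample conservatively
        if score >= threshold:
            kept.append(sample)
    return kept
```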
Role Prompting Guided Domain Adaptation with General Capability Preserve
for Large Language Models | The growing interest in Large Language Models (LLMs) for specialized
applications has revealed a significant challenge: when tailored to specific
domains, LLMs tend to experience catastrophic forgetting, compromising their
general capabilities and leading to a suboptimal user experience. Additionally,
crafting a versatile model for multiple domains simultaneously often results in
a decline in overall performance due to confusion between domains. In response
to these issues, we present the RolE Prompting Guided Multi-Domain Adaptation
(REGA) strategy. This novel approach effectively manages multi-domain LLM
adaptation through three key components: 1) Self-Distillation constructs and
replays general-domain exemplars to alleviate catastrophic forgetting. 2) Role
Prompting assigns a central prompt to the general domain and a unique role
prompt to each specific domain to minimize inter-domain confusion during
training. 3) Role Integration reuses and integrates a small portion of
domain-specific data into the general-domain data, which is trained under the
guidance of the central prompt. The central prompt is used for a streamlined
inference process, removing the necessity to switch prompts for different
domains. Empirical results demonstrate that REGA effectively alleviates
catastrophic forgetting and inter-domain confusion. This leads to improved
domain-specific performance compared to standard fine-tuned models, while still
preserving robust general capabilities.
| 2,024 | Computation and Language |
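One way to picture the Role Prompting component of REGA is a simple router that prepends a central prompt to general-domain training data and a distinct role prompt to each specialized domain; the prompt texts below are invented placeholders, not the paper's.

```python
# Illustrative role-prompt routing for multi-domain fine-tuning data.
# The prompt texts are invented placeholders, not the paper's prompts.

CENTRAL_PROMPT = "You are a helpful general-purpose assistant."
ROLE_PROMPTS = {
    "medical": "You are a careful medical consultant.",
    "legal": "You are a meticulous legal advisor.",
    "finance": "You are a prudent financial analyst.",
}

def attach_role_prompt(example):
    """Prefix a training example with its domain's role prompt (central prompt for general data)."""
    role = ROLE_PROMPTS.get(example["domain"], CENTRAL_PROMPT)
    return {"domain": example["domain"], "text": f"{role}\n\n{example['text']}"}

if __name__ == "__main__":
    batch = [
        {"domain": "medical", "text": "Explain the contraindications of aspirin."},
        {"domain": "general", "text": "Summarize this paragraph in one sentence."},
    ]
    for ex in batch:
        print(attach_role_prompt(ex)["text"].splitlines()[0])
```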
In-Memory Learning: A Declarative Learning Framework for Large Language
Models | The exploration of whether agents can align with their environment without
relying on human-labeled data presents an intriguing research topic. Drawing
inspiration from the alignment process observed in intelligent organisms, where
declarative memory plays a pivotal role in summarizing past experiences, we
propose a novel learning framework. The agents adeptly distill insights from
past experiences, refining and updating existing notes to enhance their
performance in the environment. This entire process transpires within the
memory components and is implemented through natural language, so we characterize
this framework as In-Memory Learning. We also delve into the key features of
benchmarks designed to evaluate the self-improvement process. Through
systematic experiments, we demonstrate the effectiveness of our framework and
provide insights into this problem.
| 2,024 | Computation and Language |
DPPA: Pruning Method for Large Language Model to Model Merging | Model merging combines fine-tuned models derived from multiple domains,
with the intent of enhancing the model's proficiency across various domains.
The principal concern is the resolution of parameter conflicts. A substantial
amount of existing research remedies this issue during the merging stage, with
the latest studies focusing on resolving it during the pruning stage. The DARE
approach has exhibited promising outcomes when applied to a simple fine-tuned
model. However, the efficacy of this method tends to wane when employed on
complex fine-tuned models that show a significant parameter bias
relative to the baseline model. In this paper, we introduce a dual-stage method
termed Dynamic Pruning Partition Amplification (DPPA), devised to tackle the
challenge of merging complex fine-tuned models. Initially, we introduce
Dynamically Pruning (DP), an improved approach based on magnitude pruning
whose aim is to enhance performance at higher pruning rates. Subsequently, we
propose Dynamically Partition Amplification (DPA), a rescaling strategy
designed to dynamically amplify parameter partitions in relation to their
significance levels. The experimental results show that our method maintains a
mere 20% of domain-specific parameters and yet delivers a performance
comparable to other methodologies that preserve up to 90% of parameters.
Furthermore, our method displays outstanding performance post-pruning, leading
to a significant improvement of nearly 20% in model merging performance. We
make our code available on GitHub.
| 2,024 | Computation and Language |
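A small numpy sketch of the two stages named above, magnitude pruning of the fine-tuned deltas followed by rescaling the surviving weights in proportion to each partition's significance, under the assumptions that partitions are whole layers and that significance is the layer's mean absolute delta; the actual DPPA formulation may differ.

```python
# Minimal magnitude-prune-then-amplify sketch on fine-tuned parameter deltas.
# Layer-wise partitions and the mean-|delta| significance score are assumptions,
# not the exact DPPA formulation.
import numpy as np

def prune_and_amplify(deltas, keep_ratio=0.2):
    """deltas: dict of layer name -> np.ndarray of (fine-tuned - base) parameters."""
    significance = {name: np.abs(d).mean() for name, d in deltas.items()}
    total = sum(significance.values())
    out = {}
    for name, d in deltas.items():
        k = max(1, int(keep_ratio * d.size))
        cutoff = np.partition(np.abs(d).ravel(), -k)[-k]  # k-th largest magnitude
        mask = np.abs(d) >= cutoff
        # amplify kept weights: undo the pruning ratio and weight by relative layer significance
        scale = (1.0 / keep_ratio) * (significance[name] * len(deltas) / total)
        out[name] = d * mask * scale
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    deltas = {"layer0": rng.normal(size=(4, 4)), "layer1": 0.1 * rng.normal(size=(4, 4))}
    pruned = prune_and_amplify(deltas)
    print({name: int((v != 0).sum()) for name, v in pruned.items()})
```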
An Empirical Study of LLM-as-a-Judge for LLM Evaluation: Fine-tuned
Judge Models are Task-specific Classifiers | Recently, there has been a growing trend of utilizing Large Language Models
(LLMs) to evaluate the quality of other LLMs. Many studies have employed
proprietary closed-source models, especially GPT4, as the evaluator.
Alternatively, other works have fine-tuned judge models based on open-source
LLMs as evaluators. In this study, we conduct an empirical study of
different judge models on their evaluation capability. Our findings indicate
that although the fine-tuned judge models achieve high accuracy on in-domain
test sets, even surpassing GPT4, they are inherently task-specific classifiers,
and they severely underperform GPT4 in generalizability and fairness.
| 2,024 | Computation and Language |
MathScale: Scaling Instruction Tuning for Mathematical Reasoning | Large language models (LLMs) have demonstrated remarkable capabilities in
problem-solving. However, their proficiency in solving mathematical problems
remains inadequate. We propose MathScale, a simple and scalable method to
create high-quality mathematical reasoning data using frontier LLMs (e.g., {\tt
GPT-3.5}). Inspired by the cognitive mechanism in human mathematical learning,
it first extracts topics and knowledge points from seed math questions and then
builds a concept graph, which is subsequently used to generate new math
questions. MathScale exhibits effective scalability along the size axis of the
math dataset that we generate. As a result, we create a mathematical reasoning
dataset (MathScaleQA) containing two million math question-answer pairs. To
evaluate mathematical reasoning abilities of LLMs comprehensively, we construct
{\sc MwpBench}, a benchmark of Math Word Problems, which is a collection of ten
datasets (including GSM8K and MATH) covering K-12, college, and competition
level math problems. We apply MathScaleQA to fine-tune open-source LLMs (e.g.,
LLaMA-2 and Mistral), resulting in significantly improved capabilities in
mathematical reasoning. Evaluated on {\sc MwpBench}, MathScale-7B achieves
state-of-the-art performance across all datasets, surpassing its best peers of
equivalent size by 42.9\% in micro average accuracy and 43.7\% in macro average
accuracy, respectively.
| 2,024 | Computation and Language |
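The MathScale pipeline above extracts topics and knowledge points, links them into a concept graph, and samples connected concepts to prompt for new questions. A rough sketch under our own assumptions (the extraction step is stubbed out, and nothing here is the released MathScale code):

```python
# Rough concept-graph sketch for MathScale-style question generation.
# Topic/knowledge-point extraction is assumed to have been done already.
import itertools
import random
from collections import defaultdict

def build_concept_graph(seed_annotations):
    """seed_annotations: per seed question, a dict with 'topics' and 'knowledge_points'.
    Returns an undirected co-occurrence graph over concepts."""
    graph = defaultdict(set)
    for ann in seed_annotations:
        concepts = ann["topics"] + ann["knowledge_points"]
        for a, b in itertools.combinations(concepts, 2):
            graph[a].add(b)
            graph[b].add(a)
    return graph

def sample_generation_prompt(graph, rng):
    """Pick a concept and one of its neighbors, then ask an LLM to compose a new question."""
    concept = rng.choice(sorted(graph))
    neighbor = rng.choice(sorted(graph[concept]))
    return (f"Write a new math word problem that combines '{concept}' and "
            f"'{neighbor}', followed by a step-by-step solution.")

if __name__ == "__main__":
    seeds = [
        {"topics": ["ratios"], "knowledge_points": ["unit conversion", "proportions"]},
        {"topics": ["linear equations"], "knowledge_points": ["proportions"]},
    ]
    graph = build_concept_graph(seeds)
    print(sample_generation_prompt(graph, random.Random(0)))
```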
In Search of Truth: An Interrogation Approach to Hallucination Detection | Despite the many advances of Large Language Models (LLMs) and their
unprecedented rapid evolution, their impact on and integration into every facet of
our daily lives are limited for various reasons. One critical factor
hindering their widespread adoption is the occurrence of hallucinations, where
LLMs invent answers that sound realistic, yet drift away from factual truth. In
this paper, we present a novel method for detecting hallucinations in large
language models, which tackles a critical issue in the adoption of these models
in various real-world scenarios. Through extensive evaluations across multiple
datasets and LLMs, including Llama-2, we study the hallucination levels of
various recent LLMs and demonstrate the effectiveness of our method to
automatically detect them. Notably, we observe up to 62% hallucinations for
Llama-2 in a specific experiment, where our method achieves a Balanced Accuracy
(B-ACC) of 87%, all without relying on external knowledge.
| 2,024 | Computation and Language |
Zero-Shot Cross-Lingual Document-Level Event Causality Identification
with Heterogeneous Graph Contrastive Transfer Learning | Event Causality Identification (ECI) refers to detecting causal relations
between events in texts. However, most existing studies focus on sentence-level
ECI in high-resource languages, leaving the more challenging document-level ECI
(DECI) in low-resource languages under-explored. In this paper, we propose a
Heterogeneous Graph Interaction Model with Multi-granularity Contrastive
Transfer Learning (GIMC) for zero-shot cross-lingual document-level ECI.
Specifically, we introduce a heterogeneous graph interaction network to model
the long-distance dependencies between events that are scattered across a document.
Then, to improve the cross-lingual transferability of causal knowledge learned
from the source language, we propose a multi-granularity contrastive transfer
learning module to align the causal representations across languages. Extensive
experiments show that our framework outperforms the previous state-of-the-art
model by 9.4% and 8.2% in average F1 score in the monolingual and multilingual
scenarios, respectively. Notably, in the multilingual scenario, our zero-shot
framework even exceeds GPT-3.5 with few-shot learning by 24.3% in overall
performance.
| 2,024 | Computation and Language |
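The cross-lingual alignment component above can be illustrated with a standard InfoNCE-style contrastive loss between source- and target-language representations of the same event pair; this is a generic formulation under our own assumptions, not the paper's exact multi-granularity objective.

```python
# Generic InfoNCE-style contrastive loss for aligning paired source/target-language
# representations; an illustration, not the paper's multi-granularity objective.
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(src, tgt, temperature=0.1):
    """src, tgt: (batch, dim) tensors where row i of each side is a translation pair."""
    src = F.normalize(src, dim=-1)
    tgt = F.normalize(tgt, dim=-1)
    logits = src @ tgt.t() / temperature   # scaled cosine similarities
    labels = torch.arange(src.size(0))     # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

if __name__ == "__main__":
    torch.manual_seed(0)
    src, tgt = torch.randn(8, 32), torch.randn(8, 32)
    print(float(contrastive_alignment_loss(src, tgt)))
```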
Demonstrating Mutual Reinforcement Effect through Information Flow | The Mutual Reinforcement Effect (MRE) investigates the synergistic
relationship between word-level and text-level classifications in text
classification tasks. It posits that the performance of both classification
levels can be mutually enhanced. However, this mechanism has not been
adequately demonstrated or explained in prior research. To address this gap, we
employ information flow analysis to observe and substantiate the MRE theory.
Our experiments on six MRE hybrid datasets revealed the presence of MRE in the
model and its impact. Additionally, we conducted fine-tuning experiments, whose
results were consistent with those of the information flow experiments. The
convergence of findings from both experiments corroborates the existence of
MRE. Furthermore, we extended the application of MRE to prompt learning,
utilizing word-level information as a verbalizer to bolster the model's
prediction of text-level classification labels. In our final experiment, the
F1-score significantly surpassed the baseline in five out of six datasets,
further validating the notion that word-level information enhances the language
model's comprehension of the text as a whole.
| 2,024 | Computation and Language |
A Second Look on BASS -- Boosting Abstractive Summarization with Unified
Semantic Graphs -- A Replication Study | We present a detailed replication study of the BASS framework, an abstractive
summarization system based on the notion of Unified Semantic Graphs. Our
investigation includes challenges in replicating key components and an ablation
study to systematically isolate error sources rooted in replicating novel
components. Our findings reveal discrepancies in performance compared to the
original work. We highlight the significance of paying careful attention even
to reasonably omitted details for replicating advanced frameworks like BASS,
and emphasize key practices for writing replicable papers.
| 2,024 | Computation and Language |
RulePrompt: Weakly Supervised Text Classification with Prompting PLMs
and Self-Iterative Logical Rules | Weakly supervised text classification (WSTC), also called zero-shot or
dataless text classification, has attracted increasing attention due to its
applicability in classifying a mass of texts within the dynamic and open Web
environment, since it requires only a limited set of seed words (label names)
for each category instead of labeled data. With the help of recently popular
prompting Pre-trained Language Models (PLMs), many studies leveraged manually
crafted and/or automatically identified verbalizers to estimate the likelihood
of categories, but they failed to differentiate the effects of these
category-indicative words, let alone capture their correlations and realize
adaptive adjustments according to the unlabeled corpus. In this paper, in order
to let the PLM effectively understand each category, we at first propose a
novel form of rule-based knowledge using logical expressions to characterize
the meanings of categories. Then, we develop a prompting PLM-based approach
named RulePrompt for the WSTC task, consisting of a rule mining module and a
rule-enhanced pseudo label generation module, plus a self-supervised
fine-tuning module to make the PLM align with this task. Within this framework,
the inaccurate pseudo labels assigned to texts and the imprecise logical rules
associated with categories mutually enhance each other in an alternating
manner. This establishes a self-iterative closed loop of knowledge (rule)
acquisition and utilization, with seed words serving as the starting point.
Extensive experiments validate the effectiveness and robustness of our
approach, which markedly outperforms state-of-the-art weakly supervised
methods. What is more, our approach yields interpretable category rules,
proving its advantage in disambiguating easily-confused categories.
| 2,024 | Computation and Language |
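A toy version of the rule-based knowledge described above, categories characterized by logical expressions over category-indicative words, together with a keyword-overlap scorer; the rule format and scoring are our own simplifications, not RulePrompt's PLM-based estimation.

```python
# Toy disjunction-of-conjunctions category rules with a keyword-overlap scorer.
# The rule format and scoring are simplifications, not RulePrompt itself.

CATEGORY_RULES = {
    # a text matches "sports" if it mentions (game AND score) OR (team AND coach)
    "sports": [{"game", "score"}, {"team", "coach"}],
    "finance": [{"stock", "market"}, {"bank", "interest"}],
}

def rule_score(text, rule):
    """Fraction of the best-matching conjunction's words that appear in the text."""
    tokens = set(text.lower().split())
    return max(len(conj & tokens) / len(conj) for conj in rule)

def classify(text):
    scores = {cat: rule_score(text, rule) for cat, rule in CATEGORY_RULES.items()}
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    print(classify("the team thanked its coach after the game"))
```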
AIx Speed: Playback Speed Optimization Using Listening Comprehension of
Speech Recognition Models | Since humans can listen to audio and watch videos at speeds faster than
those originally recorded, we often listen to or watch such content at
higher playback speeds to increase the time efficiency of content
comprehension. To further utilize this capability, systems that automatically
adjust the playback speed according to the user's condition and the type of
content to assist in more efficient comprehension of time-series content have
been developed. However, there is still room for these systems to further
extend human speed-listening ability by generating speech with playback speed
optimized for even finer time units and providing it to humans. In this study,
we determine whether humans can hear the optimized speech and propose a system
that automatically adjusts playback speed at units as small as phonemes while
ensuring speech intelligibility. The system uses the speech recognizer score as
a proxy for how well a human can hear a certain unit of speech and maximizes
the playback speed to the extent that a human can still hear it. This method can
be used to produce fast but intelligible speech. In the evaluation experiment,
we compared speech played back at a constant fast speed with the flexibly
sped-up speech generated by the proposed method in a blind test and confirmed
that the proposed method produced speech that was easier to listen to.
| 2,023 | Computation and Language |
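The core idea above, speeding each small unit of speech up as far as a recognizer can still follow it, can be pictured as a per-unit search over playback rates gated by a recognizer-confidence threshold; `time_stretch` and `recognizer_confidence` below are placeholder callables, not real APIs.

```python
# Per-unit playback-speed search gated by ASR confidence.
# time_stretch and recognizer_confidence are placeholder callables, not real APIs.

def max_intelligible_speed(unit_audio, transcript, recognizer_confidence, time_stretch,
                           min_conf=0.8, rates=(3.0, 2.5, 2.0, 1.5, 1.25, 1.0)):
    """Return the fastest rate at which the recognizer still confidently matches the transcript."""
    for rate in rates:  # try the fastest rate first
        stretched = time_stretch(unit_audio, rate)
        if recognizer_confidence(stretched, transcript) >= min_conf:
            return rate
    return 1.0  # fall back to the original speed

def speed_profile(units, recognizer_confidence, time_stretch):
    """Compute one playback rate per (audio, transcript) unit, e.g. per phoneme."""
    return [max_intelligible_speed(audio, text, recognizer_confidence, time_stretch)
            for audio, text in units]
```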
Benchmarking the Text-to-SQL Capability of Large Language Models: A
Comprehensive Evaluation | Large Language Models (LLMs) have emerged as a powerful tool in advancing the
Text-to-SQL task, significantly outperforming traditional methods.
Nevertheless, as a nascent research field, there is still no consensus on the
optimal prompt templates and design frameworks. Additionally, existing
benchmarks inadequately explore the performance of LLMs across the various
sub-tasks of the Text-to-SQL process, which hinders the assessment of LLMs'
cognitive capabilities and the optimization of LLM-based solutions. To address
the aforementioned issues, we first construct a new dataset designed to
mitigate the risk of overfitting in LLMs. Then we formulate five evaluation
tasks to comprehensively assess the performance of diverse methods across
various LLMs throughout the Text-to-SQL process. Our study highlights the
performance disparities among LLMs and proposes optimal in-context learning
solutions tailored to each task. These findings offer valuable insights for
enhancing the development of LLM-based Text-to-SQL systems.
| 2,024 | Computation and Language |
SimuCourt: Building Judicial Decision-Making Agents with Real-world
Judgement Documents | With the development of deep learning, natural language processing technology
has effectively improved the efficiency of various aspects of the traditional
judicial industry. However, most current efforts focus solely on individual
judicial stages, overlooking cross-stage collaboration. As autonomous agents
powered by large language models become increasingly smart and able to
make complex decisions in real-world settings, they offer new insights for
judicial intelligence. In this paper, (1) we introduce SimuCourt, a judicial
benchmark that encompasses 420 judgment documents from the real world, spanning the
three most common types of judicial cases, and a novel task Judicial
Decision-Making to evaluate the judicial analysis and decision-making power of
agents. To support this task, we construct a large-scale judicial knowledge
base, JudicialKB, with multiple types of legal knowledge. (2) We propose a novel
multi-agent framework, AgentsCourt. Our framework follows the real-world
classic court trial process, consisting of court debate simulation, legal
information retrieval and judgement refinement to simulate the decision-making
of a judge. (3) We perform extensive experiments; the results demonstrate that
our framework outperforms the existing advanced methods in various aspects,
especially in generating legal grounds, where our model achieves significant
improvements of 8.6% and 9.1% F1 score in the first and second instance
settings, respectively.
| 2,024 | Computation and Language |
Evidence-Focused Fact Summarization for Knowledge-Augmented Zero-Shot
Question Answering | Recent studies have investigated utilizing Knowledge Graphs (KGs) to enhance
Question Answering (QA) performance of Large Language Models (LLMs), yet
structured KG verbalization remains challenging. Existing methods, such as
triple-form or free-form textual conversion of triple-form facts, encounter
several issues. These include reduced evidence density due to duplicated
entities or relationships, and reduced evidence clarity due to an inability to
emphasize crucial evidence. To address these issues, we propose EFSum, an
Evidence-focused Fact Summarization framework for enhanced QA with
knowledge-augmented LLMs. We optimize an open-source LLM as a fact summarizer
through distillation and preference alignment. Our extensive experiments show
that EFSum improves LLMs' zero-shot QA performance, and it is possible to
ensure both the helpfulness and faithfulness of the summary.
| 2,024 | Computation and Language |
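A minimal sketch of the evidence-focused summarization step, verbalizing retrieved triples and asking an LLM to compress them into a question-focused summary; the prompt wording and the summarizer callable are assumptions, and the actual EFSum recipe additionally distills and preference-aligns the summarizer.

```python
# Sketch of verbalizing KG triples into a question-focused evidence summary.
# The prompt wording and the summarizer callable are illustrative assumptions.

def verbalize_triples(triples):
    """triples: list of (subject, relation, object) tuples."""
    return "\n".join(f"- {s} | {r} | {o}" for s, r, o in triples)

def evidence_summary(question, triples, summarizer):
    prompt = (
        "Summarize the facts below into a few sentences, keeping only evidence that "
        "helps answer the question and dropping duplicated entities or relations.\n"
        f"Question: {question}\nFacts:\n{verbalize_triples(triples)}\nSummary:"
    )
    return summarizer(prompt)

if __name__ == "__main__":
    triples = [("Marie Curie", "award", "Nobel Prize in Physics"),
               ("Marie Curie", "field", "radioactivity")]

    def stub_summarizer(prompt):
        return "(an LLM-generated summary would go here)"

    print(evidence_summary("What prize did Marie Curie win?", triples, stub_summarizer))
```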