- INSTRUCTEVAL: Towards Holistic Evaluation of Instruction-Tuned Large Language Models
  Paper • 2306.04757 • Published • 6
- Evaluating Instruction-Tuned Large Language Models on Code Comprehension and Generation
  Paper • 2308.01240 • Published • 2
- Can Large Language Models Understand Real-World Complex Instructions?
  Paper • 2309.09150 • Published • 2
- Evaluating the Instruction-Following Robustness of Large Language Models to Prompt Injection
  Paper • 2308.10819 • Published
Collections including paper arxiv:2306.04757
- Skill-Mix: a Flexible and Expandable Family of Evaluations for AI models
  Paper • 2310.17567 • Published • 1
- This is not a Dataset: A Large Negation Benchmark to Challenge Large Language Models
  Paper • 2310.15941 • Published • 6
- Holistic Evaluation of Language Models
  Paper • 2211.09110 • Published • 1
- INSTRUCTEVAL: Towards Holistic Evaluation of Instruction-Tuned Large Language Models
  Paper • 2306.04757 • Published • 6
- Ensemble-Instruct: Generating Instruction-Tuning Data with a Heterogeneous Mixture of LMs
  Paper • 2310.13961 • Published • 4
- Tuna: Instruction Tuning using Feedback from Large Language Models
  Paper • 2310.13385 • Published • 10
- Auto-Instruct: Automatic Instruction Generation and Ranking for Black-Box Language Models
  Paper • 2310.13127 • Published • 11
- From Language Modeling to Instruction Following: Understanding the Behavior Shift in LLMs after Instruction Tuning
  Paper • 2310.00492 • Published • 2
- Ensemble-Instruct: Generating Instruction-Tuning Data with a Heterogeneous Mixture of LMs
  Paper • 2310.13961 • Published • 4
- Fabricator: An Open Source Toolkit for Generating Labeled Training Data with Teacher LLMs
  Paper • 2309.09582 • Published • 4
- Auto-Instruct: Automatic Instruction Generation and Ranking for Black-Box Language Models
  Paper • 2310.13127 • Published • 11
- Evaluating the Robustness to Instructions of Large Language Models
  Paper • 2308.14306 • Published • 1
- KITAB: Evaluating LLMs on Constraint Satisfaction for Information Retrieval
  Paper • 2310.15511 • Published • 4
- HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models
  Paper • 2310.14566 • Published • 25
- SmartPlay : A Benchmark for LLMs as Intelligent Agents
  Paper • 2310.01557 • Published • 12
- FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation
  Paper • 2310.03214 • Published • 18