---
base_model: google/t5-v1_1-base
tags:
  - datadreamer
  - datadreamer-0.46.0
  - synthetic
  - mistral-tiny
  - text2text-generation
pipeline_tag: text2text-generation
widget:
  - text: >-
      In this paper, we introduce a novel learning framework for addressing
      inconsistencies and incompleteness in Effects of Pre-training Task
      Structure on Cross-lingual Transfer of large-scale, multilingual machine
      reading comprehension (MRC) models. Our proposed method, termed
      Structured-MRC, employs a new task structure that strategically balances
      knowledge transfer and specialized information acquisition across
      languages. Rather than using one universal pre-training task,
      Structured-MRC synchronizes task-wise pre-training across related language
      pairs. This technique allows our models to effectively learn and transfer
      recurring patterns while avoiding overgeneralization. Comprehensive
      experiments are carried out on eight diverse languages from the XNLI,
      XNLG, MARC, and WikiMRC datasets, demonstrating that the Structured-MRC
      framework significantly outperforms state-of-the-art approaches in terms
      of consistency, comprehensibility, and generality. The insights gained
      from this study highlight the importance of structuring learning tasks for
      cross-lingual transfer in MRC, with implications for various NLP
      applications.
    example_title: Example 1
  - text: >-
      Title: Unsupervised Multilingual Contextual Word Embeddings: Incorporating
      Morphological and Syntactic Features


      In this paper, we propose an unsupervised approach for developing
      multilingual contextual word embeddings that capture both morphological
      and syntactic properties. The model framework, dubbed 'UniMorph Syn',
      targets 100 low-resource languages while accounting for diverse inflection
      patterns and structurally complex sentences. By employing morphological
      analyzers, POS taggers, and dependency parsers as pre-processing steps, we
      enhance the quality and comprehensiveness of contextual representations.
      We analyze the model's performance through cross-linguistic and cross-task
      evaluations using several datasets and benchmarks, obtaining promising
      results and outperforming relevant baselines. Our work showcases the
      potential of unsupervised, large-scale, both morphologically and
      syntactically-aware models for low-resource languages within a
      multilingual context.
    example_title: Example 2
  - text: >-
      In this work, we present an innovative deep learning model for Natural
      Language Processing (NLP) tasks that leverages the transformer
      architecture supplemented with a robust pre-training strategy on vast
      unstructured data. Our model, Scored-Efficient Transformer (SET), excels
      in balancing efficiency with quality, achieving competitive and in some
      cases better performance than existing models while being significantly
      more computationally efficient.


      SET's unique feature is the introduction of a novel dynamic attention
      mechanism, which selectively accords focus to important contextual
      features, steadfastly reducing computational requirements without
      compromising the semantic understanding or performance. Furthermore, we
      explore a novel pre-training schema, named PseudoCorpus, which entails the
      creation of pseudo corpora from task-specific data, effectively tailoring
      the model to cater to a diverse range of NLP tasks with minimal need for
      task-specific fine-tuning.


      Experimental evaluations on benchmark datasets including GLUE, SQuAD, and
      STS-B demonstrate not only signs of consistent improvements in various NLP
      tasks for SET, but also highlight its remarkable generalization ability
      across various tasks. In summary, SET revolutionizes the NLP paradigm with
      a critical balance of efficiency, productivity, and efficacy, pushing the
      boundaries of potential applications in real-world scenarios.
    example_title: Example 3

---

# Model Card

Add more information here

## Example Usage

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained('andersoncliffb/abstracts_to_tweet_model', revision=None) # Load tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained('andersoncliffb/abstracts_to_tweet_model', revision=None) # Load model
pipe = pipeline('text2text-generation', model=model, tokenizer=tokenizer, pad_token_id=tokenizer.pad_token_id)

inputs = ['In this paper, we introduce a novel learning framework for addressing inconsistencies and incompleteness in Effects of Pre-training Task Structure on Cross-lingual Transfer of large-scale, multilingual machine reading comprehension (MRC) models. Our proposed method, termed Structured-MRC, employs a new task structure that strategically balances knowledge transfer and specialized information acquisition across languages. Rather than using one universal pre-training task, Structured-MRC synchronizes task-wise pre-training across related language pairs. This technique allows our models to effectively learn and transfer recurring patterns while avoiding overgeneralization. Comprehensive experiments are carried out on eight diverse languages from the XNLI, XNLG, MARC, and WikiMRC datasets, demonstrating that the Structured-MRC framework significantly outperforms state-of-the-art approaches in terms of consistency, comprehensibility, and generality. The insights gained from this study highlight the importance of structuring learning tasks for cross-lingual transfer in MRC, with implications for various NLP applications.']
print(pipe(inputs, max_length=512, do_sample=False))
```
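
The pipeline output contains a `generated_text` field for each input. If you would rather call `generate` directly, for example to run on a GPU or adjust decoding, the sketch below shows one way to do it; the device handling, truncation length, and `num_beams` value are illustrative assumptions rather than settings documented for this model.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Checkpoint name taken from the pipeline example above.
model_name = 'andersoncliffb/abstracts_to_tweet_model'

device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device)

# Placeholder abstract; replace with the paper abstract you want turned into a tweet.
abstract = "In this paper, we propose an unsupervised approach for developing multilingual contextual word embeddings that capture both morphological and syntactic properties."

# Truncate long abstracts so they fit within an assumed 512-token encoder limit.
encoded = tokenizer(abstract, return_tensors='pt', truncation=True, max_length=512).to(device)

# Deterministic decoding, mirroring do_sample=False above; num_beams=4 is an illustrative choice.
output_ids = model.generate(**encoded, max_length=512, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```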

This model was trained with a synthetic dataset with DataDreamer 🤖💤. The synthetic dataset card and model card can be found here. The training arguments can be found here.