
Cristian Diaz

CristianJD

AI & ML interests

OCR, Deep Learning, NLP


Organizations

Journalists on Hugging Face

CristianJD's activity

New activity in Maple728/Time-300B 5 days ago

How to use it

#15 opened 5 days ago by CristianJD
New activity in ibm-granite/granite-timeseries-ttm-r1 about 1 year ago
New activity in qantev/trocr-large-spanish about 1 year ago
reacted to merve's post with 🔥 about 1 year ago
The demo for IDEFICS-8B is out! HuggingFaceM4/idefics-8b

This checkpoint is not optimized for chat, but works very well for various tasks, including visual question answering and document tasks 💬📑
Chatty one is coming soon!
updated a collection about 1 year ago
New activity in microsoft/trocr-base-handwritten about 1 year ago

Full Page OCR

#5 opened over 1 year ago by DDM007
reacted to gsarti's post with ❤️ about 1 year ago
🔍 Today's pick in Interpretability & Analysis of LMs: Information Flow Routes: Automatically Interpreting Language Models at Scale by @javifer @lena-voita

This work presents a novel method to identify salient components in Transformer-based language models by decomposing the residual stream into the contributions of individual model components.

This method is more efficient and scalable than previous techniques such as activation patching, as it only requires a single forward pass through the model to identify critical information flow paths. Moreover, it can be applied without a contrastive template, which is observed to produce results dependent on the selected contrastive example for activation patching.

Information flow routes are applied to Llama 2, showing that:

1. Models show “typical” information flow routes for non-content words, while content words don’t exhibit such patterns.
2. Feedforward networks are more active in the bottom layers of the network (where e.g. subject enrichment is performed) and in the very last layer.
3. Positional and subword-merging attention heads are among the most active and important throughout the network.
4. Periods can be treated by the model as BOS tokens by leaving their residual representation mostly untouched during the forward pass.

Finally, the paper demonstrates that some model components are specialized for specific domains, such as coding or multilingual text, suggesting a high degree of modularity in the network. The contributions of domain-specific heads, obtained by projecting the right singular vectors of the OV circuit onto the unembedding matrix, show highly interpretable concepts being handled by granular model components.

📄 Paper: Information Flow Routes: Automatically Interpreting Language Models at Scale (2403.00824)

🔍 All daily picks: https://huggingface.co/collections/gsarti/daily-picks-in-interpretability-and-analysis-of-lms-65ae3339949c5675d25de2f9
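
For intuition, here is a minimal numpy sketch of the decomposition idea behind the post: the residual-stream update at a position is the sum of per-head and feed-forward contributions, so each component can be scored by how much it contributes along the update direction. This is a toy proxy, not the paper's actual importance metric; all names, sizes, and the pruning threshold below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_heads = 64, 8

# Toy stand-ins for one token position in one layer: each attention head and
# the feed-forward block writes a vector into the residual stream.
head_outputs = rng.normal(size=(n_heads, d_model))   # per-head contributions
ffn_output = rng.normal(size=d_model)                # feed-forward contribution
residual_update = head_outputs.sum(axis=0) + ffn_output

# Proxy importance: each component's projection onto the update direction.
unit = residual_update / np.linalg.norm(residual_update)
contributions = {f"head_{i}": float(h @ unit) for i, h in enumerate(head_outputs)}
contributions["ffn"] = float(ffn_output @ unit)

# Keep only "routes" whose contribution exceeds a threshold, a crude analogue
# of pruning the information-flow graph to its most important edges.
threshold = 0.25 * max(abs(v) for v in contributions.values())
important = {k: v for k, v in contributions.items() if abs(v) > threshold}
print(important)
```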
reacted to akhaliq's post with 🤯 about 1 year ago
Stealing Part of a Production Language Model

Stealing Part of a Production Language Model (2403.06634)

We introduce the first model-stealing attack that extracts precise, nontrivial information from black-box production language models like OpenAI's ChatGPT or Google's PaLM-2. Specifically, our attack recovers the embedding projection layer (up to symmetries) of a transformer model, given typical API access. For under $20 USD, our attack extracts the entire projection matrix of OpenAI's Ada and Babbage language models. We thereby confirm, for the first time, that these black-box models have a hidden dimension of 1024 and 2048, respectively. We also recover the exact hidden dimension size of the gpt-3.5-turbo model, and estimate it would cost under $2,000 in queries to recover the entire projection matrix. We conclude with potential defenses and mitigations, and discuss the implications of possible future work that could extend our attack.
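
The dimension-recovery step can be illustrated with a small numpy simulation: logits are final hidden states multiplied by the embedding projection matrix, so a matrix of stacked logit vectors has rank at most the hidden dimension, which an SVD reveals. The sketch below simulates a model rather than calling any real API; the sizes and noise threshold are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_dim, vocab_size, n_queries = 1024, 4096, 2048

# Simulated black-box model: logits are hidden states times the embedding
# projection matrix W, so the stacked logit matrix has rank <= hidden_dim.
W = rng.normal(size=(hidden_dim, vocab_size))
hidden_states = rng.normal(size=(n_queries, hidden_dim))  # one per "API query"
logits = hidden_states @ W                                # what the API would return

# The number of singular values above the numerical noise floor estimates
# the hidden dimension of the model.
singular_values = np.linalg.svd(logits, compute_uv=False)
estimated_dim = int((singular_values > 1e-6 * singular_values[0]).sum())
print(estimated_dim)  # ~1024
```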
New activity in qantev/trocr-large-spanish about 1 year ago

Number of Parameters

#1 opened about 1 year ago by CristianJD