---
pipeline_tag: text-generation
language:
- multilingual
inference: false
license: cc-by-nc-4.0
library_name: transformers
---

<br><br>

<p align="center">
<img src="https://huggingface.co/datasets/jinaai/documentation-images/resolve/main/logo.webp" alt="Jina AI: Your Search Foundation, Supercharged!" width="150px">
</p>

<p align="center">
<b>Trained by <a href="https://jina.ai/"><b>Jina AI</b></a>.</b>
</p>

[Blog](https://jina.ai/news/readerlm-v2-frontier-small-language-model-for-html-to-markdown-and-json) | [API](https://jina.ai/reader) | [Colab](https://colab.research.google.com/drive/1FfPjZwkMSocOLsEYH45B3B4NxDryKLGI?usp=sharing) | [AWS](https://aws.amazon.com/marketplace/pp/prodview-jwfct4j4rvxk2?sr=0-21&ref_=beagle&applicationId=AWSMPContessa) | [Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/jinaai.reader-lm-v2-vm) | [Arxiv (soon!)]

# ReaderLM-v2

`ReaderLM-v2` is a 1.5B-parameter language model that converts raw HTML into beautifully formatted markdown or JSON with superior accuracy and improved long-context handling. Supporting 29 languages, `ReaderLM-v2` is specialized for tasks involving HTML parsing, transformation, and text extraction.

## What's New in `ReaderLM-v2`

`ReaderLM-v2` represents a significant leap forward from its predecessor, with several key improvements:

- **Better Markdown Generation**: Thanks to its new training paradigm and higher-quality training data, the model excels at generating complex elements like code fences, nested lists, tables, and LaTeX equations.
- **JSON Output**: Introduces direct HTML-to-JSON generation using predefined schemas, eliminating the need for intermediate markdown conversion.
- **Longer Context Handling**: Handles up to 512K tokens of combined input and output length, with improved performance on long-form content.
- **Multilingual Support**: Comprehensive support across 29 languages for broader applications.
- **Enhanced Stability**: Greatly alleviates degeneration issues after generating long sequences through contrastive loss during training.

## Model Overview

- **Model Type**: Autoregressive, decoder-only transformer
- **Parameter Count**: 1.54B
- **Context Window**: Up to 512K tokens (combined input and output)
- **Hidden Size**: 1536
- **Number of Layers**: 28
- **Query Heads**: 12
- **KV Heads**: 2
- **Head Size**: 128
- **Intermediate Size**: 8960
- **Supported Languages**: English, Chinese, Japanese, Korean, French, Spanish, Portuguese, German, Italian, Russian, Vietnamese, Thai, Arabic, and more (29 total)
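
If you want to verify these hyperparameters locally, they can be read directly from the checkpoint's configuration. A quick sketch (field names assume the Qwen2-style config this checkpoint ships with):

```python
from transformers import AutoConfig

# Read the architecture hyperparameters straight from the model's config.
config = AutoConfig.from_pretrained("jinaai/ReaderLM-v2")
print(config.hidden_size)          # 1536
print(config.num_hidden_layers)    # 28
print(config.num_attention_heads)  # 12 query heads
print(config.num_key_value_heads)  # 2 KV heads
print(config.intermediate_size)    # 8960
```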

---

# Usage

Below, you will find instructions and examples for running `ReaderLM-v2` locally with the Hugging Face Transformers library.
For a more hands-on experience in a hosted environment, see the [Google Colab Notebook](https://colab.research.google.com/drive/1FfPjZwkMSocOLsEYH45B3B4NxDryKLGI?usp=sharing).

## Via Reader API

`ReaderLM-v2` is now fully integrated with [Reader API](https://jina.ai/reader/). To use it, simply specify `x-engine: readerlm-v2` in your request headers and enable response streaming with `-H 'Accept: text/event-stream'`:

```bash
curl https://r.jina.ai/https://news.ycombinator.com/ -H 'x-engine: readerlm-v2' -H 'Accept: text/event-stream'
```

You can try it without an API key at a lower rate limit. For higher rate limits, you can purchase an API key. Please note that ReaderLM-v2 requests consume 3x the normal token count from your API key allocation. This is currently an experimental feature, and we're working with the GCP team to improve GPU efficiency.
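
If you prefer Python over curl, a hedged equivalent using the `requests` package (the header values mirror the command above; the API key header is optional) looks like this:

```python
import requests

# Stream the converted page from the Reader API, line by line.
response = requests.get(
    "https://r.jina.ai/https://news.ycombinator.com/",
    headers={
        "x-engine": "readerlm-v2",
        "Accept": "text/event-stream",
        # "Authorization": "Bearer <YOUR_API_KEY>",  # optional, for higher rate limits
    },
    stream=True,
)
for line in response.iter_lines(decode_unicode=True):
    if line:
        print(line)
```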

## On Google Colab

You can try `ReaderLM-v2` via our [Colab notebook](https://colab.research.google.com/drive/1FfPjZwkMSocOLsEYH45B3B4NxDryKLGI?usp=sharing), which demonstrates HTML-to-markdown conversion, JSON extraction, and instruction-following using the HackerNews frontpage as an example. The notebook is optimized for Colab's free T4 GPU tier and requires `vllm` and `triton` for accelerated inference.

Note that the free T4 GPU has limitations—it doesn't support bfloat16 or flash attention 2, leading to higher memory usage and slower processing of longer inputs. Nevertheless, ReaderLM-v2 successfully processes large documents under these constraints, achieving processing speeds of 67 tokens/s input and 36 tokens/s output. For production use, we recommend an RTX 3090/4090 for optimal performance.
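
If you want to reproduce the notebook's vLLM setup outside Colab, a minimal sketch (assuming `vllm` is installed; the sampling settings here are illustrative rather than the notebook's exact values) might look like this:

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

tokenizer = AutoTokenizer.from_pretrained("jinaai/ReaderLM-v2")
llm = LLM(model="jinaai/ReaderLM-v2", dtype="float16")  # float16 so it also runs on a T4

html = "<html><body><h1>Hello, world!</h1></body></html>"
messages = [{"role": "user", "content": f"Extract the main content from the given HTML and convert it to Markdown format.\n```html\n{html}\n```"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

sampling = SamplingParams(temperature=0, max_tokens=1024, repetition_penalty=1.08)
outputs = llm.generate([prompt], sampling)
print(outputs[0].outputs[0].text)
```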

## Local Usage

To use `ReaderLM-v2` locally:

1. Install the necessary dependencies:

   ```bash
   pip install transformers
   ```

2. Load and run the model:

   ```python
   from transformers import AutoModelForCausalLM, AutoTokenizer

   device = "cuda"  # or "cpu"
   tokenizer = AutoTokenizer.from_pretrained("jinaai/ReaderLM-v2")
   model = AutoModelForCausalLM.from_pretrained("jinaai/ReaderLM-v2").to(device)
   ```
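
   On GPUs that support it, you can optionally load the weights in bfloat16 to cut memory use (the free T4 GPU mentioned above does not support bfloat16, so keep the default there). A minimal sketch of this optional tweak:

   ```python
   import torch

   # Optional: half-precision loading on GPUs with bfloat16 support (e.g. RTX 3090/4090).
   model = AutoModelForCausalLM.from_pretrained(
       "jinaai/ReaderLM-v2", torch_dtype=torch.bfloat16
   ).to(device)
   ```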

3. (Optional) Pre-clean your HTML by removing scripts, styles, and comments to reduce the noise and length of the input:

   ```python
   import re

   # Patterns
   SCRIPT_PATTERN = r"<[ ]*script.*?\/[ ]*script[ ]*>"
   STYLE_PATTERN = r"<[ ]*style.*?\/[ ]*style[ ]*>"
   META_PATTERN = r"<[ ]*meta.*?>"
   COMMENT_PATTERN = r"<[ ]*!--.*?--[ ]*>"
   LINK_PATTERN = r"<[ ]*link.*?>"
   BASE64_IMG_PATTERN = r'<img[^>]+src="data:image/[^;]+;base64,[^"]+"[^>]*>'
   SVG_PATTERN = r"(<svg[^>]*>)(.*?)(<\/svg>)"


   def replace_svg(html: str, new_content: str = "this is a placeholder") -> str:
       return re.sub(
           SVG_PATTERN,
           lambda match: f"{match.group(1)}{new_content}{match.group(3)}",
           html,
           flags=re.DOTALL,
       )


   def replace_base64_images(html: str, new_image_src: str = "#") -> str:
       return re.sub(BASE64_IMG_PATTERN, f'<img src="{new_image_src}"/>', html)


   def clean_html(html: str, clean_svg: bool = False, clean_base64: bool = False):
       html = re.sub(
           SCRIPT_PATTERN, "", html, flags=re.IGNORECASE | re.MULTILINE | re.DOTALL
       )
       html = re.sub(
           STYLE_PATTERN, "", html, flags=re.IGNORECASE | re.MULTILINE | re.DOTALL
       )
       html = re.sub(
           META_PATTERN, "", html, flags=re.IGNORECASE | re.MULTILINE | re.DOTALL
       )
       html = re.sub(
           COMMENT_PATTERN, "", html, flags=re.IGNORECASE | re.MULTILINE | re.DOTALL
       )
       html = re.sub(
           LINK_PATTERN, "", html, flags=re.IGNORECASE | re.MULTILINE | re.DOTALL
       )

       if clean_svg:
           html = replace_svg(html)
       if clean_base64:
           html = replace_base64_images(html)
       return html
   ```

4. Create a prompt for the model:

   ```python
   def create_prompt(
       text: str, tokenizer=None, instruction: str = None, schema: str = None
   ) -> str:
       """
       Create a prompt for the model with optional instruction and JSON schema.
       """
       if schema:
           # Default JSON-extraction instruction; a caller-supplied instruction takes precedence.
           if not instruction:
               instruction = "Extract the specified information from a list of news threads and present it in a structured JSON format."
           prompt = f"{instruction}\n```html\n{text}\n```\nThe JSON schema is as follows:```json\n{schema}\n```"
       else:
           # Default HTML-to-Markdown instruction.
           if not instruction:
               instruction = "Extract the main content from the given HTML and convert it to Markdown format."
           prompt = f"{instruction}\n```html\n{text}\n```"

       messages = [
           {
               "role": "user",
               "content": prompt,
           }
       ]

       return tokenizer.apply_chat_template(
           messages, tokenize=False, add_generation_prompt=True
       )
   ```

### HTML to Markdown Example

```python
html = "<html><body><h1>Hello, world!</h1></body></html>"

html = clean_html(html)

input_prompt = create_prompt(html, tokenizer=tokenizer)
inputs = tokenizer.encode(input_prompt, return_tensors="pt").to(device)
outputs = model.generate(
    inputs, max_new_tokens=1024, temperature=0, do_sample=False, repetition_penalty=1.08
)

print(tokenizer.decode(outputs[0]))
```

### HTML to JSON Example

```python
schema = """
{
  "type": "object",
  "properties": {
    "title": {
      "type": "string"
    },
    "author": {
      "type": "string"
    },
    "date": {
      "type": "string"
    },
    "content": {
      "type": "string"
    }
  },
  "required": ["title", "author", "date", "content"]
}
"""

html = clean_html(html)
input_prompt = create_prompt(html, tokenizer=tokenizer, schema=schema)

inputs = tokenizer.encode(input_prompt, return_tensors="pt").to(device)
outputs = model.generate(
    inputs, max_new_tokens=1024, temperature=0, do_sample=False, repetition_penalty=1.08
)

print(tokenizer.decode(outputs[0]))
```
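
Since `outputs[0]` contains the prompt tokens followed by the generated answer (and the prompt already embeds a fenced JSON schema), a hedged post-processing sketch for extracting and parsing just the model's JSON could be:

```python
import json
import re

# Decode only the newly generated tokens; the prompt tokens come first in outputs[0].
generated = tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True)

# The answer may or may not be wrapped in a ```json fence, so handle both cases.
match = re.search(r"```(?:json)?\s*(.*?)\s*```", generated, flags=re.DOTALL)
payload = match.group(1) if match else generated
try:
    data = json.loads(payload)
    print(data)
except json.JSONDecodeError:
    print("Could not parse JSON; raw model output:\n", generated)
```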

## Model Performance

ReaderLM-v2 has been extensively evaluated on various tasks:

### Quantitative Evaluation

For HTML-to-Markdown tasks, the model outperforms much larger models like Qwen2.5-32B-Instruct and Gemini2-flash-expr, achieving:
- ROUGE-L: 0.84
- Levenshtein Distance: 0.22 (lower is better)
- Jaro-Winkler Similarity: 0.82
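
These are string-similarity scores between the generated markdown and a reference conversion. A rough sketch of how such metrics can be computed (the `rouge_score`, `python-Levenshtein`, and `jellyfish` packages are illustrative choices, not necessarily the authors' evaluation harness):

```python
from rouge_score import rouge_scorer
import Levenshtein
import jellyfish

generated = "# Hello, world!\n\nExtracted content."
reference = "# Hello, world!\n\nReference content."

# ROUGE-L F-measure (higher is better).
rouge_l = rouge_scorer.RougeScorer(["rougeL"]).score(reference, generated)["rougeL"].fmeasure
# Levenshtein distance, normalized by the longer string (lower is better).
lev = Levenshtein.distance(generated, reference) / max(len(generated), len(reference))
# Jaro-Winkler similarity (higher is better).
jw = jellyfish.jaro_winkler_similarity(generated, reference)

print(rouge_l, lev, jw)
```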

For HTML-to-JSON tasks, it shows competitive performance with:
- F1 Score: 0.81
- Precision: 0.82
- Recall: 0.81
- Pass-Rate: 0.98

### Qualitative Evaluation

The model excels in three key dimensions:
- Content Integrity: 39/50
- Structural Accuracy: 35/50
- Format Compliance: 36/50

These scores demonstrate strong performance in preserving semantic information, maintaining structural accuracy, and adhering to markdown syntax standards.

## Training Details

ReaderLM-v2 is built on Qwen2.5-1.5B-Instruct and trained using a multi-stage pipeline:

1. Data Preparation: Created the html-markdown-1m dataset of 1 million HTML documents
2. Synthetic Data Generation: Three-step pipeline using Qwen2.5-32B-Instruct
   - Drafting: Initial markdown and JSON generation
   - Refinement: Content cleanup and structure alignment
   - Critique: Quality evaluation and filtering

3. Training Process:
   - Long-context pretraining
   - Supervised fine-tuning
   - Direct preference optimization
   - Self-play reinforcement tuning