Upload folder using huggingface_hub

Files changed:
- .gitattributes +3 -0
- README.md +68 -3
- result_final_make_dataset.ipynb +679 -0
- result_forums_fastcode.csv +0 -0
- result_forums_infostart_WITH_CODE.csv +3 -0
- result_parsing_forums.csv +3 -0
- scripts_parsing/fastcode_parser.py +433 -0
- scripts_parsing/forum_parser.py +474 -0
- scripts_parsing/think_model_parser.py +156 -0
- training_data.jsonl +3 -0
.gitattributes CHANGED
```diff
@@ -57,3 +57,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 # Video files - compressed
 *.mp4 filter=lfs diff=lfs merge=lfs -text
 *.webm filter=lfs diff=lfs merge=lfs -text
+result_forums_infostart_WITH_CODE.csv filter=lfs diff=lfs merge=lfs -text
+result_parsing_forums.csv filter=lfs diff=lfs merge=lfs -text
+training_data.jsonl filter=lfs diff=lfs merge=lfs -text
```
README.md CHANGED
@@ -1,3 +1,68 @@

# 1C_forums: a dataset of parsed data from two of the most popular 1C forums

This dataset is built from parsed data from two forums for 1C developers:
- [Infostart](https://forum.infostart.ru/group2/?a=26694) - every thread is presented as a think section for the model, with the message marked as the best answer to the question used as the final answer
- [Fastcode](https://fastcode.im/Templates?TemplatesOnly=True) - templates only

## Dataset Overview
Each row is split into the columns described below. All text is formatted as markdown, and 1C code is embedded like this:
```md
\`\`\`1c
"ВЫБРАТЬ
|	Ссылка.Контрагент,
|	Ссылка.Магазин,
|	СУММА(КоличествоСпрос) КАК КолвоСпрос
|ИЗ
|	Документ.СпросНаТовар.Товары
|ГДЕ Ссылка.Дата МЕЖДУ ДОБАВИТЬКДАТЕ(&ДатаНачало, ГОД, -10) И &ДатаНачало
| СГРУППИРОВАТЬ ПО Ссылка.Контрагент, СсылкаМагазин";
\`\`\`
```

### Infostart
The data selected from [Infostart](https://forum.infostart.ru/group2/?a=26694) consists of all forum threads in the Dev section that have an accepted answer.
- `source` - always `forum_infostart` for rows from the Infostart forum
- `in_source_id` - id of the thread on the forum. To get a link to the page, use this format: `https://forum.infostart.ru/forum9/{in_source_id}/`
- `prompt` - the first question (message) in the thread
- `think_process` - the whole thread discussion, presented as a think process; individual messages are separated by `\n----\n`
- `solution` - the messages marked in the thread as the solution to the question
- `is_answer_a_link` - `True` if links make up more than 60-70% of the whole solution, otherwise `False`
- `has_link` - the number of links in the solution; `NaN` if the solution contains no links
- `tags` - predicted tags

### FastCode
All data from [Fastcode](https://fastcode.im/Templates?TemplatesOnly=True) consists of templates only; comments on the code are added as a `# Примечания` section listing each comment.
- `source` - always `fastcode_Templates` for rows from the Fastcode site
- `in_source_id` - id of the template on the site. To get a link to the page, use this format: `https://fastcode.im/Templates/{in_source_id}/`
- `prompt` - the title of the template
- `think_process` - **always `None` - this source does not use it at all!**
- `solution` - the description (if present) and the code with notes (the comments, if there are any)
- `is_answer_a_link` - `True` if links make up more than 60-70% of the whole solution, otherwise `False`
- `has_link` - the number of links in the solution; `NaN` if the solution contains no links
- `tags` - the template's tags, separated by commas

## Dataset Statistics

* **Total examples**: 19041
  - forum ***infostart***: 18299
  - ***fastcode*** templates: 742

## Parsing scripts

They can be found in the `scripts_parsing` folder!

## Citation

Please cite this dataset if you use it in your work:

```
@misc{arefaste2025,
  title={Arefaste: Parsed dataset for 1C from forums},
  author={arefaste},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/arefaste/1C_Forums}}
}
```
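As a quick sanity check of the columns documented above, here is a minimal loading sketch (an illustration, not part of the upload; it assumes `result_parsing_forums.csv` has been fetched locally, e.g. with `huggingface_hub`, since the repo stores it in Git LFS):

```python
# Minimal sketch: load the combined CSV and poke at the columns described in the README.
import pandas as pd

df = pd.read_csv("result_parsing_forums.csv")

# Per-source row counts, matching the "Dataset Statistics" section
print(df["source"].value_counts())  # forum_infostart: 18299, fastcode_Templates: 742

# think_process is populated only for Infostart threads; messages are separated by "\n----\n"
infostart = df[df["source"] == "forum_infostart"]
first_thread_msgs = infostart["think_process"].iloc[0].split("\n----\n")
print(len(first_thread_msgs), "messages in the first thread's think process")

# Rebuild a thread URL from in_source_id
print(f"https://forum.infostart.ru/forum9/{infostart['in_source_id'].iloc[0]}/")
```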
result_final_make_dataset.ipynb ADDED
@@ -0,0 +1,679 @@

Cell 1 - install and import the dependencies:
```python
%pip install pandas numpy matplotlib seaborn datasets
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import datasets
```
Output: pip reports "Requirement already satisfied" in the project `.venv` for pandas 2.3.0, numpy 2.3.1, matplotlib 3.10.3, seaborn 0.13.2 and datasets 3.6.0 plus their dependencies, followed by "Note: you may need to restart the kernel to use updated packages." On stderr, tqdm warns that IProgress was not found and suggests updating jupyter and ipywidgets.

Cell 2 - load the Infostart data:
```python
df_think = pd.read_csv('result_forums_infostart_WITH_CODE.csv')
df_think.head()
```
Output (plain-text repr, abridged):
```text
            source in_source_id                                             prompt                                      think_process                                           solution is_answer_a_link has_link tags_service
0  forum_infostart  topic328184     Здравствуйте. УНФ, есть запрос\n\n```1c\n "...  <think>\nЗдравствуйте. УНФ, есть запрос\n\n```...  # Код Реализации\n```1c\n    ВЫБРАТЬ\n    Това...            False      NaN           УТ
1  forum_infostart  topic328235  Задача простая. Необходимо заполнить документ ...  <think>\nЗадача простая. Необходимо заполнить ...  Для РН можно получить только весь набор регист...            False      NaN          NaN
2  forum_infostart  topic327650  Доброго времени суток.\nПосле обновления УТ на...  <think>\nДоброго времени суток.\nПосле обновле...  Может кому пригодится в последнем релизе не уд...            False      NaN          ERP
3  forum_infostart  topic328246  Здравствуйте, столкнулся с такой проблемой. Пр...  <think>\nЗдравствуйте, столкнулся с такой проб...                Если оплата кредита, тогда без НДС.            False      NaN  Бухгалтерия
4  forum_infostart  topic328236  Доброго дня всем!\n\nСохраняю данные Таблицы з...  <think>\nДоброго дня всем!\n\nСохраняю данные ...  Да зачем все эти построители-шмостроители для ...            False      NaN          NaN
```

Cell 3 - load the Fastcode data:
```python
df_regular = pd.read_csv('result_forums_fastcode.csv')
```

Cell 4 - concatenate the two sources:
```python
result_df = pd.concat([df_think, df_regular], ignore_index=True)
result_df
```
Output: 19041 rows × 9 columns. The head repeats the Infostart rows above (now with an extra `tags` column, NaN for Infostart); the tail shows Fastcode template rows (abridged):
```text
                   source in_source_id                                prompt think_process                                           solution is_answer_a_link has_link tags_service          tags
19036  fastcode_Templates           13  ЗаполнитьМассивУникальнымиЗначениями           NaN  Заполняет массив-приемник уникальными значения...            False      NaN          ЗУП  1С,Коллекции
19037  fastcode_Templates           12               ДобавитьИтераторТаблице           NaN  Добавляет колонку в таблицу значений. Заполняе...            False      NaN          NaN            1С
19038  fastcode_Templates            7                ТаблицаЗначенийВМассив           NaN  Преобразует таблицу значений в массив.\n\n# Ко...            False      NaN           УТ  1С,Коллекции
19039  fastcode_Templates            5                   Получить дату файла           NaN  Функция определяет дату последней модификации ...            False      NaN      Розница       1С,Дата
19040  fastcode_Templates            4                    Получить имя файла           NaN  Составляет полное имя файла из имени каталога ...            False      NaN           УТ            1С
```

Cell 5 - drop the service column:
```python
result_df.drop(columns=['tags_service'], inplace=True)
```

Cell 6 - write the combined CSV:
```python
result_df.to_csv('result_parsing_forums.csv', index=False)
```

An empty code cell follows.

Cell 7 - reference notes kept as string literals; the first sketches the target message format, the second quotes the Hugging Face AutoTrain data-format docs (cell output: `''`):
```python
"""
Make dataset from forums to learn LLM

[
{
"content": "Please summarize the goals for scientists in this text:\n\nWithin three days, the intertwined cup nest of grasses was complete, featuring a canopy of overhanging grasses to conceal it. And decades later, it served as Rinkert’s portal to the past inside the California Academy of Sciences. Information gleaned from such nests, woven long ago from species in plant communities called transitional habitat, could help restore the shoreline in the future. Transitional habitat has nearly disappeared from the San Francisco Bay, and scientists need a clearer picture of its original species composition—which was never properly documented. With that insight, conservation research groups like the San Francisco Bay Bird Observatory can help guide best practices when restoring the native habitat that has long served as critical refuge for imperiled birds and animals as adjacent marshes flood more with rising sea levels. “We can’t ask restoration ecologists to plant nonnative species or to just take their best guess and throw things out there,” says Rinkert.",
"role": "user"
},
{
"content": "Scientists are studying nests hoping to learn about transitional habitats that could help restore the shoreline of San Francisco Bay.",
"role": "assistant"
}
]
"""

"""
For this task, you can use CSV or JSONL data. If you are formatting the data yourself (adding start, end tokens, etc.), you can use CSV or JSONL format. If you do not want to format the data yourself and want --chat-template parameter to format the data for you, you must use JSONL format. In both cases, CSV and JSONL can be used interchangeably but JSONL is the most preferred format.

To train a chatbot, your data will have content and role. Some models support system role as well.

Here is an example of a chatbot dataset (single sample):

[{'content': 'Help write a letter of 100 -200 words to my future self for '
             'Kyra, reflecting on her goals and aspirations.',
  'role': 'user'},
 {'content': 'Dear Future Self,\n'
             '\n'
             "I hope you're happy and proud of what you've achieved. As I "
             "write this, I'm excited to think about our goals and how far "
             "you've come. One goal was to be a machine learning engineer. I "
             "hope you've worked hard and become skilled in this field. Keep "
             'learning and innovating. Traveling was important to us. I hope '
             "you've seen different places and enjoyed the beauty of our "
             'world. Remember the memories and lessons. Starting a family '
             'mattered to us. If you have kids, treasure every moment. Be '
             'patient, loving, and grateful for your family.\n'
             '\n'
             'Take care of yourself. Rest, reflect, and cherish the time you '
             'spend with loved ones. Remember your dreams and celebrate what '
             "you've achieved. Your determination brought you here. I'm "
             "excited to see the person you've become, the impact you've made, "
             'and the love and joy in your life. Embrace opportunities and '
             'keep dreaming big.\n'
             '\n'
             'With love,\n'
             'Kyra',
  'role': 'assistant'}]

As you can see, the data has content and role columns. The role column can be user or assistant or system. This data is, however, not formatted for training. You can use the --chat-template parameter to format the data during training.

--chat-template supports the following kinds of templates:

none (default)
zephyr
chatml
tokenizer: use chat template mentioned in tokenizer config

A multi-line sample is also shown below:

[{"content": "hello", "role": "user"}, {"content": "hi nice to meet you", "role": "assistant"}]
[{"content": "how are you", "role": "user"}, {"content": "I am fine", "role": "assistant"}]
[{"content": "What is your name?", "role": "user"}, {"content": "My name is Mary", "role": "assistant"}]
[{"content": "Which is the best programming language?", "role": "user"}, {"content": "Python", "role": "assistant"}]
.
.
.
"""
```

Cell 8 - convert the DataFrame into chat-format JSONL:
```python
import pandas as pd
import json

df = result_df.copy()
# Convert DataFrame to training format
training_data = []

for _, row in df.iterrows():
    # Create user message from prompt
    user_msg = {
        "content": row["prompt"],
        "role": "user"
    }

    # Create assistant message from solution
    assistant_msg = {
        "content": row["solution"],
        "role": "assistant"
    }

    # Add the think process as an extra message if present
    # (note: despite the variable name, it is written with role "assistant", not "system")
    if pd.notna(row["think_process"]):
        system_msg = {
            "content": row["think_process"],
            "role": "assistant"
        }
        training_data.append([user_msg, system_msg, assistant_msg])
    else:
        training_data.append([user_msg, assistant_msg])

# Save to JSONL file
with open('training_data.jsonl', 'w', encoding='utf-8') as f:
    for messages in training_data:
        f.write(json.dumps(messages, ensure_ascii=False) + '\n')
```

Two empty code cells follow. Notebook metadata: kernel `.venv` (python3), Python 3.12.7, nbformat 4.5.
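To double-check what the conversion cell writes, here is a minimal sketch that reads `training_data.jsonl` back and verifies the message layout (it assumes the file produced by the notebook above is in the working directory):

```python
# Minimal sketch: validate the structure of training_data.jsonl produced above.
# Each line is a JSON list of 2 messages (prompt + solution) or 3 messages
# (prompt + think_process + solution); the first role is "user", the rest "assistant".
import json
from collections import Counter

shape_counts = Counter()
with open("training_data.jsonl", encoding="utf-8") as f:
    for line in f:
        messages = json.loads(line)
        assert messages[0]["role"] == "user"
        assert all(m["role"] == "assistant" for m in messages[1:])
        shape_counts[len(messages)] += 1

# Expect roughly 18299 three-message rows (Infostart) and 742 two-message rows (Fastcode)
print(shape_counts)
```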
result_forums_fastcode.csv ADDED
The diff for this file is too large to render.
result_forums_infostart_WITH_CODE.csv ADDED
```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:430cef8d9415d2542fa43629bd5aae260e12a84568c3f31a3cb042030772ee0e
+size 164097197
```
result_parsing_forums.csv ADDED
```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8a82683b0b60aed8be95ca43ab01777523ed835d77f4d137fdafa8b10dbdf820
+size 165983251
```
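Both CSV entries above are Git LFS pointer stubs rather than the data itself. As a small illustration, here is a sketch that reads such a pointer file's key-value fields (this parses the pointer only; fetching the real content is up to `git lfs`):

```python
# Minimal sketch: parse a Git LFS pointer file into its key/value fields.
# The pointer format is exactly the three "key value" lines shown in the diffs above.
def read_lfs_pointer(path: str) -> dict:
    fields = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            fields[key] = value
    return fields

info = read_lfs_pointer("result_parsing_forums.csv")  # the pointer, not the resolved CSV
print(info["oid"], int(info["size"]))                 # sha256:8a8268... 165983251
```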
scripts_parsing/fastcode_parser.py
ADDED
@@ -0,0 +1,433 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
import asyncio
|
2 |
+
import aiohttp
|
3 |
+
import csv
|
4 |
+
import os
|
5 |
+
import re
|
6 |
+
from bs4 import BeautifulSoup
|
7 |
+
from urllib.parse import urljoin, urlparse
|
8 |
+
import logging
|
9 |
+
from typing import List, Dict, Optional, Set
|
10 |
+
import time
|
11 |
+
|
12 |
+
# Настройка логирования
|
13 |
+
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
|
14 |
+
logger = logging.getLogger(__name__)
|
15 |
+
|
16 |
+
class FastCodeTemplatesParser:
|
17 |
+
def __init__(self, csv_file: str = 'fastcode_templates.csv', delay: float = 1.0):
|
18 |
+
self.csv_file = csv_file
|
19 |
+
self.delay = delay # Задержка между запросами
|
20 |
+
self.base_url = 'https://fastcode.im'
|
21 |
+
self.processed_ids: Set[str] = set()
|
22 |
+
|
23 |
+
# Создаем CSV файл с заголовками если его нет
|
24 |
+
self._init_csv()
|
25 |
+
|
26 |
+
# Загружаем уже обработанные ID из CSV
|
27 |
+
self._load_processed_ids()
|
28 |
+
|
29 |
+
def _init_csv(self):
|
30 |
+
"""Инициализация CSV файла с заголовками"""
|
31 |
+
if not os.path.exists(self.csv_file):
|
32 |
+
with open(self.csv_file, 'w', newline='', encoding='utf-8') as file:
|
33 |
+
writer = csv.writer(file, quoting=csv.QUOTE_ALL)
|
34 |
+
writer.writerow(['source', 'in_source_id', 'prompt', 'solution', 'tags', 'is_answer_a_link', 'has_link'])
|
35 |
+
|
36 |
+
def _load_processed_ids(self):
|
37 |
+
"""Загрузка уже обработанных ID из CSV"""
|
38 |
+
if os.path.exists(self.csv_file):
|
39 |
+
with open(self.csv_file, 'r', encoding='utf-8') as file:
|
40 |
+
reader = csv.DictReader(file)
|
41 |
+
for row in reader:
|
42 |
+
if row['in_source_id']:
|
43 |
+
self.processed_ids.add(row['in_source_id'])
|
44 |
+
logger.info(f"Загружено {len(self.processed_ids)} уже обработанных шаблонов")
|
45 |
+
|
46 |
+
async def fetch_page(self, session: aiohttp.ClientSession, url: str) -> Optional[str]:
|
47 |
+
"""Получение содержимого страницы"""
|
48 |
+
try:
|
49 |
+
await asyncio.sleep(self.delay)
|
50 |
+
async with session.get(url, timeout=30) as response:
|
51 |
+
if response.status == 200:
|
52 |
+
return await response.text()
|
53 |
+
else:
|
54 |
+
logger.warning(f"Ошибка {response.status} при загрузке {url}")
|
55 |
+
return None
|
56 |
+
except Exception as e:
|
57 |
+
logger.error(f"Ошибка при загрузке {url}: {e}")
|
58 |
+
return None
|
59 |
+
|
60 |
+
async def parse_templates_list_page(self, session: aiohttp.ClientSession, page_num: int) -> List[str]:
|
61 |
+
"""Парсинг страницы списка шаблонов"""
|
62 |
+
url = f"{self.base_url}/Templates?Page={page_num}&TemplatesOnly=True"
|
63 |
+
logger.info(f"Парсинг страницы списка шаблонов: {page_num}")
|
64 |
+
|
65 |
+
html = await self.fetch_page(session, url)
|
66 |
+
if not html:
|
67 |
+
return []
|
68 |
+
|
69 |
+
soup = BeautifulSoup(html, 'html.parser')
|
70 |
+
template_ids = []
|
71 |
+
|
72 |
+
# Находим div с id="indexPartial"
|
73 |
+
index_partial = soup.find('div', id='indexPartial')
|
74 |
+
if not index_partial:
|
75 |
+
logger.warning(f"Не найден div#indexPartial на странице {page_num}")
|
76 |
+
return []
|
77 |
+
|
78 |
+
# Ищем все h3 с классом post_title break-word
|
79 |
+
title_headers = index_partial.find_all('h3', class_='post_title break-word')
|
80 |
+
logger.info(f"Найдено {len(title_headers)} заголовков на странице {page_num}")
|
81 |
+
|
82 |
+
for header in title_headers:
|
83 |
+
# Ищем ссылку внутри h3
|
84 |
+
link = header.find('a', href=True)
|
85 |
+
if link:
|
86 |
+
href = link['href']
|
87 |
+
# Проверяем, что ссылка начинается с /Templates/
|
88 |
+
if href.startswith('/Templates/'):
|
89 |
+
# Извлекаем ID из ссылки
|
90 |
+
match = re.search(r'/Templates/(\d+)', href)
|
91 |
+
if match:
|
92 |
+
template_id = match.group(1)
|
93 |
+
template_ids.append(template_id)
|
94 |
+
|
95 |
+
logger.info(f"Найдено {len(template_ids)} валидных шаблонов на странице {page_num}")
|
96 |
+
return template_ids
|
97 |
+
|
98 |
+
def extract_title(self, soup: BeautifulSoup) -> Optional[str]:
|
99 |
+
"""Извлечение названия шаблона"""
|
100 |
+
# Ищем все div.article
|
101 |
+
articles = soup.find_all('div', class_='article')
|
102 |
+
|
103 |
+
# Берем первый article который содержит h1
|
104 |
+
for article in articles:
|
105 |
+
h1 = article.find('h1')
|
106 |
+
if h1:
|
107 |
+
return h1.get_text().strip()
|
108 |
+
|
109 |
+
# Если не найден в article, ищем любой h1
|
110 |
+
h1 = soup.find('h1')
|
111 |
+
if h1:
|
112 |
+
return h1.get_text().strip()
|
113 |
+
|
114 |
+
return None
|
115 |
+
|
116 |
+
def extract_tags(self, soup: BeautifulSoup) -> List[str]:
|
117 |
+
"""Извлечение тегов шаблона"""
|
118 |
+
tags = []
|
119 |
+
|
120 |
+
# Ищем все span с классом tag-label
|
121 |
+
tag_labels = soup.find_all('span', class_='tag-label')
|
122 |
+
|
123 |
+
for tag_label in tag_labels:
|
124 |
+
# Ищем все span с классом label внутри
|
125 |
+
label_spans = tag_label.find_all('span', class_='label')
|
126 |
+
for span in label_spans:
|
127 |
+
tag_text = span.get_text().strip()
|
128 |
+
if tag_text:
|
129 |
+
# Разделяем теги, если они объединены через #
|
130 |
+
if '#' in tag_text:
|
131 |
+
individual_tags = [t.strip() for t in tag_text.split('#') if t.strip()]
|
132 |
+
tags.extend(individual_tags)
|
133 |
+
else:
|
134 |
+
tags.append(tag_text)
|
135 |
+
|
136 |
+
return list(set(tags)) # Убираем дубликаты
|
137 |
+
|
138 |
+
def extract_description(self, soup: BeautifulSoup) -> Optional[str]:
|
139 |
+
"""Извлечение описания шаблона"""
|
140 |
+
# Согласно инструкции, описание в <p class="break-word" style="margin-bottom: 0px;">
|
141 |
+
desc_p = soup.find('p', class_='break-word', style='margin-bottom: 0px;')
|
142 |
+
if desc_p:
|
143 |
+
span = desc_p.find('span', style='white-space: pre-line')
|
144 |
+
if span:
|
145 |
+
return span.get_text().strip()
|
146 |
+
|
147 |
+
# Если не найдено по точному стилю, пробуем искать любой p.break-word с подходящим стилем
|
148 |
+
desc_elements = soup.find_all('p', class_='break-word')
|
149 |
+
for p in desc_elements:
|
150 |
+
style = p.get('style', '')
|
151 |
+
if 'margin-bottom: 0px' in style or 'margin-bottom:0px' in style:
|
152 |
+
span = p.find('span')
|
153 |
+
if span:
|
154 |
+
return span.get_text().strip()
|
155 |
+
else:
|
156 |
+
return p.get_text().strip()
|
157 |
+
|
158 |
+
return None
|
159 |
+
|
160 |
+
def extract_code(self, soup: BeautifulSoup) -> Optional[str]:
|
161 |
+
"""Извлечение кода шаблона"""
|
162 |
+
# Ищем code с классом 1c
|
163 |
+
code_element = soup.find('code', class_='1c')
|
164 |
+
if code_element:
|
165 |
+
return self.clean_1c_code(code_element)
|
166 |
+
return None
|
167 |
+
|
168 |
+
def clean_1c_code(self, code_element) -> str:
|
169 |
+
"""Очистка кода 1С с сохранением отступов"""
|
170 |
+
# Получаем чистый текст
|
171 |
+
code_text = code_element.get_text()
|
172 |
+
|
173 |
+
# Заменяем HTML entities
|
174 |
+
code_text = code_text.replace('"', '"')
|
175 |
+
code_text = code_text.replace('<', '<')
|
176 |
+
code_text = code_text.replace('>', '>')
|
177 |
+
code_text = code_text.replace('&', '&')
|
178 |
+
|
179 |
+
# Нормализуем переносы строк
|
180 |
+
code_text = code_text.replace('\r\n', '\n')
|
181 |
+
code_text = code_text.replace('\r', '\n')
|
182 |
+
|
183 |
+
# Обрабатываем строки, сохраняя отступы
|
184 |
+
lines = code_text.split('\n')
|
185 |
+
cleaned_lines = []
|
186 |
+
|
187 |
+
for line in lines:
|
188 |
+
# Удаляем только trailing пробелы, сохраняя leading
|
189 |
+
cleaned_line = line.rstrip()
|
190 |
+
cleaned_lines.append(cleaned_line)
|
191 |
+
|
192 |
+
# Удаляем пустые строки в начале и конце
|
193 |
+
while cleaned_lines and not cleaned_lines[0].strip():
|
194 |
+
cleaned_lines.pop(0)
|
195 |
+
while cleaned_lines and not cleaned_lines[-1].strip():
|
196 |
+
cleaned_lines.pop()
|
197 |
+
|
198 |
+
return '\n'.join(cleaned_lines)
|
199 |
+
|
200 |
+
def extract_comments(self, soup: BeautifulSoup) -> List[str]:
|
201 |
+
"""Извлечение комментариев"""
|
202 |
+
comments = []
|
203 |
+
|
204 |
+
comments_section = soup.find('div', id='comments_section')
|
205 |
+
if not comments_section:
|
206 |
+
return comments
|
207 |
+
|
208 |
+
# Ищем все div с id, которые содержат комментарии
|
209 |
+
comment_divs = comments_section.find_all('div', id=True)
|
210 |
+
|
211 |
+
for comment_div in comment_divs:
|
212 |
+
# Пропускаем div с id="last_comment"
|
213 |
+
if comment_div.get('id') == 'last_comment':
|
214 |
+
continue
|
215 |
+
|
216 |
+
# Создаем копию для обработки
|
217 |
+
comment_copy = comment_div.__copy__()
|
218 |
+
|
219 |
+
# Удаляем первый div с информацией о пользователе
|
220 |
+
first_div = comment_copy.find('div')
|
221 |
+
if first_div:
|
222 |
+
first_div.decompose()
|
223 |
+
|
224 |
+
# Удаляем последний div с кнопками
|
225 |
+
last_div = comment_copy.find('div', style=lambda x: x and 'margin-top: 15px' in x)
|
226 |
+
if last_div:
|
227 |
+
last_div.decompose()
|
228 |
+
|
229 |
+
# Удаляем hr
|
230 |
+
hr_tags = comment_copy.find_all('hr')
|
231 |
+
for hr in hr_tags:
|
232 |
+
hr.decompose()
|
233 |
+
|
234 |
+
# Получаем текст комментария
|
235 |
+
comment_text = comment_copy.get_text().strip()
|
236 |
+
if comment_text:
|
237 |
+
comments.append(comment_text)
|
238 |
+
|
239 |
+
return comments
|
240 |
+
|
241 |
+
def count_links_in_text(self, text: str) -> int:
|
242 |
+
"""Подсчет количества ссылок в тексте"""
|
243 |
+
# Ищем http/https ссылки
|
244 |
+
url_pattern = r'https?://[^\s<>"{}|\\^`\[\]]+'
|
245 |
+
links = re.findall(url_pattern, text)
|
246 |
+
return len(links)
|
247 |
+
|
248 |
+
def has_links_in_text(self, text: str) -> bool:
|
249 |
+
"""Проверка наличия ссылок в тексте"""
|
250 |
+
url_pattern = r'https?://[^\s<>"{}|\\^`\[\]]+'
|
251 |
+
return bool(re.search(url_pattern, text))
|
252 |
+
|
253 |
+
def format_solution(self, description: str, code: str, comments: List[str]) -> str:
|
254 |
+
"""Форматирование решения в markdown"""
|
255 |
+
solution_parts = []
|
256 |
+
|
257 |
+
# Добавляем описание
|
258 |
+
if description:
|
259 |
+
solution_parts.append(description)
|
260 |
+
|
261 |
+
# Добавляем код
|
262 |
+
if code:
|
263 |
+
solution_parts.append("# Код реализации")
|
264 |
+
solution_parts.append(f"```1c\n{code}\n```")
|
265 |
+
|
266 |
+
# Добавляем комментарии
|
267 |
+
if comments:
|
268 |
+
solution_parts.append("# Примечания")
|
269 |
+
for comment in comments:
|
270 |
+
solution_parts.append(f"- {comment}")
|
271 |
+
|
272 |
+
return '\n\n'.join(solution_parts)
|
273 |
+
|
274 |
+
async def parse_template(self, session: aiohttp.ClientSession, template_id: str) -> Optional[Dict]:
|
275 |
+
"""Парсинг отдельного шаблона"""
|
276 |
+
if template_id in self.processed_ids:
|
277 |
+
logger.debug(f"Шаблон {template_id} уже обработан")
|
278 |
+
return None
|
279 |
+
|
280 |
+
url = f"{self.base_url}/Templates/{template_id}"
|
281 |
+
logger.info(f"Парсинг шаблона {template_id}")
|
282 |
+
|
283 |
+
html = await self.fetch_page(session, url)
|
284 |
+
if not html:
|
285 |
+
return None
|
286 |
+
|
287 |
+
soup = BeautifulSoup(html, 'html.parser')
|
288 |
+
|
289 |
+
# Извлекаем данные
|
290 |
+
title = self.extract_title(soup)
|
291 |
+
if not title:
|
292 |
+
logger.warning(f"Не найден заголовок для шаблона {template_id}")
|
293 |
+
return None
|
294 |
+
|
295 |
+
description = self.extract_description(soup)
|
296 |
+
code = self.extract_code(soup)
|
297 |
+
tags = self.extract_tags(soup)
|
298 |
+
comments = self.extract_comments(soup)
|
299 |
+
|
300 |
+
# Форматируем решение
|
301 |
+
solution = self.format_solution(description or "", code or "", comments)
|
302 |
+
|
303 |
+
# Анализируем ссылки
|
304 |
+
link_count = self.count_links_in_text(solution)
|
305 |
+
has_links = self.has_links_in_text(solution)
|
306 |
+
|
307 |
+
result = {
|
308 |
+
'source': 'fastcode_Templates',
|
309 |
+
'in_source_id': template_id,
|
310 |
+
'prompt': title,
|
311 |
+
'solution': solution,
|
312 |
+
'tags': ','.join(tags) if tags else '',
|
313 |
+
'is_answer_a_link': has_links,
|
314 |
+
'has_link': link_count if link_count > 0 else None
|
315 |
+
}
|
316 |
+
|
317 |
+
logger.info(f"Обработан шаблон {template_id}: '{title}'")
|
318 |
+
self.processed_ids.add(template_id)
|
319 |
+
return result
|
320 |
+
|
321 |
+
async def process_templates_batch(self, session: aiohttp.ClientSession, template_ids: List[str]) -> List[Dict]:
|
322 |
+
"""Обработка пакета шаблонов"""
|
323 |
+
tasks = [self.parse_template(session, template_id) for template_id in template_ids]
|
324 |
+
results = await asyncio.gather(*tasks, return_exceptions=True)
|
325 |
+
|
326 |
+
valid_results = []
|
327 |
+
for result in results:
|
328 |
+
if isinstance(result, dict):
|
329 |
+
valid_results.append(result)
|
330 |
+
elif isinstance(result, Exception):
|
331 |
+
logger.error(f"Ошибка при обработке шаблона: {result}")
|
332 |
+
|
333 |
+
return valid_results
|
334 |
+
|
335 |
+
def escape_for_csv(self, text: str) -> str:
|
336 |
+
"""Экранирование специальных символов для CSV"""
|
337 |
+
if not text:
|
338 |
+
return text
|
339 |
+
|
340 |
+
# Экранируем специальные символы для корректного сохранения в CSV
|
341 |
+
text = text.replace('\\', '\\\\')
|
342 |
+
text = text.replace('\r\n', '\n')
|
343 |
+
text = text.replace('\r', '\n')
|
344 |
+
text = text.replace('\n', '\\n')
|
345 |
+
text = text.replace('\t', '\\t')
|
346 |
+
|
347 |
+
return text
|
348 |
+
|
349 |
+
def save_to_csv(self, data: List[Dict]):
|
350 |
+
"""Сохранение данных в CSV файл"""
|
351 |
+
if not data:
|
352 |
+
return
|
353 |
+
|
354 |
+
with open(self.csv_file, 'a', newline='', encoding='utf-8') as file:
|
355 |
+
writer = csv.DictWriter(file, fieldnames=['source', 'in_source_id', 'prompt', 'solution', 'tags', 'is_answer_a_link', 'has_link'],
|
356 |
+
quoting=csv.QUOTE_ALL)
|
357 |
+
for row in data:
|
358 |
+
# Экранируем специальные символы в текстовых полях
|
359 |
+
escaped_row = {}
|
360 |
+
for key, value in row.items():
|
361 |
+
if isinstance(value, str):
|
362 |
+
escaped_row[key] = self.escape_for_csv(value)
|
363 |
+
else:
|
364 |
+
escaped_row[key] = value
|
365 |
+
writer.writerow(escaped_row)
|
366 |
+
|
367 |
+
logger.info(f"Сохранено {len(data)} записей в {self.csv_file}")
|
368 |
+
|
369 |
+
async def parse_all_pages(self, start_page: int = 1, end_page: int = 36, batch_size: int = 10):
|
370 |
+
"""Парсинг всех страниц с шаблонами"""
|
371 |
+
connector = aiohttp.TCPConnector(limit=20, limit_per_host=10)
|
372 |
+
timeout = aiohttp.ClientTimeout(total=60)
|
373 |
+
|
374 |
+
async with aiohttp.ClientSession(
|
375 |
+
connector=connector,
|
376 |
+
timeout=timeout,
|
377 |
+
headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'}
|
378 |
+
) as session:
|
379 |
+
|
380 |
+
for page_num in range(start_page, end_page + 1):
|
381 |
+
try:
|
382 |
+
logger.info(f"Обработка страницы {page_num} из {end_page}")
|
383 |
+
|
384 |
+
# Получаем список ID шаблонов со страницы
|
385 |
+
template_ids = await self.parse_templates_list_page(session, page_num)
|
386 |
+
|
387 |
+
if not template_ids:
|
388 |
+
logger.info(f"Нет шаблонов для обработки на странице {page_num}")
|
389 |
+
continue
|
390 |
+
|
391 |
+
# Фильтруем уже обработанные шаблоны
|
392 |
+
new_template_ids = [tid for tid in template_ids if tid not in self.processed_ids]
|
393 |
+
|
394 |
+
logger.info(f"Новых шаблонов для обработки: {len(new_template_ids)}")
|
395 |
+
|
396 |
+
if not new_template_ids:
|
397 |
+
continue
|
398 |
+
|
399 |
+
# Обрабатываем шаблоны пакетами
|
400 |
+
for i in range(0, len(new_template_ids), batch_size):
|
401 |
+
batch = new_template_ids[i:i + batch_size]
|
402 |
+
logger.info(f"Обработка пакета {i//batch_size + 1}, шаблонов в пакете: {len(batch)}")
|
403 |
+
|
404 |
+
# Парсим пакет шаблонов
|
405 |
+
batch_results = await self.process_templates_batch(session, batch)
|
406 |
+
|
407 |
+
# Сохраняем результаты
|
408 |
+
if batch_results:
|
409 |
+
self.save_to_csv(batch_results)
|
410 |
+
|
411 |
+
# Пауза между пакетами
|
412 |
+
await asyncio.sleep(2)
|
413 |
+
|
414 |
+
logger.info(f"Страница {page_num} обработана")
|
415 |
+
|
416 |
+
except Exception as e:
|
417 |
+
logger.error(f"Ошибка при обработке страницы {page_num}: {e}")
|
418 |
+
continue
|
419 |
+
|
420 |
+
async def main():
|
421 |
+
"""Основная функция"""
|
422 |
+
parser = FastCodeTemplatesParser(csv_file='fastcode_templates.csv', delay=1.0)
|
423 |
+
|
424 |
+
try:
|
425 |
+
await parser.parse_all_pages(start_page=1, end_page=36, batch_size=5)
|
426 |
+
logger.info("Парсинг завершен")
|
427 |
+
except KeyboardInterrupt:
|
428 |
+
logger.info("Парсинг прерван пользователем")
|
429 |
+
except Exception as e:
|
430 |
+
logger.error(f"Критическая ошибка: {e}")
|
431 |
+
|
432 |
+
if __name__ == "__main__":
|
433 |
+
asyncio.run(main())
|
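Note that `escape_for_csv` stores real newlines and tabs as the literal sequences `\n` and `\t`, with backslashes doubled first, so anyone consuming these CSVs has to invert that mapping after reading. A minimal sketch of the inverse transform, assuming exactly the escaping above (the helper name `unescape_from_csv` is ours, not part of the scripts):

```python
import re

_UNESCAPES = {'\\': '\\', 'n': '\n', 't': '\t'}

def unescape_from_csv(text: str) -> str:
    # Scan left to right so an escaped backslash ("\\") is consumed before
    # the character after it can be misread as part of "\n" or "\t".
    return re.sub(r'\\([\\nt])', lambda m: _UNESCAPES[m.group(1)], text)

# Round trip: escape_for_csv("a\nb\\c") -> "a\\nb\\\\c",
# and unescape_from_csv("a\\nb\\\\c") -> "a\nb\\c" again.
```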
scripts_parsing/forum_parser.py
ADDED
@@ -0,0 +1,474 @@
````python
import asyncio
import aiohttp
import csv
import os
import re
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse
import logging
from typing import List, Dict, Optional, Set
import time

# Logging setup
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

class InfostartForumParser:
    def __init__(self, csv_file: str = 'forum_dataset.csv', delay: float = 1.0):
        self.csv_file = csv_file
        self.delay = delay  # Delay between requests
        self.base_url = 'https://forum.infostart.ru/group2'
        self.processed_urls: Set[str] = set()

        # Create the CSV file with headers if it does not exist yet
        self._init_csv()

        # Load the already processed URLs from the CSV
        self._load_processed_urls()

    def _init_csv(self):
        """Initialise the CSV file with headers."""
        if not os.path.exists(self.csv_file):
            with open(self.csv_file, 'w', newline='', encoding='utf-8') as file:
                writer = csv.writer(file, quoting=csv.QUOTE_ALL)  # Quote everything to preserve whitespace
                writer.writerow(['source', 'in_source_id', 'prompt', 'gold_standard_solution'])

    def _load_processed_urls(self):
        """Load the already processed URLs from the CSV."""
        if os.path.exists(self.csv_file):
            with open(self.csv_file, 'r', encoding='utf-8') as file:
                reader = csv.DictReader(file)
                for row in reader:
                    if row['in_source_id']:
                        self.processed_urls.add(row['in_source_id'])
            logger.info(f"Loaded {len(self.processed_urls)} already processed URLs")

    async def fetch_page(self, session: aiohttp.ClientSession, url: str) -> Optional[str]:
        """Fetch the contents of a page."""
        try:
            await asyncio.sleep(self.delay)  # Delay between requests
            async with session.get(url, timeout=30) as response:
                if response.status == 200:
                    return await response.text()
                else:
                    logger.warning(f"Error {response.status} while loading {url}")
                    return None
        except Exception as e:
            logger.error(f"Error while loading {url}: {e}")
            return None

    async def parse_topic_list_page(self, session: aiohttp.ClientSession, page_num: int) -> List[str]:
        """Parse a topic list page."""
        url = f"{self.base_url}/?a=26694&PAGEN_1={page_num}"
        logger.info(f"Parsing topic list page: {page_num}")

        html = await self.fetch_page(session, url)
        if not html:
            return []

        soup = BeautifulSoup(html, 'html.parser')
        topic_urls = []

        # Find div.forum-topic-list
        forum_topic_list = soup.find('div', class_='forum-topic-list')
        if not forum_topic_list:
            logger.warning(f"div.forum-topic-list not found on page {page_num}")
            return []

        # Find all div.nff-font110
        topic_divs = forum_topic_list.find_all('div', class_='nff-font110')
        logger.info(f"Found {len(topic_divs)} topics on page {page_num}")

        for topic_div in topic_divs:
            # Check whether there is an <a> tag with the text "Dev"
            dev_link = topic_div.find('a', string='Dev')
            if dev_link:
                # Find the link to the topic (usually the first link in the div)
                topic_link = topic_div.find('a', href=re.compile(r'/forum\d+/topic\d+/'))
                if topic_link:
                    topic_url = urljoin(self.base_url, topic_link['href'])
                    topic_urls.append(topic_url)

        logger.info(f"Found {len(topic_urls)} topics tagged Dev on page {page_num}")
        return topic_urls

    def extract_topic_id(self, url: str) -> Optional[str]:
        """Extract the topic ID from a URL."""
        match = re.search(r'/forum\d+/(topic\d+)/', url)
        return match.group(1) if match else None

    def extract_meta_identifier(self, soup: BeautifulSoup) -> Optional[str]:
        """Extract the identifier from the meta tag."""
        meta_tag = soup.find('meta', attrs={'itemprop': 'identifier'})
        return meta_tag.get('content') if meta_tag else None

    def extract_first_message(self, soup: BeautifulSoup) -> Optional[str]:
        """Extract the first message (the question)."""
        # Find the first div with the class post-mesages
        post_messages = soup.find('div', class_='post-mesages')
        if not post_messages:
            return None

        # Find the first message
        first_message_div = post_messages.find('div', class_='m-tree-p')
        if not first_message_div:
            return None

        # Extract the message text
        message_text_div = first_message_div.find('div', class_='forum-message-text')
        if not message_text_div:
            return None

        return self.clean_message_text(message_text_div)

    def extract_solutions(self, soup: BeautifulSoup) -> List[str]:
        """Extract the solutions from the 'Найденные решения' ('Found solutions') section."""
        solutions = []

        # Find the "Найденные решения" section header (the forum page is in Russian)
        found_solutions_header = soup.find(text=re.compile(r'Найденные решения'))
        if not found_solutions_header:
            return solutions

        # Take the element that follows the header
        solutions_section = found_solutions_header.find_parent().find_next_sibling()
        if not solutions_section:
            return solutions

        # Find all messages inside the solutions section
        solution_divs = solutions_section.find_all('div', class_='m-tree-p')

        for solution_div in solution_divs:
            message_text_div = solution_div.find('div', class_='forum-message-text')
            if message_text_div:
                solution_text = self.clean_message_text(message_text_div)
                if solution_text:
                    # Strip the leading "(N)" reply markers
                    solution_text = self.clean_solution_text(solution_text)
                    if solution_text:  # Make sure the text is not empty after cleaning
                        solutions.append(solution_text)

        return solutions

    def clean_message_text(self, message_div) -> str:
        """Clean up and format the message text."""
        # Work on a copy so the original is left untouched
        message_copy = message_div.__copy__()

        # Handle code blocks
        code_blocks = message_copy.find_all('div', class_='code')
        for code_block in code_blocks:
            pre_tag = code_block.find('pre')
            if pre_tag:
                # Extract the code while preserving its structure
                code_text = self.extract_code_from_pre(pre_tag)

                # Build the replacement element
                from bs4 import NavigableString
                replacement_text = f"\n```1c\n{code_text}\n```\n"
                code_block.replace_with(NavigableString(replacement_text))

        # Replace <br> with newlines
        for br in message_copy.find_all('br'):
            br.replace_with('\n')

        # Remove quotes and other auxiliary elements
        for quote in message_copy.find_all('div', class_='quote-wrap'):
            quote.decompose()

        # Remove service elements
        for element in message_copy.find_all(['script', 'style']):
            element.decompose()

        # Get the plain text
        text = message_copy.get_text()

        # Collapse runs of blank lines but keep single newlines
        text = re.sub(r'\n\s*\n\s*\n+', '\n\n', text)  # Multiple newlines -> double newline

        # Do NOT strip leading spaces - they may be code indentation!
        # Only trim trailing spaces and normalise runs of spaces
        lines = text.split('\n')
        cleaned_lines = []
        for line in lines:
            # Trim trailing spaces but keep leading spaces
            line = line.rstrip()
            # Normalise runs of spaces only inside the line (not at the start)
            if line.lstrip():  # If the line is not empty
                leading_spaces = len(line) - len(line.lstrip())
                content = line.lstrip()
                # Normalise spaces only in the content, not in the indentation
                content = re.sub(r'[ \t]+', ' ', content)
                line = ' ' * leading_spaces + content
            cleaned_lines.append(line)

        text = '\n'.join(cleaned_lines).strip()

        return text

    def extract_code_from_pre(self, pre_tag) -> str:
        """Extract code from a <pre> tag while preserving line breaks."""
        # Use the inner <pre> tag if there is one
        inner_pre = pre_tag.find('pre')
        if inner_pre:
            pre_tag = inner_pre

        # Work on a copy so the original is left untouched
        pre_copy = pre_tag.__copy__()

        # Process every element inside the pre
        self.process_code_elements(pre_copy)

        # Get the text with line breaks preserved
        text = pre_copy.get_text()

        # Clean up the code
        return self.clean_1c_code(text)

    def process_code_elements(self, element):
        """Recursively process code elements to preserve line breaks."""
        children_to_process = list(element.children)  # Copy the list of children
        for child in children_to_process:
            if hasattr(child, 'name') and child.name:
                if child.name == 'font':
                    # Replace font tags with their contents
                    child.replace_with(child.get_text())
                elif child.name == 'br':
                    # Replace <br> with a newline
                    child.replace_with('\n')
                else:
                    # Recurse into child elements
                    self.process_code_elements(child)

    def clean_1c_code(self, code_text: str) -> str:
        """Clean 1C code of stray characters while preserving indentation."""
        if not code_text:
            return ""

        # Replace HTML entities and fix the quotes
        code_text = code_text.replace('"', '"')  # typographic quotes -> straight quotes
        code_text = code_text.replace('&lt;', '<')
        code_text = code_text.replace('&gt;', '>')
        code_text = code_text.replace('&amp;', '&')
        # Fix doubled quotes in the 1C code
        code_text = code_text.replace('""', '"')

        # FIX: normalise line endings BEFORE splitting
        # Replace \r\n and \r with \n
        code_text = code_text.replace('\r\n', '\n')
        code_text = code_text.replace('\r', '\n')

        lines = []
        for line in code_text.split('\n'):
            # Trim only trailing spaces; keep the leading indentation
            line = line.rstrip()
            lines.append(line)

        # Drop surplus empty lines at the start and end
        while lines and not lines[0].strip():
            lines.pop(0)
        while lines and not lines[-1].strip():
            lines.pop()

        # Handle indentation - give every code line a standard 4-space indent
        if lines:
            normalized_lines = []
            for line in lines:
                if line.strip():  # Non-empty line
                    # Add the standard 4-space indent
                    normalized_lines.append('    ' + line.lstrip())
                else:  # Empty line
                    normalized_lines.append('')
            lines = normalized_lines

        return '\n'.join(lines)

    def clean_solution_text(self, text: str) -> str:
        """Strip the leading "(N)" reply markers from a solution."""
        if not text:
            return text

        # Remove a number in parentheses at the start of the string
        text = re.sub(r'^\(\d+\)\s*', '', text.strip())

        return text

    async def parse_topic(self, session: aiohttp.ClientSession, topic_url: str) -> Optional[List[Dict]]:
        """Parse a single forum topic."""
        topic_id = self.extract_topic_id(topic_url)
        if not topic_id:
            logger.debug(f"Invalid URL: {topic_url}")
            return None

        logger.info(f"Parsing topic: {topic_url}")

        html = await self.fetch_page(session, topic_url)
        if not html:
            return None

        soup = BeautifulSoup(html, 'html.parser')

        # Extract the metadata
        meta_id = self.extract_meta_identifier(soup)
        if not meta_id:
            logger.warning(f"No meta identifier found for {topic_url}")
            return None

        # Check whether this topic has been processed already
        if meta_id in self.processed_urls:
            logger.debug(f"Topic {meta_id} already processed")
            return None

        # Extract the first message (the question)
        question = self.extract_first_message(soup)
        if not question:
            logger.warning(f"No question found for {topic_url}")
            return None

        # Extract the solutions
        solutions = self.extract_solutions(soup)

        # Create a record for each solution
        results = []
        if solutions:
            for solution in solutions:
                result = {
                    'source': 'forum_infostart',
                    'in_source_id': meta_id,
                    'prompt': question,
                    'gold_standard_solution': solution
                }
                results.append(result)
        else:
            # No solutions - keep the question anyway
            result = {
                'source': 'forum_infostart',
                'in_source_id': meta_id,
                'prompt': question,
                'gold_standard_solution': ''
            }
            results.append(result)

        self.processed_urls.add(meta_id)
        return results

    async def process_topics_batch(self, session: aiohttp.ClientSession, topic_urls: List[str]) -> List[Dict]:
        """Process a batch of topics using coroutines."""
        tasks = [self.parse_topic(session, url) for url in topic_urls]
        results = await asyncio.gather(*tasks, return_exceptions=True)

        valid_results = []
        for result in results:
            if isinstance(result, list):
                valid_results.extend(result)
            elif isinstance(result, dict):
                valid_results.append(result)
            elif isinstance(result, Exception):
                logger.error(f"Error while processing a topic: {result}")

        return valid_results

    def escape_for_csv(self, text: str) -> str:
        """Escape special characters for CSV."""
        if not text:
            return text

        # Escape special characters so they survive the CSV round trip
        text = text.replace('\\', '\\\\')  # Escape backslashes
        text = text.replace('\r\n', '\n')  # Windows newlines -> Unix newlines
        text = text.replace('\r', '\n')    # Mac newlines -> Unix newlines
        text = text.replace('\n', '\\n')   # Real newlines -> literal \n
        text = text.replace('\t', '\\t')   # Tabs -> literal \t

        return text

    def save_to_csv(self, data: List[Dict]):
        """Save the data to the CSV file."""
        if not data:
            return

        with open(self.csv_file, 'a', newline='', encoding='utf-8') as file:
            writer = csv.DictWriter(file, fieldnames=['source', 'in_source_id', 'prompt', 'gold_standard_solution'],
                                    quoting=csv.QUOTE_ALL)  # Quote everything to preserve whitespace
            for row in data:
                # Escape special characters in the text fields
                escaped_row = {}
                for key, value in row.items():
                    if isinstance(value, str):
                        escaped_row[key] = self.escape_for_csv(value)
                    else:
                        escaped_row[key] = value
                writer.writerow(escaped_row)

        logger.info(f"Saved {len(data)} rows to {self.csv_file}")

    async def parse_all_pages(self, start_page: int = 1, end_page: int = 2100, batch_size: int = 10):
        """Parse all forum pages."""
        connector = aiohttp.TCPConnector(limit=20, limit_per_host=10)
        timeout = aiohttp.ClientTimeout(total=60)

        async with aiohttp.ClientSession(
            connector=connector,
            timeout=timeout,
            headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'}
        ) as session:

            for page_num in range(start_page, end_page + 1):
                try:
                    logger.info(f"Processing page {page_num} of {end_page}")

                    # Collect the topics listed on the page
                    topic_urls = await self.parse_topic_list_page(session, page_num)

                    if not topic_urls:
                        logger.info(f"No topics to process on page {page_num}")
                        continue

                    # Skip topics that were already processed
                    new_topic_urls = []
                    for url in topic_urls:
                        topic_id = self.extract_topic_id(url)
                        if topic_id and topic_id not in self.processed_urls:
                            new_topic_urls.append(url)

                    logger.info(f"New topics to process: {len(new_topic_urls)}")

                    if not new_topic_urls:
                        continue

                    # Process the topics in batches
                    for i in range(0, len(new_topic_urls), batch_size):
                        batch = new_topic_urls[i:i + batch_size]
                        logger.info(f"Processing batch {i//batch_size + 1}, topics in batch: {len(batch)}")

                        # Parse the batch of topics
                        batch_results = await self.process_topics_batch(session, batch)

                        # Save the results
                        if batch_results:
                            self.save_to_csv(batch_results)

                        # Short pause between batches
                        await asyncio.sleep(2)

                    logger.info(f"Page {page_num} processed")

                except Exception as e:
                    logger.error(f"Error while processing page {page_num}: {e}")
                    continue

async def main():
    """Entry point."""
    parser = InfostartForumParser(csv_file='forum_dataset.csv', delay=1.0)

    try:
        await parser.parse_all_pages(start_page=1, end_page=2100, batch_size=5)
        logger.info("Parsing finished")
    except KeyboardInterrupt:
        logger.info("Parsing interrupted by the user")
    except Exception as e:
        logger.error(f"Critical error: {e}")

if __name__ == "__main__":
    asyncio.run(main())
````
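The module is self-contained, so a smaller trial run only needs a narrower page range. A minimal sketch of such a run (the output file name and page range here are illustrative, not the values used to build this dataset):

```python
import asyncio
from forum_parser import InfostartForumParser

async def demo():
    # Already processed IDs are reloaded from the CSV on startup,
    # so re-running after an interruption skips finished topics.
    parser = InfostartForumParser(csv_file='forum_sample.csv', delay=1.0)
    await parser.parse_all_pages(start_page=1, end_page=3, batch_size=5)

asyncio.run(demo())
```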
scripts_parsing/think_model_parser.py
ADDED
@@ -0,0 +1,156 @@
```python
import re
from typing import List, Dict, Optional, Set
from bs4 import BeautifulSoup
from forum_parser import InfostartForumParser
import csv
import os
import logging

logger = logging.getLogger(__name__)

class ThinkModelForumParser(InfostartForumParser):
    def __init__(self, csv_file: str = 'think_model_dataset.csv', delay: float = 1.0):
        super().__init__(csv_file, delay)
        self.csv_file = csv_file
        self._init_csv()  # Override parent's CSV initialization

    def _init_csv(self):
        """Initialise the CSV file with headers for the think model."""
        if not os.path.exists(self.csv_file):
            with open(self.csv_file, 'w', newline='', encoding='utf-8') as file:
                writer = csv.writer(file, quoting=csv.QUOTE_ALL)
                writer.writerow(['source', 'in_source_id', 'prompt', 'think_process', 'solution', 'is_answer_a_link', 'has_link'])

    def extract_thread_conversation(self, soup: BeautifulSoup) -> str:
        """Extract the whole discussion thread in the think-process format."""
        conversation = []

        # Find every message in the thread
        messages = soup.find_all('div', class_='m-tree-p')

        for msg in messages:
            # Extract the message text
            message_text_div = msg.find('div', class_='forum-message-text')
            if message_text_div:
                text = self.clean_message_text(message_text_div)
                if text:
                    # Additionally strip the leading "(N)" reply markers
                    text = self.clean_solution_text(text)
                    if text:  # Make sure the text is not empty after cleaning
                        conversation.append(text)

        # Assemble the think process in the <think>{conversation}</think> format
        think_process = "<think>\n"
        think_process += "\n---\n".join(conversation)  # Separate the messages
        think_process += "\n</think>"

        return think_process

    def count_links_in_text(self, text: str) -> int:
        """Count the number of links in a text."""
        # Look for URL-like strings
        url_pattern = r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\\(\\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
        links = re.findall(url_pattern, text)
        return len(links)

    def is_answer_mostly_link(self, text: str) -> bool:
        """Check whether the answer consists mostly of a link (>80%)."""
        # Find all links
        url_pattern = r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\\(\\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
        links = re.findall(url_pattern, text)

        if not links:
            return False

        # Compare the total text length with the combined link length
        total_length = len(text.strip())
        links_length = sum(len(link) for link in links)

        # Do the links make up more than 80% of the text?
        return (links_length / total_length) > 0.8 if total_length > 0 else False

    async def parse_topic(self, session, topic_url: str) -> Optional[List[Dict]]:
        """Overridden topic parser for the think model."""
        topic_id = self.extract_topic_id(topic_url)
        if not topic_id:
            logger.debug(f"Invalid URL: {topic_url}")
            return None

        logger.info(f"Parsing topic: {topic_url}")

        html = await self.fetch_page(session, topic_url)
        if not html:
            return None

        soup = BeautifulSoup(html, 'html.parser')

        # Extract the metadata
        meta_id = self.extract_meta_identifier(soup)
        if not meta_id:
            logger.warning(f"No meta identifier found for {topic_url}")
            return None

        # Check whether this topic has been processed already
        if meta_id in self.processed_urls:
            logger.debug(f"Topic {meta_id} already processed")
            return None

        # Extract the main data
        prompt = self.extract_first_message(soup)
        if not prompt:
            logger.warning(f"No question found for {topic_url}")
            return None

        think_process = self.extract_thread_conversation(soup)
        solutions = self.extract_solutions(soup)

        # Create a record even when there are no solutions
        if solutions:
            # Merge all solutions into one
            combined_solution = "\n---\n".join(solutions)
        else:
            combined_solution = ""

        # Analyse the links
        has_link = self.count_links_in_text(combined_solution)
        is_answer_a_link = self.is_answer_mostly_link(combined_solution)

        self.processed_urls.add(meta_id)

        return [{
            'source': 'forum_infostart',
            'in_source_id': meta_id,
            'prompt': prompt,
            'think_process': think_process,
            'solution': combined_solution,
            'is_answer_a_link': is_answer_a_link,
            'has_link': has_link if has_link > 0 else 'NaN'
        }]

    def save_to_csv(self, data: List[Dict]):
        """Overridden CSV writer for the think model."""
        if not data:
            return

        with open(self.csv_file, 'a', newline='', encoding='utf-8') as file:
            writer = csv.DictWriter(file, fieldnames=['source', 'in_source_id', 'prompt', 'think_process', 'solution', 'is_answer_a_link', 'has_link'],
                                    quoting=csv.QUOTE_ALL)  # Quote everything to preserve whitespace
            for row in data:
                # Escape special characters in the text fields
                escaped_row = {}
                for key, value in row.items():
                    if isinstance(value, str):
                        escaped_row[key] = self.escape_for_csv(value)
                    else:
                        escaped_row[key] = value
                writer.writerow(escaped_row)

        logger.info(f"Saved {len(data)} rows to {self.csv_file}")

async def main():
    parser = ThinkModelForumParser()
    await parser.parse_all_pages(start_page=1, end_page=2100)

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
```
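A quick sanity check of the two link heuristics that feed `is_answer_a_link` and `has_link` (the sample answer string is made up; note that instantiation creates `think_model_dataset.csv` as a side effect of `_init_csv`):

```python
from think_model_parser import ThinkModelForumParser

parser = ThinkModelForumParser()

answer = "See https://example.com/platform/v8/faq"
print(parser.count_links_in_text(answer))    # 1
print(parser.is_answer_mostly_link(answer))  # True: the URL is >80% of the text
```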
training_data.jsonl
ADDED
@@ -0,0 +1,3 @@
```
version https://git-lfs.github.com/spec/v1
oid sha256:c8cb6f8bed2a5153e90f2dadd75fab0c57dca78346b167eb4087db959e53dbcf
size 169291589
```
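`training_data.jsonl` is tracked through Git LFS, so only this pointer is versioned in git; `git lfs pull` fetches the ~169 MB file itself. A minimal sketch for streaming it record by record instead of loading it all at once (the field names inside each record are not visible in the pointer, so the snippet just prints the keys of the first record):

```python
import json

with open('training_data.jsonl', encoding='utf-8') as f:
    for i, line in enumerate(f):
        record = json.loads(line)
        if i == 0:
            print(sorted(record))  # inspect the available fields
        # ...process each record here
```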