# Web Scraper Collection
Turn any URL into clean, AI-ready data in one click: paste a link and scrape any website into structured, AI-ready data.
Each item has four string fields: `title`, `url`, `description`, and `content`. The sample item below is a scrape of the Bittensor whitepaper.

## Example item

- **title**: Bittensor
- **url**: [https://bittensor.com/whitepaper](https://bittensor.com/whitepaper)
- **description**: Internet-scale machine learning
- **content**:

# Bittensor

## URL

[https://bittensor.com/whitepaper](https://bittensor.com/whitepaper)

## Metadata

- **Description**: Internet-scale machine learning

## Content

**Bittensor: A Peer-to-Peer Intelligence Market**

Yuma Rao

### 00/ Abstract

As with other commodities, markets could help us efficiently produce machine intelligence. We propose a market where intelligence is priced by other intelligence systems, peer-to-peer across the internet. Peers rank each other by training neural networks which learn the value of their neighbors. Scores accumulate on a digital ledger where high-ranking peers are monetarily rewarded with additional weight in the network. However, this form of peer-ranking is not resistant to collusion, which could disrupt the accuracy of the mechanism. The solution is an incentive mechanism that maximally rewards honestly selected weights, making the system resistant to collusion of up to 50 percent of the network weight. The result is a collectively run intelligence market that continually produces newly trained models and pays contributors who create information-theoretic value.

### 0.1/ Introduction

The production of machine intelligence has come to rely almost entirely on a system of benchmarking, where machine learning models are trained to perform well on narrowly defined supervised problems. While this system works well for pushing performance on these specific problems, the mechanism is weak in situations where the introduction of markets would enable it to excel. For example, intelligence is increasingly becoming untethered from specific objectives and becoming a commodity that is (1) expensively mined from data (Schwartz et al. [2019]), (2) monetarily valuable (OpenAI [2020]), (3) transferable (Devlin et al. [2019]), and (4) generally useful (Radford et al. [2019]). Measuring its production with supervised objectives does not directly reward the commodity itself and causes the field to converge toward narrow specialists (Chollet [2019]). Moreover, these objectives (often measured in uni-dimensional metrics like accuracy) do not have the resolution to reward niche or legacy systems, so whatever is not currently state of the art is lost. Ultimately, the proliferation of diverse intelligence systems is limited by the need to train large monolithic models to succeed in a winner-take-all competition. Standalone engineers cannot directly monetize their work, and the result is centralization, where a small set of large corporations controls access to the best artificial intelligence (OpenAI [2020]).

A new commodity needs a new type of market. This paper suggests a framework in which machine intelligence is measured by other intelligence systems. Models are ranked for informational production regardless of the subjective task or dataset used to train them. By changing the basis against which machine intelligence is measured, (1) the market can reward intelligence that is applicable to a much larger set of objectives, (2) legacy systems can be monetized for their unique value, and (3) smaller diverse systems can find niches within a much higher-resolution reward landscape. The solution is a network of computers that share representations continuously and asynchronously, peer-to-peer (P2P) across the internet. The constructed market uses a digital ledger to record ranks and to provide incentives to peers in a decentralized manner. The chain measures trust, making it difficult for peers to attain rewards without providing value to the majority. Researchers can directly monetize machine intelligence work, and consumers can directly purchase it.

### 01/ Model

We begin with an abstract definition of intelligence Hinton et al. [2015] in the form of a parameterized function $y = f(x)$ trained over a dataset $D = [X, Y]$ to minimize a loss $\mathcal{L} = E_{D}[Q(y, f(x))]$. Our network is composed of $n$ functions ('peers') $F = f_{0}, \dots, f_{j}, \dots, f_{n}$, each holding zero or more units of network weight ('stake') $S = [s_{i}]$ represented on a digital ledger. These functions, together with their losses and proportions of stake, represent a stake-weighted machine learning objective $\sum_{i}^{n} \mathcal{L}_{i} \cdot s_{i}$.

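To make the objective concrete, here is a minimal numpy sketch of the stake-weighted objective; the per-peer losses and stake proportions are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical per-peer losses L_i and stake proportions s_i for n = 4 peers.
losses = np.array([0.9, 1.2, 0.7, 1.0])  # L_i, e.g. a mean cross-entropy per peer
stake = np.array([0.4, 0.1, 0.3, 0.2])   # s_i, proportions summing to 1

# Stake-weighted machine learning objective: sum_i L_i * s_i.
objective = np.dot(losses, stake)
print(objective)  # 0.89
```
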
Figure 1 / Peer functions with losses $\mathcal{L}_{i}$ and unique datasets $D_{i}$.

Our goal is the distribution of stake $I$, as an incentive, to peers who have helped minimize the loss objective (Figure 1), and, importantly, in such a way that it is difficult for a small proportion of stake to collude to maximize its share of the distribution without minimizing the loss (Figure 3).

In this paper, we suggest this can be achieved through peer-ranking, where peers use the outputs of others, $F(x) = [f_{0}(x) \dots f_{n}(x)]$, as inputs to themselves, $f(F(x))$, and learn a set of weights $W = [w_{i,j}]$, where peer $i$ is responsible for setting the $i$th row through transactions on a digital ledger.

Setting weights using a Fisher-information pruning score LeCun et al. [1989]; Yu et al. [2017] in the ranking calculation $R = W^{T} \cdot S$ achieves an idealized scoring where each peer's incentive is equivalent to its pruning score: the cost in entropy toward $\sum_{i}^{n} \mathcal{L}_{i} \cdot s_{i}$ induced by removing it from the network.

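A minimal sketch of the ranking calculation $R = W^{T} \cdot S$, with an illustrative weight matrix and stake vector (not values from the paper):

```python
import numpy as np

# Row i of W holds peer i's weights over its peers; values are illustrative.
W = np.array([
    [0.0, 0.6, 0.4, 0.0],
    [0.5, 0.0, 0.3, 0.2],
    [0.3, 0.3, 0.0, 0.4],
    [0.2, 0.5, 0.3, 0.0],
])
S = np.array([0.4, 0.1, 0.3, 0.2])  # stake vector

R = W.T @ S  # R = W^T . S: peer j's rank under the stake-weighted votes
print(R)
```
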
However, this approach is not resistant to collusion, where peers vote for themselves rather than ranking others honestly, setting weights to enhance their own inflation at the expense of the network (Figure 3). This attack is trivial since the digital ledger cannot audit the parameters of each model, only the inter-model weights $W$.

Figure 3 / Disjoint cabal: peers in the right sub-network only vote for themselves.

### 02/ Incentive

We extend the naive ranking method to evade collusion with an 'incentive' function $I(W, S)$ which limits the reward attainable by peers that have not reached consensus in the network. Assuming no group of peers holds more than the majority of stake in the system, peers can only attain inflation by working to attract votes from the majority: a core assumption in many decentralized systems, such as Bitcoin. Restating our terms, the incentive mechanism requires a stake vector $S$ and a set of weights $W$ whose rows are inter-peer rankings. We also infer a trust matrix $T$ from the weights, where $t_{i,j} = 1$ if and only if there is a non-zero edge between peer $i$ and peer $j$.

We define peers who have reached 'consensus' as those with non-zero edges from more than 50 percent of stake in the network (simply, those for which the normalized value of $(T^{T} \cdot S)$ exceeds $0.5$). To keep the mechanism differentiable, we define this computation using the continuous sigmoid function. The sigmoid produces a threshold-like scaling that rewards connected peers and punishes the non-trusted. The steepness and threshold point can be modulated through a temperature $\rho$ and a shift term $\kappa$.

Figure 4 / Consensus function $c_{i} = \sigma(\rho \sum_{j}^{n} t_{j,i} s_{j} - \kappa)$ with temperature $\rho = 10$ and shift $\kappa = 0.5$. The activation takes the trust scores and produces an exponential scaling up to the inflection point where a peer is connected to the majority.

We use the consensus term to scale the original rankings, $I = R \cdot C$. As peers attain more weight in the network, their inflation increases exponentially up to the 0.5 inflection point. In Section 10 we show how this ensures that the larger of two competing sub-graphs comes to own an exponentially larger proportion of the network through inflation.

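Putting Sections 01 and 02 together, a sketch of ranks, consensus, and the consensus-scaled incentive; the weight matrix and stake are illustrative, and $\rho$, $\kappa$ follow Figure 4:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative weights and stake (row i = peer i's rankings of its peers).
W = np.array([
    [0.0, 0.6, 0.4, 0.0],
    [0.5, 0.0, 0.3, 0.2],
    [0.3, 0.3, 0.0, 0.4],
    [0.2, 0.5, 0.3, 0.0],
])
S = np.array([0.4, 0.1, 0.3, 0.2])
rho, kappa = 10.0, 0.5                # temperature and shift from Figure 4

T = (W > 0).astype(float)             # trust matrix: t_ij = 1 iff w_ij is non-zero
R = W.T @ S                           # ranks (Section 01)
C = sigmoid(rho * (T.T @ S) - kappa)  # consensus c_i (Figure 4)
I = R * C                             # incentive: ranks scaled by consensus
print(I)
```
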
### 03/ Bonds

The consensus mechanism described above protects against naive collusion by making it difficult for small groups to attain inflation. However, it does not provide an incentive for correctly selecting weights. We introduce these incentives by adapting the inflation mechanism with a speculation-based reward in the form of 'bonds' $B$, where $b_{i,j} \in B$ is the proportion of bonds owned by peer $i$ in peer $j$.

Bonds accumulate at each step similarly to token inflation, with $\Delta B = W \cdot S$. In this way, peers accumulate bonds in the peers they rank, 'bonding' themselves to those that they are connected to.

Using the bond matrix $B$, the chain redistributes the normal incentive scores, $\Delta S = B^{T} \cdot I$. As with market-based speculation on traditional equities, peers that have accumulated bonds in peers that others will later value attain increased inflation themselves. It therefore makes sense for a peer to accumulate bonds in the peers it expects to do well according to other staked peers, speculating on their future value. Finally, we adapt this mechanism slightly to ensure peers retain a fixed proportion of their personal inflation; with 50 percent, for instance, $\Delta S = 0.5 B^{T} I + 0.5 I$ becomes the mechanism step update which determines network incentives across the $n$ peers.

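A sketch of one bond-and-redistribution step. $\Delta B = W \cdot S$ is read here as each peer's row of weights scaled by its own stake; that reading, and all the values, are assumptions for illustration.

```python
import numpy as np

W = np.array([
    [0.0, 0.6, 0.4],
    [0.5, 0.0, 0.5],
    [0.3, 0.7, 0.0],
])
S = np.array([0.5, 0.3, 0.2])        # stake proportions (illustrative)
I = np.array([0.40, 0.35, 0.25])     # incentives from the current step (illustrative)

B = W * S[:, None]                   # Delta B: bonds of peer i in peer j grow with s_i * w_ij
delta_S = 0.5 * (B.T @ I) + 0.5 * I  # Delta S = 0.5 B^T I + 0.5 I
S = S + delta_S
print(S)
```

The 0.5 split mirrors the paper's example: each peer keeps half of its personal inflation regardless of its bond positions, with the other half flowing through the bond market.
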
### 04/ Reaching Consensus

The incentive function in Section 02 rewards highly trusted peers; however, it may not solve the collusion problem if the honest nodes do not reach consensus. Notably, loose, unused stake or incorrectly set weights will detract from the inflation proportion of honest peers in comparison to a colluding sub-network. The honest network, although holding more stake, may not gain enough inflation to overshadow its adversary. The dishonest sub-graph need only attain enough inflation to compete with its largest competitor, not to dominate the network entirely.

This attack is possible when the majority of token inflation is distributed toward peers which are not majority-trusted in the graph. The chain can measure this through a 'loss term' $\mathcal{L} = -R \cdot (C - 0.5)$ (Figure 5). The term is negative if the majority of inflation is distributed toward peers with more than 0.5 consensus. The chain uses this loss calculation as a peg: by increasing the number of weights the average miner sets across the network, the chain can ensure consensus.

Figure 5 / The left network has low consensus, $\mathcal{L} > 0$: the system is not resistant to a cabal with less than 50 percent of the stake. The chain increases the number of edges set by peers until $\mathcal{L} < 0$, at which point the majority of inflation flows to peers with majority consensus.

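A sketch of the loss-term calculation with illustrative ranks and consensus scores:

```python
import numpy as np

R = np.array([0.30, 0.25, 0.25, 0.20])  # ranks (illustrative)
C = np.array([0.90, 0.80, 0.20, 0.10])  # consensus scores (illustrative)

# Loss term L = -R . (C - 0.5): negative once the majority of inflation
# flows to peers with consensus above 0.5.
L = -np.dot(R, C - 0.5)
print(L)  # -0.04 here: the network has reached consensus
```
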
### 05/ Running the Network

The steps to run a peer in the network are as follows:

1. The peer defines its dataset $D_{i}$.
2. At each training iteration, the peer conditionally broadcasts batches of examples from $D_{i}$ to its peers, of shape `x = [batch_size, sequence_length, input_size]`.
3. The responses $F(x) = [\dots f_{j}(x) \dots]$, each of the common shape `[batch_size, sequence_length, output_size]`, are joined using the gating function and used as input to the local model $f_{i}$.
4. Comparison against the target labels produces a loss-gradient $\frac{\partial \mathcal{L}}{\partial F}$ which back-propagates through $f_{i}$ and out to the network.
5. During steps 2 and 3, the peers learn the weights for their row $w_{i,j} \in W$ by measuring the value of the signals produced by their peers.
6. At distinct time-steps $t$, participants submit changes to the weights, $\Delta W_{i}$, to update the ranking $R$, inflation $I$, consensus term $C$, and bond distribution $\Delta B$.
7. The chain measures 'loss' and optionally distributes newly minted stake into the network, $\Delta S$, according to the bond ownership.

### 06/ Tensor Standardization

A common encoding of inputs and outputs is required for the various model types and input types to interact. Tensor modalities can be used to partition the network into disjoint graphs. At the beginning, the network can be seeded with a single modality, TEXT, then expanded to include IMAGE, SPEECH, and TENSOR. Eventually, combinations of these modalities can be added, for instance TEXT-IMAGE, to bridge the network into the multi-modality landscape. Incentives to connect modalities can be integrated with the same trust scaling suggested in Section 02. Eventually, successful models should accept inputs from any modality and process them into a useful representation. For consistency, we can use a standard output shape across the network, `[batch_size, sequence_dim, output_dim]`, similar to the common tensor shapes produced by language and image models, and extend this size as the network increases in complexity.

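As one illustration, a receiving peer might validate an incoming tensor against the standard shape before joining it; the modality tags and the helper below are hypothetical, not part of any Bittensor API.

```python
import numpy as np

# Hypothetical modality tags used to partition the network.
MODALITIES = {"TEXT", "IMAGE", "SPEECH", "TENSOR"}

def is_standard_shape(x: np.ndarray, output_dim: int) -> bool:
    """True iff x matches the standard [batch_size, sequence_dim, output_dim]."""
    return x.ndim == 3 and x.shape[2] == output_dim

x = np.zeros((32, 128, 512))  # batch of 32, sequence of 128, width 512
assert is_standard_shape(x, output_dim=512)
```
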
Figure 6 / Standardization of input dimensions within the network.

By working on abstract input classes we can ensure participants work towards a general multi-task understanding Kaiser et al. [2017]. Participants may use (1) completely distinct computing substrates Nugent and Molter [2014], (2) distinct datasets Lample and Conneau [2019], (3) distinct models, and (4) distinct strategies for maximizing their incentives in the market. It makes sense for peers to work on unsupervised datasets, where data is cheap and privacy is not required.

### 07/ Conditional Computation

As the network grows, outward bandwidth is likely to become a major bottleneck, so a method of reducing network transfer and selecting peers is required. Conditional computation can be used, where peers learn through gradient descent how to select and prune neighbors in the network; for example, a product-key layer or a sparsely gated layer Shazeer et al. [2017].

The conditional layer determines a sparse combination of peers to query for each example and multiplicatively re-joins them, cutting outward bandwidth by querying only a small subset of peers per example. The method can drastically increase effective outward bandwidth Shazeer et al. [2017]; Ryabinin and Gusev [2020], allowing peers to communicate with many more neighbors in the graph. In essence, the layer acts as a trainable DNS lookup for peers based on inputs. Furthermore, being trainable with respect to the loss, it provides a useful proxy for the weights $w_{i,j} \in W$.

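A minimal sketch of the idea in the spirit of the sparsely gated layer of Shazeer et al. [2017]: keep the top-k gate scores, query only those peers, and re-join their outputs. The random scores stand in for a learned gating layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_k_gate(scores: np.ndarray, k: int):
    """Select the k highest-scoring peers and softmax-normalize their gates."""
    idx = np.argsort(scores)[-k:]
    gates = np.exp(scores[idx] - scores[idx].max())
    return idx, gates / gates.sum()

n_peers = 12
scores = rng.normal(size=n_peers)     # stand-in for a learned gating layer
idx, gates = top_k_gate(scores, k=3)  # query only 3 of 12 peers

# Placeholder responses f_j(x) of the standard shape, multiplicatively re-joined.
responses = [rng.normal(size=(32, 128, 512)) for _ in idx]
joined = sum(g * r for g, r in zip(gates, responses))
print(joined.shape)  # (32, 128, 512)
```
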
### 08/ Knowledge Extraction

Dependence between functions means that models must stay online and cannot be run in production. Breaking this dependence can be achieved using distillation Hinton et al. [2015]: a compression and knowledge-extraction technique in which a smaller model, the student, mimics the behavior of the remaining network. The distillation layer is employed in conjunction with a conditional computation layer (Section 07), where the student model learns to mimic the network using the cross-entropy (a KL-divergence term) between the logits produced by the gating network and the student's predicted distribution Sanh et al. [2020].

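A sketch of the distillation objective as described: the cross-entropy between the distribution produced via the gating network (the teacher) and the student's predicted distribution. The softmax and tensor shapes are assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(teacher_logits, student_logits):
    """Cross-entropy of the student's distribution against the teacher's."""
    p = softmax(teacher_logits)                      # gated network's distribution
    log_q = np.log(softmax(student_logits) + 1e-12)  # student's log-distribution
    return float(-(p * log_q).sum(axis=-1).mean())

rng = np.random.default_rng(0)
teacher = rng.normal(size=(32, 512))  # logits via the gating network (placeholder)
student = rng.normal(size=(32, 512))  # student model's logits (placeholder)
print(distill_loss(teacher, student))
```
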
Because the distilled model acts as a proxy for the network, models can be taken fully offline and evaluated. Recursion through the network is also cut between components, allowing for arbitrary network graphs. If models go offline, their peers can use the distilled versions in their place. Private data can be validated over the distilled models instead of by querying the network. Eventually, components can fully disconnect from the network, using the distilled models to do validation and inference offline.

Figure 7 / Queries propagate to depth = 1 before the distilled model is used.

### 09/ Learning Weights

Our goal in this work is the production of a ranking $r = [r_{i}]$ over peers, where the score $r_{i} \in R$ represents a participant's information-theoretic significance to the benchmark. Following LeCun and others LeCun et al. [1989]; Yu et al. [2017], it is reasonable to define this significance as the cost of removing each peer from the network. We can derive this score analytically, where $\Delta F(x)_{i}$ is the perturbation of the $j$th peer's inputs when the $i$th peer is removed from the network (Appendix 12.2).

Note that when the error function $Q_{j}$ is the twice-differentiable cross-entropy, $H(Q_{j})$ is its Fisher-information matrix, and $r_{i} \in R$ is suitably measured as each peer's informational significance to the network as a whole. However, information-theoretic weights require the full Hessian of the error. In practice it is more reasonable to use a heuristic that propagates a contribution score from the error function through to the inputs Yu et al. [2017]; for instance, the weights from the gating layer (Section 07) provide a useful differentiable proxy.

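In that spirit, a leave-one-out sketch of the pruning view: score each peer by the change in a stand-in loss when its output is removed. The loss function here is a placeholder for illustration, not the paper's analytic derivation.

```python
import numpy as np

rng = np.random.default_rng(0)

def network_loss(outputs: np.ndarray, mask: np.ndarray) -> float:
    """Stand-in loss over the masked sum of peer outputs (placeholder objective)."""
    combined = (outputs * mask[:, None]).sum(axis=0)
    return float((combined ** 2).mean())

n_peers = 5
outputs = rng.normal(size=(n_peers, 512))  # placeholder peer outputs f_j(x)
base = network_loss(outputs, np.ones(n_peers))

# r_i ~ cost of removing peer i from the network.
scores = []
for i in range(n_peers):
    mask = np.ones(n_peers)
    mask[i] = 0.0
    scores.append(network_loss(outputs, mask) - base)
print(scores)
```
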
### 10/ Collusion

We consider the scenario where a subset of the peers in the network has formed a 'cabal': a set of colluding peers attempting to maximize their inflation without accurately scoring their neighbors. The fight between the honest graph $A$ with stake $S_{A}$ and the disjoint cabal $B$ with stake $S_{B}$ is determined by the proportion of network stake held by each. The honest graph must attain more inflation to maintain its dominance and protect the network: $I_{A} \gg I_{B}$.

We assume that the proportion of stake in the honest graph exceeds that in the dishonest graph, $S_{A} \gg S_{B}$, and that the chain has reached consensus, $\mathcal{L} < 0$. Since all peers in $B$ are disjoint from $A$, the cabal's contribution to the loss term, $-R_{B} \cdot (C_{B} - 0.5) > 0$, is positive. Because $\mathcal{L} < 0$, it must be the case that $-R_{A} \cdot (C_{A} - 0.5) < 0$ is negative, and so there are peers in the honest sub-graph $A$ who are connected to the majority.

As the chain progresses, newly minted stake is emitted at the inflation rate $\tau$ in proportion to $I = R \cdot C$. Importantly, the gradient of the incentive function with respect to stake is positive and super-linear at the inflection point between the honest and dishonest graphs: notably, $\frac{\delta I}{\delta S} = \frac{5}{2}$, which ensures that the amount of stake held by each sub-graph produces a non-linear change in its inflation at the next iteration.

Initially, since $S_{A} > 0.5$ and $S_{B} < 0.5$, the proportion of stake emitted into sub-graph $A$ exceeds that emitted into sub-graph $B$, and sub-graph $A$'s incentive grows super-linearly compared to $B$'s. The result is that the ratio $\frac{S_{B}}{S_{A} + S_{B}}$ decreases: the cabal must continually add stake to its sub-graph to maintain itself through time.

We consider this proportion between the competing graphs under continuous inflation. Converting to python code ...

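The original listing is not reproduced in this scrape; what follows is a minimal stand-in simulation under stated assumptions: two disjoint sub-graphs that only trust themselves, ranks tracking stake within each self-contained graph, consensus per the Figure 4 formula, and new stake emitted at an assumed rate $\tau$ in proportion to the incentive.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rho, kappa, tau = 10.0, 0.5, 0.05  # temperature, shift, inflation rate (assumed)
S = np.array([0.7, 0.3])           # stake proportions: [honest graph A, cabal B]

for _ in range(100):
    C = sigmoid(rho * S - kappa)   # each disjoint graph is trusted only by itself
    R = S.copy()                   # within a self-contained graph, ranks track stake
    I = R * C                      # incentive per sub-graph
    S = S + tau * I / I.sum()      # emit new stake in proportion to incentive
    S = S / S.sum()                # renormalize to proportions

print(S)  # the cabal's share S_B / (S_A + S_B) shrinks over time
```
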
### 11/ Conclusion

We have proposed an intelligence market that runs on a P2P network outside of a trusted environment. Crucially, the benchmark measures performance as representational-knowledge production, using other intelligence systems to determine its value. The fact that this can be done in a collaborative and high-resolution manner suggests that the benchmark could provide a better reward mechanism for the field in general.

To achieve this aim, the paper began with the definition of a P2P network composed of abstractly defined intelligence models. We showed how this framework allowed us to produce a ranking for each peer based on the cost of pruning it from the network. Peers negotiated this score using a set of weights on a digital ledger. However, the system was incomplete without mechanisms that prevented participants from forming dishonest sub-graphs.

To resolve this, we proposed an incentive scheme based on peer connectivity which exponentially rewarded peers for being trusted by a large portion of the network. This ensured that, over time, dishonest sub-graphs decay to irrelevance.

Following this, we showed (1) how peers reduce network bandwidth by learning connectivity with a differentiable gating layer, and (2) how they can extract fully network-disconnected machine learning models to run in production. The result is an intelligence market that rewards participants for producing knowledge and making it available to new learners in the system.
Convert any public web page into clean, structured JSON in one click. Just paste a URL and this tool scrapes, cleans, and formats the content, ready to be used in any AI or content pipeline.

Whether you're building datasets for LLMs or feeding fresh content into agents, this no-code tool makes it effortless to extract high-quality data from the web.
Launch Web Scraper
Need help? Join the Masa Discord #developers
license: mit