Dataset Viewer
Auto-converted to Parquet

Columns:
  Unnamed: 0     int64    values 0 to 116k
  id             string   lengths 36 to 47
  source         string   16 distinct values
  text           string   lengths 2 to 56.5k
  chunk_num      int64    values 0 to 933
  start_token    int64    values 0 to 5.6M
  end_token      int64    values 3 to 5.6M
  total_tokens   int64    values 3 to 5.6M
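For working with these chunks outside the viewer, the sketch below shows one way to load the auto-converted Parquet split with the Hugging Face datasets library and inspect the columns listed above. The repository id "user/chunked-alignment-corpus" is a placeholder, since the actual dataset name is not shown on this page; substitute the real repo id.

```python
# Minimal sketch, assuming a hypothetical repo id; the real dataset name is not
# shown on this page. Requires: pip install datasets
from collections import Counter

from datasets import load_dataset

# Placeholder repo id -- replace with the actual dataset repository.
ds = load_dataset("user/chunked-alignment-corpus", split="train")

# Columns expected from the schema above:
# Unnamed: 0, id, source, text, chunk_num, start_token, end_token, total_tokens
print(ds.column_names)

# Preview one row without dumping the full (possibly 56.5k-character) text field.
row = ds[0]
preview = {k: (v[:80] + "...") if isinstance(v, str) and len(v) > 80 else v
           for k, v in row.items()}
print(preview)

# The source column has 16 distinct values; count chunks per source.
print(Counter(ds["source"]).most_common())
```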
Sample rows

Unnamed: 0: 42,526
id: eb3b7151-f825-4f38-aef4-d1002d5b5cb7
source: StampyAI/alignment-research-dataset/blogs
text: [a long run of base64-like encoded characters; not human-readable, omitted here]
chunk_num: 585 | start_token: 3,510,000 | end_token: 3,518,000 | total_tokens: 4,433,850
Unnamed: 0: 26,549
id: 3b597ef5-1514-4650-ae63-11a1a8697208
source: trentmkelly/LessWrong-43k
text:
Nuclear Deterrence 101 (and why the US can't even hint at intervening in Ukraine)

This is a linkpost for https://acoup.blog/2022/03/11/collections-nuclear-deterrence-101/ . I found it a very good read for explaining the strategy behind the decisions and signaling in this war. I was inspired to post it as a supplement to https://www.lesswrong.com/posts/WX7tpnBCHWrmJcDym/why-a-no-fly-zone-would-be-the-biggest-gift-to-putin-and-why , as this piece explains why it's in Zelensky's interest to continue to call for a no-fly zone that could turn into a hot war.

It also answered a question I've had since I was a child, visiting old friends of my mother from her childhood growing up on military bases, wondering what the US was doing in so many countries. The answer? Their job is quite literally to die to provoke a US response.

Some excerpts below:

> One such method that Beaufre discusses is what he calls the ‘piecemeal maneuver,’ but is often in English referred to as ‘salami tactics’ – including in this absolutely hilarious bit from Yes, Prime Minister, which is also a surprisingly good explanation of the method. The idea is that to make gains while avoiding escalation, a state can break up the gains they would make into a series of smaller actions, each with its own exterior maneuver ‘cover,’ so that it doesn’t rise to the level of triggering nuclear escalation. Putting together several such maneuvers could allow a state to make those gains which, had they all been attempted at once, certainly would have triggered such an escalation. Beaufre’s example, unsurprisingly, was Hitler’s piecemeal gains before his last ‘bite’ into Poland triggered WWII.
>
> Beaufre notes that for piecemeal maneuvers to be effective, they have to be presented as fait accompli – accomplished so quickly that anything but nuclear retaliation would arrive too late to do any good, and of course nuclear retaliation would be pointless: who is going to destroy the world to save a country that was already lost? Thus Beaufre suggests that the piecemeal maneuver is best accomplis
chunk_num: 0 | start_token: 0 | end_token: 502 | total_tokens: 502
Unnamed: 0: 16,429
id: fa1e5166-4af0-4e5e-ac38-18cefbd956fe
source: StampyAI/alignment-research-dataset/alignmentforum
text:
Three ways that "Sufficiently optimized agents appear coherent" can be false

There have been a couple of recent posts suggesting that Eliezer Yudkowsky's [Sufficiently optimized agents appear coherent](https://arbital.com/p/optimized_agent_appears_coherent/) thesis does not seem useful because it's vacuously true: one obvious way to formalize "coherent" implies that all agents can be considered coherent. In a [previous comment](https://www.lesswrong.com/posts/vphFJzK3mWA4PJKAg/coherent-behaviour-in-the-real-world-is-an-incoherent#F2YB5aJgDdK9ZGspw), I suggested that we can formalize "coherent" in a different way to dodge this criticism. I believe there's reason to think that Eliezer never intended "Sufficiently optimized agents appear coherent" to have an airtight argument and be universally true. (The Arbital post contains a number of caveats, including "If there is a particular kind of optimization pressure that seems sufficient to produce a cognitively highly advanced agent, but which also seems sure to overlook some particular form of incoherence, then this would present a loophole in the overall argument and yield a route by which an advanced agent with that particular incoherence might be produced".) In this post, I suggest that considering the ways in which it could be false can be a useful way to frame some recent ideas in AI safety. (Note that this isn't intended to be an exhaustive list.)

Distributional shift
====================

Even a very powerful optimization process cannot train or test an agent in every possible environment and for every possible scenario (by this I mean some sequence of inputs) that it might face, and some optimization processes may not care about many possible environments/scenarios. Given this, we can expect that if an agent faces a new environment/scenario that's very different from what it was optimized for, it may fail to behave coherently. (Jessica Taylor made a related point in [Modeling the capabilities of advanced AI systems as episodic reinforcement learning](https://www.greaterwrong.com/posts/5bd75cc58225bf06703751eb/modeling-the-capabilities-of-advanced-ai-systems-as-episodic-reinforcement-learning#section-6): "When the test episode is similar to training episodes (e.g. in an online learning context), we should expect trained policies to act like a rational agent maximizing its expected score in this test episode; otherwise, the policy that acts as a rational agent would get a higher expected test score than this one, and would therefore receive the highest training score.")

A caveat to this caveat is that if an agent is optimized for a broad enough range of environments/scenarios, it could become an explicit EU maximizer, and keep doing EU maximization even after facing a distributional shift. (In this case it may be highly unpredictable what the agent's utility function looks like outside the range that it was optimized for. Humans can be considered a good example of this.)

Optimize for low compute
========================

Eric Drexler [suggested](https://www.fhi.ox.ac.uk/reframing/) that one way to keep AIs safe is to optimize them to use few computing resources. If computing resources are expensive, it will often be less costly to accept incoherent behavior than to expend computing resources to reduce such incoherence. (Eliezer noted that such incoherence would only be removed "given the option of eliminating it at a reasonable computational cost".)

A caveat to this is that the true economic costs for compute will continue to fall, eventually to very low levels, so this depends on people assigning artificially high costs to computing resources (which Eric suggests that they do). However, assigning an optimization cost for compute that is equal to its economic cost would often produce a more competitive AI, and safety concerns may not be sufficient incentive for an AI designer (if they are mostly selfish) to choose otherwise (because the benefits of producing a more competitive AI are more easily [internalized](https://en.wikipedia.org/wiki/Externality) than the costs/risks). One can imagine that in a world where computing costs are very low in an economic sense, but everyone is treating compute as having high cost for the sake of safety, the first person to *not* do this would gain a huge competitive advantage.

The optimizing process wants the agent to remain incoherent
===========================================================

The optimizing process may itself be incoherent and not know how to become coherent or produce an agent that is coherent in an acceptable or safe way. A number of ideas fall into this category, including Peter Eckersley's recent [Impossibility and Uncertainty Theorems in AI Value Alignment (or why your AGI should not have a utility function)](https://arxiv.org/abs/1901.00064), which suggests that we should create AIs that handle moral uncertainty by randomly assigning a subagent (representing some moral theory) to each decision, with the argument that this is similar to how humans handle moral uncertainty. This can clearly be seen as an instance where the optimizing process (i.e., AI programmers) opts for the agent to remain incoherent because it does not know an acceptable/safe way to remove the incoherence.

A caveat here is that the agent may itself decide to become coherent anyway, and not necessarily in a way that the original optimizing process would endorse. For example, under Peter's proposal, one subagent may take an opportunity to modify the overall AI to become coherent in a way that it prefers, or multiple subagents may decide to cooperate and merge together into a more coherent agent. Another caveat is that incoherence is economically costly, especially in a competitive multi-polar scenario, and if such costs are high enough the optimizing process may be forced to create a coherent agent even if it would prefer not to (in the absence of such costs).
chunk_num: 0 | start_token: 0 | end_token: 1,288 | total_tokens: 1,288
Unnamed: 0: 46,868
id: f51c11d9-c21d-4c28-88c2-d8f10ef14bac
source: StampyAI/alignment-research-dataset/blogs
text: [a long run of base64-like encoded characters; not human-readable, omitted here]
chunk_num: 632 | start_token: 3,792,000 | end_token: 3,800,000 | total_tokens: 5,325,818
Unnamed: 0: 75,788
id: <urn:uuid:c1fbaba4-edb6-4e26-9b99-a944890bb1f5>
source: Kyle1668/dclm-dedup-25B-ai-scifi-docs
text:
was given a pc with Zorin 10 and I have no idea how to use this OS and want to convert to win02:56 cfhowlettDarkAlice, you have to ask zorin.  only ubuntu support here.02:56 DarkAliceBUT it tells me i need to have drive formatted for ntfs and they read system and logical02:56 DeathDealer./configure: line 2157: config.log: Permission denied02:56 DeathDealer./configure: line 2167: config.log: Permission denied02:57 DarkAliceOh sorry f02:57 bazhangask in the zorin channel DarkAlice02:57 OerHeksDeathDealer, what is that? care to tell us more about it?02:57 bazhangDeathDealer, give us a synopsis Here02:57 Abehello can somebody help me I have a question02:57 cfhowlett!ask | Abe02:57 DeathDealertrying to install warzone2100 3.1.202:57 DeathDealeralready./autogen.sh02:57 bazhang!info warzone210002:57 ubottuwarzone2100 (source: warzone2100): 3D real time strategy game. In component universe, is optional. Version 3.1.1-1ubuntu1 (vivid), package size 1281 kB, installed size 3811 kB02:57 DeathDealerbut when i./configure && make it says02:57 AbeI need to change Password of LVM encrypted HDD02:58 bazhangwhats wrong with the repo version DeathDealer02:58 DeathDealer./configure: line 2157: config.log: Permission denied02:58 DeathDealer./configure: line 2167: config.log: Permission denied02:58 DeathDealerwont launch02:58 bazhangso install from ubuntu repos DeathDealer02:58 DeathDealerwont install02:58 DeathDealerso i went this route02:58 bazhangwhat are the exact errors DeathDealer02:59 DeathDealerwhen i try to./configure && make i get this error02:59 DeathDealer./configure: line 2157: config.log: Permission denied02:59 DeathDealer./configure: line 2167: config.log: Permission denied02:59 DeathDealerso i posted the config.log02:59 AbeI know the Password but I need to change it02:59 bazhangDeathDealer, the install from repos error not the compile errors02:59 OerHeksAbe, start Disk Utility, select the encrypted partition. Click Change passphrase.03:00 DeathDealerok then help me tell me what to type03:00 cfhowlettDeathDealer, "ain't nobody got time to read all that!"   at least point to the line with the error message!03:00 bazhangsudo apt-get install warzone2100  DeathDealer03:00 utu8ois Google's Chromebook using Ubuntu or something?03:00 Abehow do i find out which Sda is encrypted?03:00 cfhowlettutu8o, chromebooks use chrome...03:01 bazhangutu8o, chromeOS03:01 utu8owhy would they not just use Ubuntu instead of ChromeOS?03:01 cfhowlettutu8o, ask google about that.03:01 bazhangask them utu8o03:01 utu8otrying to take marketshare from Linux and Windows using Intel CPUs or something?03:02 cfhowlett!ot | utu8o,03:02 DeathDealerbazhang sudo apt-get warzone came back with E: invalid operation.03:02 Mirodroidits google's laptop os... ask google why chromebook's dont run android instead03:02 bazhangits not on topic here utu8o03:02 bazhangDeathDealer, you forgot install03:02 utu8oyou can't even install Ubuntu on a Chromebook, Google locked it out03:02 DeathDealersudo apt-get warzone2100 came back E: Invalid operation03:02 cfhowlettutu8o, OFF TOPIC in this channel.  go to #ubuntu-offtopic03:03 bazhangutu8o, thats not on topic here please stop03:03 DeathDealeridk im about to give up03:03 bazhangDeathDealer, sudo apt-get install03:03 phionathere has been no updates to 14.04 for some time now. 
is this normal?03:03 bazhangDeathDealer, you did not include install03:03 NathanielHillOerHeks: Mouse doesn't work either, it's an Asus X205TA03:03 cfhowlettphiona, current release is 14.04.303:03 utu8oDeathDealer, you should put "install"03:04 cfhowlettphiona, open a terminal: sudo apt update && sudo apt full-upgrade03:04 DeathDealeralready did that03:04 AbeOerHeks: Do you mean Gparted???03:04 bazhangDeathDealer, you left off install  try again03:05 DeathDealeri did03:05 bazhangDeathDealer, pastebin the terminal command and the exact error for us to see03:05 OerHeksNathanielHill, uh oh, there is a long forumpost about your machine.. http://ubuntuforums.org/showthread.php?t=225432203:05 OerHeksAbe, no, disk utility, type disk in dash and the tool should show up03:06 NathanielHillOerHeks: Yes I know, and I was looking forward to installing a custom bootloader and kernel03:06 NathanielHillOerHeks: but, my keyboard doesn't even work immediately after the grub menu03:06 OerHeksNathanielHill, maybe the next ubuntu 15.10 works OOTB..03:06 NathanielHillOerHeks: I'm using the 15.10 iso03:07 NathanielHillOerHeks: stuck on the install language menu03:07 OerHeksNathanielHill, if that post (maybe start reading from the end) gives no solution, then i am out of clues :-(03:07 phionawhy does the flashplugin-installer upgrade take sooooo long?03:08 OerHeksNathanielHill, maybe use an external usb keyboard?03:08 bazhangDeathDealer, thats the compile, not the install from repos that we asked for03:08 AbeOerHeks: Can I try with sudo cryptsetup luksAddKey /dev/sda3?03:08 NathanielHillOerHeks: not available atm03:08 DeathDealerits a different release all together not just an update03:08 Abesudo cryptsetup luksRemoveKey /dev/sda303:09 OerHeksAbe, never tried the comandline with luks, maybe someone else here knows?03:09 DeathDealeri have 2.1.4 i need 3.1.2 thing is thewre is no "update" its a whole new client03:09 bazhangDeathDealer, sudo apt-get install warzone2100  in terminal  pastebin that exactly and the errors03:09 Abecuz I found this on Google: http://askubuntu.com/questions/109898/how-to-change-the-password-of-an-encrypted-lvm-system-done-with-the-alternate-i03:09 DeathDealerbazhang it doesn't work that way03:10 bazhangDeathDealer, yes it does03:10 bazhangDeathDealer, you have not yet shown us the errors when using that exact command03:10 DeathDealerno it doesnt its a new client not an update i already have 2.1.4 http://paste.ubuntu.com/12813933/03:11 OerHeksAbe, it might work, it has a green sign, that means verified.03:11 bazhangDeathDealer, what version of ubuntu are you on03:11 * OerHeks loves askubuntu03:11 DeathDealerlive disk install03:12 DeathDealerdidnt label it03:12 bazhangDeathDealer, what version03:12 DeathDealerI DONT REMEMBER03:13 inteuschill dude03:13 bazhanglose the caps DeathDealer03:13 bazhanglsb_release -a    DeathDealer03:13 cfhowlettDeathDealer, attitude won't help you here03:13 bazhang2.1.4 is ancient03:14 AbeOK it says when I type in new Password "No key available with this passphrase."03:14 bazhang!info warzone210003:14 bazhang3.1.1 is in the latest release of ubuntu03:14 DeathDealeri need cross platformability thats why i need 3.1.403:14 AbeOerHeks: what Disk utility are you talking about? 
I dont have it I use Kubuntu!03:15 DeathDealeri have a windows machine running 3.1.203:15 bazhangDeathDealer, you should not have 2.1.4 with that version of ubuntu03:15 DeathDealeridk my ubuntu software center is glitchy as hell03:15 DeathDealerhad a hard time installing as a matter of fact03:15 bazhangDeathDealer, 3.1.1 is the version you should have with that release of ubuntu03:15 OerHeksAbe, oh, you might want to reask in #kubuntu.. not sure how it is called03:16 DeathDealeri know what i compiled was 2.1.403:16 DeathDealerbut now this wont compile03:16 bazhangDeathDealer, dont use usc, install from the command line03:16 OerHeksAbe, and next time, tell us you use kubuntu03:16 DeathDealertrying to remember how to do all of this03:16 phionawhy does the flashplugin-installer upgrade take sooooo long?03:17 bazhangDeathDealer, we gave you the exact terminal command to install the latest stable of warzone03:17 AbeWell Kubuntu is almost the same03:17 slow_hello, can anyone help me setup an l2tp vpn via Ubuntu 15.04 Server VPS03:17 cfhowlettphiona, it just does.  be patient.  stop asking.03:17 slow_i'm having trouble finding an updated tutorial03:17 DeathDealerand this is what happened03:18 bazhangDeathDealer, is that compile error one again03:18 DeathDealerbut its not compatible over multiplayer with anything...03:18 slow_hello, can anyone help me setup an l2tp vpn via Ubuntu 15.04 Server VPS03:18 slow_i'm having trouble finding an updated tutorial03:18 Abesudo cryptsetup luksAddKey /dev/sda303:19 AbeEnter any existing passphrase:03:19 AbeNo key available with this passphrase.03:19 DeathDealerno this is the sudo get-apt install warzone210003:19 DeathDealerapt-get rather03:20 bazhangask every 20 minutes or so DeathDealer03:20 bazhangif someone knows they will perhaps help you DeathDealer03:20 OerHekshmm nice, a recent tutorial for warzone2100 3.1.2 on their site is infected03:23 OerHeksno, https://betaguide.wz2100.bla bla bla03:23 bazhangnice spot03:23 OerHekschrome says so03:24 bazhangeven more reason to get the repos version03:24 OerHeks4th entry: https://www.google.nl/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=ubuntu%20build%20warzone2100%203.1.203:24 OerHeksthat is the same error you get03:25 OerHeks5th entry is infected03:25 OerHekscrappy beta 3.1.2.. wait for a fix, deathdealer03:26 slow_hello, can anyone help me setup an l2tp vpn via Ubuntu 15.04 Server VPS03:27 slow_i'm having trouble finding an updated tutorial03:27 Abeok now I need to change sudo password03:28 bazhangwhats wrong with the old tutorial slow_03:29 bazhang!password | Abe03:30 ubottuAbe: Forgot your password? See https://help.ubuntu.com/community/LostPassword What's the root password? See!sudo. Don't see *** in password prompts? That's normal. Sudo doesn't ask for your password? It remembers you for several minutes. Please use strong passwords, see https://help.ubuntu.com/community/StrongPasswords03:30 slow_bazhang: how would i start the process of key generation? it shows "here is an example of "var file" and continues on with the tutorial03:32 slow_export KEY_COUNTRY="US"03:32 slow_export KEY_PROVINCE="CA"03:32 slow_export KEY_CITY="SanFrancisco"03:32 slow_export KEY_ORG="Fort-Funston"03:32 slow_export KEY_EMAIL="[email protected]"03:32 bazhangslow_, ask in #ubuntu-server03:33 brijithHey Guys, my home PC I don't have a mouse connected to it. How can I control mouse pointer using keyborad03:35 brijithHey Guys, my home PC I don't have a mouse connected to it. How can I control mouse pointer using keyborad. 
I tried the option in universal access. But mouse pointer is not showing up in the screen.03:38 bazhangpatience brijith, every 15 mins or so not every two03:41 bindibrijith: is your numlock on?03:45 brijithbindi: No03:46 bindiwell turn it on03:46 brijithbindi: still I am not seeing mouse pointer..03:47 bindidid you try pressing the numpad buttons?03:47 brijithbindi: is really require a mouse  connected to see the mouse pointer in the screen03:48 OerHeksbrijith, solution: https://help.ubuntu.com/stable/ubuntu-help/mouse-mousekeys.html03:48 bindinumlock needs to be off anyway apparently :p03:48 OerHeksworks, just tried it. ( use the arrows to navigate )03:48 brijithOerHeks: But in my screen pointer is missing03:49 OerHeksand use space to activate on/off03:49 OerHeksit will appear03:49 OerHekselse buy a mouse.03:49 brijithOerHeks: lol03:50 OerHeksuniversal access is standard, so it is your lucky day03:51 brijithOerHeks: I have two but not with me right now.. :(03:51 OerHeksnever leave the house without your mouse.03:52 brijithOerHeks: universal access is enabled but don't know u mouse pointer is not appearing.. Should I logoff and login again and see if it appears03:52 OerHekshmm that might do the trick.03:53 brijithOerHeks: let me see03:53 OerHeksbrijith, or open terminal: ctrl alt T : sudo service lightdm restart03:54 brijithOerHeks: ok03:55 brijithOerHeks: now mouse pointer has came. but not moving04:15 putroapakah disini pengguna ubuntu semua?04:17 Spiderputro this is the english channel https://wiki.ubuntu.com/IRC/ChannelList << see that list for Ubuntu for your language.04:21 === notsetkeh is now known as setkeh RNevillehello, everyone, when my computer boots I get an error, but can't read it, running ubuntu 140404:25 RNevilleis there a file I could read that would tell me the error on boot?04:26 LatrodectusRNeville: did you just install the os?04:26 RNevilleno, I installed about a month ago04:26 Latrodectusand it's been working fine until now04:26 RNevillecomputer seems to run fine, but I am gettting something that isn't "ok" when booting04:27 Latrodectusoh, well there are log files that you can read04:27 RNevillemy computer is "still" working fine, but I would like to read the error message I'm getting at boot04:27 RNevilleit might be the bios telling me I have a hardware error04:27 LatrodectusRNeville: http://askubuntu.com/questions/91286/how-to-see-log-to-find-a-boot-problem04:27 RNevillethx Latrodectus04:28 LatrodectusRNeville: have you recently changed the hardware?04:28 RNevilleno, but I have a bluetooth dongle that isn't working well, so it might be that!04:28 Latrodectusand you get the message in boot?04:29 Latrodectusis it for a wireless keyboard or mouse?04:30 BayanganIs unity 8 ready for desktop?04:32 RNevilleLatrodectus, it was the boot.log file I wanted to view04:33 LatrodectusRNeville: well glad to help04:33 RNevilleno, it is a generic bluetooth dongle I use for a wireless headset Latrodectus04:33 RNevilleLatrodectus, this maybe be the error I was seeing: Skipping profile in /etc/apparmor.d/disable: usr.bin.firefox04:35 RNevillealso Latrodectus getting this boot error: exportfs: /etc/exports [1]: Neither'subtree_check' or 'no_subtree_check' specified for export "".04:37 RNevilleRecently tried to setup NFS, so probably this was what was causing my boot error I noticed04:39 Latrodectusmakes sense, atleast it's an easy fix04:41 RNevillehopefully, Latrodectus04:41 RNevillenot keeping me from booting, for sure04:41 Latrodectusquestion is there a way to edit a lxde panel from a 
config file, if so where is said config file... (running lubuntu lts, and yes i asked at #lubuntu already)04:43 antonio_I installed virtualbox yesterday...and just installed the guest additions.  Still can't USB to work.  What do I have to do?04:47 Latrodectusantonio_: https://help.ubuntu.com/community/VirtualBox/USB04:48 Carl_MillerWhere does Ubuntu store compose key sequences?04:50 Carl_MillerBecause I need to change Compose - y from the yen sign to y-macron, and similarly for its uppercase equivalent04:50 LatrodectusCarl_Miller: have you read https://help.ubuntu.com/community/ComposeKey04:51 antonio_latrodectus: That didn't work.  No USB devices are appearing in Virtualbox04:55 Latrodectusantonio_: does everything else work in the vm?04:56 antonio_yeah..pretty sure04:56 antonio_When I plug in my device...it appears in linux..but not in the virtual xp running in Vbox04:56 Latrodectusantonio_: what kind of device is it?05:01 antonio_This is the issue I'm having https://forums.virtualbox.org/viewtopic.php?f=6&t=6892005:01 === kubuntu is now known as jack antonio_Its a brainwave mind machine...just need to access the internal storage to edit some files05:01 antonio_Its telling me "No USB Devices Connected"05:02 === jack is now known as Guest58497 Latrodectusantonio_: what is the filesys on the usb?05:02 antonio_How can I figure this out?05:02 === Guest58497 is now known as jkskdn thechaon ubuntu how can i create an openvpn?05:04 antonio_latrodectus: Its also happening with my gf's phone.  Can't access any USB devices on it.05:04 thechawho is your gf?05:04 antonio_techa: the girl that is locked up in my basement05:05 thechano I mean who is she?05:05 thechalike what's her name05:05 antonio_The woman I sleep with05:05 thechacan you tell Pam i said hi?05:06 antonio_um...sure I guess I can05:06 thechathanks man05:06 antonio_she said hi05:07 Latrodectusantonio_: i'd say use gparted to check what kind of filesystem the storage drive is05:07 thechaask her how she's been over the years05:07 antonio_"I remember him as one pump chump...ask him hows he doing"05:07 jkskdnHello folks.  I'm trying to install (dual-booting) 14.04 w/ win8, and I keep getting the same error message.  So far I've been following all the community guides, but I'm wondering if I'm trying to install the bootloader to the wrong partition.  Will it cause a problem to set the target to dev/sda1, the EFI partition?05:07 thecha"still selfish with the love...otherwise good"05:08 Latrodectusjkskdn: what's the error message?05:10 jkskdnLatrodectus, The 'grub-efi-amd64-signed' package failed to install into /target/. Without the GRUB boot loader, the installed system will not boot.05:11 jkskdnlatrodectus, so far I have sda6 partition for the linux install, and 5gb of swap, and I've been trying to mount the bootloader to sda6 as well05:12 Latrodectusjkskdn: did you verify that iso that you downloaded was intact?05:12 Fahrenhe17hey guys, i have a question, help me please. I found patch for synaptic touchpad (speed asymetrical (horizontal faster than vertical)), but dont know how to apply this for my system? Here is the patch http://patchwork.freedesktop.org/patch/12839/05:12 jkskdnlatrodectus, using the "check disc for errors" from the grub (live) boot, it said it was okay.  
I'll double-check the md5 now...05:14 LatrodectusFahrenhe17: http://ubuntuforums.org/showthread.php?t=771087 but change the path's to match your patch...05:15 LatrodectusFahrenhe17: so you would need to cd to the path of the file that you are patching, and then patch it with the file you downloaded05:17 Fahrenhe17Latrodectus: ty, i got it, but i dont know, what file i have to patch05:18 Latrodectusah, give me a minute05:18 Fahrenhe17ty very much05:19 Fahrenhe17in usr/share/X11/xorg.conf.d/ i have only.conf files, i think, i dont think, that i'm on right way05:21 LatrodectusFahrenhe17: i still haven't found the exact location but i found this: https://help.ubuntu.com/community/SynapticsTouchpad05:23 Fahrenhe17Thank u! I would read and try to fix everything! Ty again :)05:24 jkskdnlatrodectus, still working on it...05:25 slow_http://paste.pound-python.org/show/cuOsa5X1l0zzKgAWFLWN/ can someone help w/ this?05:31 jkskdnlatrodectus, md5sum is ok05:33 Latrodectusslow_: http://ubuntuforums.org/showthread.php?t=158302805:34 Latrodectusslow_: http://forums.openvpn.net/topic9208.html (newer)05:34 Latrodectusjkskdn: it halts during the install process right05:35 jkskdnyeah and then I get that error message saying the install failed05:35 Latrodectusjkskdn: can you try a different usb?05:36 jkskdnbefore I go down that route, can I ask - is there a standard way to pick a target for the bootloader?  the whole dev/sda?  The same as the rest of the linux install (sda6, in this case)?05:37 Latrodectusjkskdn: there should be; because servers...05:38 Latrodectusjkskdn: idk but i found this... http://askubuntu.com/questions/126541/how-to-manually-install-boot-loader05:39 Latrodectusjust be careful05:40 jkskdnLatrodectus, which solution were you proposing I follow?  I have seen the advice elsewhere to ry creating a small partition at the end of sda, but I didn't know if that should be merged with the efi05:45 jkskdnLatrodectus, I guess not  since I can't change the size of the efi partition, sda105:46 Latrodectusjkskdn; info on efi partition https://en.wikipedia.org/wiki/EFI_System_partition (if you are into lite reading)05:49 Latrodectusjkskdn: are you replaceing an exsisting windows os?05:50 Latrodectus(did you disable secure boot and "quick start" (idk the name))05:51 jkskdnyes, definitely05:51 jkskdnlatrodectus, not replacing, trying to dual boot05:53 Latrodectuswell then idk05:54 jkskdnlatrodectus, do you think this sounds correct?  "In Linux, a single partition can be both a boot and a system partition if both /boot/ and root directory are in the same partition."05:58 Latrodectusjkskdn: that sounds legit05:59 itaiIm trying to install something from the software center, and there is something that says Applying changes and the green loading bar doesn't seem to move. Is this normal?05:59 Latrodectusitai: how is your internet connection?05:59 jkskdnI think that does answer something important then.  
I shouldn't need to send the bootloader to a different partition then...05:59 Latrodectusjkskdn: that would solve your problem, try it out06:00 itaiLatrodectus: i think its alright, not the best06:00 f0xtr0t-qwerty-khi everyone06:01 jkskdnI'll try, but that's precisely what I had been doing before I began looking for other options :p06:01 Latrodectusitai: what else are you doing on your pc while you are installing the software?06:02 jkskdnlatrodectus - actually, I think intead I've figured out WHY it's failing, the way I've been trying...06:02 Latrodectusthe more you know...06:03 jkskdnLatrodectus, it's because windows insists that there be a boot and a system partition, and so maybe I do have to cram the linux boot onto the part where the ms boot is already...06:03 Latrodectusjkskdn: that is what you normally have to do, and then you have to rebuild the windows bootloader06:04 jkskdnso I'm back to where I was in terms of trying to figure out if I aim my boot,oader for sda1 if it'll create massive problems.06:04 itaiLatrodectus: I'm not doing anything else. What is the Applying changes mean06:04 Latrodectusitai: unpacking and cleaning up06:04 slow_Latrodectus: i'm still having the same problem, cant find a tutorial that helps06:05 itaiLatrodectus: Does it usually takee a long time?06:05 Latrodectusitai: depending on the program and hardware it can take seconds to hours (hours is rare)06:07 Latrodectusslow_: what exactly are you trying to do?06:08 slow_Latrodectus: setup a VPN to connect to on my OVH vps on any choice of OS, i was trying w/ ubuntu and couldn't connect06:08 Latrodectusslow_: and you setup the group for the vpn right?06:10 slow_Latrodectus: when i try and setup via addgroup nogroup i get group already created06:11 AbeIs there a way to get a Pc controller working in Wine?06:11 Abeand does it only work with xbox360 controller06:13 Latrodectusslow_: did you bring down the computer's ethernet/wifi and then restart it?06:14 === kubuntu is now known as jkskdn jkskdnLatrodectus, I see you're busy but I'd like your take on a page I just found when you have a second06:14 jkskdnLatrodectus, http://askubuntu.com/questions/219514/where-to-install-bootloader-when-installing-ubuntu-as-secondary-os06:15 jkskdn--- and check out these lines : "06:15 jkskdnHere's an example that could help you out:06:15 jkskdnInstallation type06:15 jkskdnUnder "Device for boot loader installation":06:15 Latrodectusjkskdn: use a service like pastebin to paste more then 3 lines...06:16 Latrodectusjkskdn: you should read this https://help.ubuntu.com/community/WindowsDualBoot06:17 f0xtr0t-qwerty-khi everyone i was hoping you could help me with something silly06:18 LatrodectusAbe: https://help.ubuntu.com/community/Xbox360Controller06:18 f0xtr0t-qwerty-ki have an ubuntu desktop which had some issues with lightdm06:18 f0xtr0t-qwerty-kwhich i thought was an opporutiny to move to gnome. 
So now i moved to gnome like a month ago but i just noticed that i can't change my wallpaper06:19 f0xtr0t-qwerty-kany ideas?06:19 LatrodectusAbe: http://wiredrevolution.com/ubuntu/setup-the-ps3-bluetooth-controller-on-ubuntu (ps3 controller)06:19 f0xtr0t-qwerty-kin the settings the image i set to be my wallpaper shows up in a small preview but my actual wallpaper doesn't change06:20 Latrodectusf0xtr0t-qwerty-k: try killing nautilus06:21 jkskdnAlright, going to try something else, thanks for the help06:22 Latrodectusjkskdn: np06:23 Latrodectusf0xtr0t-qwerty-k: also this might help http://askubuntu.com/questions/84130/how-do-i-theme-the-nautilus-background-image06:27 Latrodectus^actually disregard that; i'm tired...06:28 Latrodectusi know right...06:28 f0xtr0t-qwerty-kit's alright Latrodectus06:29 f0xtr0t-qwerty-ki understand06:29 Abewell I have an normal USB controller its actually Thrustmaster xD and it's working with Ps2 emulator fine just "Wine don't want to use it ingame06:30 AbeBut I see the Controller in "wine control"06:31 === CHris is now known as Guest52477 Abehttp://www0.xup.in/exec/ximg.php?fid=13597933 But not working ingame06:34 DirksonHey all. Not an ubuntu person, asking for a friend - He's running 14.04 and needs libprotobuf-c1, but can only find -c0. Any ideas for the easiest way to fix that?06:37 emcan someone tell me tl;dr, I have LLDB, I've checked the official site, and just can't get it. So I type in the terminal lldb, then I'm guessing I'm in "lldb" mode because anything I type is within the lldb environment. So if I have a compiled C code called test.out, how do I debug it?06:51 === mission712_ is now known as m712 Guest72840Hi, how can I try an icon pack on LiveUSB? I can install normal way because the ISO (I mean the LiveUSB) is read only06:57 wileeeGuest72840, This from the ubuntu repos and you want kit there on a reboot?07:01 Guest72840wileee: No it's from a website and I'd like to try without reboot.07:02 wileeeGuest72840, should be something the software center can run, or you can unpack and use.07:03 emanybody familiar with the LLDB debugger?07:03 schultzaHaving problems with my ubuntu wireless and intel 5100 card07:03 schultzawill not stay connected07:03 wileeeGuest72840, While running you can add to the OS
chunk_num: 1 | start_token: 6,000 | end_token: 14,000 | total_tokens: 55,720
Unnamed: 0: 11,145
id: 8e18506b-4393-48c5-9cd0-b07fdedbcb63
source: StampyAI/alignment-research-dataset/arxiv
text:
. | 511 | 5184 | 6884 | 27.75† | 88.57 | 10† | 6884 | 83.5† | 6884 | 86.5† | 3628.85 | 169.68ׇ | 95.75† | †The coverages are averaged over c2670, c5315, c6288, and c7552. ‡The reduction is averaged over all except MIPS. Table 2. Comparison of trigger coverage (Cov. (%)) and test length of DETERRENT with random simulations, Synopsys TestMAX (TestMAX), TARMAC (TARMAC\_TCAD), and TGRL (pan2021automated). Evaluation is done on 100 random four-width triggered HT-infected netlists. We implemented our RL agent using PyTorch1.6 and trained it using a Linux machine with Intel 2.4 GHz CPUs and an NVIDIA Tesla K80 GPU. We used the SAT solver provided in the pycosat library. We implemented the parallelized version of TARMAC in Python 3.6. We used Synopsys VCS for logic simulations and for evaluating test patterns on HT-infected netlists. Similar to prior works (TARMAC and TGRL), for sequential circuits, we assume full scan access. To enable a fair comparison, we implemented and evaluated all the techniques on the same benchmarks as TARMAC and TGRL, which were provided to us by the authors of TGRL. They also provided us with the TGRL test patterns. We also performed experiments on the MIPS processor from OpenCores (OpenCores\_MIPS) to demonstrate scalability. For MIPS, we use vectorized environment with 16 parallel processes to speed up the training. For evaluation, we randomly inserted 100 HTs in each benchmark and verified them to be valid using a Boolean satisfiability check. ### 4.2. Trigger Coverage Performance In this section, we compare the trigger coverage provided by different techniques (Table [2](#S4.T2 "Table 2 ‣ 4.1. Experimental Setup ‣ 4. Experimental Evaluation ‣ DETERRENT: Detecting Trojans using Reinforcement Learning")). In addition to TARMAC and TGRL, we also compare the performance of DETERRENT with random test patterns and patterns generated from an industry-standard tool, Synopsys TestMAX (TestMAX). We used the number of patterns from TGRL as a reference for the random test patterns and TARMAC to enable a fair comparison. For TestMAX, the number of patterns is determined by the tool in the default setting (run\_atpg). Note that for s13207, s15850, and s35932, the netlists corresponding to the test patterns provided by the authors of TGRL were not available to us at the time of writing the manuscript. Hence, we could only evaluate the TGRL test patterns for those circuits on our benchmarks. Due to this, the trigger coverage of TGRL for these benchmarks is low. Additionally, TGRL does not evaluate on the MIPS benchmark. Hence the corresponding cells in the table are empty. To enable a fair comparison, we have not included s13207, s15850, and s35932 in the average test length, as well as MIPS in the average trigger coverages for all techniques in Table [2](#S4.T2 "Table 2 ‣ 4.1. Experimental Setup ‣ 4. Experimental Evaluation ‣ DETERRENT: Detecting Trojans using Reinforcement Learning"). The results demonstrate that DETERRENT achieves better trigger coverage than all other techniques while reducing the number of test patterns. On average, DETERRENT improves the coverage over random patterns (68%), TestMAX (85.75%), TARMAC (12.25%), and TGRL (9.25%), and achieves two orders of magnitude reduction in the number of test patterns over TARMAC and TGRL (169×). ### 4.3. Impact of Trigger Width Trigger width, i.e., the number of rare nets that constitute the trigger, directly affects the stealth of the HT. 
As the trigger width increases, the difficulty of activating the trigger increases exponentially. For example, for a rareness threshold of 0.1, if the trigger width is 4, the probability of activating the trigger through random simulation is 10^−4; if the trigger width is 12, the probability drops to 10^−12. Thus, it is necessary to maintain performance as the trigger width increases. Figure [5](#S4.F5 "Figure 5 ‣ 4.3. Impact of Trigger Width ‣ 4. Experimental Evaluation ‣ DETERRENT: Detecting Trojans using Reinforcement Learning") illustrates the results for c6288; we chose this benchmark because TGRL provides good trigger coverage on it. With increasing trigger width, the performance of TGRL drops drastically, whereas DETERRENT maintains steady trigger coverage, demonstrating that it can activate extremely rare trigger conditions.
![Impact of trigger width on the trigger coverage of TGRL](https://media.arxiv-vanity.com/render-output/7104688/x5.png)
Figure 5. Impact of trigger width on the trigger coverage of TGRL (pan2021automated) and DETERRENT for c6288.
### 4.4. Trigger Coverage vs. Number of Patterns
We now investigate the marginal impact of test patterns on trigger coverage. To do so, we analyze the increase in trigger coverage provided by each test pattern for DETERRENT and TGRL. Figure [6](#S4.F6 "Figure 6 ‣ 4.4. Trigger Coverage vs. Number of Patterns ‣ 4. Experimental Evaluation ‣ DETERRENT: Detecting Trojans using Reinforcement Learning") demonstrates that DETERRENT reaches its maximum trigger coverage with far fewer patterns than TGRL.
![Trigger coverage vs. test patterns comparison.](https://media.arxiv-vanity.com/render-output/7104688/x6.png)
Figure 6. Trigger coverage vs. test patterns comparison.
### 4.5. Impact of Rareness Threshold
The rareness threshold is the probability below which nets are classified as rare, i.e., the logic values of these nets are strongly biased towards 0 or 1. For a given trigger width (α), as the rareness threshold increases, the number of rare nets increases (say, by a factor of β), and so the number of combinations possible for constructing the trigger increases by a factor of β^α, making the trigger much more difficult to activate. Figure [7](#S4.F7 "Figure 7 ‣ 4.5. Impact of Rareness Threshold ‣ 4. Experimental Evaluation ‣ DETERRENT: Detecting Trojans using Reinforcement Learning") shows that the number of rare nets increases with increasing threshold (leading to up to 64× more potential trigger combinations), but DETERRENT still achieves similar trigger coverage (≤2% drop) with fewer than 2500 patterns. (The authors of TGRL did not provide us the test patterns for thresholds other than 0.1; hence, we do not compare with TGRL for other threshold values.) In another experiment, we trained the agent using rare nets for a threshold of 0.14 and evaluated the generated test patterns on rare nets with a threshold of 0.1; the trigger coverage is 99%. This hints that we can train the agent on a large set of rare nets and use it to generate patterns for a subset of rare nets.
![Impact of rareness threshold on the number of rare nets and the trigger coverage of DETERRENT](https://media.arxiv-vanity.com/render-output/7104688/x7.png)
Figure 7. Impact of rareness threshold on the number of rare nets and the trigger coverage of DETERRENT for c6288.
5. Discussion and Future Work
------------------------------
Comparison with TGRL (pan2021automated). Our RL agent architecture is entirely different from that of TGRL.
TGRL maximizes a heuristic based on the rareness and testability of nets. In contrast, we identify the problem of trigger activation to be a set-cover problem and find maximal sets of compatible rare nets. Moreover, TGRL states and actions are test patterns generated by flipping bits probabilistically, whereas our agent’s efforts are more directed by generating maximal sets of compatible rare nets. Due to our formulation, we achieve better coverage but with orders of magnitude fewer test patterns than TGRL (see Section [4](#S4 "4. Experimental Evaluation ‣ DETERRENT: Detecting Trojans using Reinforcement Learning")). Feasibility of using a SAT solver. We use a SAT solver for the compatibility check during training and for generating test patterns from the maximal sets of compatible rare nets provided by the RL agent. Nevertheless, our technique is scalable for larger designs (as evidenced by our results) because: (i) During training, we reduce the runtime of using the SAT solver as we generate a dictionary containing the compatibility information offline in a parallelized manner. (ii) When generating the test patterns, we only require invoking the SAT solver T times, where T is the required number of test patterns. Hence, even for large benchmarks like MIPS, we can generate test patterns that outperform all the HT detection techniques in less than 12 hours. Meta-learning. We generated test patterns for individual benchmarks using separate agents. Since the training time of our agents for all benchmarks is less than 12 hours, it is practical to use our technique. As part of future work, we would like to explore the principles of designing a standalone agent that can be trained on a corpus of benchmarks once and be used to generate test patterns for unseen benchmarks. To that end, we plan to extend the current framework by using principles from meta-learning. 6. Conclusion -------------- Prior works on trigger activation for HT detection have shown reasonable trigger coverage, but they are ineffective, not scalable, or require a large number of test patterns. To address these limitations, we develop an RL agent to guide the search for optimal test patterns. However, in order to design the agent, we face several challenges like inefficiency and lack of scalability. We overcome these challenges using different features like masking and boosting exploration of the agent. As a result, the final architecture generates a compact set of test patterns for designs of all sizes, including the MIPS processor. Experimental results demonstrate that our agent reduces the number of test patterns by 169× on average while improving trigger coverage. Further evaluations show that our agent is robust against increasing complexity. Our agent maintains steady trigger coverage for different trigger widths, whereas the state-of-the-art technique’s performance drops drastically. Our agent also maintains performance against the increasing number of possible trigger combinations. Although this work demonstrates the power of RL for trigger activation, the challenges related to scalability and efficiency are not specific to the current problem. The ways in which we overcame the challenges can be used to develop better defenses for other hardware security problems. Acknowledgments --------------- The work was partially supported by the National Science Foundation (NSF CNS–1822848 and NSF DGE–2039610). 
Portions of this work were conducted with the advanced computing resources provided by Texas A&M High Performance Research Computing.
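The arithmetic in Sections 4.3 and 4.5 above, and the compatibility check that the setup and discussion delegate to the pycosat SAT solver, can be illustrated with a small sketch. This is not the authors' code: the clause set, variable meanings, and parameter values below are invented for illustration, and only the pycosat.solve call mirrors the library actually named in the excerpt.

```python
# Illustrative arithmetic for the trigger-width and rareness-threshold discussion
# above, plus a toy compatibility check in the spirit of the paper's SAT usage.
# Everything except the pycosat.solve call is an invented example.
from math import comb

import pycosat  # SAT solver library named in the experimental setup


def activation_probability(rareness: float, width: int) -> float:
    """Chance that `width` independent rare nets hit their rare values at once."""
    return rareness ** width


def candidate_triggers(num_rare_nets: int, width: int) -> int:
    """Number of possible `width`-wide trigger combinations among the rare nets."""
    return comb(num_rare_nets, width)


if __name__ == "__main__":
    # Random simulation becomes hopeless as the trigger width grows (Section 4.3).
    print(activation_probability(0.1, 4))   # ~1e-04
    print(activation_probability(0.1, 12))  # ~1e-12

    # Raising the rareness threshold multiplies the rare-net count (Section 4.5);
    # the number of width-w combinations then grows roughly by beta**w
    # (here beta = 2 and w = 4, so about a 16x blow-up).
    print(candidate_triggers(200, 4) / candidate_triggers(100, 4))

    # Toy compatibility check: variables 1..3 stand for three rare nets taking
    # their rare values; the invented clause says nets 1 and 2 cannot both be rare.
    constraints = [[-1, -2]]
    force_all_rare = [[1], [2], [3]]
    print(pycosat.solve(constraints + force_all_rare))  # prints "UNSAT"
```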
1
6,000
8,495
8,495
79,779
<urn:uuid:181a2e04-8e6b-40c8-b2bf-993df6175e46>
Kyle1668/dclm-dedup-25B-ai-scifi-docs
Tinkering with Google's MapReduce (Current Research) We are seeing the advent of parallel processing in all kinds of computational models. The trend can be seen not only in the progress of cluster computing architectures, but also in the wide adoption of multi-core processors, to the extent that we cannot imagine large-scale data processing without them. One architecture that has received critical acclaim over the last 3 years is Google’s MapReduce. It is a model derived from functional programming for handling computation over terabytes of data. As its name implies, the model works by dividing computation across a number of Map and Reduce processes. For example, when computing the number of back-links to web-pages, each Map function takes a set of web-pages (the input data) and emits key-value pairs like <web-link, [no. of occurrences]>. Using a hash function, all instances of a unique key end up at a specific Reduce process. The job of these Reduce functions is to take the input key-value pairs and reduce them so that every unique key eventually has a single value. Hence each reduced pair tells the number of times a web-link appeared across all the web-pages. Needless to say, this architecture is used for a number of other applications. Using MapReduce, an application programmer (at Google) needs to concentrate only on the code that dictates the processing the data must go through, rather than on the complex parallel-processing machinery that MapReduce already provides. The idea is not only robust but also novel, yet our research team still feels there is great room for improvement. So the intrepid Momina Azam and I are currently working under the wing of Dr. Umar Saif to improve this architecture. We are aiming for a publication soon, so watch this space to learn more about our improvements over the MapReduce architecture and to read the publication itself :)
Research Questions We Are Answering
1. Changing how the Master works. Improving it will greatly enhance the performance of the whole architecture, since it plays a pivotal role in orchestrating the computation.
2. More efficient work distribution across network nodes. Currently the MapReduce architecture binds a key strongly to a certain node for the Reduce phase. Does this burden some nodes heavily? How can this limitation be relaxed?
3. Getting results in stages rather than at the end of the computation. It makes little sense to obtain results for large keys and small ones at the same time. How can the computation be scheduled so that meaningful results for small keys are obtained much earlier in the computation?
4. Getting meaningful partial results. The original MapReduce architecture was bound to complete the reduce of every key before a computation could successfully end. Would it make a difference to know whether a page had greater than 2 million or exactly 7.34 million back-links? How can a user obtain approximate, but still meaningful, answers?
If you're trying to trace how Hadoop works, you might find our Hadoop call-trace doc helpful. If you do, drop a thank-you note to Momina :) Watch this space! Our research team will soon be releasing its implementation of plain-vanilla MapReduce in Python. Related Literature MapReduce Links General >> Google Code Uni.
Distributed Systems | Google Lectures on MapReduce | MapReduce Wikipedia Blogs >> Carnage4Life | Geeking with Greg | Implementations >> Hadoop | Skynet | Cat Programming Language | Qt Concurrent | Andrew McNabb's Mrs Hadoop Links General >> Apache Hadoop | Hadoop Wiki | Hadoop Summit | HDFS | Yahoo Dev. Net. Hadoop | HBase | Hadoop Wikipedia Help + Articles >> Hadoop Docs | Hadoop API | Hadoop on Amazon EC2 and S3 | HDFS with Python | Hadoop Wiki Amazon EC2 | Berkeley CS16x Project | UCSD CSE 124 Project | IBM Hadoop Tools for Eclipse Blogs >> Doug Cutting's Blog | Code Codex | Tom White's Blog | Jeremy Zawodny's Blog Yahoo Other Links People >> Jeffrey Dean | Sanjay Ghemawat | Christopher Olsten | Joseph M. Hellerstein | Mehul A. Shah | Doug Cutting
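Since the post promises a plain-vanilla Python MapReduce, here is a minimal single-process sketch of the back-link counting example it describes. This is an illustrative toy under invented inputs (the sample pages and the helper names map_page, shuffle, and reduce_counts are not from the post), not the group's forthcoming implementation.

```python
# Toy, single-process MapReduce for the back-link counting example in the post.
# No workers, no fault tolerance, no distributed file system: just the model.
from collections import defaultdict
from typing import Iterable, Iterator


def map_page(page_url: str, outgoing_links: Iterable[str]) -> Iterator[tuple[str, int]]:
    """Map: for each page, emit a <web-link, 1> pair for every link it contains."""
    for link in outgoing_links:
        yield (link, 1)


def shuffle(pairs: Iterable[tuple[str, int]]) -> dict[str, list[int]]:
    """Group values by key; in the real system a hash of the key picks the Reduce node."""
    groups: dict[str, list[int]] = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups


def reduce_counts(key: str, values: list[int]) -> tuple[str, int]:
    """Reduce: collapse all values for a key into a single back-link count."""
    return (key, sum(values))


if __name__ == "__main__":
    corpus = {  # invented sample input: page -> links it contains
        "a.example": ["b.example", "c.example"],
        "b.example": ["c.example"],
        "c.example": ["a.example", "b.example", "b.example"],
    }
    mapped = (pair for page, links in corpus.items() for pair in map_page(page, links))
    grouped = shuffle(mapped)
    results = dict(reduce_counts(key, values) for key, values in grouped.items())
    print(results)  # {'b.example': 3, 'c.example': 2, 'a.example': 1}
```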
0
0
929
929
107,951
<urn:uuid:31bf8ef8-695c-469a-9458-974607a1d43b>
Kyle1668/dclm-dedup-25B-ai-scifi-docs
Topic: SkyNet. SkyNet started with a single Arduino. Fortunately, the authors used the String class, and SkyNet was shortly found upside down, burning, in a ditch. As the Arduino is a microcontroller and not a supercomputer, maybe it needs to aim a bit lower and be called Fishnet or Hairnet instead of SkyNet? XD
0
0
147
147
55,423
79559e0f-88c1-4924-a478-f7e3e1b4fc47
trentmkelly/LessWrong-43k
Troubles With CEV Part1 - CEV Sequence The CEV Sequence Summary: The CEV sequence consists of three posts tackling important aspects of CEV. It covers conceptual, practical and computational problems of CEV's current form. On What Selves Are draws on analytic philosophy methods in order to clarify the concept of Self, which is necessary in order to understand whose volition is going to be extrapolated by a machine that implements the CEV procedure. Troubles with CEV part1 and Troubles with CEV part2 on the other hand describe several issues that will be faced by the CEV project if it is actually going to be implemented. Those issues are not of conceptual nature. Many of the objections shown come from scattered discussions found on the web. Finally, some alternatives to CEV are considered.   Troubles with CEV Summary: Starting with a summary of CEV, we proceed to show several objections to CEV. First, specific objections to the use of Coherence, Extrapolation, and Volition. Here Part1 ends. Then, in Part2, we continue with objections related to the end product of performing a CEV, and finally, problems relating to the implementation of CEV. We then go on with a praise of CEV, pointing out particular strengths of the idea. We end by showing six alternatives to CEV that have been proposed, and considering their vices and virtues. Meta: I think Troubles With CEV Part1 and Part2 should be posted to Main. So on the comment section of Part2, I put a place to vote for or against this upgrade.   Troubles with CEV Part1   Summary of CEV To begin with, let us remember the most important slices of Coherent Extrapolated Volition (CEV). > “Friendly AI requires: > > 1.  Solving the technical problems required to maintain a well-specified abstract invariant in a self-modifying goal system. (Interestingly, this problem is relatively straightforward from a theoretical standpoint.) > > 2.  Choosing something nice to do with the AI. This is about midway in theoretical hairiness between problems 1 and 3. > > 3. 
0
0
480
480
51,520
00a3a504-52a0-400f-b273-e863781bb55a
trentmkelly/LessWrong-43k
[SEQ RERUN] Every Cause Wants To Be A Cult Today's post, Every Cause Wants To Be A Cult was originally published on 12 December 2007. A summary (taken from the LW wiki):   > Simply having a good idea at the center of a group of people is not enough to prevent that group from becoming a cult. As long as the idea's adherents are human, they will be vulnerable to the flaws in reasoning that cause cults. Simply basing a group around the idea of being rational is not enough. You have to actually put in the work to oppose the slide into cultishness. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was The Robbers Cave Experiment, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
0
0
301
301
17,380
a7510b94-79bf-4261-990f-4df9b26b116e
StampyAI/alignment-research-dataset/arxiv
… $s\in\mathbb{N}[K_0,K_1\ldots K_{n-1}]$ s.t. $\forall K\in\mathbb{N}^n:\log(K_{n-1}+4)\,r(K)\leq r(\alpha_s(K))$. In particular, $r$ is steadily growing. Consider any $\gamma\in\Gamma_r$ and define $\gamma'\colon\mathbb{N}\rightarrow\mathbb{N}$ by

$$\gamma'(K):=\lfloor\log(K_{n-1}+2)\rfloor\,\gamma(K)$$

Then, $\gamma'\in\Gamma_r$.

###### Proof.

Choose $p\in\mathbb{N}[K_0,K_1\ldots K_{n-1}]$ s.t. $p(K)\geq K_{n-1}$ and $r(\alpha_p(K))\geq\gamma(K)$. We get

$$\gamma'(K)\leq\lfloor\log(K_{n-1}+2)\rfloor\,r(\alpha_p(K))$$

$$\gamma'(K)\leq\lfloor\log(p(K)+4)\rfloor\,r(\alpha_p(K))$$

$$\gamma'(K)\leq r(\alpha_s(\alpha_p(K)))$$

∎

###### Proposition 5.5.

Consider $(\mathcal{D},f)$ a distributional estimation problem, $\sigma$ an $\mathcal{F}(\Gamma)$-sampler of $(\mathcal{D},f)$, $I$ a set and $\{h_\alpha^K\colon\{0,1\}^*\xrightarrow{\text{mk}}\mathbb{R}\}_{\alpha\in I,K\in\mathbb{N}^n}$ uniformly bounded. Then

$$\operatorname{E}_{\operatorname{U}_\sigma^K}[\operatorname{E}[(h_\alpha^K\circ\sigma_0^K-\sigma_1^K)^2]]\overset{\alpha}{\equiv}\operatorname{E}_{\mathcal{D}^K}[\operatorname{E}[(h_\alpha^K-f)^2]]+\operatorname{E}_{\operatorname{U}_\sigma^K}[(f\circ\sigma_0^K-\sigma_1^K)^2]\pmod{\mathcal{F}}\tag{5.14}$$

###### Proof.

Denote $h_{\sigma\alpha}^K:=h_\alpha^K\circ\sigma_0^K$ and $f_\sigma^K:=f\circ\sigma_0^K$. Proposition [3.10](#S3.Thmproposition10) implies

$$\operatorname{E}_{\operatorname{U}_\sigma^K}[(\operatorname{E}[h_{\sigma\alpha}^K]-f_\sigma^K)f_\sigma^K]\overset{\alpha}{\equiv}\operatorname{E}_{\mathcal{D}^K}[(\operatorname{E}[h_\alpha^K]-f)f]\pmod{\mathcal{F}}$$

Applying Proposition [3.11](#S3.Thmproposition11) to the right hand side

$$\operatorname{E}_{\operatorname{U}_\sigma^K}[(\operatorname{E}[h_{\sigma\alpha}^K]-f_\sigma^K)f_\sigma^K]\overset{\alpha}{\equiv}\operatorname{E}_{\operatorname{U}_\sigma^K}[(\operatorname{E}[h_{\sigma\alpha}^K]-f_\sigma^K)\sigma_1^K]\pmod{\mathcal{F}}$$

$$\operatorname{E}_{\operatorname{U}_\sigma^K}[(\operatorname{E}[h_{\sigma\alpha}^K]-f_\sigma^K)(f_\sigma^K-\sigma_1^K)]\overset{\alpha}{\equiv}0\pmod{\mathcal{F}}\tag{5.15}$$

On the other hand

$$\operatorname{E}_{\operatorname{U}_\sigma^K}[\operatorname{E}[(h_{\sigma\alpha}^K-\sigma_1^K)^2]]=\operatorname{E}_{\operatorname{U}_\sigma^K}[\operatorname{E}[(h_{\sigma\alpha}^K-f_\sigma^K+f_\sigma^K-\sigma_1^K)^2]]$$

$$\operatorname{E}_{\operatorname{U}_\sigma^K}[\operatorname{E}[(h_{\sigma\alpha}^K-\sigma_1^K)^2]]=\operatorname{E}_{\operatorname{U}_\sigma^K}[\operatorname{E}[(h_{\sigma\alpha}^K-f_\sigma^K)^2]]+2\operatorname{E}_{\operatorname{U}_\sigma^K}[(\operatorname{E}[h_{\sigma\alpha}^K]-f_\sigma^K)(f_\sigma^K-\sigma_1^K)]+\operatorname{E}_{\operatorname{U}_\sigma^K}[\operatorname{E}[(f_\sigma^K-\sigma_1^K)^2]]$$

Applying Proposition [3.10](#S3.Thmproposition10) to the first term on the right hand side and [5.15](#S5.E15) to the second term on the right hand side, we get [5.14](#S5.E14). ∎

###### Proof of Theorem [5.2](#S5.Thmtheorem2).

Fix $M\geq\sup\lvert f\rvert$ and construct $D\colon\{0,1\}^*\xrightarrow{\text{alg}}\mathbb{Q}$ s.t.

$$D(x)=\begin{cases}\max(\min(t,M),-M)&\text{if }x=\operatorname{c}_{\mathbb{Q}}(t)\\0&\text{if }x\notin\operatorname{Im}\operatorname{c}_{\mathbb{Q}}\end{cases}$$

Denote $l(K):=\lfloor\log(K_{n-1}+2)\rfloor$ and $s(K):=2\lceil M^2\rceil l(K)^2$. Construct $R\colon\{0,1\}^*\xrightarrow{\Gamma}\mathbb{Q}$ s.t. for any $K\in\mathbb{N}^n$ …
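As a reading aid, and assuming the reconstruction above, the decomposition that yields (5.14) can be restated in one display; the braced cross term is exactly the quantity shown to vanish modulo $\mathcal{F}$ in (5.15), and Proposition 3.10 converts the first term on the right into the $\mathcal{D}^K$-expectation appearing in (5.14):

```latex
% Expand h_{\sigma\alpha}^K - \sigma_1^K = (h_{\sigma\alpha}^K - f_\sigma^K) + (f_\sigma^K - \sigma_1^K)
% and take expectations; the cross term vanishes by (5.15).
\operatorname{E}_{\operatorname{U}_\sigma^K}\!\left[\operatorname{E}[(h_{\sigma\alpha}^K-\sigma_1^K)^2]\right]
  = \operatorname{E}_{\operatorname{U}_\sigma^K}\!\left[\operatorname{E}[(h_{\sigma\alpha}^K-f_\sigma^K)^2]\right]
  + \underbrace{2\,\operatorname{E}_{\operatorname{U}_\sigma^K}\!\left[(\operatorname{E}[h_{\sigma\alpha}^K]-f_\sigma^K)(f_\sigma^K-\sigma_1^K)\right]}_{\overset{\alpha}{\equiv}\,0\ (\mathrm{mod}\ \mathcal{F})\ \text{by (5.15)}}
  + \operatorname{E}_{\operatorname{U}_\sigma^K}\!\left[\operatorname{E}[(f_\sigma^K-\sigma_1^K)^2]\right]
```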
76
456,000
464,000
535,446
82,932
<urn:uuid:11f2a4d2-3618-415c-845f-89c25b685946>
Kyle1668/dclm-dedup-25B-ai-scifi-docs
this week in science. .. is death is destruction week science #385 to #160 - babuger (03/04/2013) [-] You are good! You are good! User avatar #439 to #385 - goobyman (03/04/2013) [-] not so difficult as there are about 20 artiles on each. User avatar #389 to #160 - margotka (03/04/2013) [-] good job goobyman, thanks #217 to #160 - vdo (03/04/2013) [-] you are awesome! User avatar #220 to #217 - goobyman (03/04/2013) [-] i do what i can #276 to #160 - anon (03/04/2013) [-] I like the science stuff her on FJ, but I feel that when I read many of the science posts are that OP has just read the headlines and rendered it into a nice frame with extremely superficial texts. If you read the sources this post is just a one-line conclusion to the actual science. Carl Sagan would like your effort, but emphasized the ability to read science with a critical mind. Nonetheless, I applaud you for effort and contribution. Sincerely Norwegian Faggot User avatar #289 to #276 - goobyman (03/04/2013) [-] that is sadly what he did. #46 - knifeyoass ONLINE (03/04/2013) [-] This is how it starts. #100 to #46 - xzayviaaeyeres (03/04/2013) [-] The beginning of the end is upon you..... #118 to #46 - bluelips (03/04/2013) [-] and they thought I was insane User avatar #183 to #46 - erethilful ONLINE (03/04/2013) [-] Be afraid, be very afraid. #253 to #46 - cpthaze **User deleted account** has deleted their comment [-] #343 to #46 - kvarpis (03/04/2013) [-] and how we end. #451 to #46 - majortomcomics (03/05/2013) [-] Soon they'll evolve by themselves... #101 to #46 - physicsdude (03/04/2013) [-] Appart from the fact we've had that for about 10 years. Also artificial "Brain" how can you create something you don't know how works. User avatar #448 to #101 - geofalke (03/04/2013) [-] how can you create if can't grammer #136 to #46 - roarflmao ONLINE (03/04/2013) [-] And then bamm! straight out of ******* nowhere C3PO User avatar #252 to #46 - fuckinniggers (03/04/2013) [-] its ok we still have 16 years till skynet nukes everyone. #222 to #46 - betesta ONLINE (03/04/2013) [-] They will destroy us with feels! They will destroy us with feels! #382 to #46 - yindragon (03/04/2013) [-] There is nothing you can do to stop us. #164 - deathdiedead (03/04/2013) [-] ************ is death ************ is destruction #378 to #164 - anon (03/04/2013) [-] I was thinking more like mother ******* Omantye and **** . User avatar #468 to #450 - WolfPrince (03/05/2013) [-] You need to show us a video of ************ . #460 to #450 - adamks (03/05/2013) [-] I agree. This is not alright. Neither is it funny when used this much. And it doesn't even ******* make sense. User avatar #461 to #460 - snakefire (03/05/2013) [-] it makes perfect sense. and when I say I own him, I mean literally. It's my shrip #462 to #461 - adamks (03/05/2013) [-] I know. I read that story of yours. But it doesn't make sense just to say it in every single situation with any shrimp, no matter what. #359 to #164 - goochmaster (03/04/2013) [-] User avatar #467 to #359 - Eralus ONLINE (03/05/2013) [-] can i get a source on this gif? #471 to #470 - Eralus ONLINE (03/05/2013) [-] thanks mate. have a bird #298 to #164 - anon (03/04/2013) [-] you sir win the internets User avatar #200 to #164 - dreadscythe ONLINE (03/04/2013) [-] all hail ************ User avatar #9 - Mebeshe (03/03/2013) [-] Hold **** up. A lost continent? HOW THE **** DID WE MANAGE TOO MISS ONE? #218 to #9 - anon (03/04/2013) [-] http://www. csmonitor. 
com/Science/2013/0225/Did-scientists-find-a-lost-continent-beneath-the-Indian-Oc ean supposedly its currently underwater and not too large #231 to #9 - omgwtfwasthat (03/04/2013) [-] Google is your friend #304 to #9 - anon (03/04/2013) [-] continents sink. continents at the bottom of the ocean are hard to find. User avatar #10 to #9 - Mebeshe (03/03/2013) [-] User avatar #11 to #10 - lolwatthe (03/04/2013) [-] It went under the water Atlantis style? User avatar #12 to #11 - Mebeshe (03/04/2013) [-] Then where are all the people in the bubble suits with pet sharks? User avatar #13 to #12 - lolwatthe (03/04/2013) [-] Most didn't have time to put their suits on, there were too few to breed over the long term. They died out in the early 1800s. User avatar #14 to #13 - Mebeshe (03/04/2013) [-] Well **** . I give it ten minutes until we colonize it and drill for oil. User avatar #15 to #14 - lolwatthe (03/04/2013) [-] But first we need to salvage some of those suits and reverse engineer them to fit us. User avatar #16 to #15 - Mebeshe (03/04/2013) [-] And we need to bring tanks, guns, grenades, rocket launchers, drones, and fighter jets. Because we can't bring democracy without violence. User avatar #17 to #16 - lolwatthe (03/04/2013) [-] Can't bring a democracy to a continent without people on it. User avatar #18 to #17 - Mebeshe (03/04/2013) [-] That's never stopped us before. User avatar #19 to #18 - lolwatthe (03/04/2013) [-] What would be funny is if the USA populated this continent, taxed it heavily and then the "New Atlantians" revolted. User avatar #20 to #19 - Mebeshe (03/04/2013) [-] And they won, but in ten years time they're so Americanized there's a Starbucks on every corner and they all eat McDonald's. User avatar #21 to #20 - lolwatthe (03/04/2013) [-] Then they themselves find another "lost" continent and start the cycle over again into infinity. User avatar #59 to #21 - thisisspartah ONLINE (03/04/2013) [-] wait wait wait, how the **** are they going to bring the place out of the water? and where will all the mermaids go? User avatar #147 to #59 - lolwatthe (03/04/2013) [-] a) We're not, we're going to reverse engineer and use "Mermaid" tech b) they died out in the 1800's. #22 to #21 - Mebeshe (03/04/2013) [-] Comment Picture User avatar #23 to #22 - lolwatthe (03/04/2013) [-] No need to be that drastic but... alright. #194 to #9 - nipplegun (03/04/2013) [-] We must build Rapture... #337 to #9 - mattkingg **User deleted account** (03/04/2013) [-] Listen, we were drunk, we may have unleashed cthulu at one point, so measures had to be taken. User avatar #130 to #9 - grogovic (03/04/2013) [-] Life of Pi. #373 to #130 - chillybilly has deleted their comment [-] #215 to #9 - kanpai (03/04/2013) [-] yeah i´ve just noticed it recently because i´ve never heard of it . apperently it´s called oceania ,and the reason they couldn´t find it is because it´s in the same place as australia. if you look real carefull on this map you should be able to spot it. hope this helps! #177 to #9 - Girondins (03/04/2013) [-] Because it's constantly moving Because it's constantly moving #38 to #9 - obligatoryusername (03/04/2013) [-] It sunk beneath the ocean a long time ago. #423 to #38 - amuter ONLINE (03/04/2013) [-] #334 - anonymoose (03/04/2013) [-] "oh, hey, we just found this entire new continent" - What is this? Pokemon? User avatar #1 - jbails (03/03/2013) [-] ...the 7th one..."Scientists start Skynet" #8 to #1 - joshythehipster (03/03/2013) [-] It has begun... 
User avatar #214 - anonymoose (03/04/2013) [-] they can hear, and see what your visually thinking this is the complete truth The reason a lot of rats have completely expressionless faces, segregate from every other animal associate with vermin and don’t associate with non vermins that much, and are very unfriendly in general is to avoid accidentally revealing that they can read minds. If all over a billion rats where to show facial expressions all the time just as much as non rats, integrate and associate with non vermin much more, and be much more friendly and talkative, then a lot of them might accidentally reveal that they can read minds by accidentally showing a facial expression or dirty look when someone thinks, or visually pictures something in their mind they don’t like, find astonishing, or funny etc because those people might see that and and really wonder what that was that just happened there and see the connection, and they might accidentally say something similar to what the person was just thinking and going to say. If they all associated with non vermin a lot more then there would be a lot more people around for them to accidentally show facial expressions when those people think things they don’t like etc, so they segregate and only associate with rats so there won’t be anyone around for them to see that and have any accidents happen in the first place. Try thinking, best yet visually picturing in your mind something absolutely crazy as you possibly can when you are around rats, and try looking for rats who give people particular looks, especially dirty looks for what appears to be for completely no reason, that is them giving people looks when they hear and visually see someone thinking something they don’t like, find astonishing, or funny etc. You have to spread the message!!!!! The world has to know about this!!!!! #255 to #214 - lordlucifer ONLINE (03/04/2013) [-] **lordlucifer rolled a random image posted in comment #3423850 at My Little Pony fanfiction, backgrounds, songs, lyrics, and GIFs. ** MFW #285 - dreamthrow (03/04/2013) [-] And the wireless recharging only took 100 years to re-create since Tesla showed his stuff off. #364 to #285 - robertolee (03/04/2013) [-] ******* Tesla, ***** was a genius! How was anyone supposed to compete with a celibate, AC creating, electricity harnessing, mathematical mitems, Edison raping and earthquake inducing badass? You know this ****** melted one of his assistants hands by firing x-rays at it? #284 - peanutbitter (03/04/2013) [-] oh. now that you mention it i think i see it User avatar #488 to #284 - joshwontwon (01/21/2014) [-] zomgz! your first comment! User avatar #31 - YoshiBond (03/04/2013) [-] Also HIV was cured in a little girl from Mississippi. #272 - arrrbie (03/04/2013) [-] MFW self learning ai brain #206 - awesomechardey (03/04/2013) [-] oh **** , no #63 - soapybox (03/04/2013) [-] "Self-Learning Artificial Brain" #372 - brothergrimm (03/04/2013) [-] telepathic rats...... self learning artificial brain...... science is going to be the architect of mans downfall...... mark my words User avatar #376 to #372 - basham (03/04/2013) [-] User avatar #127 - forgottenmyshorts (03/04/2013) [-] Alright I love science. And I love these occasional 'today in science' posts. But would someone care to divulge how exactly you prove, or rather, even tell, two rats are communicating telepathically? 
#192 to #127 - thecuntdestroyerr has deleted their comment [-] #191 to #127 - dereker (03/04/2013) [-] The researchers first trained pairs of rats to solve a simple problem — to press the correct lever when an indicator light above the lever switched on, to obtain a sip of water. They next connected the two animals’ brains via arrays of microelectrodes inserted into the area of the cortex that processes touch information. One animal of the dyad was designated as the “encoder” animal. This animal received a visual cue that informed it which lever to press in exchange for a food pellet. Once this “encoder” rat pressed the right lever, a sample of its brain activity that coded its behavioral decision was translated into a pattern of electrical stimulation that was delivered directly into the brain of the second animal of the dyad, known as the “decoder” animal. The decoder rat had the same types of levers in its chamber, but it did not receive any visual cue indicating which lever it should press to obtain a reward. So to press the correct lever and receive the reward it craved, the decoder rat would have to rely on the cue transmitted from the encoder via the brain-to-brain machine interface. The researchers then conducted trials to determine how well the decoder animal could decipher the brain input from the encoder rat to choose the correct lever. The decoder rat ultimately achieved a maximum success rate of about 70 percent, only slightly below the possible maximum success rate of 78 percent that the researchers had theorized was achievable. This maximum rate was what the researchers found they could achieve when they were transmitting regular electrical signals directly to the decoder rat’s brain that were not generated by the encoder. Importantly, the communication provided by this brain-to-brain interface (BTBI) was two-way. For instance, the encoder rat did not receive a full reward if the decoder rat made a wrong choice — a “behavioral collaboration” between the pair of rats. #134 to #127 - AnonymousDonor (03/04/2013) [-] one rat eats from one bowl; goes to lie down the other rat eats from the same bowl, and then goes to lie down you cant ******* explain that must be telepathy #332 - stratosrider (03/04/2013) [-] Self-learning AI? Have we learned NOTHING from Sci-Fi movies?? #71 - eclecticparadigm **User deleted account** (03/04/2013) [-] #349 - platapus (03/04/2013) [-] self learning machine? self learning machine? #427 - manazetsugi (03/04/2013) [-] **manazetsugi rolled a random image posted in comment #17 at Sit down! ** is the sixth one Atlantis? hurhurdurr User avatar #449 to #427 - acoustic (03/04/2013) [-] Very relevant roll. nice... #320 - EdwardNigma ONLINE (03/04/2013) [-] &gt;Mars mission **** YES, DO IT FAGGOT. &gt;Lost continent &gt;Artificial learning brain ******* SKYNET And thats the 3 that fascinated me. >Mars mission >Lost continent >Artificial learning brain ******* SKYNET And thats the 3 that fascinated me. User avatar #342 to #320 - theblackhorntail ONLINE (03/04/2013) [-] Half Life 3 confirmed. Leave a comment  Friends (0)
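The encoder/decoder protocol quoted in the comment above (visual cue to the encoder rat, cortical stimulation to the decoder rat, shared reward) can be caricatured with a toy simulation. The channel model and the 30% flip rate are invented purely to show how a noisy one-bit channel lands near the reported ~70% decoder success rate; nothing here is from the actual study.

```python
# Toy caricature of the two-lever encoder/decoder trial described in the quoted
# comment. The channel model and the 30% flip rate are invented for illustration.
import random


def run_trials(n_trials: int = 100_000, flip_prob: float = 0.3, seed: int = 0) -> float:
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_trials):
        correct_lever = rng.choice(("left", "right"))  # cue shown to the encoder rat
        encoder_press = correct_lever                  # assume a well-trained encoder
        # Noisy "brain-to-brain" transmission: the cue sometimes arrives flipped.
        if rng.random() < flip_prob:
            decoder_cue = "left" if encoder_press == "right" else "right"
        else:
            decoder_cue = encoder_press
        decoder_press = decoder_cue                    # decoder follows the stimulation
        successes += decoder_press == correct_lever
    return successes / n_trials


if __name__ == "__main__":
    # With a 30% flip rate the decoder ends up near the ~70% success rate quoted above.
    print(round(run_trials(), 3))
```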
0
0
3,963
3,963
60,114
a3654bdc-c5cb-43af-9ba4-54b0f14105c4
StampyAI/alignment-research-dataset/alignmentforum
…^π so we might expect to have better reward approximations over…
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/:
73
438,000
446,000
755,091
12,533
d2463fcd-80ea-4164-91de-1ee5880568ef
trentmkelly/LessWrong-43k
What Would I Do? Self-prediction in Simple Algorithms (This talk was given at a public online event on Sunday July 12th. Scott Garrabrant is responsible for the talk, Justis Mills edited the transcript. If you're a curated author and interested in giving a 5-min talk, which will then be transcribed and edited, sign up here.) Scott Garrabrant: I'm going to be working in the logical induction paradigm, which means that I'm going to have this P_n thing, which assigns probabilities to logical sentences. Basically all you need to know about it is that the probabilities that it assigns to logical sentences will be good. In particular, they'll be good on sentences that are parameterised by n, so for large n, P_n will have good beliefs about sentences that have n as a parameter. This will allow us to build algorithms that can use beliefs about their own outputs as part of their algorithm, because the output of a deterministic algorithm is a logical sentence. Today I’ll present some algorithms that use self-prediction. Here's the first one. A_n predicts whether or not it's going to output left. If the probability of outputting left is less than one half, then it outputs left. Otherwise, it outputs right. It predicts what it would do, and then it does the opposite. So for n large, it converges to randomly choosing between left and right, because if it's overdoing left then it would do right instead, and vice versa. We can also make a biased version of this. Here's an algorithm that, if it predicts that it outputs left with probability less than P then it outputs left, and otherwise outputs right. The only way this algorithm can work is by outputting left with probability P. In fact the previous example was a special case of this with P = ½. We can use this general self-prediction method to basically create pseudo-randomness for algorithms. Instead of saying “flip a coin,” I can say “try to predict what you would do, then do the opposite.” Third, here's an algorithm that's trying to do some opt
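In case it helps to see the fixed point concretely, here is a minimal sketch of the biased self-prediction rule. One loud assumption: a simple running frequency stands in for the logical-inductor probability P_n, which is enough to show that the only stable behavior is outputting left with frequency close to P.

```python
# Toy stand-in for the biased self-prediction algorithm: "if I predict I output
# left with probability less than p, output left; otherwise output right."
# The running frequency below is an assumed proxy for P_n, not a logical inductor.
def self_predicting_agent(p=0.25, steps=100_000):
    lefts = 0
    for n in range(steps):
        predicted_left = lefts / n if n else 0.5  # crude proxy for P_n("I output left")
        if predicted_left < p:
            lefts += 1          # output "left"
        # else: output "right"
    return lefts / steps        # empirical frequency of "left"

print(self_predicting_agent(p=0.25))  # ≈ 0.25: left-with-probability-p is the only stable behavior
```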
0
0
480
480
5,748
7dc43198-21fc-467d-8965-727c9b4e319d
StampyAI/alignment-research-dataset/arxiv
Scaling Laws and Interpretability of Learning from Repeated Data 1 Introduction --------------- ![](https://media.arxiv-vanity.com/render-output/7729780/x1.png) Figure 1: Experimental Setup. From a large original text dataset (left), we draw 90% of our desired training dataset in a non-repeated fashion, and 10% as repeats of a tiny portion of the original dataset (right). We hold constant that 10% of total training tokens will come from repeats, but we vary the repeated fraction in our runs. In other words, the sample to be repeated might be very small, like 0.01% of the total training tokens repeated 1000x, or relatively large, like 1% of the total training tokens repeated 10x. A small, held-back portion of the original dataset (yellow in left figure), not including any repeated data, is used as a test set and is the test loss reported in all subsequent figures. Large, high-quality text datasets are crucial for training large language models Brown et al. ([2020](#bib.bib36 "Language models are few-shot learners")); Rae et al. ([2021](#bib.bib37 "Scaling language models: methods, analysis, and insights from training gopher")). Such datasets often contain many copies of substantially overlapping documents, which greatly impairs the performance of language models on downstream tasks Lee et al. ([2021](#bib.bib34 "Deduplicating training data makes language models better")). However, it is not well understood why data repetition impacts performance to such a large extent. In this paper we study data repetition in language models through two lenses: the macroscopic lens of scaling laws, and the microscopic lens of mechanistic interpretability Elhage et al. ([2021](#bib.bib15 "A mathematical framework for transformer circuits")); Olsson et al. ([2022](#bib.bib3 "In-context learning and induction heads")). For the first lens, we trained transformer Vaswani et al. ([2017](#bib.bib10 "Attention is all you need")) language models on mostly unique data plus a small fraction of repeated data (Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Scaling Laws and Interpretability of Learning from Repeated Data")), varying the repeated dataset size, model size, and fraction of tokens trained on repeated data. We find a strong double-descent phenomenon Advani and Saxe ([2017](#bib.bib25 "High-dimensional dynamics of generalization error in neural networks")); Belkin et al. ([2018](#bib.bib26 "Reconciling modern machine learning practice and the bias-variance trade-off")); Nakkiran et al. ([2019](#bib.bib22 "Deep double descent: where bigger models and more data hurt")), such that there is a defined range of repetition frequency for which performance is harmed to a surprisingly large extent. We suspect there is a range in the middle where the data can be memorized and doing so consumes a large fraction of the model’s capacity, and this may be where the peak of degradation occurs. The location of the region suggests that large models like GPT-3, Gopher, and PALM Brown et al. ([2020](#bib.bib36 "Language models are few-shot learners")); Rae et al. ([2021](#bib.bib37 "Scaling language models: methods, analysis, and insights from training gopher")); Bi et al. ([2020](#bib.bib7 "PALM: pre-training an autoencoding and autoregressive language model for context-conditioned generation")) need to be careful about overfitting their high quality distributions like Wikipedia and books. 
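As a rough illustration of the Figure 1 mixture (not the authors' pipeline, and working at the document level rather than the token level), one might assemble the training stream along these lines:

```python
import random

# Illustrative reconstruction of the Figure 1 setup: 90% of the stream is unique
# data seen once; the remaining 10% is supplied by a tiny subset repeated
# `repeated_epochs` times. All names here are hypothetical.
def build_training_stream(docs, repeated_frac=0.10, repeated_epochs=100, seed=0):
    rng = random.Random(seed)
    docs = list(docs)
    rng.shuffle(docs)
    n_unique = int(len(docs) * (1.0 - repeated_frac))
    unique_part = docs[:n_unique]                                # seen exactly once
    subset_size = max(1, (len(docs) - n_unique) // repeated_epochs)
    repeated_subset = docs[n_unique:n_unique + subset_size]      # tiny pool, repeated many times
    stream = unique_part + repeated_subset * repeated_epochs
    rng.shuffle(stream)                                          # interleave repeats with unique data
    return stream
```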
For the second lens, mechanistic interpretability (attempting to reverse engineer the detailed computations performed by the model), we show that repeated data disproportionately damages induction heads. Induction heads use a circuit of 2 attention heads to "complete the pattern by copying and completing sequences" Olsson et al. ([2022](#bib.bib3 "In-context learning and induction heads")). The damage to induction heads is observed through degradation in copying, prefix matching, and through inspection. Together, the two lenses provide an integrated picture of how repeated data might be causing the network (or part of it) to shift from generalization to memorization, and mechanistically how this could be harming performance of the overall language model. ### 1.1 Summary of Results Figure 2: Models of different sizes show a degradation in performance at a specific range of repeats that shrinks with model size (left panel). At its peak the degradation sometimes reaches the equivalent of a 2x decrease in model size. The right panel shows that divergence (blue line) from a healthy, straight scaling law (red) lines up with when the models start to dramatically overfit the repeated subset (green curve). The blue line on the right corresponds to a vertical slice of models in the left diagram trained on the repeated subset for 120 epochs. All these models were trained on 90% unique data and 10% repeated tokens. To systematically study repeated data, we trained transformer Vaswani et al. ([2017](#bib.bib10 "Attention is all you need")) language models on mostly unique data plus a small fraction of repeated data (Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Scaling Laws and Interpretability of Learning from Repeated Data")), varying the repeated dataset size, model size, and fraction of tokens trained on repeated data over 2-3 orders of magnitude. All models were trained for 100B tokens. We examined the resulting models using both scaling laws and mechanistic interpretability tools. Our main findings were as follows: * Repeated data induces a strong double-descent phenomenon Advani and Saxe ([2017](#bib.bib25 "High-dimensional dynamics of generalization error in neural networks")); Belkin et al. ([2018](#bib.bib26 "Reconciling modern machine learning practice and the bias-variance trade-off")); Nakkiran et al. ([2019](#bib.bib22 "Deep double descent: where bigger models and more data hurt")), in which data repeated a few times does not cause much damage to language model performance, data repeated very many times also does not cause much damage, but there is a peak in the middle where damage is surprisingly large. For instance, when we train an 800M parameter transformer with 10% of training tokens drawn from the repeated subset (yellow curve in Figure [2](#S1.F2 "Figure 2 ‣ 1.1 Summary of Results ‣ 1 Introduction ‣ Scaling Laws and Interpretability of Learning from Repeated Data")) we find the loss can be nearly as high as for the 340M parameter transformer (light green curve). We see that an epoch-wise Nakkiran et al. ([2019](#bib.bib22 "Deep double descent: where bigger models and more data hurt")) double descent learning curve (Figure [3](#S2.F3 "Figure 3 ‣ 2 Results ‣ Scaling Laws and Interpretability of Learning from Repeated Data")) is driving this performance degradation. We suspect there is a range in the middle where the data can be memorized and doing so consumes a large fraction of the model’s capacity, and this may be where the peak of degradation occurs. 
Figure [2](#S1.F2 "Figure 2 ‣ 1.1 Summary of Results ‣ 1 Introduction ‣ Scaling Laws and Interpretability of Learning from Repeated Data") on the right shows that the peak performance hit coincides with where the train loss on the repeated data approaches zero, similar to previously observed double-descent phenomena. This also provides a practical diagnostic for when repeated data is likely to be harming the model. * Repeated data can cause a divergence from power-law scaling. For the blue curve in Figure [2](#S1.F2 "Figure 2 ‣ 1.1 Summary of Results ‣ 1 Introduction ‣ Scaling Laws and Interpretability of Learning from Repeated Data") right (122 repeated epochs), we see only a moderate impact to performance (line on log-log graph) until the model is scaled up to 100M parameters, after which we see a large divergence from power law scaling of cross entropy loss. Extrapolating the region of large degradation in Figure [4](#S2.F4 "Figure 4 ‣ 2 Results ‣ Scaling Laws and Interpretability of Learning from Repeated Data") predicts meaningful degradation from repeating data only 2 times for large (GPT-3 size) models, though the region would be shifted if the models were trained to the compute optimal frontier Hoffmann et al. ([2022](#bib.bib35 "Training compute-optimal large language models")). * Repeated data causes a disproportionately large performance hit to copying, a mechanism for in-context learning. We constructed a simple copying eval: the loss on the first paragraph of Harry Potter copied 11 times. We observe that using 3% repeated data at the worst number of repeated epochs caused up to a 3x reduction in effective model size (performance equal to a model with 3x fewer parameters) on this task whereas it only caused at most a 15% reduction in effective model size on test loss. * The disproportionate performance hit to copying coincides with a disproportionate degradation of induction heads. In line with Olsson et al. ([2022](#bib.bib3 "In-context learning and induction heads")) we evaluated the models on their prefix matching score: we repeated sequences of random tokens and observed the degree to which attention heads attend to earlier tokens that are preceded by a token that matches the present token. We observe that using 3% repeated data at the worst number of repeated epochs caused on average a 32% reduction in effective model size on this task whereas it only caused at most a 15% reduction in effective model size on test loss. * Repeated text data causes a small but still disproportionate performance drop out of distribution, as measured by cross entropy loss on Python code. Unlike the Harry Potter copying and prefix matching evals, we mostly see the performance drop with higher levels of repetition, 50-90%. * One and two-layer attention-only models trained on repeated data are worse at exactly copying and fuzzily copying (for instance correctly predicting Dursleys given that Dursley has appeared previously) proper names on inspection. When we inspect per-token losses of smaller models we can see this degradation in a simple, understandable form of copying in a paragraph of text. * Training on repeated Python code creates a similar behavior. When training on Python we also observe a double descent phenomenon and a predictable poor performance region in terms of model size and repeated epochs, though the shapes of both curves are somewhat different. * Pre-training on repeated data damages models. 
Pre-training with repeated data leads to worse performance than both training from scratch and fine-tuning from a control model pre-trained on the original text dataset. During fine-tuning, the repeated data model forgets the repeated dataset, so we consider the model pre-trained with repeated data to be strictly worse than the model fine-tuned from the unique dataset. 2 Results ---------- Figure 3: Learning curves for test loss on 800M models with 90% repeated data (left) and 50% repeated data (right), each with varying numbers of repeats/sizes of the repeated fraction. The graph on the left shows characteristic double descent curves. Repeated epochs corresponds to the number of epochs on the repeated tokens; the rest of the data is seen only once. For several models, test loss drops as normal during the beginning of training, but then starts to rise during the middle of training before dropping again. In the graph on the right with only 50% repeated data, we see that the double descent bumps have turned into long plateaus for highly affected models. Repeated data induces a strong double descent phenomenon. The results from training models on different sizes, fractions of repeated data, and frequency of repeats are shown in Figures 2 and 3. Figure 2 (left) shows that when we train on 10% repeated data and vary the frequency of repetition (or equivalently the number of epochs of repeated data), there is a specific range of repetition frequency for which damage to model performance is maximized. The range depends on the model size but for an 800M parameter model it occurs at roughly 100x repeats of 0.1% of the data, and degrades performance nearly to that of a 340M parameter model. This is a large degradation given that only 10% of the data is repeated. The peak coincides with the advent of memorization on the repeated data (Figure 2 right) – a possible indicator of a double descent phenomenon. Figure [3](#S2.F3 "Figure 3 ‣ 2 Results ‣ Scaling Laws and Interpretability of Learning from Repeated Data") shows learning curves for different repetition frequencies and for 50% and 90% of the data being repeated. In the extreme case of 90% repeated data and the correct frequency of repetition (100x-10,000x), we confirm the presence of a literal double descent curve in which the loss decreases, increases, and then decreases again (Figure [3](#S2.F3 "Figure 3 ‣ 2 Results ‣ Scaling Laws and Interpretability of Learning from Repeated Data") left). As we lower the fraction of repeated data to 50%, the curve becomes a long plateau rather than double descent, but it appears to be fundamentally an epoch-wise double descent phenomenon Nakkiran et al. ([2019](#bib.bib22 "Deep double descent: where bigger models and more data hurt")). These peaks and plateaus again coincide with the training loss on the repeated data approaching zero as shown in Figure [2](#S1.F2 "Figure 2 ‣ 1.1 Summary of Results ‣ 1 Introduction ‣ Scaling Laws and Interpretability of Learning from Repeated Data"). As in Nakkiran et al. ([2019](#bib.bib22 "Deep double descent: where bigger models and more data hurt")) we see double descent effects caused by both increasing model size and epochs. 
We suspect there is a range in the middle where the data can be memorized and doing so consumes a large fraction of the model’s capacity, and this may be where the peak of degradation occurs; for a more thorough discussion of this question see the discussion (section [5](#S5 "5 Discussion ‣ Scaling Laws and Interpretability of Learning from Repeated Data")). Repeated data can cause a divergence from power-law scaling. Figure [4](#S2.F4 "Figure 4 ‣ 2 Results ‣ Scaling Laws and Interpretability of Learning from Repeated Data") zooms in on the degradation of performance, measured as a function of model size for different repetition frequencies of the repeated data. For example, models trained for 1,220 repeats and 10% repeated data show a dip in performance to the equivalent of a model 0.55x as large, when the model size is 10M to 100M parameters. As the model size continues to increase, performance recovers to 0.8x model-size equivalent for a 1B parameter model. For a smaller number of repeats (122 repeats), the dip occurs later, centered around 1B parameters. Figure 4: On the left we plot the same results as in Figure [2](#S1.F2 "Figure 2 ‣ 1.1 Summary of Results ‣ 1 Introduction ‣ Scaling Laws and Interpretability of Learning from Repeated Data"), re-parameterized in terms of the effective model size multiplier implied by the test loss (performance equal to a model with x times as many parameters). For a given number of repetitions, degradation occurs only for a specific range of model sizes. For example, for the blue curve (122 repeated epochs), we see almost no performance deviation from a power-law scaling law (line on log-log graph) until the model is scaled up to 100M parameters, after which we see a divergence. We see the same divergence around 400M parameters for 12,200 repeated epochs. The right graph shows a large, predictable region over which the degradation occurs, and suggests that large models like GPT-3, Gopher, and PALM Brown et al. ([2020](#bib.bib36 "Language models are few-shot learners")); Rae et al. ([2021](#bib.bib37 "Scaling language models: methods, analysis, and insights from training gopher")); Bi et al. ([2020](#bib.bib7 "PALM: pre-training an autoencoding and autoregressive language model for context-conditioned generation")) need to be careful about overfitting their high quality distributions like Wikipedia and books – although note that this holds constant the number of total training tokens. The blue and green curves correspond to the right and left sides of the double descent region where we observe 50% of the maximum effect. They are an aggregation of that curve for the scans where we trained on 3%, 10%, 20%, 50%, and 90% repeated data. The details of both fits are in Appendix [A](#A1 "Appendix A Model Size Multiplier and Poor Performance Region Fits ‣ Scaling Laws and Interpretability of Learning from Repeated Data"). A large number of runs needed to be aggregated to produce a clean fit for the region of reduced performance. The right panel of Figure [4](#S2.F4 "Figure 4 ‣ 2 Results ‣ Scaling Laws and Interpretability of Learning from Repeated Data") shows the range over which we observe at least 50% of the maximum degradation; this corresponds to a “band” or region in the (model size, repetition frequency) plane. 
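For readers who want the "effective model size multiplier" parameterization used in Figure 4 spelled out, a minimal sketch follows; the power-law constants are assumed placeholder values in the spirit of Kaplan et al. (2020), not the fit actually used here.

```python
# Convert an observed test loss into an effective model size multiplier by
# inverting a power-law fit L(N) = (Nc / N)**alpha for the control models.
# Nc and alpha below are illustrative stand-ins, not this paper's fit.
def effective_size_multiplier(observed_loss, actual_params, nc=8.8e13, alpha=0.076):
    n_eff = nc / observed_loss ** (1.0 / alpha)   # N whose clean-scaling loss equals observed_loss
    return n_eff / actual_params                  # ~1.0 on the clean law, <1.0 when degraded
```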
Both boundaries of the region are a good fit to a power law relating frequency of repetition to the number of parameters of the model, namely E = k ∗ N^α, where E corresponds to epochs of repetition and N corresponds to the parameters in the model. It is notable that the lines in Figure 2b are relatively parallel. The fits for the above lines are: right boundary, k = 5.1e7, α = -.50; left boundary, k = 4.2e6, α = -.56. Note that extrapolating these boundaries leads to a prediction of significant degradation from repeating data as little as 2x on state-of-the-art language models with hundreds of billions of parameters, although this applies for a constant number of training tokens (100B). In practice large models are trained for more than this Hoffmann et al. ([2022](#bib.bib35 "Training compute-optimal large language models")), and as shown in Figure [3](#S2.F3 "Figure 3 ‣ 2 Results ‣ Scaling Laws and Interpretability of Learning from Repeated Data"), training past the double descent peak is helpful, so the degradation would likely not be quite as bad. When looking at Figure [3](#S2.F3 "Figure 3 ‣ 2 Results ‣ Scaling Laws and Interpretability of Learning from Repeated Data") we see that the poor performance region would be shifted left for large models trained on the compute efficient frontier (the Pareto frontier of compute and performance) Kaplan et al. ([2020](#bib.bib33 "Scaling laws for neural language models")). Overall it seems that in addition to being robust to task, model size, and architecture as shown in previous work Advani and Saxe ([2017](#bib.bib25 "High-dimensional dynamics of generalization error in neural networks")); Belkin et al. ([2018](#bib.bib26 "Reconciling modern machine learning practice and the bias-variance trade-off")); Nakkiran et al. ([2019](#bib.bib22 "Deep double descent: where bigger models and more data hurt")), double descent as a general phenomenon appears to be robust to occurring in a sub-distribution and that it can have a large effect on overall performance even while being a modest fraction of training tokens. Repeated data causes a disproportionately large performance hit to copying, a mechanism for in-context learning. The ability of a language model to copy text (in the sense of being provided with a context consisting of a passage repeated several times, and testing whether the model can repeat it once more) is a potential measure of generalization, as copying is independent of the content of the text. Also, recent interpretability work has suggested that copying may be implemented by crisp internal algorithmic structures (Olsson et al. ([2022](#bib.bib3 "In-context learning and induction heads"))), again suggesting generalization. It thus seems valuable to investigate what happens to copying during a memorization-related degradation in performance, which we have shown above occurs in our experiments. To do this we constructed a simple evaluation in which copying is heavily emphasized: we measure the loss on the first paragraph of Harry Potter copied 11 times. The models trained on repeated data performed much worse on this evaluation (Figure [5](#S2.F5 "Figure 5 ‣ 2 Results ‣ Scaling Laws and Interpretability of Learning from Repeated Data")), substantially out of proportion to the degradation on the loss itself. In other words, copying is preferentially harmed by training on repeated data. 
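Plugging the fitted k and α values quoted above into E = k ∗ N^α gives a quick check of the extrapolation to GPT-3-scale models; the snippet below is only that arithmetic, nothing more.

```python
# Evaluate the fitted boundaries E = k * N**alpha at a GPT-3-scale parameter count.
def boundary_epochs(n_params, k, alpha):
    return k * n_params ** alpha

N = 175e9                                          # parameters
left  = boundary_epochs(N, k=4.2e6, alpha=-0.56)   # ≈ 2 repeated epochs
right = boundary_epochs(N, k=5.1e7, alpha=-0.50)   # ≈ 120 repeated epochs
print(f"poor-performance region: ~{left:.1f} to ~{right:.0f} repeated epochs")
```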
For example, a 3% fraction of repeated data leads to a 1.15x reduction in effective model size (performance equal to a model with 1.15x fewer parameters) on the general loss, but a much larger 3x effective model size reduction in terms of copying ability. As can be seen in Figure 5, the damage to copying is greater than the damage to overall loss across the entire range of repeated data fractions. This suggests that the shift to memorization caused by repeated data is selectively harming some behaviors associated with generalization. To get another view on the same phenomenon, we measured the loss of various models on the Xth consecutive copy of the Harry Potter paragraph, where X runs from 1 to 12. As shown in Figure [7](#S2.F7 "Figure 7 ‣ 2 Results ‣ Scaling Laws and Interpretability of Learning from Repeated Data") (left), for most models the loss gradually decreases with increasing numbers of copies of the paragraph (i.e. the model has an easier time predicting an additional copy after seeing more consecutive copies), but at the peak of the double descent phenomenon, the loss is much higher and, strikingly, does not decrease at all with additional copies of the paragraph. This large aberration shows how strong the selective effect of the double descent phenomenon on copying is. General in-context learning is also harmed at the pessimal number of repeated epochs (Figure [7](#S2.F7 "Figure 7 ‣ 2 Results ‣ Scaling Laws and Interpretability of Learning from Repeated Data") right), though to a lesser extent than copying. Figure 5: We constructed a simple measure of the model’s copying ability, consisting of the loss on the first paragraph of Harry Potter repeated 11 times. We measured the double descent peak performance for a given model size and fraction of repeated data and compared that to a fit of these evaluations on the control model (trained on unique text) scan to generate an effective model size. We observe that 3% repeated data at the pessimal number of repeated epochs caused a 3x reduction in effective model size on this task for several model sizes, whereas it only caused at most a 1.15x reduction in effective model size on test loss. We see much larger effects on the copying evaluation than on overall performance for repeated data fractions between 3% and 20%. The model size multiplier for copying is based on interpolation and the model size multiplier for test loss is based on a power law fit (see Appendix [C](#A3 "Appendix C Appendix: Copying and Prefix Matching Score Fits ‣ Scaling Laws and Interpretability of Learning from Repeated Data") for more details). The disproportionate performance hit to copying coincides with a disproportionate degradation of induction heads. Having connected the damage associated with repeated data with a measure of generalization (in-context copying of text), we next took the connection one step further, by trying to also probe the potential mechanistic basis of copying. Olsson et al. ([2022](#bib.bib3 "In-context learning and induction heads")) identifies “induction heads” as a possible basis for copying and in-context learning behavior in general, so we decided to measure these and try to connect them back to the repeated data double descent phenomenon. Olsson et al. 
([2022](#bib.bib3 "In-context learning and induction heads")) defines induction heads by their ability to facilitate simple copying given a repeated random sequence of tokens (though in practice this definition ends up including heads with more complex behaviors too). Induction heads use a circuit of 2 attention heads to "complete the pattern by copying and completing sequences." This can be split up into attending to the relevant token (prefix matching) and increasing the logit corresponding to the attended-to token. Figure 6: Comparison of degradation of prefix matching score with repeated data, compared to general degradation of the test loss. We measured the double descent peak performance for a given model size and fraction of repeated data and compared that to a fit of the prefix matching score on the control model scan to generate an effective model size. We observe that 3% repeated data causes on average a 1.47x reduction in effective model size on prefix matching score (Figure [21](#A3.F21 "Figure 21 ‣ Appendix C Appendix: Copying and Prefix Matching Score Fits ‣ Scaling Laws and Interpretability of Learning from Repeated Data")) while causing less than a 1.15x reduction in effective model size on test loss. Again we see much larger effects on the prefix matching score than on overall performance for repeated data fractions between 3% and 20%. The model size multiplier for prefix matching is based on a linear fit (see Appendix [C](#A3 "Appendix C Appendix: Copying and Prefix Matching Score Fits ‣ Scaling Laws and Interpretability of Learning from Repeated Data") for more details of the fit). The test loss shown on the right is the same graph as in Figure 5, but with differently scaled axes for ease of comparison. We decided to probe the prefix matching score as a measure of mechanistic structure that is distinct from the behavior of copying itself. Figure [6](#S2.F6 "Figure 6 ‣ 2 Results ‣ Scaling Laws and Interpretability of Learning from Repeated Data") shows the same setup as Figure [5](#S2.F5 "Figure 5 ‣ 2 Results ‣ Scaling Laws and Interpretability of Learning from Repeated Data") except for prefix matching score instead of copying loss. As can be seen in the figure, preferential damage to prefix matching score is not present across the whole range of repeated data fractions as it is for copying, but at low fractions of data repeated, there is still preferential damage. For example, at 3% repeated tokens, there is a 2x effective parameter decrease in prefix matching score, but only a 1.15x effective parameter decrease in general (test) loss. As another example, we find it interesting that the sharp drop in prefix matching score for a 1.5M parameter model with 50% repetition corresponded to a complete breakdown of paragraph level copying. This complete breakdown of paragraph level copying corresponds to a 1.5M parameter model having the effective overall performance of a 30,000 parameter model, while having an equivalent prefix matching score to a model with effectively 2,000 parameters. Although not as conclusive as the previous results, these clearly show that prefix matching is preferentially degraded in some cases. Figure 7: Degradation of copying and in-context learning at the peak of the double descent curve. 
On the left we show the 2-layer models trained on 50% repeated data from Figure [5](#S2.F5 "Figure 5 ‣ 2 Results ‣ Scaling Laws and Interpretability of Learning from Repeated Data"), evaluated on the first paragraph of Harry Potter copied X times where X runs from 1 to 11. In Appendix [D](#A4 "Appendix D Appendix: Harry Potter Copying Evaluation with Fewer Characters ‣ Scaling Laws and Interpretability of Learning from Repeated Data"), we explore shortening the length of the paragraph to verify the problem is with copying rather than long contexts. The right shows per-token losses on the test set. Both graphs show dramatically reduced performance (higher copying loss, lower benefit to in-context learning) at the peak of the double descent. One and two-layer attention-only models are worse at copying and fuzzily copying proper names on inspection. To examine the effect on induction heads and in-context learning even more closely, we looked at more granular copying in one and two layer attention-only transformers, for which interpreting the internal structure (and especially induction heads) is known to be particularly straightforward Elhage et al. ([2021](#bib.bib15 "A mathematical framework for transformer circuits")); Olsson et al. ([2022](#bib.bib3 "In-context learning and induction heads")). That is, we can reverse engineer a large portion of attention-only transformers (no MLPs) with a circuits-level understanding (understanding how individual neurons act together to produce useful behavior) Cammarata et al. ([2020](#bib.bib14 "Thread: circuits")). These small models also exhibit the same double-descent phenomenon as larger models (Appendix [B](#A2 "Appendix B Appendix: Logit Attribution Analysis, 2 Layer Models ‣ Scaling Laws and Interpretability of Learning from Repeated Data")). For 1-layer attention-only models, where copying takes the form of skip-trigrams, we can easily see that the repeated data model is worse at a form of copying associated with these skip trigrams. Namely, we compare the probabilities that the repeated data and control models assign to each token in a paragraph, and focus especially on proper names which occur repeatedly in the paragraph (Figure [8](#S2.F8 "Figure 8 ‣ 2 Results ‣ Scaling Laws and Interpretability of Learning from Repeated Data")). The most obvious way to correctly predict these re-occurring names is by copying, and we see that in most cases the control model (trained on unique text) performs much better than the one with repeated data (yellow underlines). ![](https://media.arxiv-vanity.com/render-output/7729780/figures/1l_attn_hp.jpg) Figure 8: Visualization of the difference in loss on the first paragraph of Harry Potter for control and 10%-repeated-data runs of a 1-layer attention-only model. Orange highlights correspond to the control model performing better, purple corresponds to the repeated data model performing better, and the intensity corresponds to the magnitude of the difference in per-token losses. Proper names (which are a good target for copying when they occur more than once) are underlined in yellow on second or later occurrence; it is clear that the control model performs better on these. Often the difference is dramatic: for the last three appearances of “Potters” the control model puts a >97% chance on “ters” given “Pot”, whereas the repeated data model puts <4% chance on that token. Very specifically, predicting repeated names requires exactly a skip-trigram pattern Elhage et al. 
([2021](#bib.bib15 "A mathematical framework for transformer circuits")) which is the algorithmic operation 1-layer attention-only models are known to perform. For example, the following skip-trigrams are useful in the Harry Potter paragraph in Figure [8](#S2.F8 "Figure 8 ‣ 2 Results ‣ Scaling Laws and Interpretability of Learning from Repeated Data"): [a][b]…[a] => [b], as in [Pot][ter]…[Pot] => [ter], and [a][b]…[a] => [b′], as in [Pot][ter]…[Pot] => [ters]. We also plotted the same visualization for a 2-layer attention-only model (which is known to contain simple induction heads), and find that the control model is better at fuzzy copying (Figure [9](#S2.F9 "Figure 9 ‣ 2 Results ‣ Scaling Laws and Interpretability of Learning from Repeated Data")). ![](https://media.arxiv-vanity.com/render-output/7729780/figures/2l_attn_per_token.jpg) Figure 9: Same as Figure 8, but for 2-layer attention-only models. Proper names (which are a good target for copying when they occur more than once) are underlined in yellow on second or later occurrence. Here the repeated-data model sometimes does better on repeated proper names, but there are still clear examples of the control performing much better. These examples are highlighted in green and discussed. On the token [ley] in the second appearance of [D][urs][ley] the control model places a 92% likelihood on [ley] whereas the repeated data model places a 10% likelihood. On the token [leys] in the second appearance of [D][urs][leys] the control model places a 44% likelihood on [leys] whereas the repeated data model places a 4.9% likelihood. On the [ley] in [ un][D][urs][ley][ish] the control model places a 68% likelihood on [ley] whereas the repeated data model places a 0.4% likelihood. Visually, it is less obvious (compared to the 1-layer case) that the 2-layer repeated model is worse at names, and there are a few examples where it puts 1.1x higher odds on the correct token. But on the other hand there are dramatic cases of the control model doing 500x better (odds ratio on the correct token) for fuzzy copying, like unDursleyish, which is exactly the kind of degradation we’d expect to see from disrupting induction heads. We attempted to leverage logit attribution (which earlier tokens contributed to the prediction of the current token through a "direct path" with this attention head) to see if the difference was primarily due to the induction head being less active or other heads interfering with it Olsson et al. ([2022](#bib.bib3 "In-context learning and induction heads")). We were unable to find clear evidence of either, but we include our exploration of a 2-layer attention-only model in Appendix [B](#A2 "Appendix B Appendix: Logit Attribution Analysis, 2 Layer Models ‣ Scaling Laws and Interpretability of Learning from Repeated Data"). Repeated data causes a smaller, disproportionate performance drop on our out-of-distribution evaluations. Figure 10: We observe that training on high levels of repeated data causes a small disproportionate drop on out-of-distribution performance (Python loss). The effect is noisy, but since we do not see a model size effect we take the average in the figure on the right (harmonic mean of multipliers). For large repeated fractions of 50% and 90% we see model size multipliers of 0.84 and 0.75. 
Given that we overfit the model, we expected it to perform worse off-distribution, which we do observe (Figure [10](#S2.F10 "Figure 10 ‣ 2 Results ‣ Scaling Laws and Interpretability of Learning from Repeated Data")). We notice almost an opposite pattern to what we observed in the induction head results. We see most of the disproportionate drop at 50% and 90% rather than 1-10%. We observe a double descent phenomenon in a sparse sweep of models trained on Python, but the Python scans exhibit a somewhat different overall shape. To add more generality to our results, we repeated the same experiments on a Python dataset instead of natural language (Figure [11](#S2.F11 "Figure 11 ‣ 2 Results ‣ Scaling Laws and Interpretability of Learning from Repeated Data")). If we use the same method to fit the poor performance region, we see a broadly similar fit and a second epoch for today’s large models (approximately 200B parameters) is still robustly in the reduced performance region for Python. However, the fit is noisier than the fit for text and the two lines are no longer parallel. Figure 11: Double descent phenomenon for models trained on Python. Training on Python gives similar results to what Figure 2 and Figure 4 show for language models. Here 50% of the dataset consists of repeats and 50% is unique. On the left side is degradation in performance, occurring over a specific range of repetition that varies with model size. On the right, we again see a large region of poor performance as we did in Figure [4](#S2.F4 "Figure 4 ‣ 2 Results ‣ Scaling Laws and Interpretability of Learning from Repeated Data"), although the fit is noisier. Again the blue and green curves correspond to
0
0
8,000
13,199
2,989
dc386e3f-d750-452a-816c-2033939514bc
StampyAI/alignment-research-dataset/lesswrong
Are (at least some) Large Language Models Holographic Memory Stores? *Cross-posted* [*from New Savanna*](https://new-savanna.blogspot.com/2023/10/are-at-least-some-large-language-models.html). That’s been on my mind for the last week or two, ever since my recent work on ChatGPT’s memory for texts [1]. On the other hand, there’s a sense in which it’s been on my mind for my entire career, or, more accurately, it’s been growing in my mind ever since I read Karl Pribram on neural holography back in 1969 in *Scientific American* [2]. For the moment let’s think of it as a metaphor, just a metaphor, nothing we have to commit to. Just yet. But ultimately, yes, I think it’s more than a metaphor. To that end I note that cognitive psychologists have recently been developing the idea of verbal memory as holographic in nature [3]. Note: These are quick and dirty notes, a place-holder for more considered thought. **Holography in the mind** -------------------------- Let’s start with an article David Hays and I published on neural holography as the neural underpinning of metaphor [4]. Here’s where we explain the holographic process: > Holography is a photographic technique for making images. A beam of laser light is split into two beams. One beam strikes the object and is reflected to a photographic plate. The other beam, called a reference beam, goes from laser to plate directly. When they meet, the two beams create an interference pattern—imagine dropping two stones into a pond at different places; the waves propagating from each of these points will meet and the resulting pattern is an interference pattern. The photographic plate records the pattern of interference between the reference beam and the reflected beam. > > The image recorded on the film doesn't look at all like an ordinary photographic image—it’s just a dense mass of fine dots. But when a beam of laser light having the same properties as the original reference beam is directed through the film an image appears in front of the film. The interaction of the laser beam and the hologram has recreated the wave form of the laser beam which bounced off the object when the hologram was made. The new beam has extracted the image from the plate. > > Holography is, as its name suggests, holistic. Every part of the scene is represented in every part of the plate. (This situation is most unlike ordinary photography, which uses a good lens to focus infinitesimal parts of the scene onto equally infinitesimal parts of the plate.) With such a determinedly nondigital recording, certain mathematical possibilities can be realized more easily—we are tempted to say, infinitely more easily. For example, convolution. Take the holographic image of a printed page, and the image of a single word. Convolute them. The result is an image of the page with each occurrence of the word highlighted. We can think of visual recognition as a kind of convolution. The present scene, containing several horses, is convoluted with the memory of a horse and the present horses are immediately recognized. We can think of recognition this way, but we must admit that this process has not been achieved in any machine as yet. > > Further, it is possible to record many different images on the same piece of film, using different reference beams. The reference beams may differ in color, in angle of incidence, or otherwise. We can think— although again we cannot cite a demonstration—of convoluting such a composite plate with a second plate. 
If the image in the second plate matches any one of the images in the composite, then it is recognized. For metaphor we want to convolute Achilles and the lion and to recognize, to elicit another image containing not Achilles, not the lion, but just that wherein they resemble one another. Such is the metaphor mechanism—but that must wait until the next section, on focal and residual schemas. > > The 175 billion weights that constitute the LLM at the core of ChatGPT, that’s the holographic memory. It is the superposition of all the texts in the training corpus. The training procedure – predict the next word – is a device for calculating a correlation (entanglement [5]) between each word in context, and every other word in every other text, in context. It’s a tedious process, no? But it works, yes? When one prompts a trained memory, the prompt serves as a reference beam. And the whole memory must be ‘swept’ to generate each character. Given the nature of digital computers, this is a somewhat sequential process, even given a warehouse full of GPUs, but conceptually it’s a single pass. When one accesses an optical hologram with a reference beam, the beam illuminates the whole holograph. This is what Miriam Yevick called “one-shot” access in her 1975 paper, Holographic or Fourier Logic [6]. The whole memory is searched in a single sweep. **Style transfer** ------------------ So, that’s the general idea. Much detail remains to be supplied, most of it by people with more technical knowledge than I’ve got. But I want to get in one last idea from the metaphor paper. We’ve been explaining the concepts of focal and residual schemas: > Now consider a face. Everything we said about the chair applies here as well. But the expression on the face can vary widely and the identity of the face remains constant. This variability of expression can also be handled by the mechanism of focal and residual. There is a focal schema for face-in-neutral-expression and then we have various residuals which can operate on the focal schema to produce various expressions. (You might want to recall D'Arcy Thompson's coordinate transformations in On Growth and Form 1932.) We tend to discard presentation residuals such as lighting and angle of sight, but we respond to expression residuals > > Our basic point about metaphor is that the ground which links tenor and vehicle is derived from residuals on them. Consider the following example, from Book Twenty of Homer's Iliad (Lattimore translation, 1951, ll. 163-175)—it has the verbal form of a simile, but the basic conceptual process is, of course, metaphorical: > > >                                                                    From the other  > side the son of Peleus rose like a lion against him,  > the baleful beast, when men have been straining to kill him, the country  > all in the hunt, and he at first pays them no attention  > but goes his way, only when some one of the impetuous young men  > has hit him with the spear he whirls, jaws open, over his teeth foam  > breaks out, and in the depth of his chest the powerful heart groans;  > he lashes his own ribs with his tail and the flanks on both sides  > as he rouses himself to fury for the fight, eyes glaring,  > and hurls himself straight onward on the chance of killing some one  > of the men, or else being killed himself in the first onrush.  > So the proud heart and fighting fury stirred on Achilleus  > to go forward in the face of great-hearted Aineias. > > > In short, Achilles was a lion in battle. 
Achilles is the tenor, lion the vehicle, and the ground is some martial virtue “proud heart and fighting fury”. But what of that detailed vignette about the lion's fighting style? Whatever its use in pacing the narrative, its real value, in our view, is that it contains the residuals on which the comparison rests, the residuals which give it life. The phrase “proud heart and fighting fury” is propositional while the fighting style is physiognomic. “Proud heart and fighting fury” may convey something of what is behind the fighting style, but only metaphoric interaction can foreground the complex schema by which we recognize and feel that style. > > The cognitive problem is to isolate the physiognomy of style, to tease it apart from the entities which exhibit that style. [...] In the case of Achilles and the lion we have two complex physiognomies, each extended in space and time. Metaphoric comparison serves to isolate the style, to allow us to focus our attention on that style as distinct from the entities which exhibit it. > > This comparison involves two foci, Achilles and the lion. The physical resemblance between them is not great—their body proportions are quite different and the lion is covered with fur while Achilles is, depending on the occasion, either naked or clothed in some one of many possible ways. The likeness shows up in the way they move in battle. A body in motion doesn't appear the same as a body at rest. The appearance presented by the focal body is modified by the many residuals which characterize that body's movement— twists and turns, foreshortenings and elongations (for an account of motion residuals, see Hay 1966). The movements of Achilles and the lion must differ at the grossest level, since the lion stands on four legs and fights with claws and teeth, while Achilles stands on two legs and fights with a spear or sword. But their movements are alike at a subtler level, at the level of what we call, in a dancer or a fighter, their style. Residuals can be stacked to many levels. “Proud heart and fighting fury” may be a good phrase to designate that style, but it doesn't allow us to attend to that style. Homer's extended simile does. > > That’s a mouthful, I know. Notice our emphasis on style. That’s what’s got my attention. One of the more interesting things LLMs can do is stylistic transfer. Take a piece of garden variety prose and present it in the style of Hemingway or Sontag, whomever you choose. Hays and I argued that that’s how metaphor is created, deep metaphor, that is, not metaphor so desiccated we no longer register its metaphorical nature, e.g. the mouth of the river. We made our argument about visual scenes: Achilles in battle, a lion in battle. LLMs apply the same process to texts, where style is considered to be a pattern of residuals over the conceptual content of the text. More later. **References** -------------- [1] Discursive Competence in ChatGPT, Part 2: Memory for Texts, Version 3, <https://www.academia.edu/107318793/Discursive_Competence_in_ChatGPT_Part_2_Memory_for_Texts_Version_3> [2] I recount that history here: Xanadu, GPT, and Beyond: An adventure of the mind, <https://www.academia.edu/106001453/Xanadu_GPT_and_Beyond_An_adventure_of_the_mind> [3] Michael N. Jones and Douglas J. K. Mewhort, Representing Word Meaning and Order Information in a Composite Holographic Lexicon, Psychological Review, 2007, Vol. 114, No. 1, 1-37. DOI: <https://doi.org/10.1037/0033-295X.114.1.1> Donald R. J. Franklin and D. J. K. 
Mewhort, Memory as a Hologram: An Analysis of Learning and Recall, *Canadian Journal of Experimental Psychology / Revue canadienne de psychologie expérimentale*, Association 2015, Vol. 69, No. 1, 115–135, <https://doi.org/10.1037/cep0000035> [4] Metaphor, Recognition, and Neural Process, <https://www.academia.edu/238608/Metaphor_Recognition_and_Neural_Process> [5] See posts tagged with “entangle”, <https://new-savanna.blogspot.com/search/label/entangle> [6] Miriam Lipschutz Yevick, Holographic or Fourier Logic, *Pattern Recognition* 7, 197-213, <https://sci-hub.tw/10.1016/0031-3203(75)90005-9>
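The convolution-based storage and retrieval described in the quoted passages can be played with directly using a standard toy: a circular-convolution associative memory in the style of Plate's Holographic Reduced Representations. This is a generic illustration of the idea (many pairs superposed in one trace, a probe retrieving its partner), not a claim about how any particular LLM represents text.

```python
import numpy as np

def cconv(a, b):            # circular convolution (binding)
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def involution(a):          # approximate inverse, used for unbinding
    return np.concatenate(([a[0]], a[1:][::-1]))

rng = np.random.default_rng(0)
dim, n_pairs = 2048, 20
keys = rng.normal(0, 1 / np.sqrt(dim), size=(n_pairs, dim))
values = rng.normal(0, 1 / np.sqrt(dim), size=(n_pairs, dim))

# One "plate": the superposition of all bound key/value pairs.
trace = np.sum([cconv(k, v) for k, v in zip(keys, values)], axis=0)

# Probe the whole trace with one key ("shine the reference beam"), then clean up
# the noisy result against the item memory by cosine similarity.
probe = cconv(involution(keys[3]), trace)
sims = values @ probe / (np.linalg.norm(values, axis=1) * np.linalg.norm(probe))
print(int(np.argmax(sims)))  # 3: the value bound to keys[3] is recovered
```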
0
0
2,755
2,755
24,065
c68786d7-f155-40b6-9322-4bcec60b5565
trentmkelly/LessWrong-43k
Rationality Quotes With Attributions Hidden: from Mein Kampf to Men****x This thread has an experimental format for posting rationality quotes. Here is the format: For those posting quotes: Post the quote as usual, but not the author, original language translated from, or other information. That information is to be input after the quote according to the following format: [Source](http://linkgoes.here "hovertext goes here") For example: >When an idea is wanting, a word can always be found to take its place. [Source](http://www.quotationspage.com/quote/30216.html "Goethe, translated.") The source information will be available by hovering the mouse over "Source", without opening a new page. This format allows quotations to be evaluated with less context available, with all that entails. I hope this allays some of the uncertainty regarding why words of the Bible or authors such as Nietzsche are sometimes poorly received. People are encouraged to vote without considering the source information. If locally idolized people said genuinely silly things even considering the context, feel free to post those as well, but please use your best judgement as to whether or not taking it out of context is fair to the speaker. Please use your own judgement in deciding which quotes thread to post material to. This isn't intended to compete with the main thread, it's an experiment to see if people like a different format better. Some people thought this format, or something like it, should simply be tried on the next regular quotes thread to minimize any disruption caused by having multiple threads, while others thought disruption would be minimized by having a separate thread and leaving the main thread as normal. This is what I decided to do. The usual rules apply, except that there is no fixed limit to the number of quotes one may submit, because I'd like to populate this thread without taking too much from the usual thread. * Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply t
0
0
446
446
27,036
0432e50f-f66c-4872-a2e0-cf2190dd5728
trentmkelly/LessWrong-43k
The Pre-Historical Fallacy One fallacy that I see frequently in works of popular science -- and also here on LessWrong -- is the belief that we have strong evidence of the way things were in pre-history, particularly when one is giving evidence that we can explain various aspects of our culture, psychology, or personal experience because we evolved in a certain way. Moreover, it is implicitly held that because we have this 'strong evidence', it must be relevant to the topic at hand. While it is true that the environment did affect our evolution and thus the way we are today, evolution and anthropology of pre-historic societies are emphasized to a much greater extent than rational thought would indicate is appropriate. As a matter of course, you should remember these points whenever you hear a claim about prehistory: * Most of what we know (or guess) is based on less data than you would expect, and the publish or perish mentality is alive and well in the field of anthropology. * Most of the information is limited and technical, which means that anyone writing for a popular audience will have strong motivation to generalize and simplify. * It has been found time and time again that for any statement that we can make about human culture and behavior, there is (or was) a society somewhere that will serve as a counterexample. * Very rarely do anthropologists or members of related fields have finely tuned critical thinking skills or a strong background on the philosophy of science, and they are highly motivated to come up with interpretations of results that match their previous theories and expectations. Results that you should have reasonable levels of confidence in should be framed in generalities, not absolutes. E.g., "The great majority of human cultures that we have observed have distinct and strong religious traditions", and not "humans evolved to have religion". It may be true that we have areas in our brain that evolved not only 'consistent with holding religion', but actually evo
0
0
408
408