Unnamed: 0 | id | type | created_at | repo | repo_url | action | title | labels | body | index | text_combine | label | text | binary_label
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
130,319 | 10,605,708,592 | IssuesEvent | 2019-10-10 21:07:06 | libra/libra | https://api.github.com/repos/libra/libra | opened | Tag faucet and client images in cluster_test | cluster_test enhancement | We have 3 nightly images currently - client, validator(libra_e2e) and faucet
Cluster test only tests validator and tags it with nightly_tested.
However, in order to use it for a prod deploy, we need to tag all 3 with nightly_tested when the test passes
We can find those other images by matching `upstream_xxx` tag on them | 1.0 | Tag faucet and client images in cluster_test - We have 3 nightly images currently - client, validator(libra_e2e) and faucet
Cluster test only tests validator and tags it with nightly_tested.
However, in order to use it for a prod deploy, we need to tag all 3 with nightly_tested when the test passes
We can find those other images by matching `upstream_xxx` tag on them | test | tag faucet and client images in cluster test we have nightly images currently client validator libra and faucet cluster test only tests validator and tags it with nightly tested however in order to use it for prod deploy we need to tag all with nightly tested when test passes we can find those other images by matching upstream xxx tag on them | 1 |
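The tagging logic this entry describes can be sketched in a few lines. This is a toy model with hypothetical image records and tag names, not the actual cluster_test code: any image carrying the same `upstream_xxx` tag as the tested validator gets tagged `nightly_tested` once the test passes.

```python
# Toy model of the tagging step described in the issue (hypothetical data
# model, not cluster_test source): images sharing the validator's
# upstream_* tag all receive nightly_tested when the test passes.

def tag_nightly_tested(images, upstream_tag):
    """Add 'nightly_tested' to every image carrying the given upstream tag."""
    tagged = []
    for image in images:
        if upstream_tag in image["tags"]:
            image["tags"].append("nightly_tested")
            tagged.append(image["name"])
    return tagged

images = [
    {"name": "client", "tags": ["upstream_abc123"]},
    {"name": "libra_e2e", "tags": ["upstream_abc123"]},
    {"name": "faucet", "tags": ["upstream_abc123"]},
    {"name": "old_client", "tags": ["upstream_zzz999"]},  # different build, untouched
]

print(tag_nightly_tested(images, "upstream_abc123"))
```

All three nightly images are returned; the stale image with a different upstream tag is left alone.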
397,817 | 27,178,707,866 | IssuesEvent | 2023-02-18 10:37:43 | WalletConnect/echo-server | https://api.github.com/repos/WalletConnect/echo-server | closed | feat: Tidy Specs | A-documentation C-enhancement S-accepted P-low E-hard | Work through specs after E2EE and make sure they are all tidy and clean (preferably in one file) | 1.0 | feat: Tidy Specs - Work through specs after E2EE and make sure they are all tidy and clean (preferably in one file) | non_test | feat tidy specs work through specs after and make sure they are all tidy and clean preferably in one file | 0 |
130,841 | 10,667,187,076 | IssuesEvent | 2019-10-19 10:21:54 | marcomusy/vtkplotter | https://api.github.com/repos/marcomusy/vtkplotter | closed | AttributeError: 'Plotter' object has no attribute 'shape' | bug testing-phase |
File ".../anaconda3/lib/python3.7/site-packages/vtkplotter/backends.py", line 85, in getNotebookBackend
if vp.shape[0] != 1 or vp.shape[1] != 1:
AttributeError: 'Plotter' object has no attribute 'shape'
Why do I get this error after installing k3d? It worked before that.
| 1.0 | AttributeError: 'Plotter' object has no attribute 'shape' -
File ".../anaconda3/lib/python3.7/site-packages/vtkplotter/backends.py", line 85, in getNotebookBackend
if vp.shape[0] != 1 or vp.shape[1] != 1:
AttributeError: 'Plotter' object has no attribute 'shape'
Why do I get this error after installing k3d? It worked before that.
| test | attributeerror plotter object has no attribute shape file lib site packages vtkplotter backends py line in getnotebookbackend if vp shape or vp shape attributeerror plotter object has no attribute shape why do i get this error after installing worked before that | 1 |
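The traceback in this entry comes from `backends.py` reading `vp.shape` on a `Plotter` that never defined it. A minimal reproduction, plus the kind of defensive lookup that avoids the crash (a sketch, not the actual vtkplotter fix):

```python
# Reproduce the AttributeError from the report, then show a guarded
# lookup that falls back to a 1x1 window layout instead of crashing.

class Plotter:  # stand-in for vtkplotter.Plotter without a .shape attribute
    pass

vp = Plotter()

# Direct access raises, exactly as in the traceback:
try:
    _ = vp.shape[0]
except AttributeError as exc:
    error = str(exc)

# Defensive access defaults to a single-window layout:
shape = getattr(vp, "shape", (1, 1))
multi_window = shape[0] != 1 or shape[1] != 1
print(error)
print(multi_window)  # False
```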
34,504 | 4,931,418,916 | IssuesEvent | 2016-11-28 10:07:47 | TheScienceMuseum/collectionsonline | https://api.github.com/repos/TheScienceMuseum/collectionsonline | closed | Weight on display items higher | bug please-test priority-2 | This works fine for the "objects" tab but in the "all" tab, the archive documents come before an on display object. The "on display" object should almost always come first (unless we have a near direct match on title or accession no.).
<img width="770" alt="weight-on-display-higher-1" src="https://cloud.githubusercontent.com/assets/91365/19803653/c4b6cdf8-9d01-11e6-9063-78f140aa90b8.png">
<img width="863" alt="weight-on-display-higher-2" src="https://cloud.githubusercontent.com/assets/91365/19803652/c4b67bb4-9d01-11e6-9820-e3777a83c50c.png">
| 1.0 | Weight on display items higher - This works fine for the "objects" tab but in the "all" tab, the archive documents come before an on display object. The "on display" object should almost always come first (unless we have a near direct match on title or accession no.).
<img width="770" alt="weight-on-display-higher-1" src="https://cloud.githubusercontent.com/assets/91365/19803653/c4b6cdf8-9d01-11e6-9063-78f140aa90b8.png">
<img width="863" alt="weight-on-display-higher-2" src="https://cloud.githubusercontent.com/assets/91365/19803652/c4b67bb4-9d01-11e6-9820-e3777a83c50c.png">
| test | weight on display items higher this works fine for the objects tab but in the all tab the archive documents come before an on display object the on display object should almost always come first unless we have a near direct match on title or accession no img width alt weight on display higher src img width alt weight on display higher src | 1 |
52,341 | 6,228,107,453 | IssuesEvent | 2017-07-10 22:21:30 | vmware/docker-volume-vsphere | https://api.github.com/repos/vmware/docker-volume-vsphere | closed | [E2E] Automate P1 plugin tests | kind/test P1 | - [ ] Recover and extra mount
a. Start 2 containers using same volume with restart=always
b. restart docker and wait for containers to start and reference counts to be initialized.
c. Start 1 container using same volume
d. Stop all 3 containers and confirm disk is detached
- [ ] Short & long names
a. Start container with restart flag and use short name for volume (vol1)
b. restart docker
c. start another instance of container with short name for volume (vol1)
d. start another instance of container with long name for volume (vol1@datastore1)
e. stop all 3 containers and confirm disk is detached.
- [ ] Duplicate volume name
a. start container with restart flag and short name "vol1" (assuming vol1@ds1)
b. start container with restart flag with same volume name on another ds: vol1@ds2
c. restart docker
d. start container with long name: vol1@ds1
e. Stop all 3 containers and make sure both vol1@ds1 and vol1@ds2 are detached.
| 1.0 | [E2E] Automate P1 plugin tests - - [ ] Recover and extra mount
a. Start 2 containers using same volume with restart=always
b. restart docker and wait for containers to start and reference counts to be initialized.
c. Start 1 container using same volume
d. Stop all 3 containers and confirm disk is detached
- [ ] Short & long names
a. Start container with restart flag and use short name for volume (vol1)
b. restart docker
c. start another instance of container with short name for volume (vol1)
d. start another instance of container with long name for volume (vol1@datastore1)
e. stop all 3 containers and confirm disk is detached.
- [ ] Duplicate volume name
a. start container with restart flag and short name "vol1" (assuming vol1@ds1)
b. start container with restart flag with same volume name on another ds: vol1@ds2
c. restart docker
d. start container with long name: vol1@ds1
e. Stop all 3 containers and make sure both vol1@ds1 and vol1@ds2 are detached.
| test | automate plugin tests recover and extra mount a start containers using same volume with restart always b restart docker and wait for containers to start and reference counts to be initialized c start container using same volume d stop all containers and confirm disk is detached short long names a start container with restart flag and use short name for volume b restart docker c start another instance of container with short name for volume d start another instance of container with long name for volume e stop all containers and confirm disk is detached duplicate volume name a start container with restart flag and short name assuming b start container with restart flag with same volume name on another ds c restart docker d start container with long name e stop all containers and make sure both ad are detached | 1 |
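All of these scenarios hinge on the same invariant: the disk stays attached while any container references the volume and detaches only when the reference count returns to zero. A toy model of that bookkeeping (illustrative only, not the vSphere plugin code):

```python
# Toy reference-count model for the attach/detach invariant the E2E
# scenarios above verify: first mount attaches, last unmount detaches.

class Volume:
    def __init__(self, name):
        self.name = name
        self.refs = 0
        self.attached = False

    def mount(self):
        if self.refs == 0:
            self.attached = True   # first user attaches the disk
        self.refs += 1

    def unmount(self):
        self.refs -= 1
        if self.refs == 0:
            self.attached = False  # last user detaches the disk

vol1 = Volume("vol1@ds1")
for _ in range(3):   # start 3 containers using the same volume
    vol1.mount()
for _ in range(3):   # stop all 3 containers
    vol1.unmount()
print(vol1.attached)  # False: disk detached once every container stops
```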
6,284 | 2,831,949,829 | IssuesEvent | 2015-05-25 01:43:06 | start-jsk/2014-semi | https://api.github.com/repos/start-jsk/2014-semi | closed | Practice | priority test | | Time | Order | Score | Correct | Wrong | Violations | Duration | Assignee | Notes |
|------------:|:--------|:-----|---:|---:|---:|:------|:-----------|:-----------------------------------------|
| 16th 15:00 | A | 40/186 | 7 | 1 | 3 | 19:11 | @k-okada | #543 (reports HighLand as grasped when it is not), ~~#544 (suction cup comes off)~~ , ~~#545 (no data from the serial port)~~, #540 (check script) |
| 18:00 | B | 70 | 5 | 0 | 1 | 16:03 | @aginika | ~~#545 (serial stopped, ending the run)~~, #550 (could not shake the object loose during return-object; https://github.com/start-jsk/2014-semi/issues/634), #551 (grasped object dropped during return-object), https://github.com/start-jsk/2014-semi/issues/544#issuecomment-102611265 (gripper tilted because of how the cable tie was attached, major failure) |
| 17th 10:00 | C | 69/199 | 6 | 1 | 2 | 16:39 | @wkentaro | https://github.com/start-jsk/2014-semi/issues/281#issuecomment-102749406 (drags the vacuum cleaner out), ~~https://github.com/start-jsk/2014-semi/issues/545#issuecomment-102736685 (serial stopped, ending the run)~~, https://github.com/start-jsk/2014-semi/issues/551#issuecomment-102733000 (cannot return genuin), https://github.com/start-jsk/2014-semi/issues/501#issuecomment-102733020 (segmentation failure) |
| 15:00 | B | | | | | | @iory | |
| 18:00 | D | 0(-27) | 0 | 1 | 2 | 13:50 | @aginika | #570 (stopped due to an error), https://github.com/start-jsk/2014-semi/issues/544#issuecomment-102798641 (gripper mounted at an angle, cannot grasp properly), https://github.com/start-jsk/2014-semi/issues/551#issuecomment-102797427 (failure on return)|
| 22:00 | D | 69/194 | | | | 18:24 | @aginika | https://github.com/start-jsk/2014-semi/issues/341#issuecomment-102825067 (whiffed the brush grab, 1:40) https://github.com/start-jsk/2014-semi/issues/27#issuecomment-102825327 (taking the Twitter photo is slow) #575 (first half 5:23 / 10:57: motion transitioned while the vacuum was running & object dropped) #578 (7:30: genuine only half-grasped, failed) mystery failure (9:31) #586 #577 (left verification died midway) #574 (11:04 / second half 1:26: return-object distance too short, re-grasped) https://github.com/start-jsk/2014-semi/issues/551#issuecomment-102797427 (second half 2:25: IK unsolvable, died on bin I's return-object) #544 (frequent gripper misalignment)|
| 18th 10:00 | B | | | | | | @k-okada | |
| 15:00 | E | -7/188 | 4 | 5 | 1 | | @wkentaro | https://github.com/start-jsk/2014-semi/issues/602 (pick from shelf E), https://github.com/start-jsk/2014-semi/issues/604 (number of classification attempts), https://github.com/start-jsk/2014-semi/issues/601 (hand camera recognition), https://github.com/start-jsk/2014-semi/issues/603 (drops first_year), #544 (frequent gripper misalignment) |
| 18:46 | F | 120/188 | | | | 15:00 | @iory |~~ #545 (serial-port error still appears, but the run went to the end without stopping) ~~, picked up highland by suction, but on leaving the box it caught on a slight ledge and was dropped.|
| 19th 13:00 | G | no rate | | | | | @iory | https://github.com/start-jsk/2014-semi/issues/617 objects cross over and fall; afterwards it dropped one and stalled at object verification. https://github.com/start-jsk/2014-semi/issues/310 right-hand gripper comes off |
| 15:00 | I | -41/188 | 2 | 2 | 5 | 19:00 | @wkentaro | https://github.com/start-jsk/2014-semi/issues/636, https://github.com/start-jsk/2014-semi/issues/635, https://github.com/start-jsk/2014-semi/issues/634, drops objects often with the new gripper.. |
| 18:00 | A | | | | | | @k-okada | |
| 20th 10:00 | B | | | | | | @aginika | |
| 16:00 | C | 30/186 | 3 | 0 | 4 | 10:00 | @wkentaro | drops often; cannot pick items leaning upright |
| 18:00 | A | | | | | | @iory | |
| 21st 10:00 | H | 69/186 | 6 | 1 | 2 | 13:00 | @wkentaro | https://github.com/start-jsk/2014-semi/issues/603#issuecomment-104092860, https://github.com/start-jsk/2014-semi/issues/689, https://github.com/start-jsk/2014-semi/issues/688, https://github.com/start-jsk/2014-semi/issues/687 |
| 19:02 | I | 37 | | | | | @k-okada | By next time, do #703 and #704 to reduce the cases where pick-object fails to grasp. |
| 23:09 | A | 52/188 | 5 | 1 | 1 | 20:00 | @k-okada | Tried #706 (usage in #704): tuned until K could be picked, then ran the test, but C and J failed. Next, tuning is needed until K, C, and J all succeed. |
| 22nd 6:00 | I | 30/180 | 4 | 1 | 2 | 20:00 | @iory | Tested #670 and https://github.com/start-jsk/2014-semi/pull/721; #722: the colorful ball is reported as not picked even though it was grasped |
| 15:00 | C | 69/190 | 6 | 0 | 1 | 20:00 | @aginika | |
| 18:00 | A | | | | | | @wkentaro | |
- How to run the program
| 17th 10:00 | C | 69/199 | 6 | 1 | 2 | 16:39 | @wkentaro | https://github.com/start-jsk/2014-semi/issues/281#issuecomment-102749406 (drags the vacuum cleaner out), https://github.com/start-jsk/2014-semi/issues/545#issueco
```
sudo pkill mongod
roslaunch jsk_2014_picking_challenge baxter.launch
roslaunch jsk_2014_picking_challenge setup.launch
roslaunch jsk_2014_picking_challenge main.launch json:=`rospack find jsk_2014_picking_challenge`/data/apc-h.json
```
- How to create a new apc.json
```
roscd jsk_2014_picking_challenge/scripts
./interface_generator.py
cp apc.json ../data/apc-*.json
```
Name apc-* with an appropriate letter, and change the setup.launch argument accordingly.
Also, when running an experiment, take a photo so that the placement of objects across the whole shelf is visible.
`git commit` the newly generated json.
- Bin arrangement
- A (186 points max): https://github.com/start-jsk/2014-semi/blob/master/jsk_2014_picking_challenge/data/apc-a.json
- How to check the score
- Run `rosrun jsk_2014_picking_challenge score_calculator.py` (note the max score and the score printed at the end)
- Correct: number of items grasped as ordered and placed in the order bin (fully correct)
- Wrong: number of items placed in the order bin that differ from the order (recognition mistake)
- Violations: number of items dropped or damaged (grasping / grasp-detection mistakes)
- 16 - (correct + wrong + violations): number of failed grasps | 1.0 | Practice - | Time | Order | Score | Correct | Wrong | Violations | Duration | Assignee | Notes |
|------------:|:--------|:-----|---:|---:|---:|:------|:-----------|:-----------------------------------------|
| 16th 15:00 | A | 40/186 | 7 | 1 | 3 | 19:11 | @k-okada | #543 (reports HighLand as grasped when it is not), ~~#544 (suction cup comes off)~~ , ~~#545 (no data from the serial port)~~, #540 (check script) |
| 18:00 | B | 70 | 5 | 0 | 1 | 16:03 | @aginika | ~~#545 (serial stopped, ending the run)~~, #550 (could not shake the object loose during return-object; https://github.com/start-jsk/2014-semi/issues/634), #551 (grasped object dropped during return-object), https://github.com/start-jsk/2014-semi/issues/544#issuecomment-102611265 (gripper tilted because of how the cable tie was attached, major failure) |
| 17th 10:00 | C | 69/199 | 6 | 1 | 2 | 16:39 | @wkentaro | https://github.com/start-jsk/2014-semi/issues/281#issuecomment-102749406 (drags the vacuum cleaner out), ~~https://github.com/start-jsk/2014-semi/issues/545#issuecomment-102736685 (serial stopped, ending the run)~~, https://github.com/start-jsk/2014-semi/issues/551#issuecomment-102733000 (cannot return genuin), https://github.com/start-jsk/2014-semi/issues/501#issuecomment-102733020 (segmentation failure) |
| 15:00 | B | | | | | | @iory | |
| 18:00 | D | 0(-27) | 0 | 1 | 2 | 13:50 | @aginika | #570 (stopped due to an error), https://github.com/start-jsk/2014-semi/issues/544#issuecomment-102798641 (gripper mounted at an angle, cannot grasp properly), https://github.com/start-jsk/2014-semi/issues/551#issuecomment-102797427 (failure on return)|
| 22:00 | D | 69/194 | | | | 18:24 | @aginika | https://github.com/start-jsk/2014-semi/issues/341#issuecomment-102825067 (whiffed the brush grab, 1:40) https://github.com/start-jsk/2014-semi/issues/27#issuecomment-102825327 (taking the Twitter photo is slow) #575 (first half 5:23 / 10:57: motion transitioned while the vacuum was running & object dropped) #578 (7:30: genuine only half-grasped, failed) mystery failure (9:31) #586 #577 (left verification died midway) #574 (11:04 / second half 1:26: return-object distance too short, re-grasped) https://github.com/start-jsk/2014-semi/issues/551#issuecomment-102797427 (second half 2:25: IK unsolvable, died on bin I's return-object) #544 (frequent gripper misalignment)|
| 18th 10:00 | B | | | | | | @k-okada | |
| 15:00 | E | -7/188 | 4 | 5 | 1 | | @wkentaro | https://github.com/start-jsk/2014-semi/issues/602 (pick from shelf E), https://github.com/start-jsk/2014-semi/issues/604 (number of classification attempts), https://github.com/start-jsk/2014-semi/issues/601 (hand camera recognition), https://github.com/start-jsk/2014-semi/issues/603 (drops first_year), #544 (frequent gripper misalignment) |
| 18:46 | F | 120/188 | | | | 15:00 | @iory |~~ #545 (serial-port error still appears, but the run went to the end without stopping) ~~, picked up highland by suction, but on leaving the box it caught on a slight ledge and was dropped.|
| 19th 13:00 | G | no rate | | | | | @iory | https://github.com/start-jsk/2014-semi/issues/617 objects cross over and fall; afterwards it dropped one and stalled at object verification. https://github.com/start-jsk/2014-semi/issues/310 right-hand gripper comes off |
| 15:00 | I | -41/188 | 2 | 2 | 5 | 19:00 | @wkentaro | https://github.com/start-jsk/2014-semi/issues/636, https://github.com/start-jsk/2014-semi/issues/635, https://github.com/start-jsk/2014-semi/issues/634, drops objects often with the new gripper.. |
| 18:00 | A | | | | | | @k-okada | |
| 20th 10:00 | B | | | | | | @aginika | |
| 16:00 | C | 30/186 | 3 | 0 | 4 | 10:00 | @wkentaro | drops often; cannot pick items leaning upright |
| 18:00 | A | | | | | | @iory | |
| 21st 10:00 | H | 69/186 | 6 | 1 | 2 | 13:00 | @wkentaro | https://github.com/start-jsk/2014-semi/issues/603#issuecomment-104092860, https://github.com/start-jsk/2014-semi/issues/689, https://github.com/start-jsk/2014-semi/issues/688, https://github.com/start-jsk/2014-semi/issues/687 |
| 19:02 | I | 37 | | | | | @k-okada | By next time, do #703 and #704 to reduce the cases where pick-object fails to grasp. |
| 23:09 | A | 52/188 | 5 | 1 | 1 | 20:00 | @k-okada | Tried #706 (usage in #704): tuned until K could be picked, then ran the test, but C and J failed. Next, tuning is needed until K, C, and J all succeed. |
| 22nd 6:00 | I | 30/180 | 4 | 1 | 2 | 20:00 | @iory | Tested #670 and https://github.com/start-jsk/2014-semi/pull/721; #722: the colorful ball is reported as not picked even though it was grasped |
| 15:00 | C | 69/190 | 6 | 0 | 1 | 20:00 | @aginika | |
| 18:00 | A | | | | | | @wkentaro | |
- How to run the program
| 17th 10:00 | C | 69/199 | 6 | 1 | 2 | 16:39 | @wkentaro | https://github.com/start-jsk/2014-semi/issues/281#issuecomment-102749406 (drags the vacuum cleaner out), https://github.com/start-jsk/2014-semi/issues/545#issueco
```
sudo pkill mongod
roslaunch jsk_2014_picking_challenge baxter.launch
roslaunch jsk_2014_picking_challenge setup.launch
roslaunch jsk_2014_picking_challenge main.launch json:=`rospack find jsk_2014_picking_challenge`/data/apc-h.json
```
- How to create a new apc.json
```
roscd jsk_2014_picking_challenge/scripts
./interface_generator.py
cp apc.json ../data/apc-*.json
```
Name apc-* with an appropriate letter, and change the setup.launch argument accordingly.
Also, when running an experiment, take a photo so that the placement of objects across the whole shelf is visible.
`git commit` the newly generated json.
- Bin arrangement
- A (186 points max): https://github.com/start-jsk/2014-semi/blob/master/jsk_2014_picking_challenge/data/apc-a.json
- How to check the score
- Run `rosrun jsk_2014_picking_challenge score_calculator.py` (note the max score and the score printed at the end)
- Correct: number of items grasped as ordered and placed in the order bin (fully correct)
- Wrong: number of items placed in the order bin that differ from the order (recognition mistake)
- Violations: number of items dropped or damaged (grasping / grasp-detection mistakes)
- 16 - (正解 + 失敗 + 違反) 把持に失敗したもの数 | test | 練習する 時間 order 得点 正解 失敗 違反 時間 担当 備考 a k okada highlandを掴んでいるけどつかんでいないという) (吸盤が取れる) (シリアルポートからデータが取れない) (チェックスクリプト) b aginika serialが停止して試合終了 return objectでふりふり落 return object時の把持物体の落下 結束帯の付け方によってグリッパがななめになってしまい大失敗 c wkentaro 掃除機を引きずり出してしまう serialが停止して試合終了 genuinが戻せない セグメント失敗 b iory d aginika エラーにより停止 グリッパが斜めについてちゃんととれない return時に失敗 d aginika ブラシを取ったつもりで空振り twitterの写真取るのが遅い 掃除機の稼働中に動作が遷移 落下 genuineが中途半端把持になり失敗 謎の失敗 left verifacationが途中で死亡 return objectの距離短く再把持 ikが解けずiのreturn objectで死亡 グリッパの頻繁なずれの問題 b k okada e wkentaro eの棚のpick 判別回数 hand camera認識 first yearを落とす グリッパの頻繁なずれの問題 f iory シリアルポートのエラーが依然出るが、止まらずに最後まで行った highlandをすってとったが、箱を出るときに微妙な段差で躓いて離してしまった。 g no rate iory 交差して落ちる このあと落としてobject verificationで止まってしまった 右手のグリッパーがとれる i wkentaro 新しいグリッパーでは落とすことが多い a k okada b aginika c wkentaro 落とすことが多い、立てかけてあるものがとれない a iory h wkentaro i k okada 次回までに, と をやりpick objectでつかめないという状況を少なくする. a k okada 。使い方は kが取れるまで調整してテスト実行。ただし、cとjで失敗。次はk c jでとれるまで調整が必要。 日 i iory と の試験 カラフルな球がとっているのにとれてないという問題 c aginika a wkentaro プログラムの動かし方 c wkentaro 掃除機を引きずり出してしまう sudo pkill mongod roslaunch jsk picking challenge baxter launch roslaunch jsk picking challenge setup launch roslaunch jsk picking challenge main launch json rospack find jsk picking challenge data apc h json 新しいapc jsonの作り方 roscd jsk picking challenge scripts interface generator py cp apc json data apc json apc は適宜適切なアルファベットにしましょう。setup launchの引数も適切に変えましょう。 また、実験をするときは棚全体の物体のおき方が見えるように写真をとりましょう。 新しく生成したjsonは git commit しましょう。 ビンの並べ方 a ) 得点の確認法 rosrun jsk picking challenge score calculator py を実行すべし 最後に出る満点と点数を控える 正解:オーダ通りのものを掴んでオーダビンに入れたものの個数(すべて正解) 失敗:オーダとは違うものでオーダビンに入れたものの個数(認識がミス) 違反:ものを落としたり壊したりしたものの個数(つかみ 把持認識ミス) 正解 失敗 違反 把持に失敗したもの数 | 1 |
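The scoring bookkeeping in this entry (correct, wrong, and violation counts from `score_calculator.py`, with 16 orders per run implied by the "16 - (...)" formula) can be written as a small helper. The order count of 16 is taken from the issue's own formula:

```python
# Grasp-failure count per the issue's formula:
# failed grasps = orders - (correct + wrong + violations), with 16 orders.

def grasp_failures(correct, wrong, violations, orders=16):
    return orders - (correct + wrong + violations)

# Example from the 16th 15:00 run of order A: 7 correct, 1 wrong, 3 violations
print(grasp_failures(7, 1, 3))  # 5
```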
57,910 | 16,137,417,058 | IssuesEvent | 2021-04-29 13:32:39 | department-of-veterans-affairs/va.gov-cms | https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms | opened | Vet Center CAP form should not contain guidance about fixing externally managed data | Defect Needs refining | **Describe the defect**
CAPs do not have externally managed data and should not have guidance about it.
**To Reproduce**
Steps to reproduce the behavior:
1. https://prod.cms.va.gov/node/add/vet_center_cap
2. Note misleading help text
**Expected behavior**
This help text "Review key location information about this facility. To request a correction to address, phone, or hours, contact an administrator" should not appear on https://prod.cms.va.gov/node/add/vet_center_cap
**Screenshots**

| 1.0 | Vet Center CAP form should not contain guidance about fixing externally managed data - **Describe the defect**
CAPs do not have externally managed data and should not have guidance about it.
**To Reproduce**
Steps to reproduce the behavior:
1. https://prod.cms.va.gov/node/add/vet_center_cap
2. Note misleading help text
**Expected behavior**
This help text "Review key location information about this facility. To request a correction to address, phone, or hours, contact an administrator" should not appear on https://prod.cms.va.gov/node/add/vet_center_cap
**Screenshots**

| non_test | vet center cap form should not contain guidance about fixing externally managed data describe the defect caps do not have externally managed data and should not have guidance about it to reproduce steps to reproduce the behavior note misleading help text expected behavior this help text review key location information about this facility to request a correction to address phone or hours contact an administrator should not appear on screenshots | 0 |
16,891 | 3,573,434,439 | IssuesEvent | 2016-01-27 06:23:41 | dotnet/orleans | https://api.github.com/repos/dotnet/orleans | closed | Builds for OrleansContrib | question testing | Creating a new issue to discuss the question of builds for https://github.com/OrleansContrib/ as raised by @amccool / @richorama / @sergeybykov in #1205
https://github.com/dotnet/orleans/issues/1205#issuecomment-174725692
Basically is there a build process etc or should we set something up? | 1.0 | Builds for OrleansContrib - Creating a new issue to discuss the question of builds for https://github.com/OrleansContrib/ as raised by @amccool / @richorama / @sergeybykov in #1205
https://github.com/dotnet/orleans/issues/1205#issuecomment-174725692
Basically is there a build process etc or should we set something up? | test | builds for orleanscontrib creating a new issue to discuss the question of builds for as raised by amccool richorama sergeybykov in basically is there a build process etc or should we set something up | 1 |
4,386 | 5,030,487,210 | IssuesEvent | 2016-12-16 01:02:34 | zulip/zulip | https://api.github.com/repos/zulip/zulip | closed | Replace deprecated `self.assertEquals` with `self.assertEqual` | area: testing-infrastructure bite size bug | If you run `test-backend`, with Django 1.10, we now get tons of these warnings.
/home/tabbott/zulip/zerver/tests/test_upload.py:582: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(sanitize_name(u'tarball.tar.gz'), u'tarball.tar.gz')
We should be able to clean them up pretty quickly with `git grep` and `sed -i`. | 1.0 | Replace deprecated `self.assertEquals` with `self.assertEqual` - If you run `test-backend`, with Django 1.10, we now get tons of these warnings.
/home/tabbott/zulip/zerver/tests/test_upload.py:582: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(sanitize_name(u'tarball.tar.gz'), u'tarball.tar.gz')
We should be able to clean them up pretty quickly with `git grep` and `sed -i`. | non_test | replace deprecated self assertequals with self assertequal if you run test backend with django we now get tons of these warnings home tabbott zulip zerver tests test upload py deprecationwarning please use assertequal instead self assertequals sanitize name u tarball tar gz u tarball tar gz we should be able to clean them up pretty quickly with git grep and sed i | 0 |
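The `git grep` + `sed -i` cleanup suggested in this entry can also be sketched in pure Python, which makes the word-boundary behavior explicit (so `assertEqual` itself is never touched). The tree-walking helper is illustrative and is not run here:

```python
# Pure-Python equivalent of the suggested `git grep` + `sed -i` cleanup:
# rewrite deprecated assertEquals calls to assertEqual.
import re
import pathlib

def fix_deprecated_asserts(text):
    # \b word boundaries keep assertEqual itself from being rewritten
    return re.sub(r"\bassertEquals\b", "assertEqual", text)

sample = "self.assertEquals(sanitize_name(u'tarball.tar.gz'), u'tarball.tar.gz')"
fixed = fix_deprecated_asserts(sample)
print(fixed)  # self.assertEqual(sanitize_name(u'tarball.tar.gz'), u'tarball.tar.gz')

def fix_tree(root):
    """In-place rewrite over a source tree, mirroring `sed -i` (not run here)."""
    for path in pathlib.Path(root).rglob("*.py"):
        path.write_text(fix_deprecated_asserts(path.read_text()))
```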
22,083 | 3,592,865,003 | IssuesEvent | 2016-02-01 17:31:17 | cakephp/cakephp | https://api.github.com/repos/cakephp/cakephp | closed | 3.2 prefix routing not working | Defect | Hello there, I have a problem with prefix routing.
```php
<?php
use Cake\Core\Plugin;
use Cake\Routing\Router;
use Cake\Routing\RouteBuilder;
Router::defaultRouteClass('DashedRoute');
Router::prefix('api', function (RouteBuilder $routes) {
$routes->prefix('v1', function (RouteBuilder $routes) {
$routes->extensions('json');
$routes->connect('/me', ['controller' => 'Users', 'action' => 'me']);
});
});
Plugin::routes();
```
Controller location is `src/Controller/Api/V1/UsersController.php`.
```php
<?php
namespace App\Controller\Api\V1;
/**
* Class UsersController
* @package App\Controller\Api\V1
*
* @property \App\Model\Table\UsersTable $Users
*/
class UsersController extends ApiController
{
public function me()
{
$this->set('user', $this->Auth->user());
}
}
```
It always returns `
Error: UsersController could not be found.
Error: Create the class UsersController below in file: src/Controller/Api/V1/UsersController.php`
Route params:
plugin: (null)
prefix: api/v1
controller: Users
action: me | 1.0 | 3.2 prefix routing not working - Hello there, I have a problem with prefix routing.
```php
<?php
use Cake\Core\Plugin;
use Cake\Routing\Router;
use Cake\Routing\RouteBuilder;
Router::defaultRouteClass('DashedRoute');
Router::prefix('api', function (RouteBuilder $routes) {
$routes->prefix('v1', function (RouteBuilder $routes) {
$routes->extensions('json');
$routes->connect('/me', ['controller' => 'Users', 'action' => 'me']);
});
});
Plugin::routes();
```
Controller location is `src/Controller/Api/V1/UsersController.php`.
```php
<?php
namespace App\Controller\Api\V1;
/**
* Class UsersController
* @package App\Controller\Api\V1
*
* @property \App\Model\Table\UsersTable $Users
*/
class UsersController extends ApiController
{
public function me()
{
$this->set('user', $this->Auth->user());
}
}
```
It always returns `
Error: UsersController could not be found.
Error: Create the class UsersController below in file: src/Controller/Api/V1/UsersController.php`
Route params:
plugin: (null)
prefix: api/v1
controller: Users
action: me | non_test | prefix routing not working hello there i have a problem with prefix routing php php use cake core plugin use cake routing router use cake routing routebuilder router defaultrouteclass dashedroute router prefix api function routebuilder routes routes prefix function routebuilder routes routes extensions json routes connect me plugin routes controller location is src controller api userscontroller php php php namespace app controller api class userscontroller package app controller api property app model table userstable users class userscontroller extends apicontroller public function me this set user this auth user it always return error userscontroller could not be found error create the class userscontroller below in file src controller api userscontroller php route params plugin null prefix api controller users action me | 0 |
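The error in this report comes down to how the route prefix `api/v1` is mapped to a controller class path. A sketch of that mapping (illustrative of the lookup a prefixed route implies, not CakePHP source): each prefix segment is camelized into a namespace segment, so the controller must live under `App\Controller\Api\V1`.

```python
# Illustrative mapping from a route prefix to the controller class path
# a prefixed CakePHP route expects (not CakePHP source code).

def controller_class(prefix, controller):
    # camelize each prefix segment: 'api' -> 'Api', 'v1' -> 'V1'
    parts = [p[:1].upper() + p[1:] for p in prefix.split("/")]
    return "App\\Controller\\" + "\\".join(parts) + "\\" + controller + "Controller"

print(controller_class("api/v1", "Users"))
# App\Controller\Api\V1\UsersController
```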
57,180 | 6,540,093,446 | IssuesEvent | 2017-09-01 14:11:17 | LDMW/cms | https://api.github.com/repos/LDMW/cms | closed | Ability to add logos into the footer | please-test priority-3 T2h | - [ ] India to check NHS guidelines
- [x] Need to be able to add logos into the footer | 1.0 | Ability to add logos into the footer - - [ ] India to check NHS guidelines
- [x] Need to be able to add logos into the footer | test | ability to add logos into the footer india to check nhs guidelines need to be able to add logos into the footer | 1 |
165,559 | 12,857,845,679 | IssuesEvent | 2020-07-09 09:59:13 | banzhonghu/acxbugtracking | https://api.github.com/repos/banzhonghu/acxbugtracking | opened | Acunetix - PHPinfo page found | CWE-200 CWE-200 Information_Disclosure Test_Files bug | | **Target URL** | http://testphp.vulnweb.com|
------------ | -------------
| **Severity** | Medium|
##### Affects
http://testphp.vulnweb.com/secured/phpinfo.php
##### Attack Details
Pattern found: <title>phpinfo()</title>
##### HTTP Request
GET /secured/phpinfo.php HTTP/1.1
Acunetix-Aspect: enabled
Acunetix-Aspect-Password: 082119f75623eb7abd7bf357698ff66c
Acunetix-Aspect-Queries: aspectalerts
Referer: http://testphp.vulnweb.com/
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding: gzip,deflate
Host: testphp.vulnweb.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36
Connection: Keep-alive
##### Vulnerability Description
This script is using phpinfo() function. This function outputs a large amount of information about the current state of PHP. This includes information about PHP compilation options and extensions, the PHP version, server information and environment (if compiled as a module), the PHP environment, OS version information, paths, master and local values of configuration options, HTTP headers, and the PHP License.
##### Impact
This file may expose sensitive information that may help a malicious user prepare more advanced attacks.
##### Remediation
Remove the file from production systems.
----
##### References:
<a href=https://www.php.net/manual/en/function.phpinfo.php>PHP phpinfo</a>
| 1.0 | Acunetix - PHPinfo page found - | **Target URL** | http://testphp.vulnweb.com|
------------ | -------------
| **Severity** | Medium|
##### Affects
http://testphp.vulnweb.com/secured/phpinfo.php
##### Attack Details
Pattern found: <title>phpinfo()</title>
##### HTTP Request
GET /secured/phpinfo.php HTTP/1.1
Acunetix-Aspect: enabled
Acunetix-Aspect-Password: 082119f75623eb7abd7bf357698ff66c
Acunetix-Aspect-Queries: aspectalerts
Referer: http://testphp.vulnweb.com/
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding: gzip,deflate
Host: testphp.vulnweb.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36
Connection: Keep-alive
##### Vulnerability Description
This script is using phpinfo() function. This function outputs a large amount of information about the current state of PHP. This includes information about PHP compilation options and extensions, the PHP version, server information and environment (if compiled as a module), the PHP environment, OS version information, paths, master and local values of configuration options, HTTP headers, and the PHP License.
##### Impact
This file may expose sensitive information that may help a malicious user prepare more advanced attacks.
##### Remediation
Remove the file from production systems.
----
##### References:
<a href=https://www.php.net/manual/en/function.phpinfo.php>PHP phpinfo</a>
| test | acunetix phpinfo page found target url severity medium affects attack details pattern found lt title gt phpinfo lt title gt http request get secured phpinfo php http acunetix aspect enabled acunetix aspect password acunetix aspect queries aspectalerts referer accept text html application xhtml xml application xml q q accept encoding gzip deflate host testphp vulnweb com user agent mozilla windows nt applewebkit khtml like gecko chrome safari connection keep alive vulnerability description this script is using phpinfo function this function outputs a large amount of information about the current state of php this includes information about php compilation options and extensions the php version server information and environment if compiled as a module the php environment os version information paths master and local values of configuration options http headers and the php license impact this file may expose sensitive information that may help an malicious user to prepare more advanced attacks remediation remove the file from production systems references | 1 |
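The detection in this finding reduces to a pattern match on the HTTP response body ("Pattern found: `<title>phpinfo()</title>`"). A sketch of that check, run offline against sample bodies rather than over the network:

```python
# Offline sketch of the scanner's phpinfo detection: a simple pattern
# match on the response body, as reported in "Attack Details" above.

def exposes_phpinfo(body):
    return "<title>phpinfo()</title>" in body

leaked = "<html><head><title>phpinfo()</title></head><body>...</body></html>"
clean = "<html><head><title>Home</title></head><body>...</body></html>"

print(exposes_phpinfo(leaked), exposes_phpinfo(clean))  # True False
```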
327,183 | 28,046,132,612 | IssuesEvent | 2023-03-28 23:09:18 | unifyai/ivy | https://api.github.com/repos/unifyai/ivy | reopened | Fix nn.test_tensorflow_weighted_cross_entropy_with_logits | TensorFlow Frontend Sub Task Failing Test | | | |
|---|---|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/3696985215/jobs/6261439089" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
| 1.0 | Fix nn.test_tensorflow_weighted_cross_entropy_with_logits - | | |
|---|---|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/3696985215/jobs/6261439089" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
| test | fix nn test tensorflow weighted cross entropy with logits numpy img src | 1 |
298,471 | 9,200,366,627 | IssuesEvent | 2019-03-07 16:53:55 | qissue-bot/QGIS | https://api.github.com/repos/qissue-bot/QGIS | closed | new added field isn't in label | Category: Vectors Component: Affected QGIS version Component: Crashes QGIS or corrupts data Component: Easy fix? Component: Operating System Component: Pull Request or Patch supplied Component: Regression? Component: Resolution Priority: Low Project: QGIS Application Status: Closed Tracker: Bug report | ---
Author Name: **jobi-users-sourceforge-net -** (jobi-users-sourceforge-net -)
Original Redmine Issue: 45, https://issues.qgis.org/issues/45
Original Assignee: Gary Sherman
---
I added a new field 'name' with some text to a GRASS
layer and then wanted to display this as label.
But it doesn't appear in the label dropdown of the
preferences of this layer.
| 1.0 | new added field isn't in label - ---
Author Name: **jobi-users-sourceforge-net -** (jobi-users-sourceforge-net -)
Original Redmine Issue: 45, https://issues.qgis.org/issues/45
Original Assignee: Gary Sherman
---
I added a new field 'name' with some text to a GRASS
layer and then wanted to display this as label.
But it doesn't appear in the label dropdown of the
preferences of this layer.
| non_test | new added field isn t in label author name jobi users sourceforge net jobi users sourceforge net original redmine issue original assignee gary sherman i added a new field name with some text to a grass layer and then wanted to display this as label but it doesn t appear in the label dropdown of the preferences of this layer | 0 |
236,186 | 19,520,019,372 | IssuesEvent | 2021-12-29 16:40:38 | hcn1519/TILMemo | https://api.github.com/repos/hcn1519/TILMemo | closed | Unit Test - Test Stub | 📔 published UnitTest | # Test Stub xUnit
## How It Works
* SUT가 의존하고 있는 인터페이스의 테스트 구현을 정의한다.
* Exercise 단계에서 해당 stub을 실제 구현 대신 사용할 수 있도록 stub을 구성한다.
* Test 수행 단계에서 stub을 호출할 경우 해당 stub은 이전에 정의한 값을 리턴한다.
* 테스트에서는 일반적인 테스트 검증 과정처럼 예상 결과를 verify한다.
## When to Use it
1. 적절한 input을 구성할 수 없어서 테스트할 수 없는 코드가 발생하는 경우 - Test Stub을 통해 다양한 indirect input을 구성하고 이를 통해 SUT의 동작을 컨트롤 할 수 있다.
2. 테스트 환경에서는 사용이 어려운 소프트웨어를 사용하는 모듈을 테스트하고자 하는 경우 - 해당 소프트웨어를 통해 제공받는 값을 Test Stub을 통해 값을 대신 주입할 수 있도록 한다.
- 사용하능 indirect input을 검증해야 하는 경우, stub 대신 mock이나 spy를 활용한다.
## Variation: Test Fixture
In [xUnit](http://xunitpatterns.com/xUnit.html) , a *test fixture* is all the things we need to have in place in order to run a test and expect a particular outcome. Some people call this the [test context](http://xunitpatterns.com/test%20context.html) .
## Variation: Responder
- 유효한 indirect input을 제공하는 Test Stub
- Happy Path Testing에 사용된다.
## Variation: Saboteur
- 유효하지 않은 indirect input을 제공하는 Test Stub
- SUT가 잘못된 input으로 exception, runtime error 등이 발생할 수 있는 상황을 어떻게 극복(동작)하는지를 확인하는 목적으로 사용된다.
## Variation: Temporary Test Stub
- Depended-On-Component(DOC)가 준비되지 않았을 때 임시로 사용하는 stub
- 하드 코딩된 return statement 수준으로 정의된다.
- Temporary Test Stub은 Real DOC가 준비되면 교체된다.
## Variation: Entity Chain Snipping
- Responder의 한 변형으로 복잡한 네트워크 구현을 완전히 구현한 것처럼 동작하는 Test Stub
- 하나의 단순한 stub이 복잡한 네트워크 구현의 최종 결과 값을 리턴한다.
- Fixture setup을 좀 더 빠르게 할 수 있도록 하고, 테스트 코드를 쉽게 이해하도록 만든다.
## Implementation Note
* 몇가지 test stub 환경에 대한 예시 제공
| 1.0 | Unit Test - Test Stub - # Test Stub xUnit
## How It Works
* SUT가 의존하고 있는 인터페이스의 테스트 구현을 정의한다.
* Exercise 단계에서 해당 stub을 실제 구현 대신 사용할 수 있도록 stub을 구성한다.
* Test 수행 단계에서 stub을 호출할 경우 해당 stub은 이전에 정의한 값을 리턴한다.
* 테스트에서는 일반적인 테스트 검증 과정처럼 예상 결과를 verify한다.
## When to Use it
1. 적절한 input을 구성할 수 없어서 테스트할 수 없는 코드가 발생하는 경우 - Test Stub을 통해 다양한 indirect input을 구성하고 이를 통해 SUT의 동작을 컨트롤 할 수 있다.
2. 테스트 환경에서는 사용이 어려운 소프트웨어를 사용하는 모듈을 테스트하고자 하는 경우 - 해당 소프트웨어를 통해 제공받는 값을 Test Stub을 통해 값을 대신 주입할 수 있도록 한다.
- 사용하능 indirect input을 검증해야 하는 경우, stub 대신 mock이나 spy를 활용한다.
## Variation: Test Fixture
In [xUnit](http://xunitpatterns.com/xUnit.html) , a *test fixture* is all the things we need to have in place in order to run a test and expect a particular outcome. Some people call this the [test context](http://xunitpatterns.com/test%20context.html) .
## Variation: Responder
- 유효한 indirect input을 제공하는 Test Stub
- Happy Path Testing에 사용된다.
## Variation: Saboteur
- 유효하지 않은 indirect input을 제공하는 Test Stub
- SUT가 잘못된 input으로 exception, runtime error 등이 발생할 수 있는 상황을 어떻게 극복(동작)하는지를 확인하는 목적으로 사용된다.
## Variation: Temporary Test Stub
- Depended-On-Component(DOC)가 준비되지 않았을 때 임시로 사용하는 stub
- 하드 코딩된 return statement 수준으로 정의된다.
- Temporary Test Stub은 Real DOC가 준비되면 교체된다.
## Variation: Entity Chain Snipping
- Responder의 한 변형으로 복잡한 네트워크 구현을 완전히 구현한 것처럼 동작하는 Test Stub
- 하나의 단순한 stub이 복잡한 네트워크 구현의 최종 결과 값을 리턴한다.
- Fixture setup을 좀 더 빠르게 할 수 있도록 하고, 테스트 코드를 쉽게 이해하도록 만든다.
## Implementation Note
* 몇가지 test stub 환경에 대한 예시 제공
| test | unit test test stub test stub xunit how it works sut가 의존하고 있는 인터페이스의 테스트 구현을 정의한다 exercise 단계에서 해당 stub을 실제 구현 대신 사용할 수 있도록 stub을 구성한다 test 수행 단계에서 stub을 호출할 경우 해당 stub은 이전에 정의한 값을 리턴한다 테스트에서는 일반적인 테스트 검증 과정처럼 예상 결과를 verify한다 when to use it 적절한 input을 구성할 수 없어서 테스트할 수 없는 코드가 발생하는 경우 test stub을 통해 다양한 indirect input을 구성하고 이를 통해 sut의 동작을 컨트롤 할 수 있다 테스트 환경에서는 사용이 어려운 소프트웨어를 사용하는 모듈을 테스트하고자 하는 경우 해당 소프트웨어를 통해 제공받는 값을 test stub을 통해 값을 대신 주입할 수 있도록 한다 사용하능 indirect input을 검증해야 하는 경우 stub 대신 mock이나 spy를 활용한다 variation test fixture in a test fixture is all the things we need to have in place in order to run a test and expect a particular outcome some people call this the variation responder 유효한 indirect input을 제공하는 test stub happy path testing에 사용된다 variation saboteur 유효하지 않은 indirect input을 제공하는 test stub sut가 잘못된 input으로 exception runtime error 등이 발생할 수 있는 상황을 어떻게 극복 동작 하는지를 확인하는 목적으로 사용된다 variation temporary test stub depended on component doc 가 준비되지 않았을 때 임시로 사용하는 stub 하드 코딩된 return statement 수준으로 정의된다 temporary test stub은 real doc가 준비되면 교체된다 variation entity chain snipping responder의 한 변형으로 복잡한 네트워크 구현을 완전히 구현한 것처럼 동작하는 test stub 하나의 단순한 stub이 복잡한 네트워크 구현의 최종 결과 값을 리턴한다 fixture setup을 좀 더 빠르게 할 수 있도록 하고 테스트 코드를 쉽게 이해하도록 만든다 implementation note 몇가지 test stub 환경에 대한 예시 제공 | 1 |
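The Responder and Saboteur variations described above can be sketched in a few lines of Python. This is a minimal illustration with invented names (the SUT depends on a hypothetical "time provider" interface), not code from any particular codebase.

```python
class Greeter:
    """SUT: depends on a provider object exposing a current_hour() method."""
    def __init__(self, time_provider):
        self.time_provider = time_provider

    def greeting(self):
        hour = self.time_provider.current_hour()
        if not 0 <= hour <= 23:
            raise ValueError(f"invalid hour: {hour}")
        return "Good morning" if hour < 12 else "Good afternoon"

class ResponderStub:
    """Responder: supplies valid indirect input (happy path)."""
    def current_hour(self):
        return 9

class SaboteurStub:
    """Saboteur: supplies invalid indirect input to exercise error handling."""
    def current_hour(self):
        return 99
```

In a test, `Greeter(ResponderStub()).greeting()` checks the happy path, while `Greeter(SaboteurStub()).greeting()` is expected to raise, verifying how the SUT handles bad indirect input.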
41,967 | 10,729,171,910 | IssuesEvent | 2019-10-28 15:10:01 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | closed | [COGNITION]: Colmery 107 - Heading H2s should wrap expand/collapse buttons | 508-defect-3 508/Accessibility bah-ct-107 gibct | ## Description
<!-- This is a detailed description of the issue. It should include a restatement of the title, and provide more background information. -->
The VA 508 office noted this as an improvement item. The large gray expand/collapse sections do not identify as headings level two in JAWS. A quick investigation showed the `<button>` elements wrapping the `<h2>` elements, which is the likely cause of these headings being ignored. Screenshot attached.
## Point of Contact
<!-- If this issue is being opened by a VFS team member, please add a point of contact. Usually this is the same person who enters the issue ticket.
-->
**VFS Point of Contact:** _Trevor_
## Acceptance Criteria
<!-- As a keyboard user, I want to open the Level of Coverage widget by pressing Spacebar or pressing Enter. These keypress actions should not interfere with the mouse click event also opening the widget. -->
* As a screenreader user, I want the H2 headings to appear in the Headings menu and be available as virtual cursor stops when I'm navigating by heading
* As a visual user, I do not want to see any style or functionality changes in the expand/collapse sections
## Environment
* Windows 10
* IE11
* JAWS
* https://staging.va.gov/gi-bill-comparison-tool/profile/11806124
## Possible Fixes (optional)
Let's try refactoring these to wrap the buttons in the H2:
```html
<h2>
<button
aria-controls="accordion-item-8"
aria-expanded="true"
class="usa-accordion-button"
>
Veteran programs
</button>
</h2>
```
-->
## WCAG or Vendor Guidance (optional)
* [Info and Relationships: Understanding SC 1.3.1](https://www.w3.org/TR/UNDERSTANDING-WCAG20/content-structure-separation-programmatic.html)
## Screenshots or Trace Logs
<!-- Drop any screenshots or error logs that might be useful for debugging -->

| 1.0 | [COGNITION]: Colmery 107 - Heading H2s should wrap expand/collapse buttons - ## Description
<!-- This is a detailed description of the issue. It should include a restatement of the title, and provide more background information. -->
The VA 508 office noted this as an improvement item. The large gray expand/collapse sections do not identify as headings level two in JAWS. A quick investigation showed the `<button>` elements wrapping the `<h2>` and is the likely cause of these headings being ignored. Screenshot attached.
## Point of Contact
<!-- If this issue is being opened by a VFS team member, please add a point of contact. Usually this is the same person who enters the issue ticket.
-->
**VFS Point of Contact:** _Trevor_
## Acceptance Criteria
<!-- As a keyboard user, I want to open the Level of Coverage widget by pressing Spacebar or pressing Enter. These keypress actions should not interfere with the mouse click event also opening the widget. -->
* As a screenreader user, I want the H2 headings to appear in the Headings menu and be available as virtual cursor stops when I'm navigating by heading
* As a visual user, I do not want to see any style or functionality changes in the expand/collapse sections
## Environment
* Windows 10
* IE11
* JAWS
* https://staging.va.gov/gi-bill-comparison-tool/profile/11806124
## Possible Fixes (optional)
Let's try refactoring these to wrap the buttons in the H2:
```html
<h2>
<button
aria-controls="accordion-item-8"
aria-expanded="true"
class="usa-accordion-button"
>
Veteran programs
</button>
</h2>
```
-->
## WCAG or Vendor Guidance (optional)
* [Info and Relationships: Understanding SC 1.3.1](https://www.w3.org/TR/UNDERSTANDING-WCAG20/content-structure-separation-programmatic.html)
## Screenshots or Trace Logs
<!-- Drop any screenshots or error logs that might be useful for debugging -->

| non_test | colmery heading should wrap expand collapse buttons description the va office noted this as an improvement item the large gray expand collapse sections do not identify as headings level two in jaws a quick investigation showed the elements wrapping the and is the likely cause of these headings being ignored screenshot attached point of contact if this issue is being opened by a vfs team member please add a point of contact usually this is the same person who enters the issue ticket vfs point of contact trevor acceptance criteria as a screenreader user i want the headings to appear in the headings menu and be available as virtual cursor stops when i m navigating by heading as a visual user i do not want to see any style or functionality changes in the expand collapse sections environment windows jaws possible fixes optional let s try refactoring these to wrap the buttons in the html button aria controls accordion item aria expanded true class usa accordion button veteran programs wcag or vendor guidance optional screenshots or trace logs | 0 |
484,207 | 13,936,187,230 | IssuesEvent | 2020-10-22 12:37:12 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | m.facebook.com - desktop site instead of mobile site | browser-firefox-mobile engine-gecko priority-critical | <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 6.0; Mobile; rv:68.0) Gecko/20100101 Firefox/68.0 -->
<!-- @reported_with: mobile-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/60262 -->
**URL**: https://m.facebook.com/messages/?entrypoint=jewel&no_hist=1
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android 6.0
**Tested Another Browser**: Yes Other
**Problem type**: Desktop site instead of mobile site
**Description**: Desktop site instead of mobile site
**Steps to Reproduce**:
Browser
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200501020101</li><li>channel: alpha</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/10/1cffdbb3-1a5d-4c6a-8c15-39c23ec4d336)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | m.facebook.com - desktop site instead of mobile site - <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 6.0; Mobile; rv:68.0) Gecko/20100101 Firefox/68.0 -->
<!-- @reported_with: mobile-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/60262 -->
**URL**: https://m.facebook.com/messages/?entrypoint=jewel&no_hist=1
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android 6.0
**Tested Another Browser**: Yes Other
**Problem type**: Desktop site instead of mobile site
**Description**: Desktop site instead of mobile site
**Steps to Reproduce**:
Browser
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200501020101</li><li>channel: alpha</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/10/1cffdbb3-1a5d-4c6a-8c15-39c23ec4d336)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_test | m facebook com desktop site instead of mobile site url browser version firefox mobile operating system android tested another browser yes other problem type desktop site instead of mobile site description desktop site instead of mobile site steps to reproduce browser browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel alpha hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ | 0 |
60,448 | 6,691,129,663 | IssuesEvent | 2017-10-09 11:59:48 | ansble/monument | https://api.github.com/repos/ansble/monument | closed | Add tests around routes/handleStaticFile.js | first contribution Hacktoberfest help wanted Testing | This file is currently tested by higher level tests that but not directly in unit tests. Add unite tests to exercise it fully and get it to 100% coverage. | 1.0 | Add tests around routes/handleStaticFile.js - This file is currently tested by higher level tests that but not directly in unit tests. Add unite tests to exercise it fully and get it to 100% coverage. | test | add tests around routes handlestaticfile js this file is currently tested by higher level tests that but not directly in unit tests add unite tests to exercise it fully and get it to coverage | 1 |
24,745 | 4,107,474,348 | IssuesEvent | 2016-06-06 13:11:30 | sasstools/sass-lint | https://api.github.com/repos/sasstools/sass-lint | closed | Refactoring the helpers test suite | enhancement tests | Having recently added to the helpers test file it became apparent that it needs a good refactor and in my opinion splitting up into multiple files, with a similar folder/file structure to that of the rules. While there aren't that many at the moment, I think to do it now would only be beneficial in the future.
So if anybody doesn't have any objections, I'll look get going on this. | 1.0 | Refactoring the helpers test suite - Having recently added to the helpers test file it became apparent that it needs a good refactor and in my opinion splitting up into multiple files, with a similar folder/file structure to that of the rules. While there aren't that many at the moment, I think to do it now would only be beneficial in the future.
So if anybody doesn't have any objections, I'll look get going on this. | test | refactoring the helpers test suite having recently added to the helpers test file it became apparent that it needs a good refactor and in my opinion splitting up into multiple files with a similar folder file structure to that of the rules while there aren t that many at the moment i think to do it now would only be beneficial in the future so if anybody doesn t have any objections i ll look get going on this | 1 |
264,967 | 23,145,082,309 | IssuesEvent | 2022-07-28 23:14:23 | MPMG-DCC-UFMG/F01 | https://api.github.com/repos/MPMG-DCC-UFMG/F01 | closed | Teste de generalizacao para a tag Seridores - Registro por lotação - Santo Hipólito | generalization test development template-Síntese tecnologia informatica tag-Servidores subtag-Registro por lotação | DoD: Realizar o teste de Generalização do validador da tag Seridores - Registro por lotação para o Município de Santo Hipólito. | 1.0 | Teste de generalizacao para a tag Seridores - Registro por lotação - Santo Hipólito - DoD: Realizar o teste de Generalização do validador da tag Seridores - Registro por lotação para o Município de Santo Hipólito. | test | teste de generalizacao para a tag seridores registro por lotação santo hipólito dod realizar o teste de generalização do validador da tag seridores registro por lotação para o município de santo hipólito | 1 |
276,720 | 24,013,046,466 | IssuesEvent | 2022-09-14 20:44:44 | iotaledger/explorer | https://api.github.com/repos/iotaledger/explorer | closed | [Fix]: Fix routing when navigating to base Explorer route | priority:1 type:fix network:testnet network:shimmer scope:project | ### Task description
Fix routing when navigating to the base Explorer route. It defaults to `mainnet`, while it should pick up the first network from those configured as available.
### Requirements
N/A
### Acceptance criteria
N/A
### Creation checklist
- [ ] I have assigned this task to the correct people
- [ ] I have added the most appropriate labels
- [ ] I have linked the correct milestone and/or project | 1.0 | [Fix]: Fix routing when navigating to base Explorer route - ### Task description
Fix routing when navigating to base Explorer route. Defaults to `mainnet` while if should pickup the first network from the configured available.
### Requirements
N/A
### Acceptance criteria
N/A
### Creation checklist
- [ ] I have assigned this task to the correct people
- [ ] I have added the most appropriate labels
- [ ] I have linked the correct milestone and/or project | test | fix routing when navigating to base explorer route task description fix routing when navigating to base explorer route defaults to mainnet while if should pickup the first network from the configured available requirements n a acceptance criteria n a creation checklist i have assigned this task to the correct people i have added the most appropriate labels i have linked the correct milestone and or project | 1 |
416,687 | 12,150,448,094 | IssuesEvent | 2020-04-24 17:59:47 | openshift/odo | https://api.github.com/repos/openshift/odo | opened | `odo component list` does not work for Devfile's | area/devfile priority/Medium | /kind bug
<!--
Welcome! - We kindly ask you to:
1. Fill out the issue template below
2. Use the Google group if you have a question rather than a bug or feature request.
The group is at: https://groups.google.com/forum/#!forum/odo-users
Thanks for understanding, and for contributing to the project!
-->
## What versions of software are you using?
**Operating System:** Linux
**Output of `odo version`:** N/A
## How did you run odo exactly?
I deployed a devfile application
```sh
git clone https://github.com/odo-devfiles/springboot-ex
cd springboot-ex
odo create java-spring-boot myspring
odo push
odo list # does not work
```
## Actual behavior
```sh
~/openshift/springboot-ex master ✗ 99d ⚑ ◒ ⍉
▶ ls -lah
total 60K
drwxr-xr-x 6 wikus wikus 4.0K Apr 24 11:55 .
drwxr-xr-x 10 wikus wikus 4.0K Apr 24 11:53 ..
drwxr-xr-x 3 wikus wikus 4.0K Apr 24 11:53 chart
-rw-r--r-- 1 wikus wikus 213 Apr 24 11:53 .cw-settings
-rw-r--r-- 1 wikus wikus 1.2K Apr 24 11:55 devfile.yaml
-rw-r--r-- 1 wikus wikus 237 Apr 24 11:53 Dockerfile
-rw-r--r-- 1 wikus wikus 2.4K Apr 24 11:53 Dockerfile-build
-rw-r--r-- 1 wikus wikus 325 Apr 24 11:53 Dockerfile-tools
drwxr-xr-x 8 wikus wikus 4.0K Apr 24 13:58 .git
-rw-r--r-- 1 wikus wikus 65 Apr 24 13:39 .gitignore
-rw-r--r-- 1 wikus wikus 97 Apr 24 11:53 Jenkinsfile
drwxr-x--- 3 wikus wikus 4.0K Apr 24 13:39 .odo
-rw-r--r-- 1 wikus wikus 3.0K Apr 24 11:53 pom.xml
-rw-r--r-- 1 wikus wikus 1.8K Apr 24 11:53 README.md
drwxr-xr-x 4 wikus wikus 4.0K Apr 24 11:53 src
~/openshift/springboot-ex master ✗ 99d ⚑ ◒
▶ odo list
✗ Please specify the application name and project name
Or use the command from inside a directory containing an odo component.
```
Despite having a devfile, `odo list` does not work
## Expected behavior
For `odo list` to work
## Any logs, error output, etc?
See above :)
| 1.0 | `odo component list` does not work for Devfile's - /kind bug
<!--
Welcome! - We kindly ask you to:
1. Fill out the issue template below
2. Use the Google group if you have a question rather than a bug or feature request.
The group is at: https://groups.google.com/forum/#!forum/odo-users
Thanks for understanding, and for contributing to the project!
-->
## What versions of software are you using?
**Operating System:** Linux
**Output of `odo version`:** N/A
## How did you run odo exactly?
I deployed a devfile application
```sh
git clone https://github.com/odo-devfiles/springboot-ex
cd springboot-ex
odo create java-spring-boot myspring
odo push
odo list # does not work
```
## Actual behavior
```sh
~/openshift/springboot-ex master ✗ 99d ⚑ ◒ ⍉
▶ ls -lah
total 60K
drwxr-xr-x 6 wikus wikus 4.0K Apr 24 11:55 .
drwxr-xr-x 10 wikus wikus 4.0K Apr 24 11:53 ..
drwxr-xr-x 3 wikus wikus 4.0K Apr 24 11:53 chart
-rw-r--r-- 1 wikus wikus 213 Apr 24 11:53 .cw-settings
-rw-r--r-- 1 wikus wikus 1.2K Apr 24 11:55 devfile.yaml
-rw-r--r-- 1 wikus wikus 237 Apr 24 11:53 Dockerfile
-rw-r--r-- 1 wikus wikus 2.4K Apr 24 11:53 Dockerfile-build
-rw-r--r-- 1 wikus wikus 325 Apr 24 11:53 Dockerfile-tools
drwxr-xr-x 8 wikus wikus 4.0K Apr 24 13:58 .git
-rw-r--r-- 1 wikus wikus 65 Apr 24 13:39 .gitignore
-rw-r--r-- 1 wikus wikus 97 Apr 24 11:53 Jenkinsfile
drwxr-x--- 3 wikus wikus 4.0K Apr 24 13:39 .odo
-rw-r--r-- 1 wikus wikus 3.0K Apr 24 11:53 pom.xml
-rw-r--r-- 1 wikus wikus 1.8K Apr 24 11:53 README.md
drwxr-xr-x 4 wikus wikus 4.0K Apr 24 11:53 src
~/openshift/springboot-ex master ✗ 99d ⚑ ◒
▶ odo list
✗ Please specify the application name and project name
Or use the command from inside a directory containing an odo component.
```
Despite having a devfile, `odo list` does not work
## Expected behavior
For `odo list` to work
## Any logs, error output, etc?
See above :)
| non_test | odo component list does not work for devfile s kind bug welcome we kindly ask you to fill out the issue template below use the google group if you have a question rather than a bug or feature request the group is at thanks for understanding and for contributing to the project what versions of software are you using operating system linux output of odo version n a how did you run odo exactly i deployed a devfile application sh git clone cd springboot ex odo create java spring boot myspring odo push odo list does not work actual behavior sh openshift springboot ex master ✗ ⚑ ◒ ⍉ ▶ ls lah total drwxr xr x wikus wikus apr drwxr xr x wikus wikus apr drwxr xr x wikus wikus apr chart rw r r wikus wikus apr cw settings rw r r wikus wikus apr devfile yaml rw r r wikus wikus apr dockerfile rw r r wikus wikus apr dockerfile build rw r r wikus wikus apr dockerfile tools drwxr xr x wikus wikus apr git rw r r wikus wikus apr gitignore rw r r wikus wikus apr jenkinsfile drwxr x wikus wikus apr odo rw r r wikus wikus apr pom xml rw r r wikus wikus apr readme md drwxr xr x wikus wikus apr src openshift springboot ex master ✗ ⚑ ◒ ▶ odo list ✗ please specify the application name and project name or use the command from inside a directory containing an odo component despite having a devfile odo list does not work expected behavior for odo list to work any logs error output etc see above | 0 |
114,427 | 11,847,033,827 | IssuesEvent | 2020-03-24 11:15:51 | jhipster/generator-jhipster | https://api.github.com/repos/jhipster/generator-jhipster | closed | Redis Replication setup (master-slave mode) | area: documentation:books: theme: redis | ##### **Overview of the feature request**
Often, in production, a single-server setup is not recommended, so we added Redis Cluster support (https://github.com/jhipster/generator-jhipster/issues/11129). But for small projects, a cluster may not be necessary.
##### **Motivation for or Use Case**
We want to guarantee the cache is still available if one of the nodes crashes. I modified my project based on v6.6.0 to use a single node in dev, and master-slave in production. These are relatively lightweight changes, but I'm not sure whether they should be included in JHipster.
##### **Related issues or PR**
https://github.com/jhipster/generator-jhipster/issues/11129
https://github.com/jhipster/generator-jhipster/issues/9280
- [x] Checking this box is mandatory (this is just to show you read everything)
<!-- Love JHipster? Please consider supporting our collective:
👉 https://opencollective.com/generator-jhipster/donate -->
| 1.0 | Redis Replication setup (master-slave mode) - ##### **Overview of the feature request**
Often time, in the production, the single server setup is not recommended, so we added the Redis Cluster support(https://github.com/jhipster/generator-jhipster/issues/11129). But for small projects, the cluster may not be necessary.
##### **Motivation for or Use Case**
We want to guarantee the cache is still available if one of the nodes is crashed. I modified my project based on v6.6.0 to use single node in dev, and master-slave in production. It's relatively light weight changes, but not sure whether it should be included in the jHipster.
##### **Related issues or PR**
https://github.com/jhipster/generator-jhipster/issues/11129
https://github.com/jhipster/generator-jhipster/issues/9280
- [x] Checking this box is mandatory (this is just to show you read everything)
<!-- Love JHipster? Please consider supporting our collective:
👉 https://opencollective.com/generator-jhipster/donate -->
| non_test | redis replication setup master slave mode overview of the feature request often time in the production the single server setup is not recommended so we added the redis cluster support but for small projects the cluster may not be necessary motivation for or use case we want to guarantee the cache is still available if one of the nodes is crashed i modified my project based on to use single node in dev and master slave in production it s relatively light weight changes but not sure whether it should be included in the jhipster related issues or pr checking this box is mandatory this is just to show you read everything love jhipster please consider supporting our collective 👉 | 0 |
805,936 | 29,737,714,900 | IssuesEvent | 2023-06-14 03:17:59 | dwyl/imgup | https://api.github.com/repos/dwyl/imgup | closed | Feat: Image Upload API | enhancement help wanted priority-1 discuss technical T1d | Once we have the _basic_ Web-based image uploading working there are _many_ enhancements we can make. ✨
Hopefully by making _everything_ Open Source - as always - we invite contributions from the community. 🤞
However from _our_ perspective @dwyl what we want is the ability to upload from our `Flutter` (`Native` Mobile) App.
# Todo
+ [ ] Create a **_Secure_ REST API endpoint** that allows a client e.g. `JS` or `Flutter` to upload an image
+ [x] Ideally should be streaming to provide visual feedback of the upload progress
+ [x] Should return the URL of the image once uploaded
> **Note**: this issue is not complete. It's a place-holder for the discussion around features/requirements.
Once we have #51 working in the Web interface, this is the next logical step. | 1.0 | Feat: Image Upload API - Once we have the _basic_ Web-based image uploading working there are _many_ enhancements we can make. ✨
Hopefully by making _everything_ Open Source - as always - we invite contributions from the community. 🤞
However from _our_ perspective @dwyl what we want is the ability to upload from our `Flutter` (`Native` Mobile) App.
# Todo
+ [ ] Create a **_Secure_ REST API endpoint** that allows a client e.g. `JS` or `Flutter` to upload an image
+ [x] Ideally should be streaming to provide visual feedback of the upload progress
+ [x] Should return the URL of the image once uploaded
> **Note**: this issue is not complete. It's a place-holder for the discussion around features/requirements.
Once we have #51 working in the Web interface, this is the next logical step. | non_test | feat image upload api once we have the basic web based image uploading working there are many enhancements we can make ✨ hopefully by making everything open source as always we invite contributions from the community 🤞 however from our perspective dwyl what we want is the ability to upload from our flutter native mobile app todo create a secure rest api endpoint that allows a client e g js or flutter to upload an image ideally should be streaming to provide visual feedback of the upload progress should return the url of the image once uploaded note this issue is not complete it s a place holder for the discussion around features requirements once we have working in the web interface this is the next logical step | 0 |
19,111 | 10,319,386,381 | IssuesEvent | 2019-08-30 17:22:26 | clientIO/joint | https://api.github.com/repos/clientIO/joint | closed | Element click events freeze browser when graph generated from large JSON | enhancement performance | When I try to render a graph with 1000 elements using `graph.fromJSON()`, the element click events freeze the browser for about 20 seconds (latest version of Chrome) and then run. I don't have this problem when rendering the graph from an adjacency list. I have a [codepen](https://codepen.io/silvertiger/pen/dBKzQz?editors=1111) to demonstrate. *Note: it takes forever (upward of several minutes) to load*. The adjacency list and raw JSON are [here](https://gist.github.com/1silvertiger/dd0852a1efd6670879374e84ac97e9ca). Any idea why this is happening? | True | Element click events freeze browser when graph generated from large JSON - When I try to render a graph with 1000 elements using `graph.fromJSON()`, the element click events freeze the browser for about 20 seconds (latest version of Chrome) and then run. I don't have this problem when rendering the graph from an adjacency list. I have a [codepen](https://codepen.io/silvertiger/pen/dBKzQz?editors=1111) to demonstrate. *Note: it takes forever (upward of several minutes) to load*. The adjacency list and raw JSON are [here](https://gist.github.com/1silvertiger/dd0852a1efd6670879374e84ac97e9ca). Any idea why this is happening? | non_test | element click events freeze browser when graph generated from large json when i try to render a graph with elements using graph fromjson the element click events freeze the browser for about seconds latest version of chrome and then run i don t have this problem when rendering the graph from an adjacency list i have a to demonstrate note it takes forever upward of several minutes to load the adjacency list and raw json are any idea why this is happening | 0 |
657 | 2,507,128,333 | IssuesEvent | 2015-01-12 16:18:10 | deis/deis | https://api.github.com/repos/deis/deis | opened | integration tests should share an app | testing | Each integration test file is an island currently. In particular, we waste a lot of time creating, `git push`ing, and destroying apps, even when that functionality isn't specifically under test.
Perhaps with some ref-counting `setUp`/`tearDown` approach, we could have the tests all share a single Deis app that is only updated or destroyed when the test calls for that. This should speed up the integration tests significantly. | 1.0 | integration tests should share an app - Each integration test file is an island currently. In particular, we waste a lot of time creating, `git push`ing, and destroying apps, even when that functionality isn't specifically under test.
Perhaps with some ref-counting `setUp`/`tearDown` approach, we could have the tests all share a single Deis app that is only updated or destroyed when the test calls for that. This should speed up the integration tests significantly. | test | integration tests should share an app each integration test file is an island currently in particular we waste a lot of time creating git push ing and destroying apps even when that functionality isn t specifically under test perhaps with some ref counting setup teardown approach we could have the tests all share a single deis app that is only updated or destroyed when the test calls for that this should speed up the integration tests significantly | 1 |
22,875 | 3,974,348,366 | IssuesEvent | 2016-05-04 21:50:16 | sztomi/callme | https://api.github.com/repos/sztomi/callme | opened | Add unit tests with external clients and servers | help wanted testing | Currently, many tests rely on creating a server and a client that connects to each other to test some functionality of one or both. This is fine, but essentially makes the library rely on itself for testing.
A reasonably portable addition to create unit tests that rely on an embeddable script engine (ruby, python, lua are all fine) to create servers and clients.
The tests should be able to install any required packages for the language (e.g. if it's python, they should be able to `pip install` msgpack or mprpc). | 1.0 | Add unit tests with external clients and servers - Currently, many tests rely on creating a server and a client that connects to each other to test some functionality of one or both. This is fine, but essentially makes the library rely on itself for testing.
A reasonably portable addition to create unit tests that rely on an embeddable script engine (ruby, python, lua are all fine) to create servers and clients.
The tests should be able to install any required packages for the language (e.g. if it's python, they should be able to `pip install` msgpack or mprpc). | test | add unit tests with external clients and servers currently many tests rely on creating a server and a client that connects to each other to test some functionality of one or both this is fine but essentially makes the library rely on itself for testing a reasonably portable addition to create unit tests that rely on an embeddable script engine ruby python lua are all fine to create servers and clients the tests should be able to install any required packages for the language e g if it s python they should be able to pip install msgpack or mprpc | 1 |
243,329 | 20,378,834,836 | IssuesEvent | 2022-02-21 18:40:11 | task-tim/Chere-Sasha | https://api.github.com/repos/task-tim/Chere-Sasha | closed | 2.1 - Prototype pour tester le contrôle de son à l'aide de Max et Arduino | bug test case priority : high | - [x] **2.1 - 1** Faire le code Arduino
- [x] **2.1 - 2** Faire le code Max
- [x] **2.1 - 3** Connecter les 2 ensemble pour voir si le tout marche ensemble
SUCCÈS | 1.0 | 2.1 - Prototype pour tester le contrôle de son à l'aide de Max et Arduino - - [x] **2.1 - 1** Faire le code Arduino
- [x] **2.1 - 2** Faire le code Max
- [x] **2.1 - 3** Connecter les 2 ensemble pour voir si le tout marche ensemble
SUCCÈS | test | prototype pour tester le contrôle de son à l aide de max et arduino faire le code arduino faire le code max connecter les ensemble pour voir si le tout marche ensemble succès | 1 |
1,509 | 2,550,793,793 | IssuesEvent | 2015-02-01 22:43:40 | schwa/SwiftGraphics | https://api.github.com/repos/schwa/SwiftGraphics | opened | Cannot run unit tests in Release mode | bug help wanted P2 testing | Many unit tests fail when built as Release instead of Debug.
This seems like it _might_ be a bug in XCTest? Other people are reporting similar issues:
https://twitter.com/blinker13/status/561816493105872896
We need to file a radar for this. | 1.0 | Cannot run unit tests in Release mode - Many unit tests fail when built as Release instead of Debug.
This seems like it _might_ be a bug in XCTest? Other people are reporting similar issues:
https://twitter.com/blinker13/status/561816493105872896
We need to file a radar for this. | test | cannot run unit tests in release mode many unit tests fail when built as release instead of debug this seems like it might be a bug in xctest other people are reporting similar issues we need to file a radar for this | 1 |
544,023 | 15,888,799,424 | IssuesEvent | 2021-04-10 09:02:36 | musescore/MuseScore | https://api.github.com/repos/musescore/MuseScore | opened | [MU4 Issue] Time signature number spinbox and dropdown not aligned properly with radio button and slash | Low Priority | **Describe the bug**
Time signature number spinbox and dropdown not aligned properly with radio button and slash
**To Reproduce**
Steps to reproduce the behavior:
1. Click on Creating New Score Wizard
2. Click on 'Next' button
3. Click on "Time Signature"
4. Observe alignment of numbers in the box from the dropdown (screenshot attached)
**Expected behavior**
Radiobutton , spinboxes, slash should be aligned
**Screenshots**

**Desktop (please complete the following information):**
MacOS
**Additional context**
Add any other context about the problem here.
| 1.0 | [MU4 Issue] Time signature number spinbox and dropdown not aligned properly with radio button and slash - **Describe the bug**
Time signature number spinbox and dropdown not aligned properly with radio button and slash
**To Reproduce**
Steps to reproduce the behavior:
1. Click on Creating New Score Wizard
2. Click on 'Next' button
3. Click on "Time Signature"
4. Observe alignment of numbers in the box from the dropdown (screenshot attached)
**Expected behavior**
Radiobutton , spinboxes, slash should be aligned
**Screenshots**

**Desktop (please complete the following information):**
MacOS
**Additional context**
Add any other context about the problem here.
| non_test | time signature number spinbox and dropdown not aligned properly with radio button and slash describe the bug time signature number spinbox and dropdown not aligned properly with radio button and slash to reproduce steps to reproduce the behavior click on creating new score wizard click on next button click on time signature observe alignment of numbers in the box from the dropdown screenshot attached expected behavior radiobutton spinboxes slash should be aligned screenshots desktop please complete the following information macos additional context add any other context about the problem here | 0 |
233,767 | 19,057,764,364 | IssuesEvent | 2021-11-26 00:06:45 | nrwl/nx | https://api.github.com/repos/nrwl/nx | closed | It looks like if test are executed twice. | type: bug blocked: repro needed scope: testing tools stale | Hello, does it happen to anyone that it seems as if the **tests were executed twice**?
I mean, **I have 1289 tests** in my project and in console it appears that **2578 were executed.**, (In the screenshot below I have taken the screenshot almost at the end of the execution, it shows **2551** of these) but when the final report is being generated, it appears the 1289 tests.
I have updated **Angular (12.2.10)** **, Jest** and **Nx** to their **latest version.**
My **package.json** is as follows:
{
"name": "frontend",
"version": "1.0.2",
"license": "MIT",
"scripts": {
"ng": "nx",
"nx": "nx",
"start": "nx serve",
"build": "ng build",
"build:prod": "node --max_old_space_size=8192 node_modules/@angular/cli/bin/ng build --configuration production --aot --build-optimizer && gzipper compress ./www",
"build:test": "node --max_old_space_size=8192 node_modules/@angular/cli/bin/ng build --configuration production --aot --build-optimizer --configuration=test && gzipper compress ./www",
"test": "node --max-old-space-size=8192 --expose-gc node_modules/jest/bin/jest.js --coverage --verbose false --logHeapUsage --ci",
"test-watch": "jest --watchAll --coverage --verbose false --logHeapUsage --maxWorkers=11",
"lint": "nx workspace-lint && ng lint",
"e2e": "ng e2e",
"affected:apps": "nx affected:apps",
"affected:libs": "nx affected:libs",
"affected:build": "nx affected:build",
"affected:e2e": "nx affected:e2e",
"affected:test": "nx affected:test",
"affected:lint": "nx affected:lint",
"affected:dep-graph": "nx affected:dep-graph",
"affected": "nx affected",
"format": "nx format:write",
"format:write": "nx format:write",
"format:check": "nx format:check",
"update": "ng update @nrwl/workspace",
"workspace-schematic": "nx workspace-schematic",
"dep-graph": "nx dep-graph",
"help": "nx help",
"postinstall": "ngcc",
"compodoc": "compodoc -p ./tsconfig.compodoc.json -s",
"build-webpack-analyzer": "ng build --stats-json",
"run-webpack-analyzer": "webpack-bundle-analyzer www/stats.json",
"sonar-linux": "./node_modules/sonar-scanner/bin/sonar-scanner",
"sonar-windows": ".\\node_modules\\sonar-scanner\\bin\\sonar-scanner"
},
"author": "José Ignacio Sanz García",
"private": true,
"dependencies": {
"@angular-material-components/datetime-picker": "^5.1.0",
"@angular/animations": "12.2.10",
"@angular/cdk": "12.2.10",
"@angular/common": "12.2.10",
"@angular/compiler": "12.2.10",
"@angular/core": "12.2.10",
"@angular/elements": "12.2.10",
"@angular/forms": "12.2.10",
"@angular/localize": "^12.2.10",
"@angular/material": "12.2.10",
"@angular/platform-browser": "12.2.10",
"@angular/platform-browser-dynamic": "12.2.10",
"@angular/platform-server": "12.2.10",
"@angular/pwa": "^0.1102.11",
"@angular/router": "12.2.10",
"@angular/service-worker": "12.2.10",
"@compodoc/compodoc": "^1.1.13",
"@ctrl/ngx-codemirror": "^4.1.1",
"@fullcalendar/core": "^5.6.0",
"@ng-select/ng-select": "^6.1.0",
"@ngrx/component-store": "12.5.0",
"@ngrx/effects": "12.5.0",
"@ngrx/entity": "12.5.0",
"@ngrx/router-store": "12.5.0",
"@ngrx/store": "12.5.0",
"@ngrx/store-devtools": "12.5.0",
"@ngx-translate/core": "^13.0.0",
"@ngx-translate/http-loader": "^6.0.0",
"@nrwl/eslint-plugin-nx": "12.10.0",
"@nrwl/workspace": "12.10.0",
"@typescript-eslint/eslint-plugin": "^4.22.1",
"@typescript-eslint/parser": "^4.22.1",
"chart.js": "^2.9.4",
"classlist.js": "^1.1.20150312",
"codemirror": "^5.61.0",
"crypto-js": "^4.0.0",
"eslint-plugin-cypress": "^2.11.2",
"file-saver": "^2.0.5",
"fs": "0.0.1-security",
"google-libphonenumber": "^3.2.21",
"html2canvas": "^1.0.0-rc.7",
"html2pdf.js": "^0.10.1",
"jest-canvas-mock": "^2.3.1",
"jest-leak-detector": "^27.0.6",
"jsdom": "^16.6.0",
"jspdf": "^2.3.1",
"jspdf-autotable": "^3.5.14",
"lodash": "^4.17.21",
"mammoth": "^1.4.18",
"ng-mocks": "^12.5.0",
"ngx-material-file-input": "^2.1.1",
"ngx-swiper-wrapper": "^10.0.0",
"npm": "^7.23.0",
"posthtml-beautify": "^0.7.0",
"primeicons": "^4.1.0",
"primeng": "^11.4.0",
"project": "^0.1.6",
"properties-parser": "^0.3.1",
"quill": "^1.3.7",
"rxjs": "^6.6.7",
"screenfull": "^5.1.0",
"textarea-caret": "^3.1.0",
"to-px": "^1.1.0",
"tslib": "^2.2.0",
"weak-napi": "^2.0.2",
"web-animations-js": "^2.3.2",
"xlsx": "^0.17.1",
"zone.js": "0.11.4"
},
"devDependencies": {
"@angular-devkit/build-angular": "12.2.10",
"@angular-devkit/schematics": "^12.2.10",
"@angular-eslint/eslint-plugin": "12.3.1",
"@angular-eslint/eslint-plugin-template": "12.3.1",
"@angular-eslint/template-parser": "12.3.1",
"@angular/cli": "12.2.10",
"@angular/compiler-cli": "12.2.10",
"@angular/language-service": "12.2.10",
"@ngrx/schematics": "12.5.0",
"@nrwl/angular": "12.10.0",
"@nrwl/cypress": "12.10.0",
"@nrwl/jest": "12.10.0",
"@types/chart.js": "^2.9.31",
"@types/crypto-js": "^4.0.1",
"@types/google-libphonenumber": "^7.4.21",
"@types/html2canvas": "0.0.36",
"@types/jest": "27.0.2",
"@types/jspdf": "^1.3.3",
"@types/lodash": "^4.14.168",
"@types/textarea-caret": "^3.0.0",
"@types/to-px": "^1.1.1",
"cypress": "^7.2.0",
"eslint": "^7.25.0",
"eslint-config-prettier": "^8.3.0",
"eslint-plugin-prettier": "^3.4.0",
"gzipper": "^4.5.0",
"html-docx-js-typescript": "^0.1.5",
"jest": "27.2.3",
"jest-junit": "^12.2.0",
"jest-preset-angular": "10.0.1",
"jest-sonar": "^0.2.12",
"posthtml": "^0.15.2",
"prettier": "2.3.1",
"protractor": "^7.0.0",
"sonar-scanner": "^3.1.0",
"string.prototype.replaceall": "^1.0.5",
"swiper": "^6.5.9",
"ts-jest": "27.0.5",
"ts-loader": "^9.1.1",
"ts-node": "^9.1.1",
"typescript": "4.3.5",
"webpack-bundle-analyzer": "^4.4.0",
"webpack-cli": "^4.6.0"
},
"browser": {
"crypto": false
}
}
The **base jest.config.js** (the one in the root) is as follows:
module.exports = {
testMatch: ['**/+(*.)+(spec|test).+(ts|js)?(x)'],
transform: {
'^.+\\.(ts|js|html)$': 'ts-jest',
},
resolver: '@nrwl/jest/plugins/resolver',
moduleFileExtensions: ['ts', 'js', 'html'],
coverageDirectory: '<rootDir>/coverage',
reporters: ['default',
'jest-junit',
['jest-sonar', {
outputDirectory: './',
outputName: 'test-report.xml',
reportedFilePath: 'absolute'
}],
],
setupFiles: [
'<rootDir>/test-setup.ts'
],
coverageReporters: ["json", "lcov", "clover", "text", "text-summary", "cobertura"],
collectCoverageFrom: [
"**/*.ts",
"!**/node_modules/**",
"!**/*vessel*.ts",
"!**/*.module.ts",
"!**/routes.ts",
"!**/paths.ts",
"!**/*.token.ts",
"!**/*.collection.ts",
"!**/*.enum.ts",
"!**/*.model.ts",
"!**/*.mock.ts",
"!**/polyfills.ts",
"!**/main.ts",
"!**/index.ts",
"!**/*help*.ts",
"!**/admin-help/**",
"!**/*.environment.ts",
"!**/environment*.ts",
"!**/typings.d.ts"
],
projects: [
'<rootDir>/libs/core',
'<rootDir>/libs/menu-ui',
'<rootDir>/libs/shared',
'<rootDir>/apps/ioms-main',
'<rootDir>/apps/auth',
'<rootDir>/apps/side-menu',
'<rootDir>/apps/dashboard',
'<rootDir>/apps/messaging',
'<rootDir>/apps/administration',
'<rootDir>/apps/data-request',
'<rootDir>/apps/vessels',
'<rootDir>/apps/compliance-data'
]
};
My **jest-preset.js** is the following:
const nxPreset = require('@nrwl/jest/preset');
nxPreset.transform = {
'^.+\\.(ts|js|html)$': 'jest-preset-angular'
};
module.exports = { ...nxPreset };
the **test-setup.ts** that is in the **root** is the following:
import 'jest-canvas-mock';
import '@frontend/core/mocks/libs/rxjs.mock';
console.warn = (message: string) => {
if(!message.includes('Could not find Angular Material core theme. Most Material components may not work as expected. For more info refer to the theming guide: https://material.angular.io/guide/theming')) {
console.warn(message);
}
};
the **jest.config.js** files of the **different projects** are as the following:
module.exports = {
displayName: 'core',
name: 'core',
coverageDirectory: '../../coverage/libs/core',
preset: '../../jest.preset.js',
setupFiles: ['./../../test-setup.ts'],
setupFilesAfterEnv: ['<rootDir>/src/test-setup.ts'],
globals: {
'ts-jest': {
tsconfig: '<rootDir>/tsconfig.spec.json',
stringifyContentPathRegex: '\\.(html|svg)$'
},
},
snapshotSerializers: [
'jest-preset-angular/build/serializers/no-ng-attributes',
'jest-preset-angular/build/serializers/ng-snapshot',
'jest-preset-angular/build/serializers/html-comment',
]
};
What could be happening?


Thanks in advance.
| 1.0 | It looks like if test are executed twice. - Hello, does it happen to anyone that it seems as if the **tests were executed twice**?
I mean, **I have 1289 tests** in my project and in console it appears that **2578 were executed.**, (In the screenshot below I have taken the screenshot almost at the end of the execution, it shows **2551** of these) but when the final report is being generated, it appears the 1289 tests.
I have updated **Angular (12.2.10)** **, Jest** and **Nx** to their **latest version.**
My **package.json** is as follows:
{
"name": "frontend",
"version": "1.0.2",
"license": "MIT",
"scripts": {
"ng": "nx",
"nx": "nx",
"start": "nx serve",
"build": "ng build",
"build:prod": "node --max_old_space_size=8192 node_modules/@angular/cli/bin/ng build --configuration production --aot --build-optimizer && gzipper compress ./www",
"build:test": "node --max_old_space_size=8192 node_modules/@angular/cli/bin/ng build --configuration production --aot --build-optimizer --configuration=test && gzipper compress ./www",
"test": "node --max-old-space-size=8192 --expose-gc node_modules/jest/bin/jest.js --coverage --verbose false --logHeapUsage --ci",
"test-watch": "jest --watchAll --coverage --verbose false --logHeapUsage --maxWorkers=11",
"lint": "nx workspace-lint && ng lint",
"e2e": "ng e2e",
"affected:apps": "nx affected:apps",
"affected:libs": "nx affected:libs",
"affected:build": "nx affected:build",
"affected:e2e": "nx affected:e2e",
"affected:test": "nx affected:test",
"affected:lint": "nx affected:lint",
"affected:dep-graph": "nx affected:dep-graph",
"affected": "nx affected",
"format": "nx format:write",
"format:write": "nx format:write",
"format:check": "nx format:check",
"update": "ng update @nrwl/workspace",
"workspace-schematic": "nx workspace-schematic",
"dep-graph": "nx dep-graph",
"help": "nx help",
"postinstall": "ngcc",
"compodoc": "compodoc -p ./tsconfig.compodoc.json -s",
"build-webpack-analyzer": "ng build --stats-json",
"run-webpack-analyzer": "webpack-bundle-analyzer www/stats.json",
"sonar-linux": "./node_modules/sonar-scanner/bin/sonar-scanner",
"sonar-windows": ".\\node_modules\\sonar-scanner\\bin\\sonar-scanner"
},
"author": "José Ignacio Sanz García",
"private": true,
"dependencies": {
"@angular-material-components/datetime-picker": "^5.1.0",
"@angular/animations": "12.2.10",
"@angular/cdk": "12.2.10",
"@angular/common": "12.2.10",
"@angular/compiler": "12.2.10",
"@angular/core": "12.2.10",
"@angular/elements": "12.2.10",
"@angular/forms": "12.2.10",
"@angular/localize": "^12.2.10",
"@angular/material": "12.2.10",
"@angular/platform-browser": "12.2.10",
"@angular/platform-browser-dynamic": "12.2.10",
"@angular/platform-server": "12.2.10",
"@angular/pwa": "^0.1102.11",
"@angular/router": "12.2.10",
"@angular/service-worker": "12.2.10",
"@compodoc/compodoc": "^1.1.13",
"@ctrl/ngx-codemirror": "^4.1.1",
"@fullcalendar/core": "^5.6.0",
"@ng-select/ng-select": "^6.1.0",
"@ngrx/component-store": "12.5.0",
"@ngrx/effects": "12.5.0",
"@ngrx/entity": "12.5.0",
"@ngrx/router-store": "12.5.0",
"@ngrx/store": "12.5.0",
"@ngrx/store-devtools": "12.5.0",
"@ngx-translate/core": "^13.0.0",
"@ngx-translate/http-loader": "^6.0.0",
"@nrwl/eslint-plugin-nx": "12.10.0",
"@nrwl/workspace": "12.10.0",
"@typescript-eslint/eslint-plugin": "^4.22.1",
"@typescript-eslint/parser": "^4.22.1",
"chart.js": "^2.9.4",
"classlist.js": "^1.1.20150312",
"codemirror": "^5.61.0",
"crypto-js": "^4.0.0",
"eslint-plugin-cypress": "^2.11.2",
"file-saver": "^2.0.5",
"fs": "0.0.1-security",
"google-libphonenumber": "^3.2.21",
"html2canvas": "^1.0.0-rc.7",
"html2pdf.js": "^0.10.1",
"jest-canvas-mock": "^2.3.1",
"jest-leak-detector": "^27.0.6",
"jsdom": "^16.6.0",
"jspdf": "^2.3.1",
"jspdf-autotable": "^3.5.14",
"lodash": "^4.17.21",
"mammoth": "^1.4.18",
"ng-mocks": "^12.5.0",
"ngx-material-file-input": "^2.1.1",
"ngx-swiper-wrapper": "^10.0.0",
"npm": "^7.23.0",
"posthtml-beautify": "^0.7.0",
"primeicons": "^4.1.0",
"primeng": "^11.4.0",
"project": "^0.1.6",
"properties-parser": "^0.3.1",
"quill": "^1.3.7",
"rxjs": "^6.6.7",
"screenfull": "^5.1.0",
"textarea-caret": "^3.1.0",
"to-px": "^1.1.0",
"tslib": "^2.2.0",
"weak-napi": "^2.0.2",
"web-animations-js": "^2.3.2",
"xlsx": "^0.17.1",
"zone.js": "0.11.4"
},
"devDependencies": {
"@angular-devkit/build-angular": "12.2.10",
"@angular-devkit/schematics": "^12.2.10",
"@angular-eslint/eslint-plugin": "12.3.1",
"@angular-eslint/eslint-plugin-template": "12.3.1",
"@angular-eslint/template-parser": "12.3.1",
"@angular/cli": "12.2.10",
"@angular/compiler-cli": "12.2.10",
"@angular/language-service": "12.2.10",
"@ngrx/schematics": "12.5.0",
"@nrwl/angular": "12.10.0",
"@nrwl/cypress": "12.10.0",
"@nrwl/jest": "12.10.0",
"@types/chart.js": "^2.9.31",
"@types/crypto-js": "^4.0.1",
"@types/google-libphonenumber": "^7.4.21",
"@types/html2canvas": "0.0.36",
"@types/jest": "27.0.2",
"@types/jspdf": "^1.3.3",
"@types/lodash": "^4.14.168",
"@types/textarea-caret": "^3.0.0",
"@types/to-px": "^1.1.1",
"cypress": "^7.2.0",
"eslint": "^7.25.0",
"eslint-config-prettier": "^8.3.0",
"eslint-plugin-prettier": "^3.4.0",
"gzipper": "^4.5.0",
"html-docx-js-typescript": "^0.1.5",
"jest": "27.2.3",
"jest-junit": "^12.2.0",
"jest-preset-angular": "10.0.1",
"jest-sonar": "^0.2.12",
"posthtml": "^0.15.2",
"prettier": "2.3.1",
"protractor": "^7.0.0",
"sonar-scanner": "^3.1.0",
"string.prototype.replaceall": "^1.0.5",
"swiper": "^6.5.9",
"ts-jest": "27.0.5",
"ts-loader": "^9.1.1",
"ts-node": "^9.1.1",
"typescript": "4.3.5",
"webpack-bundle-analyzer": "^4.4.0",
"webpack-cli": "^4.6.0"
},
"browser": {
"crypto": false
}
}
The **base jest.config.js** (the one in the root) is as follows:
module.exports = {
testMatch: ['**/+(*.)+(spec|test).+(ts|js)?(x)'],
transform: {
'^.+\\.(ts|js|html)$': 'ts-jest',
},
resolver: '@nrwl/jest/plugins/resolver',
moduleFileExtensions: ['ts', 'js', 'html'],
coverageDirectory: '<rootDir>/coverage',
reporters: ['default',
'jest-junit',
['jest-sonar', {
outputDirectory: './',
outputName: 'test-report.xml',
reportedFilePath: 'absolute'
}],
],
setupFiles: [
'<rootDir>/test-setup.ts'
],
coverageReporters: ["json", "lcov", "clover", "text", "text-summary", "cobertura"],
collectCoverageFrom: [
"**/*.ts",
"!**/node_modules/**",
"!**/*vessel*.ts",
"!**/*.module.ts",
"!**/routes.ts",
"!**/paths.ts",
"!**/*.token.ts",
"!**/*.collection.ts",
"!**/*.enum.ts",
"!**/*.model.ts",
"!**/*.mock.ts",
"!**/polyfills.ts",
"!**/main.ts",
"!**/index.ts",
"!**/*help*.ts",
"!**/admin-help/**",
"!**/*.environment.ts",
"!**/environment*.ts",
"!**/typings.d.ts"
],
projects: [
'<rootDir>/libs/core',
'<rootDir>/libs/menu-ui',
'<rootDir>/libs/shared',
'<rootDir>/apps/ioms-main',
'<rootDir>/apps/auth',
'<rootDir>/apps/side-menu',
'<rootDir>/apps/dashboard',
'<rootDir>/apps/messaging',
'<rootDir>/apps/administration',
'<rootDir>/apps/data-request',
'<rootDir>/apps/vessels',
'<rootDir>/apps/compliance-data'
]
};
My **jest-preset.js** is the following:
const nxPreset = require('@nrwl/jest/preset');
nxPreset.transform = {
'^.+\\.(ts|js|html)$': 'jest-preset-angular'
};
module.exports = { ...nxPreset };
the **test-setup.ts** that is in the **root** is the following:
import 'jest-canvas-mock';
import '@frontend/core/mocks/libs/rxjs.mock';
console.warn = (message: string) => {
if(!message.includes('Could not find Angular Material core theme. Most Material components may not work as expected. For more info refer to the theming guide: https://material.angular.io/guide/theming')) {
console.warn(message);
}
};
the **jest.config.js** files of the **different projects** are as the following:
module.exports = {
displayName: 'core',
name: 'core',
coverageDirectory: '../../coverage/libs/core',
preset: '../../jest.preset.js',
setupFiles: ['./../../test-setup.ts'],
setupFilesAfterEnv: ['<rootDir>/src/test-setup.ts'],
globals: {
'ts-jest': {
tsconfig: '<rootDir>/tsconfig.spec.json',
stringifyContentPathRegex: '\\.(html|svg)$'
},
},
snapshotSerializers: [
'jest-preset-angular/build/serializers/no-ng-attributes',
'jest-preset-angular/build/serializers/ng-snapshot',
'jest-preset-angular/build/serializers/html-comment',
]
};
What could be happening?


Thanks in advance.
| test | it looks like if test are executed twice hello does it happen to anyone that it seems as if the tests were executed twice i mean i have tests in my project and in console it appears that were executed in the screenshot below i have taken the screenshot almost at the end of the execution it shows of these but when the final report is being generated it appears the tests i have updated angular jest and nx to their latest version my package json is as follows name frontend version license mit scripts ng nx nx nx start nx serve build ng build build prod node max old space size node modules angular cli bin ng build configuration production aot build optimizer gzipper compress www build test node max old space size node modules angular cli bin ng build configuration production aot build optimizer configuration test gzipper compress www test node max old space size expose gc node modules jest bin jest js coverage verbose false logheapusage ci test watch jest watchall coverage verbose false logheapusage maxworkers lint nx workspace lint ng lint ng affected apps nx affected apps affected libs nx affected libs affected build nx affected build affected nx affected affected test nx affected test affected lint nx affected lint affected dep graph nx affected dep graph affected nx affected format nx format write format write nx format write format check nx format check update ng update nrwl workspace workspace schematic nx workspace schematic dep graph nx dep graph help nx help postinstall ngcc compodoc compodoc p tsconfig compodoc json s build webpack analyzer ng build stats json run webpack analyzer webpack bundle analyzer www stats json sonar linux node modules sonar scanner bin sonar scanner sonar windows node modules sonar scanner bin sonar scanner author josé ignacio sanz garcía private true dependencies angular material components datetime picker angular animations angular cdk angular common angular compiler angular core angular elements angular forms angular 
localize angular material angular platform browser angular platform browser dynamic angular platform server angular pwa angular router angular service worker compodoc compodoc ctrl ngx codemirror fullcalendar core ng select ng select ngrx component store ngrx effects ngrx entity ngrx router store ngrx store ngrx store devtools ngx translate core ngx translate http loader nrwl eslint plugin nx nrwl workspace typescript eslint eslint plugin typescript eslint parser chart js classlist js codemirror crypto js eslint plugin cypress file saver fs security google libphonenumber rc js jest canvas mock jest leak detector jsdom jspdf jspdf autotable lodash mammoth ng mocks ngx material file input ngx swiper wrapper npm posthtml beautify primeicons primeng project properties parser quill rxjs screenfull textarea caret to px tslib weak napi web animations js xlsx zone js devdependencies angular devkit build angular angular devkit schematics angular eslint eslint plugin angular eslint eslint plugin template angular eslint template parser angular cli angular compiler cli angular language service ngrx schematics nrwl angular nrwl cypress nrwl jest types chart js types crypto js types google libphonenumber types types jest types jspdf types lodash types textarea caret types to px cypress eslint eslint config prettier eslint plugin prettier gzipper html docx js typescript jest jest junit jest preset angular jest sonar posthtml prettier protractor sonar scanner string prototype replaceall swiper ts jest ts loader ts node typescript webpack bundle analyzer webpack cli browser crypto false the base jest config js the one in the root is as follows module exports testmatch transform ts js html ts jest resolver nrwl jest plugins resolver modulefileextensions coveragedirectory coverage reporters default jest junit jest sonar outputdirectory outputname test report xml reportedfilepath absolute setupfiles test setup ts coveragereporters collectcoveragefrom ts node modules vessel ts module 
ts routes ts paths ts token ts collection ts enum ts model ts mock ts polyfills ts main ts index ts help ts admin help environment ts environment ts typings d ts projects libs core libs menu ui libs shared apps ioms main apps auth apps side menu apps dashboard apps messaging apps administration apps data request apps vessels apps compliance data my jest preset js is the following const nxpreset require nrwl jest preset nxpreset transform ts js html jest preset angular module exports nxpreset the test setup ts that is in the root is the following import jest canvas mock import frontend core mocks libs rxjs mock console warn message string if message includes could not find angular material core theme most material components may not work as expected for more info refer to the theming guide console warn message the jest config js files of the different projects are as the following module exports displayname core name core coveragedirectory coverage libs core preset jest preset js setupfiles setupfilesafterenv globals ts jest tsconfig tsconfig spec json stringifycontentpathregex html svg snapshotserializers jest preset angular build serializers no ng attributes jest preset angular build serializers ng snapshot jest preset angular build serializers html comment what could be happening thanks in advance | 1 |
245,156 | 20,749,823,177 | IssuesEvent | 2022-03-15 05:50:43 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | ccl/cliccl: TestExportDataWithRevisions failed | C-test-failure O-robot branch-master | ccl/cliccl.TestExportDataWithRevisions [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4574871&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4574871&tab=artifacts#/) on master @ [f5fc84fb5707428ae9505c5e3e90cf3f63d465ad](https://github.com/cockroachdb/cockroach/commits/f5fc84fb5707428ae9505c5e3e90cf3f63d465ad):
```
=== RUN TestExportDataWithRevisions
test_log_scope.go:79: test logs captured to: /artifacts/tmp/_tmp/f4e460ad344831033e3c304b661d69e4/logTestExportDataWithRevisions2248579060
test_log_scope.go:80: use -show-logs to present logs inline
test_log_scope.go:79: test logs captured to: /artifacts/tmp/_tmp/f4e460ad344831033e3c304b661d69e4/logTestExportDataWithRevisions3836320217
test_log_scope.go:80: use -show-logs to present logs inline
```
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
Parameters in this failure:
- TAGS=bazel,gss
</p>
</details>
/cc @cockroachdb/server
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestExportDataWithRevisions.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| 1.0 | ccl/cliccl: TestExportDataWithRevisions failed - ccl/cliccl.TestExportDataWithRevisions [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4574871&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4574871&tab=artifacts#/) on master @ [f5fc84fb5707428ae9505c5e3e90cf3f63d465ad](https://github.com/cockroachdb/cockroach/commits/f5fc84fb5707428ae9505c5e3e90cf3f63d465ad):
```
=== RUN TestExportDataWithRevisions
test_log_scope.go:79: test logs captured to: /artifacts/tmp/_tmp/f4e460ad344831033e3c304b661d69e4/logTestExportDataWithRevisions2248579060
test_log_scope.go:80: use -show-logs to present logs inline
test_log_scope.go:79: test logs captured to: /artifacts/tmp/_tmp/f4e460ad344831033e3c304b661d69e4/logTestExportDataWithRevisions3836320217
test_log_scope.go:80: use -show-logs to present logs inline
```
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
Parameters in this failure:
- TAGS=bazel,gss
</p>
</details>
/cc @cockroachdb/server
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestExportDataWithRevisions.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| test | ccl cliccl testexportdatawithrevisions failed ccl cliccl testexportdatawithrevisions with on master run testexportdatawithrevisions test log scope go test logs captured to artifacts tmp tmp test log scope go use show logs to present logs inline test log scope go test logs captured to artifacts tmp tmp test log scope go use show logs to present logs inline help see also parameters in this failure tags bazel gss cc cockroachdb server | 1 |
167,463 | 20,726,116,781 | IssuesEvent | 2022-03-14 02:14:05 | directoryxx/Inventory-SISI | https://api.github.com/repos/directoryxx/Inventory-SISI | opened | CVE-2021-37701 (High) detected in tar-2.2.2.tgz, tar-0.1.20.tgz | security vulnerability | ## CVE-2021-37701 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tar-2.2.2.tgz</b>, <b>tar-0.1.20.tgz</b></p></summary>
<p>
<details><summary><b>tar-2.2.2.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-2.2.2.tgz">https://registry.npmjs.org/tar/-/tar-2.2.2.tgz</a></p>
<p>Path to dependency file: /assets/adminlte/bower_components/select2/package.json</p>
<p>Path to vulnerable library: /assets/adminlte/bower_components/bootstrap-daterangepicker/node_modules/tar/package.json,/assets/adminlte/bower_components/bootstrap-daterangepicker/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- node-sass-3.13.1.tgz (Root Library)
- node-gyp-3.8.0.tgz
- :x: **tar-2.2.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>tar-0.1.20.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-0.1.20.tgz">https://registry.npmjs.org/tar/-/tar-0.1.20.tgz</a></p>
<p>Path to dependency file: /assets/adminlte/bower_components/morris.js/package.json</p>
<p>Path to vulnerable library: /assets/adminlte/bower_components/morris.js/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- bower-1.2.8.tgz (Root Library)
- :x: **tar-0.1.20.tgz** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.16, 5.0.8, and 6.1.7 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary stat calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory, where the symlink and directory names in the archive entry used backslashes as a path separator on posix systems. The cache checking logic used both `\` and `/` characters as path separators, however `\` is a valid filename character on posix systems. By first creating a directory, and then replacing that directory with a symlink, it was thus possible to bypass node-tar symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. Additionally, a similar confusion could arise on case-insensitive filesystems. If a tar archive contained a directory at `FOO`, followed by a symbolic link named `foo`, then on case-insensitive file systems, the creation of the symbolic link would remove the directory from the filesystem, but _not_ from the internal directory cache, as it would not be treated as a cache hit. A subsequent file entry within the `FOO` directory would then be placed in the target of the symbolic link, thinking that the directory had already been created. These issues were addressed in releases 4.4.16, 5.0.8 and 6.1.7. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. 
If you are still using a v3 release we recommend you update to a more recent version of node-tar. If this is not possible, a workaround is available in the referenced GHSA-9r2w-394v-53qc.
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37701>CVE-2021-37701</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-9r2w-394v-53qc">https://github.com/npm/node-tar/security/advisories/GHSA-9r2w-394v-53qc</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution (tar): 4.4.16</p>
<p>Direct dependency fix Resolution (node-sass): 5.0.0</p><p>Fix Resolution (tar): 4.4.16</p>
<p>Direct dependency fix Resolution (bower): 1.3.10</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-37701 (High) detected in tar-2.2.2.tgz, tar-0.1.20.tgz - ## CVE-2021-37701 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tar-2.2.2.tgz</b>, <b>tar-0.1.20.tgz</b></p></summary>
<p>
<details><summary><b>tar-2.2.2.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-2.2.2.tgz">https://registry.npmjs.org/tar/-/tar-2.2.2.tgz</a></p>
<p>Path to dependency file: /assets/adminlte/bower_components/select2/package.json</p>
<p>Path to vulnerable library: /assets/adminlte/bower_components/bootstrap-daterangepicker/node_modules/tar/package.json,/assets/adminlte/bower_components/bootstrap-daterangepicker/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- node-sass-3.13.1.tgz (Root Library)
- node-gyp-3.8.0.tgz
- :x: **tar-2.2.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>tar-0.1.20.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-0.1.20.tgz">https://registry.npmjs.org/tar/-/tar-0.1.20.tgz</a></p>
<p>Path to dependency file: /assets/adminlte/bower_components/morris.js/package.json</p>
<p>Path to vulnerable library: /assets/adminlte/bower_components/morris.js/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- bower-1.2.8.tgz (Root Library)
- :x: **tar-0.1.20.tgz** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.16, 5.0.8, and 6.1.7 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary stat calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory, where the symlink and directory names in the archive entry used backslashes as a path separator on posix systems. The cache checking logic used both `\` and `/` characters as path separators, however `\` is a valid filename character on posix systems. By first creating a directory, and then replacing that directory with a symlink, it was thus possible to bypass node-tar symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. Additionally, a similar confusion could arise on case-insensitive filesystems. If a tar archive contained a directory at `FOO`, followed by a symbolic link named `foo`, then on case-insensitive file systems, the creation of the symbolic link would remove the directory from the filesystem, but _not_ from the internal directory cache, as it would not be treated as a cache hit. A subsequent file entry within the `FOO` directory would then be placed in the target of the symbolic link, thinking that the directory had already been created. These issues were addressed in releases 4.4.16, 5.0.8 and 6.1.7. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. 
If you are still using a v3 release we recommend you update to a more recent version of node-tar. If this is not possible, a workaround is available in the referenced GHSA-9r2w-394v-53qc.
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37701>CVE-2021-37701</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-9r2w-394v-53qc">https://github.com/npm/node-tar/security/advisories/GHSA-9r2w-394v-53qc</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution (tar): 4.4.16</p>
<p>Direct dependency fix Resolution (node-sass): 5.0.0</p><p>Fix Resolution (tar): 4.4.16</p>
<p>Direct dependency fix Resolution (bower): 1.3.10</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in tar tgz tar tgz cve high severity vulnerability vulnerable libraries tar tgz tar tgz tar tgz tar for node library home page a href path to dependency file assets adminlte bower components package json path to vulnerable library assets adminlte bower components bootstrap daterangepicker node modules tar package json assets adminlte bower components bootstrap daterangepicker node modules tar package json dependency hierarchy node sass tgz root library node gyp tgz x tar tgz vulnerable library tar tgz tar for node library home page a href path to dependency file assets adminlte bower components morris js package json path to vulnerable library assets adminlte bower components morris js node modules tar package json dependency hierarchy bower tgz root library x tar tgz vulnerable library vulnerability details the npm package tar aka node tar before versions and has an arbitrary file creation overwrite and arbitrary code execution vulnerability node tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted this is in part achieved by ensuring that extracted directories are not symlinks additionally in order to prevent unnecessary stat calls to determine whether a given path is a directory paths are cached when directories are created this logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory where the symlink and directory names in the archive entry used backslashes as a path separator on posix systems the cache checking logic used both and characters as path separators however is a valid filename character on posix systems by first creating a directory and then replacing that directory with a symlink it was thus possible to bypass node tar symlink checks on directories essentially allowing 
an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location thus allowing arbitrary file creation and overwrite additionally a similar confusion could arise on case insensitive filesystems if a tar archive contained a directory at foo followed by a symbolic link named foo then on case insensitive file systems the creation of the symbolic link would remove the directory from the filesystem but not from the internal directory cache as it would not be treated as a cache hit a subsequent file entry within the foo directory would then be placed in the target of the symbolic link thinking that the directory had already been created these issues were addressed in releases and the branch of node tar has been deprecated and did not receive patches for these issues if you are still using a release we recommend you update to a more recent version of node tar if this is not possible a workaround is available in the referenced ghsa publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tar direct dependency fix resolution node sass fix resolution tar direct dependency fix resolution bower step up your open source security game with whitesource | 0 |
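Editor's note: the separator confusion the advisory above describes is easy to reproduce outside node-tar. The Python sketch below is illustrative only (it is not node-tar's actual cache code, and `flawed_cache_key` is a hypothetical name): on POSIX, `\` is an ordinary filename character, so a cache key that also splits on backslashes collapses two genuinely different names into one entry.

```python
import posixpath

# Flawed cache key, mimicking the logic the advisory describes:
# it normalises both "/" and "\", although "\" is a legal filename
# character (not a separator) on POSIX systems.
def flawed_cache_key(path):
    return path.replace("\\", "/")

# On POSIX these are different names: one is a real subpath,
# the other is a single file whose name contains a backslash.
entry_a = "FOO/bar"
entry_b = "FOO\\bar"

print(posixpath.dirname(entry_a))  # "FOO" — two path components
print(posixpath.dirname(entry_b))  # ""    — one component on POSIX
print(flawed_cache_key(entry_a) == flawed_cache_key(entry_b))  # True — cache collision
```

The collision is exactly the opening the advisory describes: a directory entry and a backslash-named symlink hit the same cache slot, so the directory check can be bypassed.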
308,407 | 9,438,844,981 | IssuesEvent | 2019-04-14 04:28:32 | dqrobotics/matlab | https://api.github.com/repos/dqrobotics/matlab | closed | Questions about DQ_MobileBase's implementation | high priority question | Hello, @bvadorno,
I'm checking the implementation of `DQ_MobileBase.m` and came across the following
--------------------------------------------------**Context 1**--------------------------------------------------
(in file [`DQ_MobileBase`](https://github.com/dqrobotics/matlab/blob/master/robot_modeling/DQ_MobileBase.m))
```
properties (Access = protected)
dim_configuration_space;
end
```
I don't know if I'm misunderstanding MATLAB, but I found that strange since the method related to that attribute is part of DQ_Kinematics
(in file [`DQ_Kinematics`](https://github.com/dqrobotics/matlab/blob/master/robot_modeling/DQ_Kinematics.m))
```
% GET_DIM_CONFIGURATION_SPACE returns the dimension of the configuration
% space.
dim = get_dim_configuration_space(obj);
```
--------------------------------------------------**Question 1**--------------------------------------------------
I think the attribute `dim_configuration_space` should also be part of `DQ_Kinematics`, since the abstract method `get_dim_configuration_space(obj)` is part of `DQ_Kinematics`.
---------------------------------------------------------**End 1**--------------------------------------------------
----------------------------------------------------**Context 2**--------------------------------------------------
The attribute
(in file [`DQ_MobileBase`](https://github.com/dqrobotics/matlab/blob/master/robot_modeling/DQ_MobileBase.m))
```
properties
base_pose;
end
```
Seems very similar to the attribute
(in file [`DQ_Kinematics`](https://github.com/dqrobotics/matlab/blob/master/robot_modeling/DQ_Kinematics.m))
```
properties
% Frame used to determine the robot physical location
base_frame;
end
```
In fact, there seems to be some mistake because of the following method in which `base_frame()` returns `base_pose` and not `base_frame`.
(in file [`DQ_MobileBase`](https://github.com/dqrobotics/matlab/blob/master/robot_modeling/DQ_MobileBase.m))
```
function ret = base_frame(obj)
ret = obj.base_pose;
end
```
--------------------------------------------------**Question 2**--------------------------------------------------
Do they have a different functionality? Is this duplication expected?
--------------------------------------------------------**End 2**--------------------------------------------------
--------------------------------------------------**Question 3**--------------------------------------------------
Is there a reason why `raw_fkm()` and `raw_pose_jacobian()` are abstract methods of `DQ_MobileBase()` instead of being abstract methods of `DQ_Kinematics()`?
--------------------------------------------------------**End 3**--------------------------------------------------
Kind regards,
Murilo | 1.0 | Questions about DQ_MobileBase's implementation - Hello, @bvadorno,
I'm checking the implementation of `DQ_MobileBase.m` and came across the following
--------------------------------------------------**Context 1**--------------------------------------------------
(in file [`DQ_MobileBase`](https://github.com/dqrobotics/matlab/blob/master/robot_modeling/DQ_MobileBase.m))
```
properties (Access = protected)
dim_configuration_space;
end
```
I don't know if I'm misunderstanding MATLAB, but I found that strange since the method related to that attribute is part of DQ_Kinematics
(in file [`DQ_Kinematics`](https://github.com/dqrobotics/matlab/blob/master/robot_modeling/DQ_Kinematics.m))
```
% GET_DIM_CONFIGURATION_SPACE returns the dimension of the configuration
% space.
dim = get_dim_configuration_space(obj);
```
--------------------------------------------------**Question 1**--------------------------------------------------
I think the attribute `dim_configuration_space` should also be part of `DQ_Kinematics`, since the abstract method `get_dim_configuration_space(obj)` is part of `DQ_Kinematics`.
---------------------------------------------------------**End 1**--------------------------------------------------
----------------------------------------------------**Context 2**--------------------------------------------------
The attribute
(in file [`DQ_MobileBase`](https://github.com/dqrobotics/matlab/blob/master/robot_modeling/DQ_MobileBase.m))
```
properties
base_pose;
end
```
Seems very similar to the attribute
(in file [`DQ_Kinematics`](https://github.com/dqrobotics/matlab/blob/master/robot_modeling/DQ_Kinematics.m))
```
properties
% Frame used to determine the robot physical location
base_frame;
end
```
In fact, there seems to be some mistake because of the following method in which `base_frame()` returns `base_pose` and not `base_frame`.
(in file [`DQ_MobileBase`](https://github.com/dqrobotics/matlab/blob/master/robot_modeling/DQ_MobileBase.m))
```
function ret = base_frame(obj)
ret = obj.base_pose;
end
```
--------------------------------------------------**Question 2**--------------------------------------------------
Do they have a different functionality? Is this duplication expected?
--------------------------------------------------------**End 2**--------------------------------------------------
--------------------------------------------------**Question 3**--------------------------------------------------
Is there a reason why `raw_fkm()` and `raw_pose_jacobian()` are abstract methods of `DQ_MobileBase()` instead of being abstract methods of `DQ_Kinematics()`?
--------------------------------------------------------**End 3**--------------------------------------------------
Kind regards,
Murilo | non_test | questions about dq mobilebase s implementation hello bvadorno i m checking the implementation of dq mobilebase m and came across the following context in file properties access protected dim configuration space end i don t know if i m misunderstanding matlab but i found that strange since the method related to that attribute is part of dq kinematics in file get dim configuration space returns the dimension of the configuration space dim get dim configuration space obj question i think the attribute dim configuration space should also be part of dq kinematics since the abstract method get dim configuration space obj is part of dq kinematics end context the attribute in file properties base pose end seems very similar to the attribute in file properties frame used to determine the robot physical location base frame end in fact there seems to be some mistake because of the following method in which base frame returns base pose and not base frame in file function ret base frame obj ret obj base pose end question do they have a different functionality is this duplication expected end question is there a reason why raw fkm and raw pose jacobian are abstract methods of dq mobilebase instead of being abstract methods of dq kinematics end kind regards murilo | 0 |
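Editor's note: the layering argument in Question 1 of the issue above — the backing attribute should live in the same base class that declares the abstract accessor — can be sketched with Python's `abc` module. Class and attribute names mirror the MATLAB ones but are illustrative, not part of DQ Robotics.

```python
from abc import ABC, abstractmethod

class Kinematics(ABC):
    """Base class: declares the abstract accessor AND owns the backing
    attribute, which is the arrangement Question 1 argues for."""

    def __init__(self):
        self._dim_configuration_space = None  # protected-style attribute

    @abstractmethod
    def get_dim_configuration_space(self):
        """Return the dimension of the configuration space."""

class MobileBase(Kinematics):
    def __init__(self, dim):
        super().__init__()
        self._dim_configuration_space = dim  # concrete subclass fills it in

    def get_dim_configuration_space(self):
        return self._dim_configuration_space

base = MobileBase(3)
print(base.get_dim_configuration_space())  # 3
```

With the attribute and accessor declared together, every subclass inherits both, and the abstract base class itself still cannot be instantiated.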
37,684 | 8,474,800,442 | IssuesEvent | 2018-10-24 17:07:59 | brainvisa/testbidon | https://api.github.com/repos/brainvisa/testbidon | closed | somanifti partial reading leaves open file | Category: soma-io Component: Resolution Priority: Normal Status: Closed Tracker: Defect | ---
Author Name: **Riviere, Denis** (Riviere, Denis)
Original Redmine Issue: 13844, https://bioproj.extra.cea.fr/redmine/issues/13844
Original Date: 2015-11-14
---
Reading a full NIFTI volume is OK, but partial reading leaves an open file descriptor on the file.
| 1.0 | somanifti partial reading leaves open file - ---
Author Name: **Riviere, Denis** (Riviere, Denis)
Original Redmine Issue: 13844, https://bioproj.extra.cea.fr/redmine/issues/13844
Original Date: 2015-11-14
---
Reading a full NIFTI volume is OK, but partial reading leaves an open file descriptor on the file.
| non_test | somanifti partial reading leaves open file author name riviere denis riviere denis original redmine issue original date reading a full nifti volume is ok but partial reading leaves an open file descriptor on the file | 0 |
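Editor's note: the bug shape reported above — a partial read returning without releasing the file descriptor — is language-independent. The Python sketch below (hypothetical function names, not soma-io code) contrasts the leaky pattern with the fixed one that closes on every exit path.

```python
import os
import tempfile

def partial_read_leaky(path, n):
    f = open(path, "rb")
    data = f.read(n)          # returns without closing f
    return data, f            # descriptor left open — the reported bug shape

def partial_read_safe(path, n):
    with open(path, "rb") as f:   # closed on every exit path
        return f.read(n)

# Small stand-in file (not a real NIFTI volume).
fd, path = tempfile.mkstemp()
os.write(fd, b"NIFTI-1 header plus voxel data")
os.close(fd)

data, handle = partial_read_leaky(path, 7)
leaked = not handle.closed    # True: the file object came back still open
handle.close()

safe_data = partial_read_safe(path, 7)
os.remove(path)
print(data, leaked, safe_data)   # b'NIFTI-1' True b'NIFTI-1'
```

In C++ the equivalent fix is RAII: tie the descriptor's lifetime to a scoped object so the early-return partial-read path cannot leak it.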
32,823 | 7,604,243,532 | IssuesEvent | 2018-04-29 22:54:25 | UWCubeSat/DubSat1 | https://api.github.com/repos/UWCubeSat/DubSat1 | closed | PD board hangs at instruction 0x04 | CRITICAL Post Code Complete bug | cmd_rollcall is the only packet that hangs this MSP, hangs even without inserting data payload. Steps to reproduce: send cmd_rollcall | 1.0 | PD board hangs at instruction 0x04 - cmd_rollcall is the only packet that hangs this MSP, hangs even without inserting data payload. Steps to reproduce: send cmd_rollcall | non_test | pd board hangs at instruction cmd rollcall is the only packet that hangs this msp hangs even without inserting data payload steps to reproduce send cmd rollcall | 0 |
32,634 | 6,100,885,453 | IssuesEvent | 2017-06-20 13:35:11 | juba/questionr | https://api.github.com/repos/juba/questionr | closed | Make a screencast for i* functions ? | documentation question | A very short screencast could be a good way to show how the "interactive" functions (`irec`, `iorder`, `icut`) work and how to use the interface.
| 1.0 | Make a screencast for i* functions ? - A very short screencast could be a good way to show how the "interactive" functions (`irec`, `iorder`, `icut`) work and how to use the interface.
| non_test | make a screencast for i functions a very short screencast could be a good way to show how the interactive functions irec iorder icut work and how to use the interface | 0 |
73,986 | 7,371,638,838 | IssuesEvent | 2018-03-13 12:28:53 | akkadotnet/akka.net | https://api.github.com/repos/akkadotnet/akka.net | closed | [BUG] Testing a Persistent Actor on .NET 4.5 | akka-persistence potential bug tests | I want to learn how to use Persistent Actor, but I don't understand why my implemented scenario (a ping-pong-like message exchange) raises a timeout exception in my NUnit test while waiting for a response.
If the actor extends a non-persistent ReceiveActor and uses `Receive` instead of `Command`, the test turns green.
Here the actor class.
```c#
public class FakeActor : ReceivePersistentActor
{
public override string PersistenceId { get; } = "HashCoded";
public FakeActor()
{
Command<FakeRequest>(request => Sender.Tell(new FakeResponse(requestId: request.RequestId)));
}
}
```
Here the actor test class
```c#
public class FakeActorTest : TestKit
{
[Test]
public void FakeActor_FakeRequest()
{
// Arrange
var senderProbe = CreateTestProbe();
var sut = Sys.ActorOf(Props.Create<FakeActor>());
// Act
sut.Tell(new FakeRequest(1), senderProbe.Ref);
// Assert
senderProbe.ExpectMsg<FakeResponse>(r => r.RequestId == 1);
}
}
```
And here the protocol classes.
```c#
public class FakeRequest
{
public FakeRequest(ulong requestId) { RequestId = requestId; }
public override string ToString() => $"FakeRequest: {RequestId}";
public ulong RequestId { get; private set; }
}
```
```c#
public class FakeResponse
{
public FakeResponse(ulong requestId) { RequestId = requestId; }
public override string ToString() => $"FakeResponse: {RequestId}";
public ulong RequestId { get; private set; }
}
```
Specs:
* Which Akka.Net version you are using: 1.3.2
* On which platform you are using Akka.Net: Windows - .NET 4.5 | 1.0 | [BUG] Testing a Persistent Actor on .NET 4.5 - I want to learn how to use Persistent Actor, but I don't understand why my implemented scenario (a ping-pong like messages exchange) raises a timeout exception in my NUnit test waiting for a response.
If the actor extends a non-persistent ReceiveActor and use `Receive` instead of `Command`, the test turns green.
Here the actor class.
```c#
public class FakeActor : ReceivePersistentActor
{
public override string PersistenceId { get; } = "HashCoded";
public FakeActor()
{
Command<FakeRequest>(request => Sender.Tell(new FakeResponse(requestId: request.RequestId)));
}
}
```
Here the actor test class
```c#
public class FakeActorTest : TestKit
{
[Test]
public void FakeActor_FakeRequest()
{
// Arrange
var senderProbe = CreateTestProbe();
var sut = Sys.ActorOf(Props.Create<FakeActor>());
// Act
sut.Tell(new FakeRequest(1), senderProbe.Ref);
// Assert
senderProbe.ExpectMsg<FakeResponse>(r => r.RequestId == 1);
}
}
```
And here the protocol classes.
```c#
public class FakeRequest
{
public FakeRequest(ulong requestId) { RequestId = requestId; }
public override string ToString() => $"FakeRequest: {RequestId}";
public ulong RequestId { get; private set; }
}
```
```c#
public class FakeResponse
{
public FakeResponse(ulong requestId) { RequestId = requestId; }
public override string ToString() => $"FakeResponse: {RequestId}";
public ulong RequestId { get; private set; }
}
```
Specs:
* Which Akka.Net version you are using: 1.3.2
* On which platform you are using Akka.Net: Windows - .NET 4.5 | test | testing a persistent actor on net i want to learn how to use persistent actor but i don t understand why my implemented scenario a ping pong like messages exchange raises a timeout exception in my nunit test waiting for a response if the actor extends a non persistent receiveactor and use receive instead of command the test turns green here the actor class c public class fakeactor receivepersistentactor public override string persistenceid get hashcoded public fakeactor command request sender tell new fakeresponse requestid request requestid here the actor test class c public class fakeactortest testkit public void fakeactor fakerequest arrange var senderprobe createtestprobe var sut sys actorof props create act sut tell new fakerequest senderprobe ref assert senderprobe expectmsg r r requestid and here the protocol classes c public class fakerequest public fakerequest ulong requestid requestid requestid public override string tostring fakerequest requestid public ulong requestid get private set c public class fakeresponse public fakeresponse ulong requestid requestid requestid public override string tostring fakeresponse requestid public ulong requestid get private set specs which akka net version you are using on which platform you are using akka net windows net | 1 |
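Editor's note: a likely explanation for the timeout above — not confirmed in the issue itself — is that a persistent actor stashes incoming commands until journal recovery completes, so a missing or misconfigured journal means the `Command` handler never runs and the probe never gets a reply. The toy Python model below illustrates that ordering; the names are made up and this is not the Akka.NET API.

```python
class ToyPersistentActor:
    """Toy model: commands received before recovery completes are
    stashed and only answered afterwards (illustrative, not Akka.NET)."""

    def __init__(self):
        self.recovery_done = False
        self._stash = []

    def tell(self, request_id):
        if not self.recovery_done:
            self._stash.append(request_id)   # no reply yet → test times out
            return None
        return {"request_id": request_id}

    def complete_recovery(self):
        self.recovery_done = True
        replies = [{"request_id": r} for r in self._stash]
        self._stash.clear()
        return replies

actor = ToyPersistentActor()
print(actor.tell(1))              # None — no reply while recovery is pending
print(actor.complete_recovery())  # [{'request_id': 1}] — reply only after recovery
```

This matches the observed symptom: swapping in a plain `ReceiveActor` removes the recovery phase entirely, so the same test turns green.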
88,633 | 15,814,315,726 | IssuesEvent | 2021-04-05 09:15:42 | AlexRogalskiy/github-action-tag-replacer | https://api.github.com/repos/AlexRogalskiy/github-action-tag-replacer | opened | CVE-2021-23362 (Medium) detected in hosted-git-info-2.8.8.tgz | security vulnerability | ## CVE-2021-23362 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>hosted-git-info-2.8.8.tgz</b></p></summary>
<p>Provides metadata and conversions from repository urls for Github, Bitbucket and Gitlab</p>
<p>Library home page: <a href="https://registry.npmjs.org/hosted-git-info/-/hosted-git-info-2.8.8.tgz">https://registry.npmjs.org/hosted-git-info/-/hosted-git-info-2.8.8.tgz</a></p>
<p>Path to dependency file: github-action-tag-replacer/package.json</p>
<p>Path to vulnerable library: github-action-tag-replacer/node_modules/npm/node_modules/hosted-git-info/package.json,github-action-tag-replacer/node_modules/conventional-changelog-core/node_modules/read-pkg/node_modules/hosted-git-info/package.json,github-action-tag-replacer/node_modules/hosted-git-info/package.json</p>
<p>
Dependency Hierarchy:
- conventional-changelog-cli-2.1.1.tgz (Root Library)
- conventional-changelog-3.1.24.tgz
- conventional-changelog-core-4.2.2.tgz
- read-pkg-3.0.0.tgz
- normalize-package-data-2.5.0.tgz
- :x: **hosted-git-info-2.8.8.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/github-action-tag-replacer/commit/774da864d377bd240eabd69479a3a706e0b4d7b4">774da864d377bd240eabd69479a3a706e0b4d7b4</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package hosted-git-info before 3.0.8 are vulnerable to Regular Expression Denial of Service (ReDoS) via shortcutMatch in fromUrl().
<p>Publish Date: 2021-03-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23362>CVE-2021-23362</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/hosted-git-info/releases/tag/v3.0.8">https://github.com/npm/hosted-git-info/releases/tag/v3.0.8</a></p>
<p>Release Date: 2021-03-23</p>
<p>Fix Resolution: hosted-git-info - 3.0.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-23362 (Medium) detected in hosted-git-info-2.8.8.tgz - ## CVE-2021-23362 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>hosted-git-info-2.8.8.tgz</b></p></summary>
<p>Provides metadata and conversions from repository urls for Github, Bitbucket and Gitlab</p>
<p>Library home page: <a href="https://registry.npmjs.org/hosted-git-info/-/hosted-git-info-2.8.8.tgz">https://registry.npmjs.org/hosted-git-info/-/hosted-git-info-2.8.8.tgz</a></p>
<p>Path to dependency file: github-action-tag-replacer/package.json</p>
<p>Path to vulnerable library: github-action-tag-replacer/node_modules/npm/node_modules/hosted-git-info/package.json,github-action-tag-replacer/node_modules/conventional-changelog-core/node_modules/read-pkg/node_modules/hosted-git-info/package.json,github-action-tag-replacer/node_modules/hosted-git-info/package.json</p>
<p>
Dependency Hierarchy:
- conventional-changelog-cli-2.1.1.tgz (Root Library)
- conventional-changelog-3.1.24.tgz
- conventional-changelog-core-4.2.2.tgz
- read-pkg-3.0.0.tgz
- normalize-package-data-2.5.0.tgz
- :x: **hosted-git-info-2.8.8.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/github-action-tag-replacer/commit/774da864d377bd240eabd69479a3a706e0b4d7b4">774da864d377bd240eabd69479a3a706e0b4d7b4</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package hosted-git-info before 3.0.8 are vulnerable to Regular Expression Denial of Service (ReDoS) via shortcutMatch in fromUrl().
<p>Publish Date: 2021-03-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23362>CVE-2021-23362</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/hosted-git-info/releases/tag/v3.0.8">https://github.com/npm/hosted-git-info/releases/tag/v3.0.8</a></p>
<p>Release Date: 2021-03-23</p>
<p>Fix Resolution: hosted-git-info - 3.0.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve medium detected in hosted git info tgz cve medium severity vulnerability vulnerable library hosted git info tgz provides metadata and conversions from repository urls for github bitbucket and gitlab library home page a href path to dependency file github action tag replacer package json path to vulnerable library github action tag replacer node modules npm node modules hosted git info package json github action tag replacer node modules conventional changelog core node modules read pkg node modules hosted git info package json github action tag replacer node modules hosted git info package json dependency hierarchy conventional changelog cli tgz root library conventional changelog tgz conventional changelog core tgz read pkg tgz normalize package data tgz x hosted git info tgz vulnerable library found in head commit a href found in base branch master vulnerability details the package hosted git info before are vulnerable to regular expression denial of service redos via shortcutmatch in fromurl publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution hosted git info step up your open source security game with whitesource | 0 |
33,209 | 14,015,126,061 | IssuesEvent | 2020-10-29 12:55:15 | elastic/kibana | https://api.github.com/repos/elastic/kibana | opened | Documentation template for alert and action types | Feature:Actions Feature:Alerting Team:Alerting Services | Created from https://github.com/elastic/kibana/issues/75548#issuecomment-716822440.
Currently there is no guideline on how to document an alert type (or action type) and we have documentation for some that is only a single paragraph while others having a dedicated page with examples.
It would be nice to work with the docs team (cc @gchaps) and to come up with a documentation template for alert and action types to follow. | 1.0 | Documentation template for alert and action types - Created from https://github.com/elastic/kibana/issues/75548#issuecomment-716822440.
Currently there is no guideline on how to document an alert type (or action type) and we have documentation for some that is only a single paragraph while others having a dedicated page with examples.
It would be nice to work with the docs team (cc @gchaps) and to come up with a documentation template for alert and action types to follow. | non_test | documentation template for alert and action types created from currently there is no guideline on how to document an alert type or action type and we have documentation for some that is only a single paragraph while others having a dedicated page with examples it would be nice to work with the docs team cc gchaps and to come up with a documentation template for alert and action types to follow | 0 |
65,665 | 6,972,034,140 | IssuesEvent | 2017-12-11 15:49:47 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | teamcity: failed tests on master: testrace/TestBackupRestoreResume, testrace/TestBackupRestoreResume/restore, testrace/TestGRPCKeepaliveFailureFailsInflightRPCs, test/TestBackupRestoreResume, test/TestBackupRestoreResume/restore, test/TestGRPCKeepaliveFailureFailsInflightRPCs | Robot test-failure | The following tests appear to have failed:
[#434076](https://teamcity.cockroachdb.com/viewLog.html?buildId=434076):
```
--- FAIL: testrace/TestBackupRestoreResume (30.410s)
------- Stdout: -------
W171207 05:28:04.695127 61394 server/status/runtime.go:109 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I171207 05:28:04.712362 61394 server/config.go:520 [n?] 1 storage engine initialized
I171207 05:28:04.712450 61394 server/config.go:523 [n?] RocksDB cache size: 128 MiB
I171207 05:28:04.712498 61394 server/config.go:523 [n?] store 0: in-memory, size 0 B
I171207 05:28:04.780100 61394 server/node.go:362 [n?] **** cluster 6723e78f-ac5a-43d7-9964-a534000eb01f has been created
I171207 05:28:04.780341 61394 server/server.go:921 [n?] **** add additional nodes by specifying --join=127.0.0.1:42559
I171207 05:28:04.849185 61394 storage/store.go:1202 [n1,s1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I171207 05:28:04.861777 61394 server/node.go:487 [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=3.2 KiB), ranges=1, leases=1, writes=0.00, bytesPerReplica={p10=3298.00 p25=3298.00 p50=3298.00 p75=3298.00 p90=3298.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00}
I171207 05:28:04.866270 61394 server/node.go:340 [n1] node ID 1 initialized
I171207 05:28:04.872504 61394 gossip/gossip.go:333 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:42559" > attrs:<> locality:<> ServerVersion:<major_val:1 minor_val:1 patch:0 unstable:4 >
I171207 05:28:04.873875 61394 storage/stores.go:332 [n1] read 0 node addresses from persistent storage
I171207 05:28:04.888616 61394 server/node.go:628 [n1] connecting to gossip network to verify cluster ID...
I171207 05:28:04.889163 61394 server/node.go:653 [n1] node connected via gossip and verified as part of cluster "6723e78f-ac5a-43d7-9964-a534000eb01f"
I171207 05:28:04.904753 61394 server/node.go:429 [n1] node=1: started with [<no-attributes>=<in-mem>] engine(s) and attributes []
I171207 05:28:04.905446 61571 storage/replica_command.go:1231 [split,n1,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
I171207 05:28:04.906402 61394 sql/distsql_physical_planner.go:121 [n1] creating DistSQLPlanner with address {tcp 127.0.0.1:42559}
I171207 05:28:04.998595 61394 server/server.go:1147 [n1] starting https server at 127.0.0.1:34599
I171207 05:28:04.998808 61394 server/server.go:1148 [n1] starting grpc/postgres server at 127.0.0.1:42559
I171207 05:28:04.998886 61394 server/server.go:1149 [n1] advertising CockroachDB node at 127.0.0.1:42559
W171207 05:28:04.999228 61394 sql/jobs/registry.go:219 [n1] unable to get node liveness: node not in the liveness table
E171207 05:28:05.241790 61576 storage/consistency_queue.go:107 [replica consistency checker,n1,s1,r1/1:/{Min-System/}] key range /Min-/Max outside of bounds of range /Min-/System/""
E171207 05:28:05.277857 61610 sql/jobs/registry.go:206 error while adopting jobs: relation "system.jobs" does not exist
I171207 05:28:05.282943 61571 storage/replica_command.go:1231 [split,n1,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
I171207 05:28:05.410243 61571 storage/replica_command.go:1231 [split,n1,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
E171207 05:28:05.430355 61610 sql/jobs/registry.go:206 error while adopting jobs: relation "system.jobs" does not exist
W171207 05:28:05.437816 61583 storage/intent_resolver.go:351 [n1,s1,r3/1:/{System/NodeL…-Max}]: failed to push during intent resolution: failed to push "sql txn implicit" id=4c293664 key=/Table/SystemConfigSpan/Start rw=true pri=0.04037893 iso=SERIALIZABLE stat=PENDING epo=0 ts=1512624485.310465627,0 orig=1512624485.310465627,0 max=1512624485.310465627,0 wto=false rop=false seq=7
I171207 05:28:05.494594 61394 sql/event_log.go:113 [n1] Event: "alter_table", target: 12, info: {TableName:eventlog Statement:ALTER TABLE system.eventlog ALTER COLUMN "uniqueID" SET DEFAULT uuid_v4() User:node MutationID:0 CascadeDroppedViews:[]}
E171207 05:28:05.539024 61610 sql/jobs/registry.go:206 error while adopting jobs: relation "system.jobs" does not exist
I171207 05:28:05.667987 61571 storage/replica_command.go:1231 [split,n1,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
E171207 05:28:05.673345 61610 sql/jobs/registry.go:206 error while adopting jobs: relation "system.jobs" does not exist
I171207 05:28:05.682726 61394 sql/lease.go:348 [n1] publish: descID=12 (eventlog) version=2 mtime=2017-12-07 05:28:05.679506163 +0000 UTC
I171207 05:28:05.782296 61571 storage/replica_command.go:1231 [split,n1,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
E171207 05:28:05.841857 61610 sql/jobs/registry.go:206 error while adopting jobs: relation "system.jobs" does not exist
I171207 05:28:05.945848 61571 storage/replica_command.go:1231 [split,n1,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
E171207 05:28:05.965682 61610 sql/jobs/registry.go:206 error while adopting jobs: relation "system.jobs" does not exist
I171207 05:28:06.134653 61571 storage/replica_command.go:1231 [split,n1,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
I171207 05:28:06.305635 61571 storage/replica_command.go:1231 [split,n1,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
I171207 05:28:06.397044 61394 sql/event_log.go:113 [n1] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:node}
I171207 05:28:06.546189 61571 storage/replica_command.go:1231 [split,n1,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
W171207 05:28:06.652495 61747 storage/intent_resolver.go:351 [n1,s1,r1/1:/{Min-System/}]: failed to push during intent resolution: failed to push "split" id=283664c3 key=/Local/Range/Table/12/RangeDescriptor rw=true pri=0.04168553 iso=SERIALIZABLE stat=PENDING epo=0 ts=1512624486.546533318,0 orig=1512624486.546533318,0 max=1512624486.546533318,0 wto=false rop=false seq=3
W171207 05:28:06.655859 61731 storage/intent_resolver.go:351 [n1,s1,r1/1:/{Min-System/}]: failed to push during intent resolution: failed to push "split" id=283664c3 key=/Local/Range/Table/12/RangeDescriptor rw=true pri=0.04168553 iso=SERIALIZABLE stat=PENDING epo=0 ts=1512624486.546533318,0 orig=1512624486.546533318,0 max=1512624486.546533318,0 wto=false rop=false seq=3
I171207 05:28:06.691320 61571 storage/replica_command.go:1231 [split,n1,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
I171207 05:28:06.888574 61571 storage/replica_command.go:1231 [split,n1,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
I171207 05:28:06.890177 61394 sql/event_log.go:113 [n1] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:$1 User:node}
I171207 05:28:07.067954 61394 sql/event_log.go:113 [n1] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:node}
I171207 05:28:07.196497 61571 storage/replica_command.go:1231 [split,n1,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
I171207 05:28:07.274163 61394 server/server.go:1207 [n1] done ensuring all necessary migrations have run
I171207 05:28:07.274394 61394 server/server.go:1210 [n1] serving sql connections
I171207 05:28:07.344277 61778 sql/event_log.go:113 [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:42559} Attrs: Locality: ServerVersion:1.1-4} ClusterID:6723e78f-ac5a-43d7-9964-a534000eb01f StartedAt:1512624484904198993 LastUp:1512624484904198993}
I171207 05:28:07.387346 61571 storage/replica_command.go:1231 [split,n1,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
I171207 05:28:07.537254 61571 storage/replica_command.go:1231 [split,n1,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
I171207 05:28:07.754686 61571 storage/replica_command.go:1231 [split,n1,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
W171207 05:28:07.818906 61803 storage/intent_resolver.go:351 [n1,s1,r1/1:/{Min-System/}]: failed to push during intent resolution: failed to push "split" id=cf3b97cf key=/Local/Range/Table/18/RangeDescriptor rw=true pri=0.03262283 iso=SERIALIZABLE stat=PENDING epo=0 ts=1512624487.754898315,0 orig=1512624487.754898315,0 max=1512624487.754898315,0 wto=false rop=false seq=3
I171207 05:28:07.868290 61571 storage/replica_command.go:1231 [split,n1,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
W171207 05:28:08.113268 61394 server/status/runtime.go:109 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I171207 05:28:08.124500 61394 server/config.go:520 [n?] 1 storage engine initialized
I171207 05:28:08.124622 61394 server/config.go:523 [n?] RocksDB cache size: 128 MiB
I171207 05:28:08.124689 61394 server/config.go:523 [n?] store 0: in-memory, size 0 B
W171207 05:28:08.124982 61394 gossip/gossip.go:1279 [n?] no incoming or outgoing connections
I171207 05:28:08.125702 61394 server/server.go:923 [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I171207 05:28:08.301649 61785 gossip/client.go:129 [n?] started gossip client to 127.0.0.1:42559
I171207 05:28:08.307658 61775 gossip/server.go:219 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:38223}
I171207 05:28:08.365726 61394 storage/stores.go:332 [n?] read 0 node addresses from persistent storage
I171207 05:28:08.366207 61394 storage/stores.go:351 [n?] wrote 1 node addresses to persistent storage
I171207 05:28:08.368711 61394 server/node.go:628 [n?] connecting to gossip network to verify cluster ID...
I171207 05:28:08.369138 61394 server/node.go:653 [n?] node connected via gossip and verified as part of cluster "6723e78f-ac5a-43d7-9964-a534000eb01f"
I171207 05:28:08.393290 61912 kv/dist_sender.go:355 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I171207 05:28:08.407086 61911 kv/dist_sender.go:355 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I171207 05:28:08.416647 61394 kv/dist_sender.go:355 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I171207 05:28:08.449953 61394 server/node.go:333 [n?] new node allocated ID 2
I171207 05:28:08.450968 61394 gossip/gossip.go:333 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:38223" > attrs:<> locality:<> ServerVersion:<major_val:1 minor_val:1 patch:0 unstable:4 >
I171207 05:28:08.452063 61394 server/node.go:415 [n2] node=2: asynchronously bootstrapping engine(s) [<no-attributes>=<in-mem>]
I171207 05:28:08.453029 61394 server/node.go:429 [n2] node=2: started with [] engine(s) and attributes []
I171207 05:28:08.454328 61394 sql/distsql_physical_planner.go:121 [n2] creating DistSQLPlanner with address {tcp 127.0.0.1:38223}
I171207 05:28:08.461190 61880 storage/stores.go:351 [n1] wrote 1 node addresses to persistent storage
I171207 05:28:08.541985 61868 server/node.go:609 [n2] bootstrapped store [n2,s2]
I171207 05:28:08.573157 61394 server/server.go:1147 [n2] starting https server at 127.0.0.1:37495
I171207 05:28:08.573455 61394 server/server.go:1148 [n2] starting grpc/postgres server at 127.0.0.1:38223
I171207 05:28:08.573575 61394 server/server.go:1149 [n2] advertising CockroachDB node at 127.0.0.1:38223
W171207 05:28:08.574000 61394 sql/jobs/registry.go:219 [n2] unable to get node liveness: node not in the liveness table
I171207 05:28:08.646653 61394 server/server.go:1207 [n2] done ensuring all necessary migrations have run
I171207 05:28:08.647003 61394 server/server.go:1210 [n2] serving sql connections
I171207 05:28:08.778296 61809 sql/event_log.go:113 [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:38223} Attrs: Locality: ServerVersion:1.1-4} ClusterID:6723e78f-ac5a-43d7-9964-a534000eb01f StartedAt:1512624488452553346 LastUp:1512624488452553346}
W171207 05:28:08.838578 61394 server/status/runtime.go:109 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I171207 05:28:08.847950 61394 server/config.go:520 [n?] 1 storage engine initialized
I171207 05:28:08.848171 61394 server/config.go:523 [n?] RocksDB cache size: 128 MiB
I171207 05:28:08.848297 61394 server/config.go:523 [n?] store 0: in-memory, size 0 B
W171207 05:28:08.848664 61394 gossip/gossip.go:1279 [n?] no incoming or outgoing connections
I171207 05:28:08.849315 61394 server/server.go:923 [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I171207 05:28:09.000961 62090 gossip/client.go:129 [n?] started gossip client to 127.0.0.1:42559
I171207 05:28:09.004396 62032 gossip/server.go:219 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:35925}
I171207 05:28:09.031952 61394 storage/stores.go:332 [n?] read 0 node addresses from persistent storage
I171207 05:28:09.032391 61394 storage/stores.go:351 [n?] wrote 2 node addresses to persistent storage
I171207 05:28:09.032593 61394 server/node.go:628 [n?] connecting to gossip network to verify cluster ID...
I171207 05:28:09.032775 61394 server/node.go:653 [n?] node connected via gossip and verified as part of cluster "6723e78f-ac5a-43d7-9964-a534000eb01f"
I171207 05:28:09.036915 62120 kv/dist_sender.go:355 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I171207 05:28:09.044179 62119 kv/dist_sender.go:355 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I171207 05:28:09.081984 61394 kv/dist_sender.go:355 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I171207 05:28:09.091129 61394 server/node.go:333 [n?] new node allocated ID 3
I171207 05:28:09.091852 61394 gossip/gossip.go:333 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:35925" > attrs:<> locality:<> ServerVersion:<major_val:1 minor_val:1 patch:0 unstable:4 >
I171207 05:28:09.092780 61394 server/node.go:415 [n3] node=3: asynchronously bootstrapping engine(s) [<no-attributes>=<in-mem>]
I171207 05:28:09.093577 61394 server/node.go:429 [n3] node=3: started with [] engine(s) and attributes []
I171207 05:28:09.094715 61394 sql/distsql_physical_planner.go:121 [n3] creating DistSQLPlanner with address {tcp 127.0.0.1:35925}
I171207 05:28:09.106825 61813 storage/stores.go:351 [n1] wrote 2 node addresses to persistent storage
I171207 05:28:09.110558 61814 storage/stores.go:351 [n2] wrote 2 node addresses to persistent storage
I171207 05:28:09.184450 61394 server/server.go:1147 [n3] starting https server at 127.0.0.1:36771
I171207 05:28:09.184668 61394 server/server.go:1148 [n3] starting grpc/postgres server at 127.0.0.1:35925
I171207 05:28:09.208003 61394 server/server.go:1149 [n3] advertising CockroachDB node at 127.0.0.1:35925
W171207 05:28:09.208384 61394 sql/jobs/registry.go:219 [n3] unable to get node liveness: node not in the liveness table
I171207 05:28:09.285690 62064 server/node.go:609 [n3] bootstrapped store [n3,s3]
I171207 05:28:09.322078 61394 server/server.go:1207 [n3] done ensuring all necessary migrations have run
I171207 05:28:09.322382 61394 server/server.go:1210 [n3] serving sql connections
I171207 05:28:09.402051 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r8/1:/Table/1{1-2}] generated preemptive snapshot a321a09b at index 21
I171207 05:28:09.503839 62243 sql/event_log.go:113 [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:35925} Attrs: Locality: ServerVersion:1.1-4} ClusterID:6723e78f-ac5a-43d7-9964-a534000eb01f StartedAt:1512624489093139045 LastUp:1512624489093139045}
I171207 05:28:09.576476 61618 storage/store.go:3552 [replicate,n1,s1,r8/1:/Table/1{1-2}] streamed snapshot to (n2,s2):?: kv pairs: 11, log entries: 11, rate-limit: 8.0 MiB/sec, 9ms
I171207 05:28:09.579557 62248 storage/replica_raftstorage.go:733 [n2,s2,r8/?:{-}] applying preemptive snapshot at index 21 (id=a321a09b, encoded size=5977, 1 rocksdb batches, 11 log entries)
I171207 05:28:09.584868 62248 storage/replica_raftstorage.go:739 [n2,s2,r8/?:/Table/1{1-2}] applied preemptive snapshot in 5ms [clear=0ms batch=0ms entries=3ms commit=1ms]
I171207 05:28:09.594295 61618 storage/replica_command.go:2153 [replicate,n1,s1,r8/1:/Table/1{1-2}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r8:/Table/1{1-2} [(n1,s1):1, next=2]
I171207 05:28:09.642013 61618 storage/replica.go:3161 [n1,s1,r8/1:/Table/1{1-2}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I171207 05:28:09.649354 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r5/1:/System/ts{d-e}] generated preemptive snapshot ea47ef54 at index 22
I171207 05:28:09.735088 62357 storage/raft_transport.go:455 [n2] raft transport stream to node 1 established
I171207 05:28:09.916339 62379 storage/replica_raftstorage.go:733 [n3,s3,r5/?:{-}] applying preemptive snapshot at index 22 (id=ea47ef54, encoded size=156121, 1 rocksdb batches, 12 log entries)
I171207 05:28:09.933851 61618 storage/store.go:3552 [replicate,n1,s1,r5/1:/System/ts{d-e}] streamed snapshot to (n3,s3):?: kv pairs: 924, log entries: 12, rate-limit: 8.0 MiB/sec, 45ms
I171207 05:28:09.940102 62379 storage/replica_raftstorage.go:739 [n3,s3,r5/?:/System/ts{d-e}] applied preemptive snapshot in 23ms [clear=0ms batch=0ms entries=20ms commit=2ms]
I171207 05:28:09.953809 61618 storage/replica_command.go:2153 [replicate,n1,s1,r5/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, next=2]
I171207 05:28:10.015264 61618 storage/replica.go:3161 [n1,s1,r5/1:/System/ts{d-e}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I171207 05:28:10.061035 61572 storage/replica_raftstorage.go:527 [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] generated preemptive snapshot 8819b4ad at index 25
I171207 05:28:10.069710 61572 storage/store.go:3552 [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] streamed snapshot to (n3,s3):?: kv pairs: 12, log entries: 15, rate-limit: 8.0 MiB/sec, 7ms
I171207 05:28:10.075551 62391 storage/replica_raftstorage.go:733 [n3,s3,r6/?:{-}] applying preemptive snapshot at index 25 (id=8819b4ad, encoded size=6595, 1 rocksdb batches, 15 log entries)
I171207 05:28:10.087763 62391 storage/replica_raftstorage.go:739 [n3,s3,r6/?:/{System/tse-Table/System…}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
I171207 05:28:10.099802 61572 storage/replica_command.go:2153 [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r6:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, next=2]
I171207 05:28:10.130219 62317 storage/raft_transport.go:455 [n3] raft transport stream to node 1 established
I171207 05:28:10.228711 61572 storage/replica.go:3161 [n1,s1,r6/1:/{System/tse-Table/System…}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I171207 05:28:10.238327 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r10/1:/Table/1{3-4}] generated preemptive snapshot 7680c0e2 at index 35
I171207 05:28:10.246338 61618 storage/store.go:3552 [replicate,n1,s1,r10/1:/Table/1{3-4}] streamed snapshot to (n2,s2):?: kv pairs: 126, log entries: 25, rate-limit: 8.0 MiB/sec, 7ms
I171207 05:28:10.250035 62407 storage/replica_raftstorage.go:733 [n2,s2,r10/?:{-}] applying preemptive snapshot at index 35 (id=7680c0e2, encoded size=35877, 1 rocksdb batches, 25 log entries)
I171207 05:28:10.262587 62407 storage/replica_raftstorage.go:739 [n2,s2,r10/?:/Table/1{3-4}] applied preemptive snapshot in 12ms [clear=0ms batch=0ms entries=11ms commit=1ms]
I171207 05:28:10.278277 61618 storage/replica_command.go:2153 [replicate,n1,s1,r10/1:/Table/1{3-4}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r10:/Table/1{3-4} [(n1,s1):1, next=2]
I171207 05:28:10.319765 61618 storage/replica.go:3161 [n1,s1,r10/1:/Table/1{3-4}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I171207 05:28:10.332191 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r17/1:/{Table/20-Max}] generated preemptive snapshot 95e4c9ff at index 13
I171207 05:28:10.360458 61618 storage/store.go:3552 [replicate,n1,s1,r17/1:/{Table/20-Max}] streamed snapshot to (n2,s2):?: kv pairs: 10, log entries: 3, rate-limit: 8.0 MiB/sec, 12ms
I171207 05:28:10.363420 62468 storage/replica_raftstorage.go:733 [n2,s2,r17/?:{-}] applying preemptive snapshot at index 13 (id=95e4c9ff, encoded size=931, 1 rocksdb batches, 3 log entries)
I171207 05:28:10.365622 62468 storage/replica_raftstorage.go:739 [n2,s2,r17/?:/{Table/20-Max}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I171207 05:28:10.382788 61618 storage/replica_command.go:2153 [replicate,n1,s1,r17/1:/{Table/20-Max}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r17:/{Table/20-Max} [(n1,s1):1, next=2]
I171207 05:28:10.447957 61618 storage/replica.go:3161 [n1,s1,r17/1:/{Table/20-Max}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I171207 05:28:10.459037 61572 storage/replica_raftstorage.go:527 [replicate,n1,s1,r14/1:/Table/1{7-8}] generated preemptive snapshot 30eac5f4 at index 21
I171207 05:28:10.465775 61572 storage/store.go:3552 [replicate,n1,s1,r14/1:/Table/1{7-8}] streamed snapshot to (n3,s3):?: kv pairs: 11, log entries: 11, rate-limit: 8.0 MiB/sec, 4ms
I171207 05:28:10.468606 62486 storage/replica_raftstorage.go:733 [n3,s3,r14/?:{-}] applying preemptive snapshot at index 21 (id=30eac5f4, encoded size=3992, 1 rocksdb batches, 11 log entries)
I171207 05:28:10.475040 62486 storage/replica_raftstorage.go:739 [n3,s3,r14/?:/Table/1{7-8}] applied preemptive snapshot in 6ms [clear=0ms batch=0ms entries=3ms commit=1ms]
I171207 05:28:10.486050 61572 storage/replica_command.go:2153 [replicate,n1,s1,r14/1:/Table/1{7-8}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r14:/Table/1{7-8} [(n1,s1):1, next=2]
I171207 05:28:10.571516 61572 storage/replica.go:3161 [n1,s1,r14/1:/Table/1{7-8}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I171207 05:28:10.583970 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r16/1:/Table/{19-20}] generated preemptive snapshot 58eb46a1 at index 18
I171207 05:28:10.606785 61618 storage/store.go:3552 [replicate,n1,s1,r16/1:/Table/{19-20}] streamed snapshot to (n2,s2):?: kv pairs: 11, log entries: 8, rate-limit: 8.0 MiB/sec, 20ms
I171207 05:28:10.615140 62475 storage/replica_raftstorage.go:733 [n2,s2,r16/?:{-}] applying preemptive snapshot at index 18 (id=58eb46a1, encoded size=3114, 1 rocksdb batches, 8 log entries)
I171207 05:28:10.621740 62475 storage/replica_raftstorage.go:739 [n2,s2,r16/?:/Table/{19-20}] applied preemptive snapshot in 6ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I171207 05:28:10.627889 61618 storage/replica_command.go:2153 [replicate,n1,s1,r16/1:/Table/{19-20}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r16:/Table/{19-20} [(n1,s1):1, next=2]
I171207 05:28:10.741845 61618 storage/replica.go:3161 [n1,s1,r16/1:/Table/{19-20}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I171207 05:28:10.834118 61572 storage/replica_raftstorage.go:527 [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] generated preemptive snapshot 6936bf14 at index 41
I171207 05:28:10.859980 61572 storage/store.go:3552 [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] streamed snapshot to (n3,s3):?: kv pairs: 37, log entries: 31, rate-limit: 8.0 MiB/sec, 24ms
I171207 05:28:10.865034 62500 storage/replica_raftstorage.go:733 [n3,s3,r4/?:{-}] applying preemptive snapshot at index 41 (id=6936bf14, encoded size=110859, 1 rocksdb batches, 31 log entries)
I171207 05:28:10.946760 62500 storage/replica_raftstorage.go:739 [n3,s3,r4/?:/System/{NodeLive…-tsd}] applied preemptive snapshot in 55ms [clear=0ms batch=0ms entries=52ms commit=1ms]
I171207 05:28:10.955008 61572 storage/replica_command.go:2153 [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r4:/System/{NodeLivenessMax-tsd} [(n1,s1):1, next=2]
I171207 05:28:11.015143 61572 storage/replica.go:3161 [n1,s1,r4/1:/System/{NodeLive…-tsd}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I171207 05:28:11.055409 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] generated preemptive snapshot 7d122b1f at index 27
I171207 05:28:11.085083 61618 storage/store.go:3552 [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] streamed snapshot to (n2,s2):?: kv pairs: 14, log entries: 17, rate-limit: 8.0 MiB/sec, 28ms
I171207 05:28:11.156101 62517 storage/replica_raftstorage.go:733 [n2,s2,r3/?:{-}] applying preemptive snapshot at index 27 (id=7d122b1f, encoded size=8684, 1 rocksdb batches, 17 log entries)
I171207 05:28:11.161941 62517 storage/replica_raftstorage.go:739 [n2,s2,r3/?:/System/NodeLiveness{-Max}] applied preemptive snapshot in 5ms [clear=0ms batch=0ms entries=4ms commit=0ms]
I171207 05:28:11.181114 61618 storage/replica_command.go:2153 [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r3:/System/NodeLiveness{-Max} [(n1,s1):1, next=2]
I171207 05:28:11.260898 61618 storage/replica.go:3161 [n1,s1,r3/1:/System/NodeLiveness{-Max}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I171207 05:28:11.277897 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] generated preemptive snapshot fe739712 at index 44
I171207 05:28:11.289618 61618 storage/store.go:3552 [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] streamed snapshot to (n2,s2):?: kv pairs: 38, log entries: 34, rate-limit: 8.0 MiB/sec, 9ms
I171207 05:28:11.306826 62427 storage/replica_raftstorage.go:733 [n2,s2,r4/?:{-}] applying preemptive snapshot at index 44 (id=fe739712, encoded size=112280, 1 rocksdb batches, 34 log entries)
I171207 05:28:11.365848 62427 storage/replica_raftstorage.go:739 [n2,s2,r4/?:/System/{NodeLive…-tsd}] applied preemptive snapshot in 52ms [clear=0ms batch=0ms entries=50ms commit=1ms]
I171207 05:28:11.371141 61618 storage/replica_command.go:2153 [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r4:/System/{NodeLivenessMax-tsd} [(n1,s1):1, (n3,s3):2, next=3]
I171207 05:28:11.475446 61618 storage/replica.go:3161 [n1,s1,r4/1:/System/{NodeLive…-tsd}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I171207 05:28:11.528545 61572 storage/replica_raftstorage.go:527 [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] generated preemptive snapshot c9ceb083 at index 35
I171207 05:28:11.538912 61572 storage/store.go:3552 [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] streamed snapshot to (n3,s3):?: kv pairs: 49, log entries: 25, rate-limit: 8.0 MiB/sec, 9ms
I171207 05:28:11.543278 62553 storage/replica_raftstorage.go:733 [n3,s3,r7/?:{-}] applying preemptive snapshot at index 35 (id=c9ceb083, encoded size=20083, 1 rocksdb batches, 25 log entries)
I171207 05:28:11.582153 62553 storage/replica_raftstorage.go:739 [n3,s3,r7/?:/Table/{SystemCon…-11}] applied preemptive snapshot in 38ms [clear=0ms batch=0ms entries=11ms commit=3ms]
I171207 05:28:11.608669 61572 storage/replica_command.go:2153 [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r7:/Table/{SystemConfigSpan/Start-11} [(n1,s1):1, next=2]
I171207 05:28:11.672275 61572 storage/replica.go:3161 [n1,s1,r7/1:/Table/{SystemCon…-11}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I171207 05:28:11.683916 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r12/1:/Table/1{5-6}] generated preemptive snapshot 11c2f938 at index 18
I171207 05:28:11.745466 61618 storage/store.go:3552 [replicate,n1,s1,r12/1:/Table/1{5-6}] streamed snapshot to (n3,s3):?: kv pairs: 11, log entries: 8, rate-limit: 8.0 MiB/sec, 49ms
I171207 05:28:11.749114 62595 storage/replica_raftstorage.go:733 [n3,s3,r12/?:{-}] applying preemptive snapshot at index 18 (id=11c2f938, encoded size=3113, 1 rocksdb batches, 8 log entries)
I171207 05:28:11.756893 62595 storage/replica_raftstorage.go:739 [n3,s3,r12/?:/Table/1{5-6}] applied preemptive snapshot in 7ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I171207 05:28:11.781612 61618 storage/replica_command.go:2153 [replicate,n1,s1,r12/1:/Table/1{5-6}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r12:/Table/1{5-6} [(n1,s1):1, next=2]
I171207 05:28:11.871690 61618 storage/replica.go:3161 [n1,s1,r12/1:/Table/1{5-6}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I171207 05:28:11.883706 61572 storage/replica_raftstorage.go:527 [replicate,n1,s1,r15/1:/Table/1{8-9}] generated preemptive snapshot e9b62cbc at index 18
I171207 05:28:11.902360 61572 storage/store.go:3552 [replicate,n1,s1,r15/1:/Table/1{8-9}] streamed snapshot to (n2,s2):?: kv pairs: 11, log entries: 8, rate-limit: 8.0 MiB/sec, 17ms
I171207 05:28:11.907103 62526 storage/replica_raftstorage.go:733 [n2,s2,r15/?:{-}] applying preemptive snapshot at index 18 (id=e9b62cbc, encoded size=3118, 1 rocksdb batches, 8 log entries)
I171207 05:28:11.919763 62526 storage/replica_raftstorage.go:739 [n2,s2,r15/?:/Table/1{8-9}] applied preemptive snapshot in 10ms [clear=0ms batch=0ms entries=3ms commit=6ms]
I171207 05:28:11.946651 61572 storage/replica_command.go:2153 [replicate,n1,s1,r15/1:/Table/1{8-9}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r15:/Table/1{8-9} [(n1,s1):1, next=2]
I171207 05:28:12.049673 61572 storage/replica.go:3161 [n1,s1,r15/1:/Table/1{8-9}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I171207 05:28:12.068265 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] generated preemptive snapshot dfd54e5b at index 38
I171207 05:28:12.077684 61618 storage/store.go:3552 [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] streamed snapshot to (n2,s2):?: kv pairs: 50, log entries: 28, rate-limit: 8.0 MiB/sec, 8ms
I171207 05:28:12.081914 62433 storage/replica_raftstorage.go:733 [n2,s2,r7/?:{-}] applying preemptive snapshot at index 38 (id=dfd54e5b, encoded size=21290, 1 rocksdb batches, 28 log entries)
I171207 05:28:12.089631 62433 storage/replica_raftstorage.go:739 [n2,s2,r7/?:/Table/{SystemCon…-11}] applied preemptive snapshot in 7ms [clear=0ms batch=0ms entries=6ms commit=1ms]
I171207 05:28:12.111984 61618 storage/replica_command.go:2153 [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r7:/Table/{SystemConfigSpan/Start-11} [(n1,s1):1, (n3,s3):2, next=3]
I171207 05:28:12.244606 61618 storage/replica.go:3161 [n1,s1,r7/1:/Table/{SystemCon…-11}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I171207 05:28:12.262905 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r15/1:/Table/1{8-9}] generated preemptive snapshot 40a238d6 at index 21
I171207 05:28:12.272450 61618 storage/store.go:3552 [replicate,n1,s1,r15/1:/Table/1{8-9}] streamed snapshot to (n3,s3):?: kv pairs: 12, log entries: 11, rate-limit: 8.0 MiB/sec, 8ms
I171207 05:28:12.290895 62676 storage/replica_raftstorage.go:733 [n3,s3,r15/?:{-}] applying preemptive snapshot at index 21 (id=40a238d6, encoded size=4325, 1 rocksdb batches, 11 log entries)
I171207 05:28:12.312750 62676 storage/replica_raftstorage.go:739 [n3,s3,r15/?:/Table/1{8-9}] applied preemptive snapshot in 21ms [clear=0ms batch=0ms entries=10ms commit=10ms]
I171207 05:28:12.320238 61618 storage/replica_command.go:2153 [replicate,n1,s1,r15/1:/Table/1{8-9}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r15:/Table/1{8-9} [(n1,s1):1, (n2,s2):2, next=3]
I171207 05:28:12.424204 61618 storage/replica.go:3161 [n1,s1,r15/1:/Table/1{8-9}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I171207 05:28:12.496044 61572 storage/replica_raftstorage.go:527 [replicate,n1,s1,r13/1:/Table/1{6-7}] generated preemptive snapshot 369562bb at index 21
I171207 05:28:12.514793 61572 storage/store.go:3552 [replicate,n1,s1,r13/1:/Table/1{6-7}] streamed snapshot to (n3,s3):?: kv pairs: 11, log entries: 11, rate-limit: 8.0 MiB/sec, 14ms
I171207 05:28:12.518806 62680 storage/replica_raftstorage.go:733 [n3,s3,r13/?:{-}] applying preemptive snapshot at index 21 (id=369562bb, encoded size=3989, 1 rocksdb batches, 11 log entries)
I171207 05:28:12.531266 62680 storage/replica_raftstorage.go:739 [n3,s3,r13/?:/Table/1{6-7}] applied preemptive snapshot in 12ms [clear=0ms batch=0ms entries=10ms commit=0ms]
I171207 05:28:12.547770 61572 storage/replica_command.go:2153 [replicate,n1,s1,r13/1:/Table/1{6-7}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r13:/Table/1{6-7} [(n1,s1):1, next=2]
I171207 05:28:12.658290 61572 storage/replica.go:3161 [n1,s1,r13/1:/Table/1{6-7}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I171207 05:28:12.675278 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r14/1:/Table/1{7-8}] generated preemptive snapshot ed574821 at index 24
I171207 05:28:12.694106 61618 storage/store.go:3552 [replicate,n1,s1,r14/1:/Table/1{7-8}] streamed snapshot to (n2,s2):?: kv pairs: 12, log entries: 14, rate-limit: 8.0 MiB/sec, 18ms
I171207 05:28:12.701006 62694 storage/replica_raftstorage.go:733 [n2,s2,r14/?:{-}] applying preemptive snapshot at index 24 (id=ed574821, encoded size=5199, 1 rocksdb batches, 14 log entries)
I171207 05:28:12.707951 62694 storage/replica_raftstorage.go:739 [n2,s2,r14/?:/Table/1{7-8}] applied preemptive snapshot in 7ms [clear=0ms batch=0ms entries=4ms commit=1ms]
I171207 05:28:12.717080 61618 storage/replica_command.go:2153 [replicate,n1,s1,r14/1:/Table/1{7-8}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r14:/Table/1{7-8} [(n1,s1):1, (n3,s3):2, next=3]
I171207 05:28:12.808005 61618 storage/replica.go:3161 [n1,s1,r14/1:/Table/1{7-8}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I171207 05:28:12.836991 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r13/1:/Table/1{6-7}] generated preemptive snapshot 525628a8 at index 24
I171207 05:28:12.845692 61618 storage/store.go:3552 [replicate,n1,s1,r13/1:/Table/1{6-7}] streamed snapshot to (n2,s2):?: kv pairs: 12, log entries: 14, rate-limit: 8.0 MiB/sec, 8ms
I171207 05:28:12.852280 62657 storage/replica_raftstorage.go:733 [n2,s2,r13/?:{-}] applying preemptive snapshot at index 24 (id=525628a8, encoded size=5196, 1 rocksdb batches, 14 log entries)
I171207 05:28:12.863754 62657 storage/replica_raftstorage.go:739 [n2,s2,r13/?:/Table/1{6-7}] applied preemptive snapshot in 5ms [clear=0ms batch=0ms entries=4ms commit=0ms]
I171207 05:28:12.880152 61618 storage/replica_command.go:2153 [replicate,n1,s1,r13/1:/Table/1{6-7}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r13:/Table/1{6-7} [(n1,s1):1, (n3,s3):2, next=3]
I171207 05:28:12.986353 61618 storage/replica.go:3161 [n1,s1,r13/1:/Table/1{6-7}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I171207 05:28:13.001621 61572 storage/replica_raftstorage.go:527 [replicate,n1,s1,r2/1:/System/{-NodeLive…}] generated preemptive snapshot 61a43e26 at index 21
I171207 05:28:13.047728 61572 storage/store.go:3552 [replicate,n1,s1,r2/1:/System/{-NodeLive…}] streamed snapshot to (n3,s3):?: kv pairs: 10, log entries: 11, rate-limit: 8.0 MiB/sec, 45ms
I171207 05:28:13.051392 62755 storage/replica_raftstorage.go:733 [n3,s3,r2/?:{-}] applying preemptive snapshot at index 21 (id=61a43e26, encoded size=6141, 1 rocksdb batches, 11 log entries)
I171207 05:28:13.073444 62755 storage/replica_raftstorage.go:739 [n3,s3,r2/?:/System/{-NodeLive…}] applied preemptive snapshot in 22ms [clear=0ms batch=0ms entries=3ms commit=0ms]
I171207 05:28:13.121795 61572 storage/replica_command.go:2153 [replicate,n1,s1,r2/1:/System/{-NodeLive…}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r2:/System/{-NodeLiveness} [(n1,s1):1, next=2]
I171207 05:28:13.225306 61572 storage/replica.go:3161 [n1,s1,r2/1:/System/{-NodeLive…}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I171207 05:28:13.252827 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r9/1:/Table/1{2-3}] generated preemptive snapshot ce9dec24 at index 28
I171207 05:28:13.270244 61618 storage/store.go:3552 [replicate,n1,s1,r9/1:/Table/1{2-3}] streamed snapshot to (n3,s3):?: kv pairs: 46, log entries: 18, rate-limit: 8.0 MiB/sec, 11ms
I171207 05:28:13.277792 62670 storage/replica_raftstorage.go:733 [n3,s3,r9/?:{-}] applying preemptive snapshot at index 28 (id=ce9dec24, encoded size=16970, 1 rocksdb batches, 18 log entries)
I171207 05:28:13.312756 62670 storage/replica_raftstorage.go:739 [n3,s3,r9/?:/Table/1{2-3}] applied preemptive snapshot in 33ms [clear=0ms batch=0ms entries=17ms commit=12ms]
I171207 05:28:13.330677 61618 storage/replica_command.go:2153 [replicate,n1,s1,r9/1:/Table/1{2-3}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r9:/Table/1{2-3} [(n1,s1):1, next=2]
I171207 05:28:13.366551 61515 storage/replica_proposal.go:195 [n1,s1,r7/1:/Table/{SystemCon…-11}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624493.353042834,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1512624493.850695305,0 pro=1512624484.850822763,0
I171207 05:28:13.385836 61525 storage/replica_proposal.go:195 [n1,s1,r10/1:/Table/1{3-4}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624493.376732061,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1512624493.850695305,0 pro=1512624484.850822763,0
I171207 05:28:13.470881 61523 storage/replica_proposal.go:195 [n1,s1,r9/1:/Table/1{2-3}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624493.448887109,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1512624493.850695305,0 pro=1512624484.850822763,0
I171207 05:28:13.487067 61618 storage/replica.go:3161 [n1,s1,r9/1:/Table/1{2-3}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I171207 05:28:13.510476 61366 storage/replica_proposal.go:195 [n1,s1,r12/1:/Table/1{5-6}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624493.477831634,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1512624493.850695305,0 pro=1512624484.850822763,0
I171207 05:28:13.515075 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r1/1:/{Min-System/}] generated preemptive snapshot afc32b9d at index 107
I171207 05:28:13.537674 61618 storage/store.go:3552 [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n3,s3):?: kv pairs: 67, log entries: 19, rate-limit: 8.0 MiB/sec, 6ms
I171207 05:28:13.540514 62837 storage/replica_raftstorage.go:733 [n3,s3,r1/?:{-}] applying preemptive snapshot at index 107 (id=afc32b9d, encoded size=7256, 1 rocksdb batches, 19 log entries)
I171207 05:28:13.573871 62837 storage/replica_raftstorage.go:739 [n3,s3,r1/?:/{Min-System/}] applied preemptive snapshot in 33ms [clear=0ms batch=0ms entries=10ms commit=0ms]
I171207 05:28:13.583505 61618 storage/replica_command.go:2153 [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r1:/{Min-System/} [(n1,s1):1, next=2]
I171207 05:28:13.643157 61618 storage/replica.go:3161 [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I171207 05:28:13.661606 61572 storage/replica_raftstorage.go:527 [replicate,n1,s1,r1/1:/{Min-System/}] generated preemptive snapshot 5712f68a at index 110
I171207 05:28:13.678281 61572 storage/store.go:3552 [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n2,s2):?: kv pairs: 70, log entries: 22, rate-limit: 8.0 MiB/sec, 16ms
I171207 05:28:13.681089 62785 storage/replica_raftstorage.go:733 [n2,s2,r1/?:{-}] applying preemptive snapshot at index 110 (id=5712f68a, encoded size=9013, 1 rocksdb batches, 22 log entries)
I171207 05:28:13.697680 62785 storage/replica_raftstorage.go:739 [n2,s2,r1/?:/{Min-System/}] applied preemptive snapshot in 16ms [clear=0ms batch=0ms entries=12ms commit=1ms]
I171207 05:28:13.707659 61572 storage/replica_command.go:2153 [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r1:/{Min-System/} [(n1,s1):1, (n3,s3):2, next=3]
I171207 05:28:13.924647 62890 storage/replica.go:3161 [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I171207 05:28:13.996031 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r2/1:/System/{-NodeLive…}] generated preemptive snapshot ad4413a0 at index 25
I171207 05:28:14.003427 61618 storage/store.go:3552 [replicate,n1,s1,r2/1:/System/{-NodeLive…}] streamed snapshot to (n2,s2):?: kv pairs: 11, log entries: 15, rate-limit: 8.0 MiB/sec, 6ms
I171207 05:28:14.007316 62915 storage/replica_raftstorage.go:733 [n2,s2,r2/?:{-}] applying preemptive snapshot at index 25 (id=ad4413a0, encoded size=7637, 1 rocksdb batches, 15 log entries)
I171207 05:28:14.044288 62915 storage/replica_raftstorage.go:739 [n2,s2,r2/?:/System/{-NodeLive…}] applied preemptive snapshot in 31ms [clear=0ms batch=0ms entries=20ms commit=6ms]
I171207 05:28:14.109870 61618 storage/replica_command.go:2153 [replicate,n1,s1,r2/1:/System/{-NodeLive…}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r2:/System/{-NodeLiveness} [(n1,s1):1, (n3,s3):2, next=3]
I171207 05:28:14.223301 61618 storage/replica.go:3161 [n1,s1,r2/1:/System/{-NodeLive…}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I171207 05:28:14.313205 61493 storage/replica_proposal.go:195 [n1,s1,r6/1:/{System/tse-Table/System…}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624494.275641200,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1512624493.850695305,0 pro=1512624484.850822763,0
I171207 05:28:14.316266 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] generated preemptive snapshot c687b4fc at index 29
I171207 05:28:14.330787 61618 storage/store.go:3552 [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] streamed snapshot to (n2,s2):?: kv pairs: 13, log entries: 19, rate-limit: 8.0 MiB/sec, 13ms
I171207 05:28:14.343244 62873 storage/replica_raftstorage.go:733 [n2,s2,r6/?:{-}] applying preemptive snapshot at index 29 (id=c687b4fc, encoded size=8057, 1 rocksdb batches, 19 log entries)
I171207 05:28:14.357094 62873 storage/replica_raftstorage.go:739 [n2,s2,r6/?:/{System/tse-Table/System…}] applied preemptive snapshot in 5ms [clear=0ms batch=0ms entries=3ms commit=0ms]
I171207 05:28:14.363502 61618 storage/replica_command.go:2153 [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r6:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, (n3,s3):2, next=3]
I171207 05:28:14.504657 61940 storage/replica_proposal.go:195 [n2,s2,r16/2:/Table/{19-20}] new range lease repl=(n2,s2):2 start=1512624493.850695305,1 epo=1 pro=1512624494.457665336,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1512624493.850695305,0 pro=1512624484.850822763,0
I171207 05:28:14.534269 61883 storage/replica_raftstorage.go:527 [replicate,n2,s2,r16/2:/Table/{19-20}] generated preemptive snapshot 14fe84cc at index 22
I171207 05:28:14.599211 62936 storage/raft_transport.go:455 [n3] raft transport stream to node 2 established
I171207 05:28:14.607244 62864 storage/replica.go:3161 [n1,s1,r6/1:/{System/tse-Table/System…}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I171207 05:28:14.633272 62185 storage/replica_proposal.go:195 [n3,s3,r5/2:/System/ts{d-e}] new range lease repl=(n3,s3):2 start=1512624493.850695305,1 epo=1 pro=1512624494.529504072,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1512624493.850695305,0 pro=1512624484.850822763,0
I171207 05:28:14.647504 61817 storage/replica_raftstorage.go:527 [replicate,n3,s3,r5/2:/System/ts{d-e}] generated preemptive snapshot 96769c9e at index 26
I171207 05:28:14.689353 61545 storage/replica_proposal.go:195 [n1,s1,r11/1:/Table/1{4-5}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624494.646851303,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1512624493.850695305,0 pro=1512624484.850822763,0
I171207 05:28:14.692890 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r11/1:/Table/1{4-5}] generated preemptive snapshot 3599b350 at index 18
I171207 05:28:14.702335 61817 storage/store.go:3552 [replicate,n3,s3,r5/2:/System/ts{d-e}] streamed snapshot to (n2,s2):?: kv pairs: 925, log entries: 16, rate-limit: 8.0 MiB/sec, 54ms
I171207 05:28:14.706771 61618 storage/store.go:3552 [replicate,n1,s1,r11/1:/Table/1{4-5}] streamed snapshot to (n3,s3):?: kv pairs: 11, log entries: 8, rate-limit: 8.0 MiB/sec, 13ms
I171207 05:28:14.726604 63046 storage/replica_raftstorage.go:733 [n2,s2,r5/?:{-}] applying preemptive snapshot at index 26 (id=96769c9e, encoded size=157633, 1 rocksdb batches, 16 log entries)
I171207 05:28:14.739121 63062 storage/replica_raftstorage.go:733 [n3,s3,r11/?:{-}] applying preemptive snapshot at index 18 (id=3599b350, encoded size=3189, 1 rocksdb batches, 8 log entries)
I171207 05:28:14.758365 63062 storage/replica_raftstorage.go:739 [n3,s3,r11/?:/Table/1{4-5}] applied preemptive snapshot in 19ms [clear=0ms batch=0ms entries=2ms commit=15ms]
I171207 05:28:14.789863 61618 storage/replica_command.go:2153 [replicate,n1,s1,r11/1:/Table/1{4-5}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r11:/Table/1{4-5} [(n1,s1):1, next=2]
I171207 05:28:14.818813 63046 storage/replica_raftstorage.go:739 [n2,s2,r5/?:/System/ts{d-e}] applied preemptive snapshot in 82ms [clear=0ms batch=0ms entries=48ms commit=22ms]
I171207 05:28:14.876170 63012 storage/raft_transport.go:455 [n2] raft transport stream to node 3 established
I171207 05:28:14.885307 61883 storage/store.go:3552 [replicate,n2,s2,r16/2:/Table/{19-20}] streamed snapshot to (n3,s3):?: kv pairs: 12, log entries: 12, rate-limit: 8.0 MiB/sec, 8ms
I171207 05:28:14.889511 63065 storage/replica_raftstorage.go:733 [n3,s3,r16/?:{-}] applying preemptive snapshot at index 22 (id=14fe84cc, encoded size=4550, 1 rocksdb batches, 12 log entries)
I171207 05:28:14.893997 63065 storage/replica_raftstorage.go:739 [n3,s3,r16/?:/Table/{19-20}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=2ms commit=0ms]
I171207 05:28:14.928414 61618 storage/replica.go:3161 [n1,s1,r11/1:/Table/1{4-5}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I171207 05:28:14.996540 61618 storage/queue.go:728 [n1,replicate] purgatory is now empty
I171207 05:28:14.996808 61883 storage/replica_command.go:2153 [replicate,n2,s2,r16/2:/Table/{19-20}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r16:/Table/{19-20} [(n1,s1):1, (n2,s2):2, next=3]
I171207 05:28:15.020384 61817 storage/replica_command.go:2153 [replicate,n3,s3,r5/2:/System/ts{d-e}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, (n3,s3):2, next=3]
I171207 05:28:15.033776 61572 storage/replica_raftstorage.go:527 [replicate,n1,s1,r11/1:/Table/1{4-5}] generated preemptive snapshot cd0af517 at index 20
I171207 05:28:15.127991 61572 storage/store.go:3552 [replicate,n1,s1,r11/1:/Table/1{4-5}] streamed snapshot to (n2,s2):?: kv pairs: 13, log entries: 10, rate-limit: 8.0 MiB/sec, 49ms
I171207 05:28:15.149689 63129 storage/replica_raftstorage.go:733 [n2,s2,r11/?:{-}] applying preemptive snapshot at index 20 (id=cd0af517, encoded size=4509, 1 rocksdb batches, 10 log entries)
I171207 05:28:15.165129 63129 storage/replica_raftstorage.go:739 [n2,s2,r11/?:/Table/1{4-5}] applied preemptive snapshot in 15ms [clear=0ms batch=0ms entries=2ms commit=2ms]
I171207 05:28:15.211877 61572 storage/replica_command.go:2153 [replicate,n1,s1,r11/1:/Table/1{4-5}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r11:/Table/1{4-5} [(n1,s1):1, (n3,s3):2, next=3]
I171207 05:28:15.297092 63205 storage/replica.go:3161 [n2,s2,r16/2:/Table/{19-20}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I171207 05:28:15.364673 61950 storage/replica_proposal.go:195 [n2,s2,r8/2:/Table/1{1-2}] new range lease repl=(n2,s2):2 start=1512624493.850695305,1 epo=1 pro=1512624495.321565630,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1512624493.850695305,0 pro=1512624484.850822763,0
I171207 05:28:15.395548 61883 storage/replica_raftstorage.go:527 [replicate,n2,s2,r8/2:/Table/1{1-2}] generated preemptive snapshot 0515de46 at index 25
I171207 05:28:15.421124 61572 storage/replica.go:3161 [n1,s1,r11/1:/Table/1{4-5}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I171207 05:28:15.467145 61883 storage/store.go:3552 [replicate,n2,s2,r8/2:/Table/1{1-2}] streamed snapshot to (n3,s3):?: kv pairs: 12, log entries: 15, rate-limit: 8.0 MiB/sec, 70ms
I171207 05:28:15.472525 63213 storage/replica_raftstorage.go:733 [n3,s3,r8/?:{-}] applying preemptive snapshot at index 25 (id=0515de46, encoded size=7413, 1 rocksdb batches, 15 log entries)
I171207 05:28:15.481405 63213 storage/replica_raftstorage.go:739 [n3,s3,r8/?:/Table/1{1-2}] applied preemptive snapshot in 5ms [clear=0ms batch=0ms entries=3ms commit=0ms]
I171207 05:28:15.597648 61883 storage/replica_command.go:2153 [replicate,n2,s2,r8/2:/Table/1{1-2}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r8:/Table/1{1-2} [(n1,s1):1, (n2,s2):2, next=3]
I171207 05:28:15.604365 61366 storage/replica_proposal.go:195 [n1,s1,r17/1:/{Table/20-Max}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624495.564055859,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1512624493.850695305,0 pro=1512624484.850822763,0
I171207 05:28:15.614668 63225 storage/replica.go:3161 [n3,s3,r5/2:/System/ts{d-e}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I171207 05:28:15.614953 61572 storage/replica_raftstorage.go:527 [replicate,n1,s1,r17/1:/{Table/20-Max}] generated preemptive snapshot 1db082e7 at index 17
I171207 05:28:15.644553 61572 storage/store.go:3552 [replicate,n1,s1,r17/1:/{Table/20-Max}] streamed snapshot to (n3,s3):?: kv pairs: 11, log entries: 7, rate-limit: 8.0 MiB/sec, 28ms
I171207 05:28:15.648179 63289 storage/replica_raftstorage.go:733 [n3,s3,r17/?:{-}] applying preemptive snapshot at index 17 (id=1db082e7, encoded size=2333, 1 rocksdb batches, 7 log entries)
I171207 05:28:15.651312 63289 storage/replica_raftstorage.go:739 [n3,s3,r17/?:/{Table/20-Max}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I171207 05:28:15.670982 61572 storage/replica_command.go:2153 [replicate,n1,s1,r17/1:/{Table/20-Max}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r17:/{Table/20-Max} [(n1,s1):1, (n2,s2):2, next=3]
I171207 05:28:15.841340 61572 storage/replica.go:3161 [n1,s1,r17/1:/{Table/20-Max}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I171207 05:28:15.899832 61572 storage/replica_raftstorage.go:527 [replicate,n1,s1,r12/1:/Table/1{5-6}] generated preemptive snapshot 0c8bb06f at index 22
I171207 05:28:15.948228 61572 storage/store.go:3552 [replicate,n1,s1,r12/1:/Table/1{5-6}] streamed snapshot to (n2,s2):?: kv pairs: 12, log entries: 12, rate-limit: 8.0 MiB/sec, 29ms
I171207 05:28:15.975766 63351 storage/replica.go:3161 [n2,s2,r8/2:/Table/1{1-2}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I171207 05:28:15.986458 63350 storage/replica_raftstorage.go:733 [n2,s2,r12/?:{-}] applying preemptive snapshot at index 22 (id=0c8bb06f, encoded size=4511, 1 rocksdb batches, 12 log entries)
I171207 05:28:15.998140 63350 storage/replica_raftstorage.go:739 [n2,s2,r12/?:/Table/1{5-6}] applied preemptive snapshot in 7ms [clear=0ms batch=0ms entries=5ms commit=0ms]
I171207 05:28:16.007926 61572 storage/replica_command.go:2153 [replicate,n1,s1,r12/1:/Table/1{5-6}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r12:/Table/1{5-6} [(n1,s1):1, (n3,s3):2, next=3]
I171207 05:28:16.143002 61572 storage/replica.go:3161 [n1,s1,r12/1:/Table/1{5-6}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I171207 05:28:16.177512 61572 storage/replica_raftstorage.go:527 [replicate,n1,s1,r10/1:/Table/1{3-4}] generated preemptive snapshot 4711ed21 at index 98
I171207 05:28:16.213232 61572 storage/store.go:3552 [replicate,n1,s1,r10/1:/Table/1{3-4}] streamed snapshot to (n3,s3):?: kv pairs: 268, log entries: 14, rate-limit: 8.0 MiB/sec, 34ms
I171207 05:28:16.232811 63356 storage/replica_raftstorage.go:733 [n3,s3,r10/?:{-}] applying preemptive snapshot at index 98 (id=4711ed21, encoded size=49010, 1 rocksdb batches, 14 log entries)
I171207 05:28:16.269480 63356 storage/replica_raftstorage.go:739 [n3,s3,r10/?:/Table/1{3-4}] applied preemptive snapshot in 35ms [clear=0ms batch=3ms entries=30ms commit=1ms]
I171207 05:28:16.280137 61572 storage/replica_command.go:2153 [replicate,n1,s1,r10/1:/Table/1{3-4}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r10:/Table/1{3-4} [(n1,s1):1, (n2,s2):2, next=3]
I171207 05:28:16.525121 61572 storage/replica.go:3161 [n1,s1,r10/1:/Table/1{3-4}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I171207 05:28:16.562843 61572 storage/replica_raftstorage.go:527 [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] generated preemptive snapshot b447dde3 at index 34
I171207 05:28:16.582010 61572 storage/store.go:3552 [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] streamed snapshot to (n3,s3):?: kv pairs: 18, log entries: 24, rate-limit: 8.0 MiB/sec, 18ms
I171207 05:28:16.600628 63493 storage/replica_raftstorage.go:733 [n3,s3,r3/?:{-}] applying preemptive snapshot at index 34 (id=b447dde3, encoded size=11176, 1 rocksdb batches, 24 log entries)
I171207 05:28:16.631134 63493 storage/replica_raftstorage.go:739 [n3,s3,r3/?:/System/NodeLiveness{-Max}] applied preemptive snapshot in 30ms [clear=0ms batch=0ms entries=16ms commit=0ms]
I171207 05:28:16.660934 61572 storage/replica_command.go:2153 [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r3:/System/NodeLiveness{-Max} [(n1,s1):1, (n2,s2):2, next=3]
I171207 05:28:16.733669 63481 storage/replica.go:3161 [n1,s1,r3/1:/System/NodeLiveness{-Max}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I171207 05:28:16.803176 61371 storage/replica_proposal.go:195 [n1,s1,r14/1:/Table/1{7-8}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624496.758200886,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1512624493.850695305,0 pro=1512624484.850822763,0
I171207 05:28:16.837842 61528 storage/replica_proposal.go:195 [n1,s1,r13/1:/Table/1{6-7}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624496.811637560,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1512624493.850695305,0 pro=1512624484.850822763,0
I171207 05:28:16.855946 61490 storage/replica_proposal.go:195 [n1,s1,r15/1:/Table/1{8-9}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624496.847650189,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1512624493.850695305,0 pro=1512624484.850822763,0
I171207 05:28:16.887869 61497 storage/replica_proposal.go:195 [n1,s1,r4/1:/System/{NodeLive…-tsd}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624496.871067736,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1512624493.850695305,0 pro=1512624484.850822763,0
I171207 05:28:16.899848 61572 storage/replica_raftstorage.go:527 [replicate,n1,s1,r9/1:/Table/1{2-3}] generated preemptive snapshot f2b0feec at index 32
I171207 05:28:16.927270 61572 storage/store.go:3552 [replicate,n1,s1,r9/1:/Table/1{2-3}] streamed snapshot to (n2,s2):?: kv pairs: 47, log entries: 22, rate-limit: 8.0 MiB/sec, 26ms
I171207 05:28:16.930141 63573 storage/replica_raftstorage.go:733 [n2,s2,r9/?:{-}] applying preemptive snapshot at index 32 (id=f2b0feec, encoded size=18328, 1 rocksdb batches, 22 log entries)
I171207 05:28:16.940943 63573 storage/replica_raftstorage.go:739 [n2,s2,r9/?:/Table/1{2-3}] applied preemptive snapshot in 10ms [clear=0ms batch=0ms entries=9ms commit=1ms]
I171207 05:28:16.961791 61572 storage/replica_command.go:2153 [replicate,n1,s1,r9/1:/Table/1{2-3}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r9:/Table/1{2-3} [(n1,s1):1, (n3,s3):2, next=3]
I171207 05:28:17.175226 61572 storage/replica.go:3161 [n1,s1,r9/1:/Table/1{2-3}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I171207 05:28:17.851628 61571 storage/replica_command.go:1231 [split,n1,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/50 [r18]
I171207 05:28:17.871094 63595 sql/event_log.go:113 [client=127.0.0.1:57496,user=root,n1] Event: "create_database", target: 50, info: {DatabaseName:data Statement:CREATE DATABASE IF NOT EXISTS data User:root}
I171207 05:28:18.051431 61541 storage/replica_proposal.go:195 [n1,s1,r17/1:/{Table/20-Max}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624495.564055859,0 following repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624495.564055859,0
I171207 05:28:18.175470 63595 sql/event_log.go:113 [client=127.0.0.1:57496,user=root,n1] Event: "create_table", target: 51, info: {TableName:data.bank Statement:CREATE TABLE data.bank (id INT PRIMARY KEY, balance INT, payload STRING, FAMILY (id, balance, payload)) User:root}
I171207 05:28:18.204183 61571 storage/replica_command.go:1231 [split,n1,s1,r18/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r19]
I171207 05:28:18.220898 62195 storage/replica_proposal.go:195 [n3,s3,r15/3:/Table/1{8-9}] new range lease repl=(n3,s3):3 start=1512624498.195796803,0 epo=1 pro=1512624498.195839071,0 following repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624496.847650189,0
I171207 05:28:18.606935 61504 storage/replica_proposal.go:195 [n1,s1,r18/1:/{Table/50-Max}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624495.564055859,0 following repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624495.564055859,0
I171207 05:28:19.448308 62208 storage/replica_proposal.go:195 [n3,s3,r11/2:/Table/1{4-5}] new range lease repl=(n3,s3):2 start=1512624499.426922894,0 epo=1 pro=1512624499.426957
```
Please assign, take a look and update the issue accordingly.
teamcity: failed tests on master: testrace/TestBackupRestoreResume, testrace/TestBackupRestoreResume/restore, testrace/TestGRPCKeepaliveFailureFailsInflightRPCs, test/TestBackupRestoreResume, test/TestBackupRestoreResume/restore, test/TestGRPCKeepaliveFailureFailsInflightRPCs

The following tests appear to have failed:
[#434076](https://teamcity.cockroachdb.com/viewLog.html?buildId=434076):
```
--- FAIL: testrace/TestBackupRestoreResume (30.410s)
------- Stdout: -------
W171207 05:28:04.695127 61394 server/status/runtime.go:109 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I171207 05:28:04.712362 61394 server/config.go:520 [n?] 1 storage engine initialized
I171207 05:28:04.712450 61394 server/config.go:523 [n?] RocksDB cache size: 128 MiB
I171207 05:28:04.712498 61394 server/config.go:523 [n?] store 0: in-memory, size 0 B
I171207 05:28:04.780100 61394 server/node.go:362 [n?] **** cluster 6723e78f-ac5a-43d7-9964-a534000eb01f has been created
I171207 05:28:04.780341 61394 server/server.go:921 [n?] **** add additional nodes by specifying --join=127.0.0.1:42559
I171207 05:28:04.849185 61394 storage/store.go:1202 [n1,s1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I171207 05:28:04.861777 61394 server/node.go:487 [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=3.2 KiB), ranges=1, leases=1, writes=0.00, bytesPerReplica={p10=3298.00 p25=3298.00 p50=3298.00 p75=3298.00 p90=3298.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00}
I171207 05:28:04.866270 61394 server/node.go:340 [n1] node ID 1 initialized
I171207 05:28:04.872504 61394 gossip/gossip.go:333 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:42559" > attrs:<> locality:<> ServerVersion:<major_val:1 minor_val:1 patch:0 unstable:4 >
I171207 05:28:04.873875 61394 storage/stores.go:332 [n1] read 0 node addresses from persistent storage
I171207 05:28:04.888616 61394 server/node.go:628 [n1] connecting to gossip network to verify cluster ID...
I171207 05:28:04.889163 61394 server/node.go:653 [n1] node connected via gossip and verified as part of cluster "6723e78f-ac5a-43d7-9964-a534000eb01f"
I171207 05:28:04.904753 61394 server/node.go:429 [n1] node=1: started with [<no-attributes>=<in-mem>] engine(s) and attributes []
I171207 05:28:04.905446 61571 storage/replica_command.go:1231 [split,n1,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
I171207 05:28:04.906402 61394 sql/distsql_physical_planner.go:121 [n1] creating DistSQLPlanner with address {tcp 127.0.0.1:42559}
I171207 05:28:04.998595 61394 server/server.go:1147 [n1] starting https server at 127.0.0.1:34599
I171207 05:28:04.998808 61394 server/server.go:1148 [n1] starting grpc/postgres server at 127.0.0.1:42559
I171207 05:28:04.998886 61394 server/server.go:1149 [n1] advertising CockroachDB node at 127.0.0.1:42559
W171207 05:28:04.999228 61394 sql/jobs/registry.go:219 [n1] unable to get node liveness: node not in the liveness table
E171207 05:28:05.241790 61576 storage/consistency_queue.go:107 [replica consistency checker,n1,s1,r1/1:/{Min-System/}] key range /Min-/Max outside of bounds of range /Min-/System/""
E171207 05:28:05.277857 61610 sql/jobs/registry.go:206 error while adopting jobs: relation "system.jobs" does not exist
I171207 05:28:05.282943 61571 storage/replica_command.go:1231 [split,n1,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
I171207 05:28:05.410243 61571 storage/replica_command.go:1231 [split,n1,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
E171207 05:28:05.430355 61610 sql/jobs/registry.go:206 error while adopting jobs: relation "system.jobs" does not exist
W171207 05:28:05.437816 61583 storage/intent_resolver.go:351 [n1,s1,r3/1:/{System/NodeL…-Max}]: failed to push during intent resolution: failed to push "sql txn implicit" id=4c293664 key=/Table/SystemConfigSpan/Start rw=true pri=0.04037893 iso=SERIALIZABLE stat=PENDING epo=0 ts=1512624485.310465627,0 orig=1512624485.310465627,0 max=1512624485.310465627,0 wto=false rop=false seq=7
I171207 05:28:05.494594 61394 sql/event_log.go:113 [n1] Event: "alter_table", target: 12, info: {TableName:eventlog Statement:ALTER TABLE system.eventlog ALTER COLUMN "uniqueID" SET DEFAULT uuid_v4() User:node MutationID:0 CascadeDroppedViews:[]}
E171207 05:28:05.539024 61610 sql/jobs/registry.go:206 error while adopting jobs: relation "system.jobs" does not exist
I171207 05:28:05.667987 61571 storage/replica_command.go:1231 [split,n1,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
E171207 05:28:05.673345 61610 sql/jobs/registry.go:206 error while adopting jobs: relation "system.jobs" does not exist
I171207 05:28:05.682726 61394 sql/lease.go:348 [n1] publish: descID=12 (eventlog) version=2 mtime=2017-12-07 05:28:05.679506163 +0000 UTC
I171207 05:28:05.782296 61571 storage/replica_command.go:1231 [split,n1,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
E171207 05:28:05.841857 61610 sql/jobs/registry.go:206 error while adopting jobs: relation "system.jobs" does not exist
I171207 05:28:05.945848 61571 storage/replica_command.go:1231 [split,n1,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
E171207 05:28:05.965682 61610 sql/jobs/registry.go:206 error while adopting jobs: relation "system.jobs" does not exist
I171207 05:28:06.134653 61571 storage/replica_command.go:1231 [split,n1,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
I171207 05:28:06.305635 61571 storage/replica_command.go:1231 [split,n1,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
I171207 05:28:06.397044 61394 sql/event_log.go:113 [n1] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:node}
I171207 05:28:06.546189 61571 storage/replica_command.go:1231 [split,n1,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
W171207 05:28:06.652495 61747 storage/intent_resolver.go:351 [n1,s1,r1/1:/{Min-System/}]: failed to push during intent resolution: failed to push "split" id=283664c3 key=/Local/Range/Table/12/RangeDescriptor rw=true pri=0.04168553 iso=SERIALIZABLE stat=PENDING epo=0 ts=1512624486.546533318,0 orig=1512624486.546533318,0 max=1512624486.546533318,0 wto=false rop=false seq=3
W171207 05:28:06.655859 61731 storage/intent_resolver.go:351 [n1,s1,r1/1:/{Min-System/}]: failed to push during intent resolution: failed to push "split" id=283664c3 key=/Local/Range/Table/12/RangeDescriptor rw=true pri=0.04168553 iso=SERIALIZABLE stat=PENDING epo=0 ts=1512624486.546533318,0 orig=1512624486.546533318,0 max=1512624486.546533318,0 wto=false rop=false seq=3
I171207 05:28:06.691320 61571 storage/replica_command.go:1231 [split,n1,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
I171207 05:28:06.888574 61571 storage/replica_command.go:1231 [split,n1,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
I171207 05:28:06.890177 61394 sql/event_log.go:113 [n1] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:$1 User:node}
I171207 05:28:07.067954 61394 sql/event_log.go:113 [n1] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:node}
I171207 05:28:07.196497 61571 storage/replica_command.go:1231 [split,n1,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
I171207 05:28:07.274163 61394 server/server.go:1207 [n1] done ensuring all necessary migrations have run
I171207 05:28:07.274394 61394 server/server.go:1210 [n1] serving sql connections
I171207 05:28:07.344277 61778 sql/event_log.go:113 [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:42559} Attrs: Locality: ServerVersion:1.1-4} ClusterID:6723e78f-ac5a-43d7-9964-a534000eb01f StartedAt:1512624484904198993 LastUp:1512624484904198993}
I171207 05:28:07.387346 61571 storage/replica_command.go:1231 [split,n1,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
I171207 05:28:07.537254 61571 storage/replica_command.go:1231 [split,n1,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
I171207 05:28:07.754686 61571 storage/replica_command.go:1231 [split,n1,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
W171207 05:28:07.818906 61803 storage/intent_resolver.go:351 [n1,s1,r1/1:/{Min-System/}]: failed to push during intent resolution: failed to push "split" id=cf3b97cf key=/Local/Range/Table/18/RangeDescriptor rw=true pri=0.03262283 iso=SERIALIZABLE stat=PENDING epo=0 ts=1512624487.754898315,0 orig=1512624487.754898315,0 max=1512624487.754898315,0 wto=false rop=false seq=3
I171207 05:28:07.868290 61571 storage/replica_command.go:1231 [split,n1,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
W171207 05:28:08.113268 61394 server/status/runtime.go:109 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I171207 05:28:08.124500 61394 server/config.go:520 [n?] 1 storage engine initialized
I171207 05:28:08.124622 61394 server/config.go:523 [n?] RocksDB cache size: 128 MiB
I171207 05:28:08.124689 61394 server/config.go:523 [n?] store 0: in-memory, size 0 B
W171207 05:28:08.124982 61394 gossip/gossip.go:1279 [n?] no incoming or outgoing connections
I171207 05:28:08.125702 61394 server/server.go:923 [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I171207 05:28:08.301649 61785 gossip/client.go:129 [n?] started gossip client to 127.0.0.1:42559
I171207 05:28:08.307658 61775 gossip/server.go:219 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:38223}
I171207 05:28:08.365726 61394 storage/stores.go:332 [n?] read 0 node addresses from persistent storage
I171207 05:28:08.366207 61394 storage/stores.go:351 [n?] wrote 1 node addresses to persistent storage
I171207 05:28:08.368711 61394 server/node.go:628 [n?] connecting to gossip network to verify cluster ID...
I171207 05:28:08.369138 61394 server/node.go:653 [n?] node connected via gossip and verified as part of cluster "6723e78f-ac5a-43d7-9964-a534000eb01f"
I171207 05:28:08.393290 61912 kv/dist_sender.go:355 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I171207 05:28:08.407086 61911 kv/dist_sender.go:355 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I171207 05:28:08.416647 61394 kv/dist_sender.go:355 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I171207 05:28:08.449953 61394 server/node.go:333 [n?] new node allocated ID 2
I171207 05:28:08.450968 61394 gossip/gossip.go:333 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:38223" > attrs:<> locality:<> ServerVersion:<major_val:1 minor_val:1 patch:0 unstable:4 >
I171207 05:28:08.452063 61394 server/node.go:415 [n2] node=2: asynchronously bootstrapping engine(s) [<no-attributes>=<in-mem>]
I171207 05:28:08.453029 61394 server/node.go:429 [n2] node=2: started with [] engine(s) and attributes []
I171207 05:28:08.454328 61394 sql/distsql_physical_planner.go:121 [n2] creating DistSQLPlanner with address {tcp 127.0.0.1:38223}
I171207 05:28:08.461190 61880 storage/stores.go:351 [n1] wrote 1 node addresses to persistent storage
I171207 05:28:08.541985 61868 server/node.go:609 [n2] bootstrapped store [n2,s2]
I171207 05:28:08.573157 61394 server/server.go:1147 [n2] starting https server at 127.0.0.1:37495
I171207 05:28:08.573455 61394 server/server.go:1148 [n2] starting grpc/postgres server at 127.0.0.1:38223
I171207 05:28:08.573575 61394 server/server.go:1149 [n2] advertising CockroachDB node at 127.0.0.1:38223
W171207 05:28:08.574000 61394 sql/jobs/registry.go:219 [n2] unable to get node liveness: node not in the liveness table
I171207 05:28:08.646653 61394 server/server.go:1207 [n2] done ensuring all necessary migrations have run
I171207 05:28:08.647003 61394 server/server.go:1210 [n2] serving sql connections
I171207 05:28:08.778296 61809 sql/event_log.go:113 [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:38223} Attrs: Locality: ServerVersion:1.1-4} ClusterID:6723e78f-ac5a-43d7-9964-a534000eb01f StartedAt:1512624488452553346 LastUp:1512624488452553346}
W171207 05:28:08.838578 61394 server/status/runtime.go:109 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I171207 05:28:08.847950 61394 server/config.go:520 [n?] 1 storage engine initialized
I171207 05:28:08.848171 61394 server/config.go:523 [n?] RocksDB cache size: 128 MiB
I171207 05:28:08.848297 61394 server/config.go:523 [n?] store 0: in-memory, size 0 B
W171207 05:28:08.848664 61394 gossip/gossip.go:1279 [n?] no incoming or outgoing connections
I171207 05:28:08.849315 61394 server/server.go:923 [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I171207 05:28:09.000961 62090 gossip/client.go:129 [n?] started gossip client to 127.0.0.1:42559
I171207 05:28:09.004396 62032 gossip/server.go:219 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:35925}
I171207 05:28:09.031952 61394 storage/stores.go:332 [n?] read 0 node addresses from persistent storage
I171207 05:28:09.032391 61394 storage/stores.go:351 [n?] wrote 2 node addresses to persistent storage
I171207 05:28:09.032593 61394 server/node.go:628 [n?] connecting to gossip network to verify cluster ID...
I171207 05:28:09.032775 61394 server/node.go:653 [n?] node connected via gossip and verified as part of cluster "6723e78f-ac5a-43d7-9964-a534000eb01f"
I171207 05:28:09.036915 62120 kv/dist_sender.go:355 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I171207 05:28:09.044179 62119 kv/dist_sender.go:355 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I171207 05:28:09.081984 61394 kv/dist_sender.go:355 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I171207 05:28:09.091129 61394 server/node.go:333 [n?] new node allocated ID 3
I171207 05:28:09.091852 61394 gossip/gossip.go:333 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:35925" > attrs:<> locality:<> ServerVersion:<major_val:1 minor_val:1 patch:0 unstable:4 >
I171207 05:28:09.092780 61394 server/node.go:415 [n3] node=3: asynchronously bootstrapping engine(s) [<no-attributes>=<in-mem>]
I171207 05:28:09.093577 61394 server/node.go:429 [n3] node=3: started with [] engine(s) and attributes []
I171207 05:28:09.094715 61394 sql/distsql_physical_planner.go:121 [n3] creating DistSQLPlanner with address {tcp 127.0.0.1:35925}
I171207 05:28:09.106825 61813 storage/stores.go:351 [n1] wrote 2 node addresses to persistent storage
I171207 05:28:09.110558 61814 storage/stores.go:351 [n2] wrote 2 node addresses to persistent storage
I171207 05:28:09.184450 61394 server/server.go:1147 [n3] starting https server at 127.0.0.1:36771
I171207 05:28:09.184668 61394 server/server.go:1148 [n3] starting grpc/postgres server at 127.0.0.1:35925
I171207 05:28:09.208003 61394 server/server.go:1149 [n3] advertising CockroachDB node at 127.0.0.1:35925
W171207 05:28:09.208384 61394 sql/jobs/registry.go:219 [n3] unable to get node liveness: node not in the liveness table
I171207 05:28:09.285690 62064 server/node.go:609 [n3] bootstrapped store [n3,s3]
I171207 05:28:09.322078 61394 server/server.go:1207 [n3] done ensuring all necessary migrations have run
I171207 05:28:09.322382 61394 server/server.go:1210 [n3] serving sql connections
I171207 05:28:09.402051 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r8/1:/Table/1{1-2}] generated preemptive snapshot a321a09b at index 21
I171207 05:28:09.503839 62243 sql/event_log.go:113 [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:35925} Attrs: Locality: ServerVersion:1.1-4} ClusterID:6723e78f-ac5a-43d7-9964-a534000eb01f StartedAt:1512624489093139045 LastUp:1512624489093139045}
I171207 05:28:09.576476 61618 storage/store.go:3552 [replicate,n1,s1,r8/1:/Table/1{1-2}] streamed snapshot to (n2,s2):?: kv pairs: 11, log entries: 11, rate-limit: 8.0 MiB/sec, 9ms
I171207 05:28:09.579557 62248 storage/replica_raftstorage.go:733 [n2,s2,r8/?:{-}] applying preemptive snapshot at index 21 (id=a321a09b, encoded size=5977, 1 rocksdb batches, 11 log entries)
I171207 05:28:09.584868 62248 storage/replica_raftstorage.go:739 [n2,s2,r8/?:/Table/1{1-2}] applied preemptive snapshot in 5ms [clear=0ms batch=0ms entries=3ms commit=1ms]
I171207 05:28:09.594295 61618 storage/replica_command.go:2153 [replicate,n1,s1,r8/1:/Table/1{1-2}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r8:/Table/1{1-2} [(n1,s1):1, next=2]
I171207 05:28:09.642013 61618 storage/replica.go:3161 [n1,s1,r8/1:/Table/1{1-2}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I171207 05:28:09.649354 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r5/1:/System/ts{d-e}] generated preemptive snapshot ea47ef54 at index 22
I171207 05:28:09.735088 62357 storage/raft_transport.go:455 [n2] raft transport stream to node 1 established
I171207 05:28:09.916339 62379 storage/replica_raftstorage.go:733 [n3,s3,r5/?:{-}] applying preemptive snapshot at index 22 (id=ea47ef54, encoded size=156121, 1 rocksdb batches, 12 log entries)
I171207 05:28:09.933851 61618 storage/store.go:3552 [replicate,n1,s1,r5/1:/System/ts{d-e}] streamed snapshot to (n3,s3):?: kv pairs: 924, log entries: 12, rate-limit: 8.0 MiB/sec, 45ms
I171207 05:28:09.940102 62379 storage/replica_raftstorage.go:739 [n3,s3,r5/?:/System/ts{d-e}] applied preemptive snapshot in 23ms [clear=0ms batch=0ms entries=20ms commit=2ms]
I171207 05:28:09.953809 61618 storage/replica_command.go:2153 [replicate,n1,s1,r5/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, next=2]
I171207 05:28:10.015264 61618 storage/replica.go:3161 [n1,s1,r5/1:/System/ts{d-e}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I171207 05:28:10.061035 61572 storage/replica_raftstorage.go:527 [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] generated preemptive snapshot 8819b4ad at index 25
I171207 05:28:10.069710 61572 storage/store.go:3552 [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] streamed snapshot to (n3,s3):?: kv pairs: 12, log entries: 15, rate-limit: 8.0 MiB/sec, 7ms
I171207 05:28:10.075551 62391 storage/replica_raftstorage.go:733 [n3,s3,r6/?:{-}] applying preemptive snapshot at index 25 (id=8819b4ad, encoded size=6595, 1 rocksdb batches, 15 log entries)
I171207 05:28:10.087763 62391 storage/replica_raftstorage.go:739 [n3,s3,r6/?:/{System/tse-Table/System…}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
I171207 05:28:10.099802 61572 storage/replica_command.go:2153 [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r6:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, next=2]
I171207 05:28:10.130219 62317 storage/raft_transport.go:455 [n3] raft transport stream to node 1 established
I171207 05:28:10.228711 61572 storage/replica.go:3161 [n1,s1,r6/1:/{System/tse-Table/System…}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I171207 05:28:10.238327 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r10/1:/Table/1{3-4}] generated preemptive snapshot 7680c0e2 at index 35
I171207 05:28:10.246338 61618 storage/store.go:3552 [replicate,n1,s1,r10/1:/Table/1{3-4}] streamed snapshot to (n2,s2):?: kv pairs: 126, log entries: 25, rate-limit: 8.0 MiB/sec, 7ms
I171207 05:28:10.250035 62407 storage/replica_raftstorage.go:733 [n2,s2,r10/?:{-}] applying preemptive snapshot at index 35 (id=7680c0e2, encoded size=35877, 1 rocksdb batches, 25 log entries)
I171207 05:28:10.262587 62407 storage/replica_raftstorage.go:739 [n2,s2,r10/?:/Table/1{3-4}] applied preemptive snapshot in 12ms [clear=0ms batch=0ms entries=11ms commit=1ms]
I171207 05:28:10.278277 61618 storage/replica_command.go:2153 [replicate,n1,s1,r10/1:/Table/1{3-4}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r10:/Table/1{3-4} [(n1,s1):1, next=2]
I171207 05:28:10.319765 61618 storage/replica.go:3161 [n1,s1,r10/1:/Table/1{3-4}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I171207 05:28:10.332191 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r17/1:/{Table/20-Max}] generated preemptive snapshot 95e4c9ff at index 13
I171207 05:28:10.360458 61618 storage/store.go:3552 [replicate,n1,s1,r17/1:/{Table/20-Max}] streamed snapshot to (n2,s2):?: kv pairs: 10, log entries: 3, rate-limit: 8.0 MiB/sec, 12ms
I171207 05:28:10.363420 62468 storage/replica_raftstorage.go:733 [n2,s2,r17/?:{-}] applying preemptive snapshot at index 13 (id=95e4c9ff, encoded size=931, 1 rocksdb batches, 3 log entries)
I171207 05:28:10.365622 62468 storage/replica_raftstorage.go:739 [n2,s2,r17/?:/{Table/20-Max}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I171207 05:28:10.382788 61618 storage/replica_command.go:2153 [replicate,n1,s1,r17/1:/{Table/20-Max}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r17:/{Table/20-Max} [(n1,s1):1, next=2]
I171207 05:28:10.447957 61618 storage/replica.go:3161 [n1,s1,r17/1:/{Table/20-Max}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I171207 05:28:10.459037 61572 storage/replica_raftstorage.go:527 [replicate,n1,s1,r14/1:/Table/1{7-8}] generated preemptive snapshot 30eac5f4 at index 21
I171207 05:28:10.465775 61572 storage/store.go:3552 [replicate,n1,s1,r14/1:/Table/1{7-8}] streamed snapshot to (n3,s3):?: kv pairs: 11, log entries: 11, rate-limit: 8.0 MiB/sec, 4ms
I171207 05:28:10.468606 62486 storage/replica_raftstorage.go:733 [n3,s3,r14/?:{-}] applying preemptive snapshot at index 21 (id=30eac5f4, encoded size=3992, 1 rocksdb batches, 11 log entries)
I171207 05:28:10.475040 62486 storage/replica_raftstorage.go:739 [n3,s3,r14/?:/Table/1{7-8}] applied preemptive snapshot in 6ms [clear=0ms batch=0ms entries=3ms commit=1ms]
I171207 05:28:10.486050 61572 storage/replica_command.go:2153 [replicate,n1,s1,r14/1:/Table/1{7-8}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r14:/Table/1{7-8} [(n1,s1):1, next=2]
I171207 05:28:10.571516 61572 storage/replica.go:3161 [n1,s1,r14/1:/Table/1{7-8}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I171207 05:28:10.583970 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r16/1:/Table/{19-20}] generated preemptive snapshot 58eb46a1 at index 18
I171207 05:28:10.606785 61618 storage/store.go:3552 [replicate,n1,s1,r16/1:/Table/{19-20}] streamed snapshot to (n2,s2):?: kv pairs: 11, log entries: 8, rate-limit: 8.0 MiB/sec, 20ms
I171207 05:28:10.615140 62475 storage/replica_raftstorage.go:733 [n2,s2,r16/?:{-}] applying preemptive snapshot at index 18 (id=58eb46a1, encoded size=3114, 1 rocksdb batches, 8 log entries)
I171207 05:28:10.621740 62475 storage/replica_raftstorage.go:739 [n2,s2,r16/?:/Table/{19-20}] applied preemptive snapshot in 6ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I171207 05:28:10.627889 61618 storage/replica_command.go:2153 [replicate,n1,s1,r16/1:/Table/{19-20}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r16:/Table/{19-20} [(n1,s1):1, next=2]
I171207 05:28:10.741845 61618 storage/replica.go:3161 [n1,s1,r16/1:/Table/{19-20}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I171207 05:28:10.834118 61572 storage/replica_raftstorage.go:527 [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] generated preemptive snapshot 6936bf14 at index 41
I171207 05:28:10.859980 61572 storage/store.go:3552 [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] streamed snapshot to (n3,s3):?: kv pairs: 37, log entries: 31, rate-limit: 8.0 MiB/sec, 24ms
I171207 05:28:10.865034 62500 storage/replica_raftstorage.go:733 [n3,s3,r4/?:{-}] applying preemptive snapshot at index 41 (id=6936bf14, encoded size=110859, 1 rocksdb batches, 31 log entries)
I171207 05:28:10.946760 62500 storage/replica_raftstorage.go:739 [n3,s3,r4/?:/System/{NodeLive…-tsd}] applied preemptive snapshot in 55ms [clear=0ms batch=0ms entries=52ms commit=1ms]
I171207 05:28:10.955008 61572 storage/replica_command.go:2153 [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r4:/System/{NodeLivenessMax-tsd} [(n1,s1):1, next=2]
I171207 05:28:11.015143 61572 storage/replica.go:3161 [n1,s1,r4/1:/System/{NodeLive…-tsd}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I171207 05:28:11.055409 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] generated preemptive snapshot 7d122b1f at index 27
I171207 05:28:11.085083 61618 storage/store.go:3552 [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] streamed snapshot to (n2,s2):?: kv pairs: 14, log entries: 17, rate-limit: 8.0 MiB/sec, 28ms
I171207 05:28:11.156101 62517 storage/replica_raftstorage.go:733 [n2,s2,r3/?:{-}] applying preemptive snapshot at index 27 (id=7d122b1f, encoded size=8684, 1 rocksdb batches, 17 log entries)
I171207 05:28:11.161941 62517 storage/replica_raftstorage.go:739 [n2,s2,r3/?:/System/NodeLiveness{-Max}] applied preemptive snapshot in 5ms [clear=0ms batch=0ms entries=4ms commit=0ms]
I171207 05:28:11.181114 61618 storage/replica_command.go:2153 [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r3:/System/NodeLiveness{-Max} [(n1,s1):1, next=2]
I171207 05:28:11.260898 61618 storage/replica.go:3161 [n1,s1,r3/1:/System/NodeLiveness{-Max}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I171207 05:28:11.277897 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] generated preemptive snapshot fe739712 at index 44
I171207 05:28:11.289618 61618 storage/store.go:3552 [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] streamed snapshot to (n2,s2):?: kv pairs: 38, log entries: 34, rate-limit: 8.0 MiB/sec, 9ms
I171207 05:28:11.306826 62427 storage/replica_raftstorage.go:733 [n2,s2,r4/?:{-}] applying preemptive snapshot at index 44 (id=fe739712, encoded size=112280, 1 rocksdb batches, 34 log entries)
I171207 05:28:11.365848 62427 storage/replica_raftstorage.go:739 [n2,s2,r4/?:/System/{NodeLive…-tsd}] applied preemptive snapshot in 52ms [clear=0ms batch=0ms entries=50ms commit=1ms]
I171207 05:28:11.371141 61618 storage/replica_command.go:2153 [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r4:/System/{NodeLivenessMax-tsd} [(n1,s1):1, (n3,s3):2, next=3]
I171207 05:28:11.475446 61618 storage/replica.go:3161 [n1,s1,r4/1:/System/{NodeLive…-tsd}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I171207 05:28:11.528545 61572 storage/replica_raftstorage.go:527 [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] generated preemptive snapshot c9ceb083 at index 35
I171207 05:28:11.538912 61572 storage/store.go:3552 [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] streamed snapshot to (n3,s3):?: kv pairs: 49, log entries: 25, rate-limit: 8.0 MiB/sec, 9ms
I171207 05:28:11.543278 62553 storage/replica_raftstorage.go:733 [n3,s3,r7/?:{-}] applying preemptive snapshot at index 35 (id=c9ceb083, encoded size=20083, 1 rocksdb batches, 25 log entries)
I171207 05:28:11.582153 62553 storage/replica_raftstorage.go:739 [n3,s3,r7/?:/Table/{SystemCon…-11}] applied preemptive snapshot in 38ms [clear=0ms batch=0ms entries=11ms commit=3ms]
I171207 05:28:11.608669 61572 storage/replica_command.go:2153 [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r7:/Table/{SystemConfigSpan/Start-11} [(n1,s1):1, next=2]
I171207 05:28:11.672275 61572 storage/replica.go:3161 [n1,s1,r7/1:/Table/{SystemCon…-11}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I171207 05:28:11.683916 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r12/1:/Table/1{5-6}] generated preemptive snapshot 11c2f938 at index 18
I171207 05:28:11.745466 61618 storage/store.go:3552 [replicate,n1,s1,r12/1:/Table/1{5-6}] streamed snapshot to (n3,s3):?: kv pairs: 11, log entries: 8, rate-limit: 8.0 MiB/sec, 49ms
I171207 05:28:11.749114 62595 storage/replica_raftstorage.go:733 [n3,s3,r12/?:{-}] applying preemptive snapshot at index 18 (id=11c2f938, encoded size=3113, 1 rocksdb batches, 8 log entries)
I171207 05:28:11.756893 62595 storage/replica_raftstorage.go:739 [n3,s3,r12/?:/Table/1{5-6}] applied preemptive snapshot in 7ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I171207 05:28:11.781612 61618 storage/replica_command.go:2153 [replicate,n1,s1,r12/1:/Table/1{5-6}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r12:/Table/1{5-6} [(n1,s1):1, next=2]
I171207 05:28:11.871690 61618 storage/replica.go:3161 [n1,s1,r12/1:/Table/1{5-6}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I171207 05:28:11.883706 61572 storage/replica_raftstorage.go:527 [replicate,n1,s1,r15/1:/Table/1{8-9}] generated preemptive snapshot e9b62cbc at index 18
I171207 05:28:11.902360 61572 storage/store.go:3552 [replicate,n1,s1,r15/1:/Table/1{8-9}] streamed snapshot to (n2,s2):?: kv pairs: 11, log entries: 8, rate-limit: 8.0 MiB/sec, 17ms
I171207 05:28:11.907103 62526 storage/replica_raftstorage.go:733 [n2,s2,r15/?:{-}] applying preemptive snapshot at index 18 (id=e9b62cbc, encoded size=3118, 1 rocksdb batches, 8 log entries)
I171207 05:28:11.919763 62526 storage/replica_raftstorage.go:739 [n2,s2,r15/?:/Table/1{8-9}] applied preemptive snapshot in 10ms [clear=0ms batch=0ms entries=3ms commit=6ms]
I171207 05:28:11.946651 61572 storage/replica_command.go:2153 [replicate,n1,s1,r15/1:/Table/1{8-9}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r15:/Table/1{8-9} [(n1,s1):1, next=2]
I171207 05:28:12.049673 61572 storage/replica.go:3161 [n1,s1,r15/1:/Table/1{8-9}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I171207 05:28:12.068265 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] generated preemptive snapshot dfd54e5b at index 38
I171207 05:28:12.077684 61618 storage/store.go:3552 [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] streamed snapshot to (n2,s2):?: kv pairs: 50, log entries: 28, rate-limit: 8.0 MiB/sec, 8ms
I171207 05:28:12.081914 62433 storage/replica_raftstorage.go:733 [n2,s2,r7/?:{-}] applying preemptive snapshot at index 38 (id=dfd54e5b, encoded size=21290, 1 rocksdb batches, 28 log entries)
I171207 05:28:12.089631 62433 storage/replica_raftstorage.go:739 [n2,s2,r7/?:/Table/{SystemCon…-11}] applied preemptive snapshot in 7ms [clear=0ms batch=0ms entries=6ms commit=1ms]
I171207 05:28:12.111984 61618 storage/replica_command.go:2153 [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r7:/Table/{SystemConfigSpan/Start-11} [(n1,s1):1, (n3,s3):2, next=3]
I171207 05:28:12.244606 61618 storage/replica.go:3161 [n1,s1,r7/1:/Table/{SystemCon…-11}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I171207 05:28:12.262905 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r15/1:/Table/1{8-9}] generated preemptive snapshot 40a238d6 at index 21
I171207 05:28:12.272450 61618 storage/store.go:3552 [replicate,n1,s1,r15/1:/Table/1{8-9}] streamed snapshot to (n3,s3):?: kv pairs: 12, log entries: 11, rate-limit: 8.0 MiB/sec, 8ms
I171207 05:28:12.290895 62676 storage/replica_raftstorage.go:733 [n3,s3,r15/?:{-}] applying preemptive snapshot at index 21 (id=40a238d6, encoded size=4325, 1 rocksdb batches, 11 log entries)
I171207 05:28:12.312750 62676 storage/replica_raftstorage.go:739 [n3,s3,r15/?:/Table/1{8-9}] applied preemptive snapshot in 21ms [clear=0ms batch=0ms entries=10ms commit=10ms]
I171207 05:28:12.320238 61618 storage/replica_command.go:2153 [replicate,n1,s1,r15/1:/Table/1{8-9}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r15:/Table/1{8-9} [(n1,s1):1, (n2,s2):2, next=3]
I171207 05:28:12.424204 61618 storage/replica.go:3161 [n1,s1,r15/1:/Table/1{8-9}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I171207 05:28:12.496044 61572 storage/replica_raftstorage.go:527 [replicate,n1,s1,r13/1:/Table/1{6-7}] generated preemptive snapshot 369562bb at index 21
I171207 05:28:12.514793 61572 storage/store.go:3552 [replicate,n1,s1,r13/1:/Table/1{6-7}] streamed snapshot to (n3,s3):?: kv pairs: 11, log entries: 11, rate-limit: 8.0 MiB/sec, 14ms
I171207 05:28:12.518806 62680 storage/replica_raftstorage.go:733 [n3,s3,r13/?:{-}] applying preemptive snapshot at index 21 (id=369562bb, encoded size=3989, 1 rocksdb batches, 11 log entries)
I171207 05:28:12.531266 62680 storage/replica_raftstorage.go:739 [n3,s3,r13/?:/Table/1{6-7}] applied preemptive snapshot in 12ms [clear=0ms batch=0ms entries=10ms commit=0ms]
I171207 05:28:12.547770 61572 storage/replica_command.go:2153 [replicate,n1,s1,r13/1:/Table/1{6-7}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r13:/Table/1{6-7} [(n1,s1):1, next=2]
I171207 05:28:12.658290 61572 storage/replica.go:3161 [n1,s1,r13/1:/Table/1{6-7}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I171207 05:28:12.675278 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r14/1:/Table/1{7-8}] generated preemptive snapshot ed574821 at index 24
I171207 05:28:12.694106 61618 storage/store.go:3552 [replicate,n1,s1,r14/1:/Table/1{7-8}] streamed snapshot to (n2,s2):?: kv pairs: 12, log entries: 14, rate-limit: 8.0 MiB/sec, 18ms
I171207 05:28:12.701006 62694 storage/replica_raftstorage.go:733 [n2,s2,r14/?:{-}] applying preemptive snapshot at index 24 (id=ed574821, encoded size=5199, 1 rocksdb batches, 14 log entries)
I171207 05:28:12.707951 62694 storage/replica_raftstorage.go:739 [n2,s2,r14/?:/Table/1{7-8}] applied preemptive snapshot in 7ms [clear=0ms batch=0ms entries=4ms commit=1ms]
I171207 05:28:12.717080 61618 storage/replica_command.go:2153 [replicate,n1,s1,r14/1:/Table/1{7-8}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r14:/Table/1{7-8} [(n1,s1):1, (n3,s3):2, next=3]
I171207 05:28:12.808005 61618 storage/replica.go:3161 [n1,s1,r14/1:/Table/1{7-8}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I171207 05:28:12.836991 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r13/1:/Table/1{6-7}] generated preemptive snapshot 525628a8 at index 24
I171207 05:28:12.845692 61618 storage/store.go:3552 [replicate,n1,s1,r13/1:/Table/1{6-7}] streamed snapshot to (n2,s2):?: kv pairs: 12, log entries: 14, rate-limit: 8.0 MiB/sec, 8ms
I171207 05:28:12.852280 62657 storage/replica_raftstorage.go:733 [n2,s2,r13/?:{-}] applying preemptive snapshot at index 24 (id=525628a8, encoded size=5196, 1 rocksdb batches, 14 log entries)
I171207 05:28:12.863754 62657 storage/replica_raftstorage.go:739 [n2,s2,r13/?:/Table/1{6-7}] applied preemptive snapshot in 5ms [clear=0ms batch=0ms entries=4ms commit=0ms]
I171207 05:28:12.880152 61618 storage/replica_command.go:2153 [replicate,n1,s1,r13/1:/Table/1{6-7}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r13:/Table/1{6-7} [(n1,s1):1, (n3,s3):2, next=3]
I171207 05:28:12.986353 61618 storage/replica.go:3161 [n1,s1,r13/1:/Table/1{6-7}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I171207 05:28:13.001621 61572 storage/replica_raftstorage.go:527 [replicate,n1,s1,r2/1:/System/{-NodeLive…}] generated preemptive snapshot 61a43e26 at index 21
I171207 05:28:13.047728 61572 storage/store.go:3552 [replicate,n1,s1,r2/1:/System/{-NodeLive…}] streamed snapshot to (n3,s3):?: kv pairs: 10, log entries: 11, rate-limit: 8.0 MiB/sec, 45ms
I171207 05:28:13.051392 62755 storage/replica_raftstorage.go:733 [n3,s3,r2/?:{-}] applying preemptive snapshot at index 21 (id=61a43e26, encoded size=6141, 1 rocksdb batches, 11 log entries)
I171207 05:28:13.073444 62755 storage/replica_raftstorage.go:739 [n3,s3,r2/?:/System/{-NodeLive…}] applied preemptive snapshot in 22ms [clear=0ms batch=0ms entries=3ms commit=0ms]
I171207 05:28:13.121795 61572 storage/replica_command.go:2153 [replicate,n1,s1,r2/1:/System/{-NodeLive…}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r2:/System/{-NodeLiveness} [(n1,s1):1, next=2]
I171207 05:28:13.225306 61572 storage/replica.go:3161 [n1,s1,r2/1:/System/{-NodeLive…}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I171207 05:28:13.252827 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r9/1:/Table/1{2-3}] generated preemptive snapshot ce9dec24 at index 28
I171207 05:28:13.270244 61618 storage/store.go:3552 [replicate,n1,s1,r9/1:/Table/1{2-3}] streamed snapshot to (n3,s3):?: kv pairs: 46, log entries: 18, rate-limit: 8.0 MiB/sec, 11ms
I171207 05:28:13.277792 62670 storage/replica_raftstorage.go:733 [n3,s3,r9/?:{-}] applying preemptive snapshot at index 28 (id=ce9dec24, encoded size=16970, 1 rocksdb batches, 18 log entries)
I171207 05:28:13.312756 62670 storage/replica_raftstorage.go:739 [n3,s3,r9/?:/Table/1{2-3}] applied preemptive snapshot in 33ms [clear=0ms batch=0ms entries=17ms commit=12ms]
I171207 05:28:13.330677 61618 storage/replica_command.go:2153 [replicate,n1,s1,r9/1:/Table/1{2-3}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r9:/Table/1{2-3} [(n1,s1):1, next=2]
I171207 05:28:13.366551 61515 storage/replica_proposal.go:195 [n1,s1,r7/1:/Table/{SystemCon…-11}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624493.353042834,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1512624493.850695305,0 pro=1512624484.850822763,0
I171207 05:28:13.385836 61525 storage/replica_proposal.go:195 [n1,s1,r10/1:/Table/1{3-4}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624493.376732061,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1512624493.850695305,0 pro=1512624484.850822763,0
I171207 05:28:13.470881 61523 storage/replica_proposal.go:195 [n1,s1,r9/1:/Table/1{2-3}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624493.448887109,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1512624493.850695305,0 pro=1512624484.850822763,0
I171207 05:28:13.487067 61618 storage/replica.go:3161 [n1,s1,r9/1:/Table/1{2-3}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I171207 05:28:13.510476 61366 storage/replica_proposal.go:195 [n1,s1,r12/1:/Table/1{5-6}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624493.477831634,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1512624493.850695305,0 pro=1512624484.850822763,0
I171207 05:28:13.515075 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r1/1:/{Min-System/}] generated preemptive snapshot afc32b9d at index 107
I171207 05:28:13.537674 61618 storage/store.go:3552 [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n3,s3):?: kv pairs: 67, log entries: 19, rate-limit: 8.0 MiB/sec, 6ms
I171207 05:28:13.540514 62837 storage/replica_raftstorage.go:733 [n3,s3,r1/?:{-}] applying preemptive snapshot at index 107 (id=afc32b9d, encoded size=7256, 1 rocksdb batches, 19 log entries)
I171207 05:28:13.573871 62837 storage/replica_raftstorage.go:739 [n3,s3,r1/?:/{Min-System/}] applied preemptive snapshot in 33ms [clear=0ms batch=0ms entries=10ms commit=0ms]
I171207 05:28:13.583505 61618 storage/replica_command.go:2153 [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r1:/{Min-System/} [(n1,s1):1, next=2]
I171207 05:28:13.643157 61618 storage/replica.go:3161 [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I171207 05:28:13.661606 61572 storage/replica_raftstorage.go:527 [replicate,n1,s1,r1/1:/{Min-System/}] generated preemptive snapshot 5712f68a at index 110
I171207 05:28:13.678281 61572 storage/store.go:3552 [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n2,s2):?: kv pairs: 70, log entries: 22, rate-limit: 8.0 MiB/sec, 16ms
I171207 05:28:13.681089 62785 storage/replica_raftstorage.go:733 [n2,s2,r1/?:{-}] applying preemptive snapshot at index 110 (id=5712f68a, encoded size=9013, 1 rocksdb batches, 22 log entries)
I171207 05:28:13.697680 62785 storage/replica_raftstorage.go:739 [n2,s2,r1/?:/{Min-System/}] applied preemptive snapshot in 16ms [clear=0ms batch=0ms entries=12ms commit=1ms]
I171207 05:28:13.707659 61572 storage/replica_command.go:2153 [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r1:/{Min-System/} [(n1,s1):1, (n3,s3):2, next=3]
I171207 05:28:13.924647 62890 storage/replica.go:3161 [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I171207 05:28:13.996031 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r2/1:/System/{-NodeLive…}] generated preemptive snapshot ad4413a0 at index 25
I171207 05:28:14.003427 61618 storage/store.go:3552 [replicate,n1,s1,r2/1:/System/{-NodeLive…}] streamed snapshot to (n2,s2):?: kv pairs: 11, log entries: 15, rate-limit: 8.0 MiB/sec, 6ms
I171207 05:28:14.007316 62915 storage/replica_raftstorage.go:733 [n2,s2,r2/?:{-}] applying preemptive snapshot at index 25 (id=ad4413a0, encoded size=7637, 1 rocksdb batches, 15 log entries)
I171207 05:28:14.044288 62915 storage/replica_raftstorage.go:739 [n2,s2,r2/?:/System/{-NodeLive…}] applied preemptive snapshot in 31ms [clear=0ms batch=0ms entries=20ms commit=6ms]
I171207 05:28:14.109870 61618 storage/replica_command.go:2153 [replicate,n1,s1,r2/1:/System/{-NodeLive…}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r2:/System/{-NodeLiveness} [(n1,s1):1, (n3,s3):2, next=3]
I171207 05:28:14.223301 61618 storage/replica.go:3161 [n1,s1,r2/1:/System/{-NodeLive…}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I171207 05:28:14.313205 61493 storage/replica_proposal.go:195 [n1,s1,r6/1:/{System/tse-Table/System…}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624494.275641200,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1512624493.850695305,0 pro=1512624484.850822763,0
I171207 05:28:14.316266 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] generated preemptive snapshot c687b4fc at index 29
I171207 05:28:14.330787 61618 storage/store.go:3552 [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] streamed snapshot to (n2,s2):?: kv pairs: 13, log entries: 19, rate-limit: 8.0 MiB/sec, 13ms
I171207 05:28:14.343244 62873 storage/replica_raftstorage.go:733 [n2,s2,r6/?:{-}] applying preemptive snapshot at index 29 (id=c687b4fc, encoded size=8057, 1 rocksdb batches, 19 log entries)
I171207 05:28:14.357094 62873 storage/replica_raftstorage.go:739 [n2,s2,r6/?:/{System/tse-Table/System…}] applied preemptive snapshot in 5ms [clear=0ms batch=0ms entries=3ms commit=0ms]
I171207 05:28:14.363502 61618 storage/replica_command.go:2153 [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r6:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, (n3,s3):2, next=3]
I171207 05:28:14.504657 61940 storage/replica_proposal.go:195 [n2,s2,r16/2:/Table/{19-20}] new range lease repl=(n2,s2):2 start=1512624493.850695305,1 epo=1 pro=1512624494.457665336,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1512624493.850695305,0 pro=1512624484.850822763,0
I171207 05:28:14.534269 61883 storage/replica_raftstorage.go:527 [replicate,n2,s2,r16/2:/Table/{19-20}] generated preemptive snapshot 14fe84cc at index 22
I171207 05:28:14.599211 62936 storage/raft_transport.go:455 [n3] raft transport stream to node 2 established
I171207 05:28:14.607244 62864 storage/replica.go:3161 [n1,s1,r6/1:/{System/tse-Table/System…}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I171207 05:28:14.633272 62185 storage/replica_proposal.go:195 [n3,s3,r5/2:/System/ts{d-e}] new range lease repl=(n3,s3):2 start=1512624493.850695305,1 epo=1 pro=1512624494.529504072,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1512624493.850695305,0 pro=1512624484.850822763,0
I171207 05:28:14.647504 61817 storage/replica_raftstorage.go:527 [replicate,n3,s3,r5/2:/System/ts{d-e}] generated preemptive snapshot 96769c9e at index 26
I171207 05:28:14.689353 61545 storage/replica_proposal.go:195 [n1,s1,r11/1:/Table/1{4-5}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624494.646851303,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1512624493.850695305,0 pro=1512624484.850822763,0
I171207 05:28:14.692890 61618 storage/replica_raftstorage.go:527 [replicate,n1,s1,r11/1:/Table/1{4-5}] generated preemptive snapshot 3599b350 at index 18
I171207 05:28:14.702335 61817 storage/store.go:3552 [replicate,n3,s3,r5/2:/System/ts{d-e}] streamed snapshot to (n2,s2):?: kv pairs: 925, log entries: 16, rate-limit: 8.0 MiB/sec, 54ms
I171207 05:28:14.706771 61618 storage/store.go:3552 [replicate,n1,s1,r11/1:/Table/1{4-5}] streamed snapshot to (n3,s3):?: kv pairs: 11, log entries: 8, rate-limit: 8.0 MiB/sec, 13ms
I171207 05:28:14.726604 63046 storage/replica_raftstorage.go:733 [n2,s2,r5/?:{-}] applying preemptive snapshot at index 26 (id=96769c9e, encoded size=157633, 1 rocksdb batches, 16 log entries)
I171207 05:28:14.739121 63062 storage/replica_raftstorage.go:733 [n3,s3,r11/?:{-}] applying preemptive snapshot at index 18 (id=3599b350, encoded size=3189, 1 rocksdb batches, 8 log entries)
I171207 05:28:14.758365 63062 storage/replica_raftstorage.go:739 [n3,s3,r11/?:/Table/1{4-5}] applied preemptive snapshot in 19ms [clear=0ms batch=0ms entries=2ms commit=15ms]
I171207 05:28:14.789863 61618 storage/replica_command.go:2153 [replicate,n1,s1,r11/1:/Table/1{4-5}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r11:/Table/1{4-5} [(n1,s1):1, next=2]
I171207 05:28:14.818813 63046 storage/replica_raftstorage.go:739 [n2,s2,r5/?:/System/ts{d-e}] applied preemptive snapshot in 82ms [clear=0ms batch=0ms entries=48ms commit=22ms]
I171207 05:28:14.876170 63012 storage/raft_transport.go:455 [n2] raft transport stream to node 3 established
I171207 05:28:14.885307 61883 storage/store.go:3552 [replicate,n2,s2,r16/2:/Table/{19-20}] streamed snapshot to (n3,s3):?: kv pairs: 12, log entries: 12, rate-limit: 8.0 MiB/sec, 8ms
I171207 05:28:14.889511 63065 storage/replica_raftstorage.go:733 [n3,s3,r16/?:{-}] applying preemptive snapshot at index 22 (id=14fe84cc, encoded size=4550, 1 rocksdb batches, 12 log entries)
I171207 05:28:14.893997 63065 storage/replica_raftstorage.go:739 [n3,s3,r16/?:/Table/{19-20}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=2ms commit=0ms]
I171207 05:28:14.928414 61618 storage/replica.go:3161 [n1,s1,r11/1:/Table/1{4-5}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I171207 05:28:14.996540 61618 storage/queue.go:728 [n1,replicate] purgatory is now empty
I171207 05:28:14.996808 61883 storage/replica_command.go:2153 [replicate,n2,s2,r16/2:/Table/{19-20}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r16:/Table/{19-20} [(n1,s1):1, (n2,s2):2, next=3]
I171207 05:28:15.020384 61817 storage/replica_command.go:2153 [replicate,n3,s3,r5/2:/System/ts{d-e}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, (n3,s3):2, next=3]
I171207 05:28:15.033776 61572 storage/replica_raftstorage.go:527 [replicate,n1,s1,r11/1:/Table/1{4-5}] generated preemptive snapshot cd0af517 at index 20
I171207 05:28:15.127991 61572 storage/store.go:3552 [replicate,n1,s1,r11/1:/Table/1{4-5}] streamed snapshot to (n2,s2):?: kv pairs: 13, log entries: 10, rate-limit: 8.0 MiB/sec, 49ms
I171207 05:28:15.149689 63129 storage/replica_raftstorage.go:733 [n2,s2,r11/?:{-}] applying preemptive snapshot at index 20 (id=cd0af517, encoded size=4509, 1 rocksdb batches, 10 log entries)
I171207 05:28:15.165129 63129 storage/replica_raftstorage.go:739 [n2,s2,r11/?:/Table/1{4-5}] applied preemptive snapshot in 15ms [clear=0ms batch=0ms entries=2ms commit=2ms]
I171207 05:28:15.211877 61572 storage/replica_command.go:2153 [replicate,n1,s1,r11/1:/Table/1{4-5}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r11:/Table/1{4-5} [(n1,s1):1, (n3,s3):2, next=3]
I171207 05:28:15.297092 63205 storage/replica.go:3161 [n2,s2,r16/2:/Table/{19-20}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I171207 05:28:15.364673 61950 storage/replica_proposal.go:195 [n2,s2,r8/2:/Table/1{1-2}] new range lease repl=(n2,s2):2 start=1512624493.850695305,1 epo=1 pro=1512624495.321565630,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1512624493.850695305,0 pro=1512624484.850822763,0
I171207 05:28:15.395548 61883 storage/replica_raftstorage.go:527 [replicate,n2,s2,r8/2:/Table/1{1-2}] generated preemptive snapshot 0515de46 at index 25
I171207 05:28:15.421124 61572 storage/replica.go:3161 [n1,s1,r11/1:/Table/1{4-5}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I171207 05:28:15.467145 61883 storage/store.go:3552 [replicate,n2,s2,r8/2:/Table/1{1-2}] streamed snapshot to (n3,s3):?: kv pairs: 12, log entries: 15, rate-limit: 8.0 MiB/sec, 70ms
I171207 05:28:15.472525 63213 storage/replica_raftstorage.go:733 [n3,s3,r8/?:{-}] applying preemptive snapshot at index 25 (id=0515de46, encoded size=7413, 1 rocksdb batches, 15 log entries)
I171207 05:28:15.481405 63213 storage/replica_raftstorage.go:739 [n3,s3,r8/?:/Table/1{1-2}] applied preemptive snapshot in 5ms [clear=0ms batch=0ms entries=3ms commit=0ms]
I171207 05:28:15.597648 61883 storage/replica_command.go:2153 [replicate,n2,s2,r8/2:/Table/1{1-2}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r8:/Table/1{1-2} [(n1,s1):1, (n2,s2):2, next=3]
I171207 05:28:15.604365 61366 storage/replica_proposal.go:195 [n1,s1,r17/1:/{Table/20-Max}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624495.564055859,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1512624493.850695305,0 pro=1512624484.850822763,0
I171207 05:28:15.614668 63225 storage/replica.go:3161 [n3,s3,r5/2:/System/ts{d-e}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I171207 05:28:15.614953 61572 storage/replica_raftstorage.go:527 [replicate,n1,s1,r17/1:/{Table/20-Max}] generated preemptive snapshot 1db082e7 at index 17
I171207 05:28:15.644553 61572 storage/store.go:3552 [replicate,n1,s1,r17/1:/{Table/20-Max}] streamed snapshot to (n3,s3):?: kv pairs: 11, log entries: 7, rate-limit: 8.0 MiB/sec, 28ms
I171207 05:28:15.648179 63289 storage/replica_raftstorage.go:733 [n3,s3,r17/?:{-}] applying preemptive snapshot at index 17 (id=1db082e7, encoded size=2333, 1 rocksdb batches, 7 log entries)
I171207 05:28:15.651312 63289 storage/replica_raftstorage.go:739 [n3,s3,r17/?:/{Table/20-Max}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I171207 05:28:15.670982 61572 storage/replica_command.go:2153 [replicate,n1,s1,r17/1:/{Table/20-Max}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r17:/{Table/20-Max} [(n1,s1):1, (n2,s2):2, next=3]
I171207 05:28:15.841340 61572 storage/replica.go:3161 [n1,s1,r17/1:/{Table/20-Max}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I171207 05:28:15.899832 61572 storage/replica_raftstorage.go:527 [replicate,n1,s1,r12/1:/Table/1{5-6}] generated preemptive snapshot 0c8bb06f at index 22
I171207 05:28:15.948228 61572 storage/store.go:3552 [replicate,n1,s1,r12/1:/Table/1{5-6}] streamed snapshot to (n2,s2):?: kv pairs: 12, log entries: 12, rate-limit: 8.0 MiB/sec, 29ms
I171207 05:28:15.975766 63351 storage/replica.go:3161 [n2,s2,r8/2:/Table/1{1-2}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I171207 05:28:15.986458 63350 storage/replica_raftstorage.go:733 [n2,s2,r12/?:{-}] applying preemptive snapshot at index 22 (id=0c8bb06f, encoded size=4511, 1 rocksdb batches, 12 log entries)
I171207 05:28:15.998140 63350 storage/replica_raftstorage.go:739 [n2,s2,r12/?:/Table/1{5-6}] applied preemptive snapshot in 7ms [clear=0ms batch=0ms entries=5ms commit=0ms]
I171207 05:28:16.007926 61572 storage/replica_command.go:2153 [replicate,n1,s1,r12/1:/Table/1{5-6}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r12:/Table/1{5-6} [(n1,s1):1, (n3,s3):2, next=3]
I171207 05:28:16.143002 61572 storage/replica.go:3161 [n1,s1,r12/1:/Table/1{5-6}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I171207 05:28:16.177512 61572 storage/replica_raftstorage.go:527 [replicate,n1,s1,r10/1:/Table/1{3-4}] generated preemptive snapshot 4711ed21 at index 98
I171207 05:28:16.213232 61572 storage/store.go:3552 [replicate,n1,s1,r10/1:/Table/1{3-4}] streamed snapshot to (n3,s3):?: kv pairs: 268, log entries: 14, rate-limit: 8.0 MiB/sec, 34ms
I171207 05:28:16.232811 63356 storage/replica_raftstorage.go:733 [n3,s3,r10/?:{-}] applying preemptive snapshot at index 98 (id=4711ed21, encoded size=49010, 1 rocksdb batches, 14 log entries)
I171207 05:28:16.269480 63356 storage/replica_raftstorage.go:739 [n3,s3,r10/?:/Table/1{3-4}] applied preemptive snapshot in 35ms [clear=0ms batch=3ms entries=30ms commit=1ms]
I171207 05:28:16.280137 61572 storage/replica_command.go:2153 [replicate,n1,s1,r10/1:/Table/1{3-4}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r10:/Table/1{3-4} [(n1,s1):1, (n2,s2):2, next=3]
I171207 05:28:16.525121 61572 storage/replica.go:3161 [n1,s1,r10/1:/Table/1{3-4}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I171207 05:28:16.562843 61572 storage/replica_raftstorage.go:527 [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] generated preemptive snapshot b447dde3 at index 34
I171207 05:28:16.582010 61572 storage/store.go:3552 [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] streamed snapshot to (n3,s3):?: kv pairs: 18, log entries: 24, rate-limit: 8.0 MiB/sec, 18ms
I171207 05:28:16.600628 63493 storage/replica_raftstorage.go:733 [n3,s3,r3/?:{-}] applying preemptive snapshot at index 34 (id=b447dde3, encoded size=11176, 1 rocksdb batches, 24 log entries)
I171207 05:28:16.631134 63493 storage/replica_raftstorage.go:739 [n3,s3,r3/?:/System/NodeLiveness{-Max}] applied preemptive snapshot in 30ms [clear=0ms batch=0ms entries=16ms commit=0ms]
I171207 05:28:16.660934 61572 storage/replica_command.go:2153 [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r3:/System/NodeLiveness{-Max} [(n1,s1):1, (n2,s2):2, next=3]
I171207 05:28:16.733669 63481 storage/replica.go:3161 [n1,s1,r3/1:/System/NodeLiveness{-Max}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I171207 05:28:16.803176 61371 storage/replica_proposal.go:195 [n1,s1,r14/1:/Table/1{7-8}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624496.758200886,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1512624493.850695305,0 pro=1512624484.850822763,0
I171207 05:28:16.837842 61528 storage/replica_proposal.go:195 [n1,s1,r13/1:/Table/1{6-7}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624496.811637560,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1512624493.850695305,0 pro=1512624484.850822763,0
I171207 05:28:16.855946 61490 storage/replica_proposal.go:195 [n1,s1,r15/1:/Table/1{8-9}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624496.847650189,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1512624493.850695305,0 pro=1512624484.850822763,0
I171207 05:28:16.887869 61497 storage/replica_proposal.go:195 [n1,s1,r4/1:/System/{NodeLive…-tsd}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624496.871067736,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1512624493.850695305,0 pro=1512624484.850822763,0
I171207 05:28:16.899848 61572 storage/replica_raftstorage.go:527 [replicate,n1,s1,r9/1:/Table/1{2-3}] generated preemptive snapshot f2b0feec at index 32
I171207 05:28:16.927270 61572 storage/store.go:3552 [replicate,n1,s1,r9/1:/Table/1{2-3}] streamed snapshot to (n2,s2):?: kv pairs: 47, log entries: 22, rate-limit: 8.0 MiB/sec, 26ms
I171207 05:28:16.930141 63573 storage/replica_raftstorage.go:733 [n2,s2,r9/?:{-}] applying preemptive snapshot at index 32 (id=f2b0feec, encoded size=18328, 1 rocksdb batches, 22 log entries)
I171207 05:28:16.940943 63573 storage/replica_raftstorage.go:739 [n2,s2,r9/?:/Table/1{2-3}] applied preemptive snapshot in 10ms [clear=0ms batch=0ms entries=9ms commit=1ms]
I171207 05:28:16.961791 61572 storage/replica_command.go:2153 [replicate,n1,s1,r9/1:/Table/1{2-3}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r9:/Table/1{2-3} [(n1,s1):1, (n3,s3):2, next=3]
I171207 05:28:17.175226 61572 storage/replica.go:3161 [n1,s1,r9/1:/Table/1{2-3}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I171207 05:28:17.851628 61571 storage/replica_command.go:1231 [split,n1,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/50 [r18]
I171207 05:28:17.871094 63595 sql/event_log.go:113 [client=127.0.0.1:57496,user=root,n1] Event: "create_database", target: 50, info: {DatabaseName:data Statement:CREATE DATABASE IF NOT EXISTS data User:root}
I171207 05:28:18.051431 61541 storage/replica_proposal.go:195 [n1,s1,r17/1:/{Table/20-Max}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624495.564055859,0 following repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624495.564055859,0
I171207 05:28:18.175470 63595 sql/event_log.go:113 [client=127.0.0.1:57496,user=root,n1] Event: "create_table", target: 51, info: {TableName:data.bank Statement:CREATE TABLE data.bank (id INT PRIMARY KEY, balance INT, payload STRING, FAMILY (id, balance, payload)) User:root}
I171207 05:28:18.204183 61571 storage/replica_command.go:1231 [split,n1,s1,r18/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r19]
I171207 05:28:18.220898 62195 storage/replica_proposal.go:195 [n3,s3,r15/3:/Table/1{8-9}] new range lease repl=(n3,s3):3 start=1512624498.195796803,0 epo=1 pro=1512624498.195839071,0 following repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624496.847650189,0
I171207 05:28:18.606935 61504 storage/replica_proposal.go:195 [n1,s1,r18/1:/{Table/50-Max}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624495.564055859,0 following repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1512624495.564055859,0
I171207 05:28:19.448308 62208 storage/replica_proposal.go:195 [n3,s3,r11/2:/Table/1{4-5}] new range lease repl=(n3,s3):2 start=1512624499.426922894,0 epo=1 pro=1512624499.426957
```
Please assign, take a look, and update the issue accordingly.
| test | teamcity failed tests on master testrace testbackuprestoreresume testrace testbackuprestoreresume restore testrace testgrpckeepalivefailurefailsinflightrpcs test testbackuprestoreresume test testbackuprestoreresume restore test testgrpckeepalivefailurefailsinflightrpcs the following tests appear to have failed fail testrace testbackuprestoreresume stdout server status runtime go could not parse build timestamp parsing time as cannot parse as server config go storage engine initialized server config go rocksdb cache size mib server config go store in memory size b server node go cluster has been created server server go add additional nodes by specifying join storage store go failed initial metrics computation system config not yet available server node go initialized store disk capacity mib available mib used b logicalbytes kib ranges leases writes bytesperreplica writesperreplica server node go node id initialized gossip gossip go nodedescriptor set to node id address attrs locality serverversion storage stores go read node addresses from persistent storage server node go connecting to gossip network to verify cluster id server node go node connected via gossip and verified as part of cluster server node go node started with engine s and attributes storage replica command go initiating a split of this range at key system sql distsql physical planner go creating distsqlplanner with address tcp server server go starting https server at server server go starting grpc postgres server at server server go advertising cockroachdb node at sql jobs registry go unable to get node liveness node not in the liveness table storage consistency queue go key range min max outside of bounds of range min system sql jobs registry go error while adopting jobs relation system jobs does not exist storage replica command go initiating a split of this range at key system nodeliveness storage replica command go initiating a split of this range at key system nodelivenessmax sql 
jobs registry go error while adopting jobs relation system jobs does not exist storage intent resolver go failed to push during intent resolution failed to push sql txn implicit id key table systemconfigspan start rw true pri iso serializable stat pending epo ts orig max wto false rop false seq sql event log go event alter table target info tablename eventlog statement alter table system eventlog alter column uniqueid set default uuid user node mutationid cascadedroppedviews sql jobs registry go error while adopting jobs relation system jobs does not exist storage replica command go initiating a split of this range at key system tsd sql jobs registry go error while adopting jobs relation system jobs does not exist sql lease go publish descid eventlog version mtime utc storage replica command go initiating a split of this range at key system tse sql jobs registry go error while adopting jobs relation system jobs does not exist storage replica command go initiating a split of this range at key table systemconfigspan start sql jobs registry go error while adopting jobs relation system jobs does not exist storage replica command go initiating a split of this range at key table storage replica command go initiating a split of this range at key table sql event log go event set cluster setting target info settingname diagnostics reporting enabled value true user node storage replica command go initiating a split of this range at key table storage intent resolver go failed to push during intent resolution failed to push split id key local range table rangedescriptor rw true pri iso serializable stat pending epo ts orig max wto false rop false seq storage intent resolver go failed to push during intent resolution failed to push split id key local range table rangedescriptor rw true pri iso serializable stat pending epo ts orig max wto false rop false seq storage replica command go initiating a split of this range at key table storage replica command go initiating a split of 
this range at key table sql event log go event set cluster setting target info settingname version value user node sql event log go event set cluster setting target info settingname trace debug enable value false user node storage replica command go initiating a split of this range at key table server server go done ensuring all necessary migrations have run server server go serving sql connections sql event log go event node join target info descriptor nodeid address networkfield tcp addressfield attrs locality serverversion clusterid startedat lastup storage replica command go initiating a split of this range at key table storage replica command go initiating a split of this range at key table storage replica command go initiating a split of this range at key table storage intent resolver go failed to push during intent resolution failed to push split id key local range table rangedescriptor rw true pri iso serializable stat pending epo ts orig max wto false rop false seq storage replica command go initiating a split of this range at key table server status runtime go could not parse build timestamp parsing time as cannot parse as server config go storage engine initialized server config go rocksdb cache size mib server config go store in memory size b gossip gossip go no incoming or outgoing connections server server go no stores bootstrapped and join flag specified awaiting init command gossip client go started gossip client to gossip server go received initial cluster verification connection from tcp storage stores go read node addresses from persistent storage storage stores go wrote node addresses to persistent storage server node go connecting to gossip network to verify cluster id server node go node connected via gossip and verified as part of cluster kv dist sender go unable to determine this node s attributes for replica selection node is most likely bootstrapping kv dist sender go unable to determine this node s attributes for replica selection node is 
most likely bootstrapping kv dist sender go unable to determine this node s attributes for replica selection node is most likely bootstrapping server node go new node allocated id gossip gossip go nodedescriptor set to node id address attrs locality serverversion server node go node asynchronously bootstrapping engine s server node go node started with engine s and attributes sql distsql physical planner go creating distsqlplanner with address tcp storage stores go wrote node addresses to persistent storage server node go bootstrapped store server server go starting https server at server server go starting grpc postgres server at server server go advertising cockroachdb node at sql jobs registry go unable to get node liveness node not in the liveness table server server go done ensuring all necessary migrations have run server server go serving sql connections sql event log go event node join target info descriptor nodeid address networkfield tcp addressfield attrs locality serverversion clusterid startedat lastup server status runtime go could not parse build timestamp parsing time as cannot parse as server config go storage engine initialized server config go rocksdb cache size mib server config go store in memory size b gossip gossip go no incoming or outgoing connections server server go no stores bootstrapped and join flag specified awaiting init command gossip client go started gossip client to gossip server go received initial cluster verification connection from tcp storage stores go read node addresses from persistent storage storage stores go wrote node addresses to persistent storage server node go connecting to gossip network to verify cluster id server node go node connected via gossip and verified as part of cluster kv dist sender go unable to determine this node s attributes for replica selection node is most likely bootstrapping kv dist sender go unable to determine this node s attributes for replica selection node is most likely bootstrapping kv 
dist sender go unable to determine this node s attributes for replica selection node is most likely bootstrapping server node go new node allocated id gossip gossip go nodedescriptor set to node id address attrs locality serverversion server node go node asynchronously bootstrapping engine s server node go node started with engine s and attributes sql distsql physical planner go creating distsqlplanner with address tcp storage stores go wrote node addresses to persistent storage storage stores go wrote node addresses to persistent storage server server go starting https server at server server go starting grpc postgres server at server server go advertising cockroachdb node at sql jobs registry go unable to get node liveness node not in the liveness table server node go bootstrapped store server server go done ensuring all necessary migrations have run server server go serving sql connections storage replica raftstorage go generated preemptive snapshot at index sql event log go event node join target info descriptor nodeid address networkfield tcp addressfield attrs locality serverversion clusterid startedat lastup storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table storage replica go proposing add replica updated next storage replica raftstorage go generated preemptive snapshot at index storage raft transport go raft transport stream to node established storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor 
system ts d e storage replica go proposing add replica updated next storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor system tse table systemconfigspan start storage raft transport go raft transport stream to node established storage replica go proposing add replica updated next storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table storage replica go proposing add replica updated next storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table max storage replica go proposing add replica updated next storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table storage replica go 
proposing add replica updated next storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table storage replica go proposing add replica updated next storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor system nodelivenessmax tsd storage replica go proposing add replica updated next storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor system nodeliveness max storage replica go proposing add replica updated next storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor system nodelivenessmax tsd storage replica go proposing add replica updated next storage replica raftstorage go generated preemptive 
snapshot at index storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table systemconfigspan start storage replica go proposing add replica updated next storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table storage replica go proposing add replica updated next storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table storage replica go proposing add replica updated next storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table systemconfigspan start storage replica go proposing add replica updated next storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot to kv pairs log entries rate limit mib sec 
storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table storage replica go proposing add replica updated next storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table storage replica go proposing add replica updated next storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table storage replica go proposing add replica updated next storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table storage replica go proposing add replica updated next storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go 
applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor system nodeliveness storage replica go proposing add replica updated next storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table storage replica proposal go new range lease repl start epo pro following repl start exp pro storage replica proposal go new range lease repl start epo pro following repl start exp pro storage replica proposal go new range lease repl start epo pro following repl start exp pro storage replica go proposing add replica updated next storage replica proposal go new range lease repl start epo pro following repl start exp pro storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor min system storage replica go proposing add replica updated next storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor min system storage replica go proposing add replica updated next storage replica raftstorage go generated 
preemptive snapshot at index storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor system nodeliveness storage replica go proposing add replica updated next storage replica proposal go new range lease repl start epo pro following repl start exp pro storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor system tse table systemconfigspan start storage replica proposal go new range lease repl start epo pro following repl start exp pro storage replica raftstorage go generated preemptive snapshot at index storage raft transport go raft transport stream to node established storage replica go proposing add replica updated next storage replica proposal go new range lease repl start epo pro following repl start exp pro storage replica raftstorage go generated preemptive snapshot at index storage replica proposal go new range lease repl start epo pro following repl start exp pro storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied 
preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table storage replica raftstorage go applied preemptive snapshot in storage raft transport go raft transport stream to node established storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica go proposing add replica updated next storage queue go purgatory is now empty storage replica command go change replicas add replica read existing descriptor table storage replica command go change replicas add replica read existing descriptor system ts d e storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table storage replica go proposing add replica updated next storage replica proposal go new range lease repl start epo pro following repl start exp pro storage replica raftstorage go generated preemptive snapshot at index storage replica go proposing add replica updated next storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table storage replica proposal go new range lease repl start epo pro following repl start exp pro storage replica go proposing add replica updated next storage replica raftstorage go generated preemptive snapshot at index 
storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table max storage replica go proposing add replica updated next storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica go proposing add replica updated next storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table storage replica go proposing add replica updated next storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table storage replica go proposing add replica updated next storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor system nodeliveness max storage replica go proposing add replica updated next storage replica proposal go new range lease repl start epo pro following repl start exp pro storage replica proposal go new range lease repl 
start epo pro following repl start exp pro storage replica proposal go new range lease repl start epo pro following repl start exp pro storage replica proposal go new range lease repl start epo pro following repl start exp pro storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table storage replica go proposing add replica updated next storage replica command go initiating a split of this range at key table sql event log go event create database target info databasename data statement create database if not exists data user root storage replica proposal go new range lease repl start epo pro following repl start epo pro sql event log go event create table target info tablename data bank statement create table data bank id int primary key balance int payload string family id balance payload user root storage replica command go initiating a split of this range at key table storage replica proposal go new range lease repl start epo pro following repl start epo pro storage replica proposal go new range lease repl start epo pro following repl start epo pro storage replica proposal go new range lease repl start epo pro please assign take a look and update the issue accordingly | 1 |
240,228 | 20,018,330,749 | IssuesEvent | 2022-02-01 14:15:42 | Public-Health-Scotland/source-linkage-files | https://api.github.com/repos/Public-Health-Scotland/source-linkage-files | opened | Create tests for social care lookups | Testing back-end | Create tests for comparing new social care lookups to the previous update file e.g. compare December update to September update.
- [ ] Care home episodes data
- [ ] Demographic lookup data
This follows the same format as existing tests in the SLFs, for example a data frame with the following:
- measure: number of CHIs
- existing value: number of CHIs in old file
- new value: number of CHIs in new file
- difference: the difference between existing and new values
- percentage change
- issue: This will flag a 1 or 0 if the percentage change is greater than 5
The idea is to create measures based on the care home episodes data file and the demographic lookup file and output a data frame in a similar format to 'test' the difference between the two files. | 1.0 | test | 1
215,084 | 16,590,734,839 | IssuesEvent | 2021-06-01 07:23:27 | pitchmuc/adobe-analytics-api-2.0 | https://api.github.com/repos/pitchmuc/adobe-analytics-api-2.0 | closed | Order of arguments | documentation enhancement good first issue | https://github.com/pitchmuc/adobe_analytics_api_2.0/blob/04e29001025a6440e972efe6b1b9f292aac956a8/adobe_analytics_2/aanalytics2.py#L69
I'd suggest putting the most likely changed argument first. I feel that this would be the `save` argument here. | 1.0 | non_test | 0
141,791 | 11,437,461,204 | IssuesEvent | 2020-02-05 00:05:10 | ayumi-cloud/oc-security-module | https://api.github.com/repos/ayumi-cloud/oc-security-module | closed | Combine and finish off the HTTP Methods modules | Add to Blacklist Add to Whitelist FINSIHED Firewall Priority: High Testing - Passed enhancement | ### Enhancement idea
- [x] Combine and finish off the HTTP Methods modules.
Merge issues:
- https://github.com/ayumi-cloud/oc-security-module/issues/111
- https://github.com/ayumi-cloud/oc-security-module/issues/114
| 1.0 | test | 1
232,110 | 25,564,972,627 | IssuesEvent | 2022-11-30 13:40:56 | jtimberlake/rei-cedar | https://api.github.com/repos/jtimberlake/rei-cedar | opened | CVE-2022-38900 (High) detected in decode-uri-component-0.2.0.tgz | security vulnerability | ## CVE-2022-38900 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>decode-uri-component-0.2.0.tgz</b></p></summary>
<p>A better decodeURIComponent</p>
<p>Library home page: <a href="https://registry.npmjs.org/decode-uri-component/-/decode-uri-component-0.2.0.tgz">https://registry.npmjs.org/decode-uri-component/-/decode-uri-component-0.2.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/decode-uri-component/package.json</p>
<p>
Dependency Hierarchy:
- rollup-plugin-styles-3.14.1.tgz (Root Library)
- query-string-6.14.1.tgz
- :x: **decode-uri-component-0.2.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jtimberlake/rei-cedar/commit/1bdaed2a9f64ddf36c63561f57b069a67d0d77a7">1bdaed2a9f64ddf36c63561f57b069a67d0d77a7</a></p>
<p>Found in base branch: <b>next</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
decode-uri-component 0.2.0 is vulnerable to Improper Input Validation resulting in DoS.
<p>Publish Date: 2022-11-28
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-38900>CVE-2022-38900</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
| True | non_test | 0
175,022 | 13,529,056,527 | IssuesEvent | 2020-09-15 17:40:46 | microsoft/vscode-python | https://api.github.com/repos/microsoft/vscode-python | closed | Did not see prompt to change this setting python.unitTest.unittestEnabled | area-testing good first issue needs PR reason-preexisting type-bug | ## Environment data
- VS Code version: 1.34.0
- Extension version (available under the Extensions sidebar): 2019.6.16302-dev
- OS and version: windows
- Python version (& distribution if applicable, e.g. Anaconda): 3.7
- Type of virtual environment used (N/A | venv | virtualenv | conda | ...): N/A
- Relevant/affected Python packages and their versions: unittest
## Expected behaviour
Should show a prompt to change the old setting `python.unitTest.unittestEnabled` to new setting `python.testing.unittestEnabled`.
## Actual behaviour
Appends new setting to the existing settings.
## Steps to reproduce:
1. Open a project with these settings.
```js
"python.unitTest.unittestArgs": [
"-v",
"-s",
".",
"-p",
"*_test.py"
],
"python.unitTest.unittestEnabled": true,
```
This is the prompt i get:

This is what i have in settings:
```js
{
"python.pythonPath": "C:\\Python37\\python.exe",
"python.linting.flake8Enabled": true,
"python.linting.pylintEnabled": false,
"python.linting.flake8Args": [
"--ignore", "E24,E121,E123,E125,E126,E221,E226,E261,E266,E704,E265",
],
"python.formatting.provider": "autopep8",
"python.formatting.autopep8Args": [
"--ignore", "E24,E121,E123,E125,E126,E221,E226,E261,E266,E704,E265"
],
"python.unitTest.unittestArgs": [
"-v",
"-s",
".",
"-p",
"*_test.py"
],
"python.unitTest.unittestEnabled": true,
}
```
This is what happens if I click `Configure Test Framework` and follow through the prompts:
```js
{
"python.pythonPath": "C:\\Python37\\python.exe",
"python.linting.flake8Enabled": true,
"python.linting.pylintEnabled": false,
"python.linting.flake8Args": [
"--ignore", "E24,E121,E123,E125,E126,E221,E226,E261,E266,E704,E265",
],
"python.formatting.provider": "autopep8",
"python.formatting.autopep8Args": [
"--ignore", "E24,E121,E123,E125,E126,E221,E226,E261,E266,E704,E265"
],
"python.unitTest.unittestArgs": [
"-v",
"-s",
".",
"-p",
"*_test.py"
],
"python.unitTest.unittestEnabled": true,
"python.testing.unittestArgs": [
"-v",
"-s",
".",
"-p",
"*_test.py"
],
"python.testing.pyTestEnabled": false,
"python.testing.nosetestsEnabled": false,
"python.testing.unittestEnabled": true,
}
```
| 1.0 | Did not see prompt to change this setting python.unitTest.unittestEnabled - ## Environment data
- VS Code version: 1.34.0
- Extension version (available under the Extensions sidebar): 2019.6.16302-dev
- OS and version: windows
- Python version (& distribution if applicable, e.g. Anaconda): 3.7
- Type of virtual environment used (N/A | venv | virtualenv | conda | ...): N/A
- Relevant/affected Python packages and their versions: unittest
## Expected behaviour
Should show a prompt to change the old setting `python.unitTest.unittestEnabled` to new setting `python.testing.unittestEnabled`.
## Actual behaviour
Appends new setting to the existing settings.
## Steps to reproduce:
1. Open a project with these settings.
```js
"python.unitTest.unittestArgs": [
"-v",
"-s",
".",
"-p",
"*_test.py"
],
"python.unitTest.unittestEnabled": true,
```
This is the prompt i get:

This is what i have in settings:
```js
{
"python.pythonPath": "C:\\Python37\\python.exe",
"python.linting.flake8Enabled": true,
"python.linting.pylintEnabled": false,
"python.linting.flake8Args": [
"--ignore", "E24,E121,E123,E125,E126,E221,E226,E261,E266,E704,E265",
],
"python.formatting.provider": "autopep8",
"python.formatting.autopep8Args": [
"--ignore", "E24,E121,E123,E125,E126,E221,E226,E261,E266,E704,E265"
],
"python.unitTest.unittestArgs": [
"-v",
"-s",
".",
"-p",
"*_test.py"
],
"python.unitTest.unittestEnabled": true,
}
```
This is what happens if I click `Configure Test Framework` and follow through the prompts:
```js
{
"python.pythonPath": "C:\\Python37\\python.exe",
"python.linting.flake8Enabled": true,
"python.linting.pylintEnabled": false,
"python.linting.flake8Args": [
"--ignore", "E24,E121,E123,E125,E126,E221,E226,E261,E266,E704,E265",
],
"python.formatting.provider": "autopep8",
"python.formatting.autopep8Args": [
"--ignore", "E24,E121,E123,E125,E126,E221,E226,E261,E266,E704,E265"
],
"python.unitTest.unittestArgs": [
"-v",
"-s",
".",
"-p",
"*_test.py"
],
"python.unitTest.unittestEnabled": true,
"python.testing.unittestArgs": [
"-v",
"-s",
".",
"-p",
"*_test.py"
],
"python.testing.pyTestEnabled": false,
"python.testing.nosetestsEnabled": false,
"python.testing.unittestEnabled": true,
}
```
| test | did not see prompt to change this setting python unittest unittestenabled environment data vs code version extension version available under the extensions sidebar dev os and version windows python version distribution if applicable e g anaconda type of virtual environment used n a venv virtualenv conda n a relevant affected python packages and their versions unittest expected behaviour should show a prompt to change the old setting python unittest unittestenabled to new setting python testing unittestenabled actual behaviour appends new setting to the existing settings steps to reproduce open a project with these settings js python unittest unittestargs v s p test py python unittest unittestenabled true this is the prompt i get this is what i have in settings js python pythonpath c python exe python linting true python linting pylintenabled false python linting ignore python formatting provider python formatting ignore python unittest unittestargs v s p test py python unittest unittestenabled true this is what happens if i click configure test framework and follow through the prompts js python pythonpath c python exe python linting true python linting pylintenabled false python linting ignore python formatting provider python formatting ignore python unittest unittestargs v s p test py python unittest unittestenabled true python testing unittestargs v s p test py python testing pytestenabled false python testing nosetestsenabled false python testing unittestenabled true | 1 |
280,802 | 8,686,862,492 | IssuesEvent | 2018-12-03 12:05:05 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | drivers: nrf: UARTE interrupt driven cannot be compiled | area: Drivers area: UART bug priority: high | Interrupt driven API for UARTE driver is causing compilation errors.
**To Reproduce**
Try to build hello world sample for nrf52840 with interrupts support. Default DT settings and UARTE selected via Kconfig.
**Expected behavior**
Application compiles and works.
**Impact**
showstopper
| 1.0 | drivers: nrf: UARTE interrupt driven cannot be compiled - Interrupt driven API for UARTE driver is causing compilation errors.
**To Reproduce**
Try to build hello world sample for nrf52840 with interrupts support. Default DT settings and UARTE selected via Kconfig.
**Expected behavior**
Application compiles and works.
**Impact**
showstopper
| non_test | drivers nrf uarte interrupt driven cannot be compiled interrupt driven api for uarte driver is causing compilation errors to reproduce try to build hello world sample for with interrupts support default dt settings and uarte selected via kconfig expected behavior application compiles and works impact showstopper | 0 |
76,109 | 9,917,342,269 | IssuesEvent | 2019-06-28 23:47:29 | github-tunisia/github-tunisia.github.io | https://api.github.com/repos/github-tunisia/github-tunisia.github.io | closed | Notice For Contributors | documentation | # Some Information For Contributors
---
## Thank you all for contributing to the `GITHUB TUNISIA` organization and being here.

## When you contribute do not forget to put your name in the README.md file like this :

## ~Note : I will add the `GTML.svg` image tonight.~ | 1.0 | Notice For Contributors - # Some Information For Contributors
---
## Thank you all for contributing to the `GITHUB TUNISIA` organization and being here.

## When you contribute do not forget to put your name in the README.md file like this :

## ~Note : I will add this night the `GTML.svg` image.~ | non_test | notice for contributors some information for contributors thank you all for contributing to the github tunisia organization and being here when you contribute do not forget to put your name in the readme md file like this note i will add this night the gtml svg image | 0 |
337,617 | 30,251,723,298 | IssuesEvent | 2023-07-06 21:11:14 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | roachtest: kv95/enc=false/nodes=4/ssds=8 failed | C-test-failure O-robot O-roachtest branch-master release-blocker T-testeng | roachtest.kv95/enc=false/nodes=4/ssds=8 [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/10797165?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/10797165?buildTab=artifacts#/kv95/enc=false/nodes=4/ssds=8) on master @ [dbe8511fae8fca21562fdde5c240b1f7d06ef582](https://github.com/cockroachdb/cockroach/commits/dbe8511fae8fca21562fdde5c240b1f7d06ef582):
```
(test_runner.go:1075).runTest: test timed out (3h0m0s)
(cluster.go:2282).Run: output in run_181048.014807822_n5_workload-run-kv-tole: ./workload run kv --tolerate-errors --init --histograms=perf/stats.json --concurrency=256 --splits=1000 --duration=30m0s --read-percent=95 {pgurl:1-4} returned: COMMAND_PROBLEM: exit status 134
(monitor.go:137).Wait: monitor failure: monitor task failed: t.Fatal() was called
test artifacts and logs in: /artifacts/kv95/enc=false/nodes=4/ssds=8/run_1
```
<p>Parameters: <code>ROACHTEST_arch=amd64</code>
, <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=8</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_fs=ext4</code>
, <code>ROACHTEST_localSSD=true</code>
, <code>ROACHTEST_ssd=8</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/test-eng
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*kv95/enc=false/nodes=4/ssds=8.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| 3.0 | roachtest: kv95/enc=false/nodes=4/ssds=8 failed - roachtest.kv95/enc=false/nodes=4/ssds=8 [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/10797165?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/10797165?buildTab=artifacts#/kv95/enc=false/nodes=4/ssds=8) on master @ [dbe8511fae8fca21562fdde5c240b1f7d06ef582](https://github.com/cockroachdb/cockroach/commits/dbe8511fae8fca21562fdde5c240b1f7d06ef582):
```
(test_runner.go:1075).runTest: test timed out (3h0m0s)
(cluster.go:2282).Run: output in run_181048.014807822_n5_workload-run-kv-tole: ./workload run kv --tolerate-errors --init --histograms=perf/stats.json --concurrency=256 --splits=1000 --duration=30m0s --read-percent=95 {pgurl:1-4} returned: COMMAND_PROBLEM: exit status 134
(monitor.go:137).Wait: monitor failure: monitor task failed: t.Fatal() was called
test artifacts and logs in: /artifacts/kv95/enc=false/nodes=4/ssds=8/run_1
```
<p>Parameters: <code>ROACHTEST_arch=amd64</code>
, <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=8</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_fs=ext4</code>
, <code>ROACHTEST_localSSD=true</code>
, <code>ROACHTEST_ssd=8</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/test-eng
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*kv95/enc=false/nodes=4/ssds=8.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| test | roachtest enc false nodes ssds failed roachtest enc false nodes ssds with on master test runner go runtest test timed out cluster go run output in run workload run kv tole workload run kv tolerate errors init histograms perf stats json concurrency splits duration read percent pgurl returned command problem exit status monitor go wait monitor failure monitor task failed t fatal was called test artifacts and logs in artifacts enc false nodes ssds run parameters roachtest arch roachtest cloud gce roachtest cpu roachtest encrypted false roachtest fs roachtest localssd true roachtest ssd help see see cc cockroachdb test eng | 1 |
269,904 | 8,444,215,550 | IssuesEvent | 2018-10-18 17:47:18 | utfpr/municipalMarketFairControl | https://api.github.com/repos/utfpr/municipalMarketFairControl | closed | Document the API | Back priority | - Supervisor (done)
- Login (done)
- Celula (done)
- Categoria
- Subcategoria
- Feira
- Feirante
- Participa (do last) | 1.0 | Document the API - - Supervisor (done)
- Login (done)
- Celula (done)
- Categoria
- Subcategoria
- Feira
- Feirante
- Participa (do last) | non_test | documentar api supervisor feito login feito celula feito categoria subcategoria feira feirante participa fazer por último
129,283 | 10,568,752,440 | IssuesEvent | 2019-10-06 15:07:32 | bertvannuffelen/demo_oslodoc | https://api.github.com/repos/bertvannuffelen/demo_oslodoc | closed | Property in an association class is not created when it refers to another class in another package | Topic: testassociaties readyfortest | The properties in question are missing from both the vocabulary and the application profile.
Example:

Here k06B is missing in the association class Heeft06.
(Despite the presence of the correct package tag on the association in question)
Can be viewed at:
https://otl-test.data.vlaanderen.be/doc/vocabularium/documentatie/associaties-voc-met-packages2
https://otl-test.data.vlaanderen.be/doc/applicatieprofiel/documentatie/associaties-met-packages2 | 2.0 | Property in an association class is not created when it refers to another class in another package - The properties in question are missing from both the vocabulary and the application profile.
Example:

Here k06B is missing in the association class Heeft06.
(Despite the presence of the correct package tag on the association in question)
Can be viewed at:
https://otl-test.data.vlaanderen.be/doc/vocabularium/documentatie/associaties-voc-met-packages2
https://otl-test.data.vlaanderen.be/doc/applicatieprofiel/documentatie/associaties-met-packages2 | test | eigenschap in associatieklasse worden niet aangemaakt indien verwijzend naar ander klasse in ander package de eigenschappen in kwestie ontbreken zowel in het vocabularium als in het applicatieprofiel voorbeeld hier ontbreekt in associatieklasse niettegenstaande de aanwezigheid van de correcte package tag op de associatie in kwestie te bekijken in | 1 |
136,900 | 11,092,223,986 | IssuesEvent | 2019-12-15 17:31:55 | ayumi-cloud/oc-security-module | https://api.github.com/repos/ayumi-cloud/oc-security-module | closed | Add blank Majestic Crawler records to whitelist | Add to Whitelist FINSIHED Firewall Definitions Priority: Medium Testing - Passed enhancement | ### Enhancement idea
- [x] Add blank Majestic Crawler records to whitelist.
Fields | Details
---|---
ISP | LeaseWeb USA Inc.
Type | Data Center/Web Hosting/Transit
Hostname | crawl-vfyrb9.mj12bot.com
Domain | leaseweb.com
Country | United States
City | Manassas, Virginia
ASN | AS30633
| 1.0 | Add blank Majestic Crawler records to whitelist - ### Enhancement idea
- [x] Add blank Majestic Crawler records to whitelist.
Fields | Details
---|---
ISP | LeaseWeb USA Inc.
Type | Data Center/Web Hosting/Transit
Hostname | crawl-vfyrb9.mj12bot.com
Domain | leaseweb.com
Country | United States
City | Manassas, Virginia
ASN | AS30633
| test | add blank majestic crawler records to whitelist enhancement idea add blank majestic crawler records to whitelist fields details isp leaseweb usa inc type data center web hosting transit hostname crawl com domain leaseweb com country united states city manassas virginia asn | 1 |
58,535 | 7,160,619,982 | IssuesEvent | 2018-01-28 03:06:24 | CCBlueX/LiquidBounce1.8-Issues | https://api.github.com/repos/CCBlueX/LiquidBounce1.8-Issues | closed | Keybinds GUI | GUI (Design etc) Request | I would be in favor of a kind of "button" in the GUI that lets you set the keybinds directly.
Or possibly middle-clicking a module to bring up a virtual keyboard where you only have to click the keys. (Like in Null). | 1.0 | Keybinds GUI - I would be in favor of a kind of "button" in the GUI that lets you set the keybinds directly.
Or possibly middle-clicking a module to bring up a virtual keyboard where you only have to click the keys. (Like in Null). | non_test | keybinds gui ich wäre dafür dass es in der gui eine art button gibt welcher einen direkt die keybinds setzen lässt oder eventuell mit der mittleren maustaste auf die modules drücken und dann erscheint eine virtuelle tastatur wo man nur noch die tasten anklicken muss wie in null
249,767 | 21,190,835,855 | IssuesEvent | 2022-04-08 17:13:05 | googleapis/python-pubsub | https://api.github.com/repos/googleapis/python-pubsub | closed | subscriber_test.test_receive_messages_with_exactly_once_delivery_enabled unit flaky | api: pubsub priority: p2 testing flaky | Samples unit test fails since PR #629
```
regional_publisher_client = <google.cloud.pubsub_v1.PublisherClient object at 0x7fcdbe671ca0>
exactly_once_delivery_topic = 'projects/python-docs-samples-tests/topics/subscription-test-eod-topic-3.9-9bea132264a14256be2578f16fc56344'
subscription_eod = 'projects/python-docs-samples-tests/subscriptions/subscription-test-subscription-eod-3.9-9bea132264a14256be2578f16fc56344'
capsys = <_pytest.capture.CaptureFixture object at 0x7fcdbe55ab20>
def test_receive_messages_with_exactly_once_delivery_enabled(
regional_publisher_client: pubsub_v1.PublisherClient,
exactly_once_delivery_topic: str,
subscription_eod: str,
capsys: CaptureFixture[str],
) -> None:
message_ids = _publish_messages(regional_publisher_client, exactly_once_delivery_topic)
> subscriber.receive_messages_with_exactly_once_delivery_enabled(
PROJECT_ID, SUBSCRIPTION_EOD, 10
)
subscriber_test.py:715:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
subscriber.py:662: in receive_messages_with_exactly_once_delivery_enabled
streaming_pull_future.result(timeout=timeout)
/usr/local/lib/python3.9/concurrent/futures/_base.py:445: in result
return self.__get_result()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = None
def __get_result(self):
if self._exception:
try:
> raise self._exception
E google.api_core.exceptions.NotFound: 404 Resource not found (resource=projects/python-docs-samples-tests/subscriptions/subscription-test-subscription-eod-3.9-9bea132264a14256be2578f16fc56344).
/usr/local/lib/python3.9/concurrent/futures/_base.py:390: NotFound
```
I suspect this may be due to a change in the test in: PR #628
or exactly once behavior in PR #626.
| 1.0 | subscriber_test.test_receive_messages_with_exactly_once_delivery_enabled unit flaky - Samples unit test fails since PR #629
```
regional_publisher_client = <google.cloud.pubsub_v1.PublisherClient object at 0x7fcdbe671ca0>
exactly_once_delivery_topic = 'projects/python-docs-samples-tests/topics/subscription-test-eod-topic-3.9-9bea132264a14256be2578f16fc56344'
subscription_eod = 'projects/python-docs-samples-tests/subscriptions/subscription-test-subscription-eod-3.9-9bea132264a14256be2578f16fc56344'
capsys = <_pytest.capture.CaptureFixture object at 0x7fcdbe55ab20>
def test_receive_messages_with_exactly_once_delivery_enabled(
regional_publisher_client: pubsub_v1.PublisherClient,
exactly_once_delivery_topic: str,
subscription_eod: str,
capsys: CaptureFixture[str],
) -> None:
message_ids = _publish_messages(regional_publisher_client, exactly_once_delivery_topic)
> subscriber.receive_messages_with_exactly_once_delivery_enabled(
PROJECT_ID, SUBSCRIPTION_EOD, 10
)
subscriber_test.py:715:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
subscriber.py:662: in receive_messages_with_exactly_once_delivery_enabled
streaming_pull_future.result(timeout=timeout)
/usr/local/lib/python3.9/concurrent/futures/_base.py:445: in result
return self.__get_result()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = None
def __get_result(self):
if self._exception:
try:
> raise self._exception
E google.api_core.exceptions.NotFound: 404 Resource not found (resource=projects/python-docs-samples-tests/subscriptions/subscription-test-subscription-eod-3.9-9bea132264a14256be2578f16fc56344).
/usr/local/lib/python3.9/concurrent/futures/_base.py:390: NotFound
```
I suspect this may be due to a change in the test in: PR #628
or exactly once behavior in PR #626.
| test | subscriber test test receive messages with exactly once delivery enabled unit flaky samples unit test fails since pr regional publisher client exactly once delivery topic projects python docs samples tests topics subscription test eod topic subscription eod projects python docs samples tests subscriptions subscription test subscription eod capsys def test receive messages with exactly once delivery enabled regional publisher client pubsub publisherclient exactly once delivery topic str subscription eod str capsys capturefixture none message ids publish messages regional publisher client exactly once delivery topic subscriber receive messages with exactly once delivery enabled project id subscription eod subscriber test py subscriber py in receive messages with exactly once delivery enabled streaming pull future result timeout timeout usr local lib concurrent futures base py in result return self get result self none def get result self if self exception try raise self exception e google api core exceptions notfound resource not found resource projects python docs samples tests subscriptions subscription test subscription eod usr local lib concurrent futures base py notfound i suspect this may be due to a change in the test in pr or exactly once behavior in pr | 1 |
829,543 | 31,882,326,762 | IssuesEvent | 2023-09-16 14:23:29 | SaekKkanDa/OppaManyColorTone | https://api.github.com/repos/SaekKkanDa/OppaManyColorTone | opened | [AdSense] Check and apply the Google AdSense policy | feature priority-high | ### Overview
- Check the current Google AdSense status (whether it has been applied, etc.), then apply it
### Features + test list
- Check the current Google AdSense status
- Apply Google AdSense
---
### Time tracking (optional)
- Estimate: 7D
- Start:
- End:
### TODO (optional)
-
### References (optional)
-
| 1.0 | [AdSense] Check and apply the Google AdSense policy - ### Overview
- Check the current Google AdSense status (whether it has been applied, etc.), then apply it
### Features + test list
- Check the current Google AdSense status
- Apply Google AdSense
---
### Time tracking (optional)
- Estimate: 7D
- Start:
- End:
### TODO (optional)
-
### References (optional)
-
| non_test | 구글 에드센스 정책 확인 및 적용 개요 구글 에드센스 현 상황 적용이 되었는지 등 파악 후 적용 기능 테스트 리스트 구글 에드센스 현 상황 파악 구글 에드센스 적용 시간 트래킹 선택사항 예상 시작 끝 todo 선택사항 참고자료 선택사항 | 0 |
455 | 2,502,138,666 | IssuesEvent | 2015-01-09 03:56:45 | ether/etherpad-lite | https://api.github.com/repos/ether/etherpad-lite | closed | "TypeError: type is null" on pasting | Waiting on Testing | I am having some trouble pasting text from other browser windows. Most of the time it works fine, however it seems like if a "share" widget is part of the text selection then when pasting, I get `TypeError: type is null` and the text is pasted, but no highlighting occurs and it does not save to the pad. An example would be if you were to copy/paste the text (ensuring the Share bit is selected) in [this news article] (https://au.news.yahoo.com/thewest/regional/north-west/a/25924481/airport-solar-power-plan/).
If I use Firebug to delete all the `<li>` tags in the `<ul class="share-tools-share-items">` list then copy/paste works. I've tried deleting all but one list item and removing all the attributes from it but it still errors. I think maybe the browser is trying to copy associated events and triggering them when pasting happens, but I don't have the knowledge to be able to dig further.
I can replicate this in both Chrome and Firefox, running both Windows 7 64 and on OS X 10.10. | 1.0 | "TypeError: type is null" on pasting - I am having some trouble pasting text from other browser windows. Most of the time it works fine, however it seems like if a "share" widget is part of the text selection then when pasting, I get `TypeError: type is null` and the text is pasted, but no highlighting occurs and it does not save to the pad. An example would be if you were to copy/paste the text (ensuring the Share bit is selected) in [this news article] (https://au.news.yahoo.com/thewest/regional/north-west/a/25924481/airport-solar-power-plan/).
If I use Firebug to delete all the `<li>` tags in the `<ul class="share-tools-share-items">` list then copy/paste works. I've tried deleting all but one list item and removing all the attributes from it but it still errors. I think maybe the browser is trying to copy associated events and triggering them when pasting happens, but I don't have the knowledge to be able to dig further.
I can replicate this in both Chrome and Firefox, running both Windows 7 64 and on OS X 10.10. | test | typeerror type is null on pasting i am having some trouble pasting text from other browser windows most of the time it works fine however it seems like if a share widget is part of the text selection then when pasting i get typeerror type is null and the text is pasted but no highlighting occurs and it does not save to the pad an example would be if you were to copy paste the text ensuring the share bit is selected in if i use firebug to delete all the tags in the list then copy paste works i ve tried deleting all but one list item and removing all the attributes from it but it still errors i think maybe the browser is trying to copy associated events and triggering them when pasting happens but i don t have the knowledge to be able to dig further i can replicate this in both chrome and firefox running both windows and on os x | 1 |
5,347 | 3,204,610,382 | IssuesEvent | 2015-10-03 08:51:29 | joomla/joomla-cms | https://api.github.com/repos/joomla/joomla-cms | closed | Encrypted Login via mod_login doesn't work | No Code Attached Yet | You need a Joomla powered website that can be reached via HTTP and HTTPS. Moreover you need the possibility to log in at the frontend (this is e.g. the case with the demo site delivered with Joomla).
Locate the "Login Form" module in the backend and switch "Encrypt Login Form" to "Yes". Save this setting.
Open the frontend via HTTP and scroll down until you reach the login form. Log in with any account allowed to log in at the frontend (the superuser account shall do).
Despite the setting of "Encrypt Login Form" to "Yes" the log in will be done via HTTP and not HTTPS as requested.
If you inspect the HTML source of the frontend pages you'll see that the action attribute of the form markup contains a HTTP URL and not a HTTPS URL as expected. I tested this with SEF on and SEF off setting, this makes no difference.
Up to now I found out that the mod_login module creates the login form in mod_login/tmpl/default.php by calling the JRoute::_() method with the current URL converted into a string as first argument and with the "Encrypt Login Form" flag as third argument.
I located the JRoute::_() method in libraries/joomla/application/route.php, and I see that this method returns the URL passed as first argument without modification if the following condition is true:
!is_array($url) && (strpos($url, '&') !== 0) && (strpos($url, 'index.php') !== 0)
This is the case whenever you pass a complete URL as string to this method as is the case with the login form created by the mod_login module.
I also inspected the logout form in mod_login/tmpl/default_logout.php where the JRoute::_() method is called too. In that case the current URL converted into a string is passed as first parameter too and the "Encrypt Login Form" flag is passed as third argument too.
If that piece of code should lead to switching back from HTTPS used when showing frontend pages after logging in to HTTP pages after the logout this will not work since the JRoute::_() method will return the HTTPS URL unmodified due to the same reason as for the login case.
BTW the "Encrypt Login Form" label is misleading although the tooltip helps in understanding what this setting will do (or better should do). Something like "Log In Via SSL" would be better here I think. | 1.0 | Encrypted Login via mod_login doesn't work - You need a Joomla powered website that can be reached via HTTP and HTTPS. Moreover you need the possibility to log in at the frontend (this is e.g. the case with the demo site delivered with Joomla).
Locate the "Login Form" module in the backend and switch "Encrypt Login Form" to "Yes". Save this setting.
Open the frontend via HTTP and scroll down until you reach the login form. Log in with any account allowed to log in at the frontend (the superuser account shall do).
Despite the setting of "Encrypt Login Form" to "Yes" the log in will be done via HTTP and not HTTPS as requested.
If you inspect the HTML source of the frontend pages you'll see that the action attribute of the form markup contains a HTTP URL and not a HTTPS URL as expected. I tested this with SEF on and SEF off setting, this makes no difference.
Up to now I found out that the mod_login module creates the login form in mod_login/tmpl/default.php by calling the JRoute::_() method with the current URL converted into a string as first argument and with the "Encrypt Login Form" flag as third argument.
I located the JRoute::_() method in libraries/joomla/application/route.php, and I see that this method returns the URL passed as first argument without modification if the following condition is true:
!is_array($url) && (strpos($url, '&') !== 0) && (strpos($url, 'index.php') !== 0)
This is the case whenever you pass a complete URL as string to this method as is the case with the login form created by the mod_login module.
I also inspected the logout form in mod_login/tmpl/default_logout.php where the JRoute::_() method is called too. In that case the current URL converted into a string is passed as first parameter too and the "Encrypt Login Form" flag is passed as third argument too.
If that piece of code should lead to switching back from HTTPS used when showing frontend pages after logging in to HTTP pages after the logout this will not work since the JRoute::_() method will return the HTTPS URL unmodified due to the same reason as for the login case.
BTW the "Encrypt Login Form" label is misleading although the tooltip helps in understanding what this setting will do (or better should do). Something like "Log In Via SSL" would be better here I think. | non_test | encrypted login via mod login doesn t work you need a joomla powered website that can be reached via http and https moreover you need the possibility to log in at the frontend this is e g the case with the demo site delivered with joomla locate the login form module in the backend and switch encrypt login form to yes save this setting open the frontend via http and scroll down until you reach the login form log in with any account allowed to log in at the frontend the superuser account shall do despite the setting of encrypt login form to yes the log in will be done via http and not https as requested if you inspect the html source of the frontend pages you ll see that the action attribute of the form markup contains a http url and not a https url as expected i tested this with sef on and sef off setting this makes no difference up to now i found out that the mod login module creates the login form in mod login tmpl default php by calling the jroute method with the current url converted into a string as first argument and with the encrypt login form flag as third argument i located the jroute method in libraries joomla application route php and i see that this method returns the url passed as first argument without moification if the following condition is true is array url strpos url strpos url index php this is the case whenever you pass a complete url as string to this method as is the case with the login form created by the mod login module i also inspected the logout form in mod login tmpl default logout php where the jroute method is called too in that case the current url converted into a string is passed as first parameter too and the encrypt login form flag is passed as third argument too if that piece of code should lead to switching back 
from https used when showing frontend pages after logging in to http pages after the logout this will not work since the jroute method will return the https url unmodified due to the same reason as for the login case btw the encrypt login form label is misleading although the tooltip helps in understanding what this setting will do or better should do something like log in via ssl would be better here i think | 0 |
203,035 | 15,341,583,056 | IssuesEvent | 2021-02-27 12:38:23 | apache/shardingsphere-elasticjob | https://api.github.com/repos/apache/shardingsphere-elasticjob | closed | Flaky unit tests in RDBJobEventStorageTest(non-idempotent) | test | ## Bug Report
### Expected behavior
Tests `io.elasticjob.lite.event.rdb.JobEventRdbStorageTest.assertAddJobStatusTraceEventWhenFailoverWithTaskFailedState` and `io.elasticjob.lite.event.rdb.JobEventRdbStorageTest.assertAddJobStatusTraceEventWhenFailoverWithTaskStagingState` should be idempotent and pass when running twice in the same JVM
### Actual behavior
The two tests are not idempotent and fail if run twice in the same JVM
### Reason analyze (If you can)
Each of the tests pollutes some states shared among tests, causing the second test run to fail for assertions on the storage's `JobStatusTraceEvent`. For instance, the second run of `JobEventRdbStorageTest.assertAddJobStatusTraceEventWhenFailoverWithTaskFailedState` fails for the following assertion:
```
List<JobStatusTraceEvent> jobStatusTraceEvents = storage.getJobStatusTraceEvents("fake_failed_failover_task_id");
assertThat(jobStatusTraceEvents.size(), is(2));
```
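The pollution pattern can be illustrated with a small Python sketch (hypothetical names; ElasticJob's real storage is an RDB-backed table shared across runs): because the storage outlives a single test run, the second run sees the first run's events and the size assertion no longer holds.

```python
storage = []  # stands in for the shared, RDB-backed event table

def add_job_status_trace_event(task_id, state):
    storage.append((task_id, state))

def get_job_status_trace_events(task_id):
    return [e for e in storage if e[0] == task_id]

def run_failover_test_once():
    # Each run records two events for the task (e.g. staging, then failed).
    add_job_status_trace_event("fake_failed_failover_task_id", "TASK_STAGING")
    add_job_status_trace_event("fake_failed_failover_task_id", "TASK_FAILED")
    return len(get_job_status_trace_events("fake_failed_failover_task_id"))

first = run_failover_test_once()   # 2: the size-is-2 assertion passes
second = run_failover_test_once()  # 4: the same assertion now fails
```

Cleaning the table between runs (or asserting on the delta rather than the absolute count) would make the tests idempotent.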
### Steps to reproduce the behavior.
Run each of the aforementioned tests twice in the same JVM. The second test run would fail.
### Example codes for reproduce this issue (such as a github link).
| test | 1
176,032 | 13,624,262,541 | IssuesEvent | 2020-09-24 07:47:32 | WoWManiaUK/Redemption | https://api.github.com/repos/WoWManiaUK/Redemption | closed | Druid fly form + Costumes | Fix - Tester Confirmed | **What is Happening:**
If a druid player puts on a costume like murloc or pirate and then changes to fly form, it will not change shape to a bird but keeps the form of the costume and looks like it is flying/swimming in the air, as seen in the screenshot


**What Should happen:**
Druid Fly form should overwrite the costume and change the druid to the respective bird form
P.S. I cannot find proof of this from back in the day, but this started recently; it used to work normally before
| test | 1
228,914 | 7,569,435,203 | IssuesEvent | 2018-04-23 04:28:40 | zulip/zulip | https://api.github.com/repos/zulip/zulip | closed | Create separate booleans on Stream objects for private/invite-only streams and private history | area: stream settings enhancement help wanted priority: high | In Zulip, we have 2 concepts that are related but not quite the same:
* Stream.invite_only -- the current boolean which controls whether people can join a stream without an invitation
* Whether the message history of the stream is visible to subscribers from before they joined.
Historically, these were the same, since we wanted to support discussing whether to invite someone to a stream before adding them without potential embarrassing issues, but it would be good to give users the flexibility to do things differently for streams where they want to (we have several, such as the GCI members stream, in chat.zulip.org that would make sense to be invite-only with full history available to members).
There are a few things that this separation will require:
* creating the new boolean and populating it properly in a migration (I think we need to use the logic currently in `Stream.is_public()` for this, basically)
* Moving the `get_old_messages` code to actually make the old messages available based on the new boolean.
* Changing `access_message` (used for getting raw content, etc.) to
* Adding tests for the new behavior
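A minimal Python sketch of the split (illustrative names, not Zulip's actual model code): the new flag can be backfilled in the migration from the old semantics, under which invite-only also implied private history -- roughly the Stream.is_public() logic mentioned above.

```python
class Stream:
    def __init__(self, invite_only: bool):
        self.invite_only = invite_only
        # Backfill for the migration: under the old semantics, history was
        # visible to new subscribers exactly when the stream was public.
        self.history_public_to_subscribers = not invite_only

# Existing invite-only streams keep private history after the migration...
legacy_private = Stream(invite_only=True)
# ...but the two flags are now independent, so an admin could open history:
legacy_private.history_public_to_subscribers = True
```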
Ideas for what we should call the new field? | non_test | 0
134,533 | 10,918,791,106 | IssuesEvent | 2019-11-21 17:36:00 | pantsbuild/pants | https://api.github.com/repos/pantsbuild/pants | closed | InterpreterSelectionIntegrationTest.test_conflict_via_config is flaky | flaky-test | I believe this may be due to the fact that the output we're looking for is from pex writing directly to stderr, which may not happen synchronously, or might be buffered weirdly, or something else. It would be really neat if we could at least have the option in pex to never do global things like write to stderr -- this has led to issues in the past when building local python dists where pex would write unicode directly to stderr, which caused encoding errors (that particular failure mode no longer occurs after #7016, however).
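The failure mode -- asserting on captured stdout while the message actually goes to stderr (or is flushed late) -- can be reproduced with a small Python sketch (illustrative, not Pants' actual harness):

```python
import subprocess
import sys

# Child process writes its diagnostic to stderr, like pex does.
proc = subprocess.run(
    [sys.executable, "-c", "import sys; sys.stderr.write('no suitable interpreter\\n')"],
    capture_output=True,
    text=True,
)
# A stdout-only assertion misses the message entirely:
assert "no suitable interpreter" not in proc.stdout
assert "no suitable interpreter" in proc.stderr
```

If the runner merged both streams (or pex routed diagnostics through a logger the harness controls), the assertion on the combined output would be reliable.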
```
==================== FAILURES ====================
InterpreterSelectionIntegrationTest.test_conflict_via_config
self = <pants_test.backend.python.tasks.test_interpreter_selection_integration.InterpreterSelectionIntegrationTest testMethod=test_conflict_via_config>
def test_conflict_via_config(self):
# Tests that targets with compatibility conflict with targets with default compatibility.
# NB: Passes empty `args` to avoid having the default CLI args override the config.
config = {
'python-setup': {
'interpreter_constraints': ['CPython<2.7'],
}
}
binary_target = '{}:echo_interpreter_version'.format(self.testproject)
pants_run = self._build_pex(binary_target, config=config, args=[])
self.assert_failure(pants_run,
'Unexpected successful build of {binary}.'.format(binary=binary_target))
self.assertIn('Unable to detect a suitable interpreter for compatibilities',
> pants_run.stdout_data)
E AssertionError: 'Unable to detect a suitable interpreter for compatibilities' not found in '\n04:54:51 00:00 [main]\n (To run a reporting server: ./pants server)\n04:54:51 00:00 [setup]\n04:54:51 00:00 [parse]\n Executing tasks in goals: jvm-platform-validate -> bootstrap -> imports -> unpack-jars -> unpack-wheels -> deferred-sources -> native-compile -> link -> gen -> resolve -> resources -> compile -> pyprep -> binary\n04:54:52 00:01 [jvm-platform-validate]\n04:54:52 00:01 [jvm-platform-validate]\n04:54:52 00:01 [bootstrap]\n04:54:52 00:01 [substitute-aliased-targets]\n04:54:52 00:01 [jar-dependency-management]\n04:54:52 00:01 [bootstrap-jvm-tools]\n04:54:52 00:01 [provide-tools-jar]\n04:54:52 00:01 [imports]\n04:54:52 00:01 [ivy-imports]\n04:54:52 00:01 [unpack-jars]\n04:54:52 00:01 [unpack-jars]\n04:54:52 00:01 [unpack-wheels]\n04:54:52 00:01 [unpack-wheels]\n04:54:52 00:01 [deferred-sources]\n04:54:52 00:01 [deferred-sources]\n04:54:52 00:01 [native-compile]\n04:54:52 00:01 [conan-prep]\n Waiting for background workers to finish.\n04:54:53 00:02 [complete]\n FAILURE'
.pants.d/pyprep/sources/d1e75ac2a5c0c2761495309046e90015cadf905f/pants_test/backend/python/tasks/test_interpreter_selection_integration.py:47: AssertionError
``` | test | 1
356,090 | 10,588,732,167 | IssuesEvent | 2019-10-09 03:11:00 | GFDRR/geonode-afghanistan | https://api.github.com/repos/GFDRR/geonode-afghanistan | closed | Add sub-menu pages to the "About" section | Priority enhancement | We will need to expand on the "about" section to contain information specific to the Afghanistan project.
We will request the following support:
1. Add a "Project Information" sub-menu under the "About" section.
2. Group the "people", "groups" and "group categories" under a sub-heading titled "Users"
3. develop and host the page/s for "Project Information" using a template that has the look and feel of the AF geonode.
4. Support in adding text, images and videos to the page created. | non_test | 0
326,430 | 27,990,758,634 | IssuesEvent | 2023-03-27 03:19:47 | Sars9588/mywebclass-simulation | https://api.github.com/repos/Sars9588/mywebclass-simulation | closed | Displays Important Notice Header | Test | Name of Test Developer: MD
Test Name: Displays Important Notice Header
Test Type: Text
| test | 1
146,565 | 11,739,483,147 | IssuesEvent | 2020-03-11 17:47:32 | dhenry-KCI/FredCo-Post-Go-Live- | https://api.github.com/repos/dhenry-KCI/FredCo-Post-Go-Live- | closed | GIS - Township Code on Address Issue - Need Infor Assistance | Test Accepted | Creating a ticket for tracking...
Mary McCullough has discovered an issue where many of the Township codes on addresses in the system are not correct because there is no match of that address in GIS. Assistance from Infor has been requested to help query and bulk update these addresses.
Details regarding this can be found in the 2/21 - 2/24 email chain. Dan has requested approval prior to working on this item.
| test | 1
317,062 | 27,209,066,690 | IssuesEvent | 2023-02-20 15:11:15 | DMTF/libspdm | https://api.github.com/repos/DMTF/libspdm | closed | Add unit test to cover the unrecommended OpaqueDataLength in CHALLENGE_AUTH message. | test | 1. Refer to **paragraph 353** Table 36 — Successful CHALLENGE_AUTH response message format in
https://www.dmtf.org/sites/default/files/standards/documents/DSP0274_1.2.1.pdf
**OpaqueDataLength** Field:
Size of the OpaqueData field that follows in bytes. The value
should not be greater than 1024 bytes. Shall be 0 if no
OpaqueData is provided.
2. Lines 250-253 check that the OpaqueDataLength is not greater than 1024 bytes.
https://github.com/DMTF/libspdm/blob/2c5571377ae330263435ec925b5f18868c4170a3/library/spdm_requester_lib/libspdm_req_challenge.c#L249-L253
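A unit test for that boundary could assert both sides of the limit. Sketched in Python for brevity (libspdm itself is C, and the names below are illustrative, not the library's actual API):

```python
MAX_SPDM_OPAQUE_DATA_LENGTH = 1024  # DSP0274 1.2.1: "should not be greater than 1024 bytes"

def challenge_auth_opaque_length_ok(opaque_data_length: int) -> bool:
    # Mirrors the requester-side check: reject a CHALLENGE_AUTH response
    # whose OpaqueDataLength exceeds the recommended maximum.
    return 0 <= opaque_data_length <= MAX_SPDM_OPAQUE_DATA_LENGTH

assert challenge_auth_opaque_length_ok(0)        # no OpaqueData provided
assert challenge_auth_opaque_length_ok(1024)     # boundary value is accepted
assert not challenge_auth_opaque_length_ok(1025) # unrecommended length is rejected
```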
I think we should add unit test to cover the unrecommended OpaqueDataLength(**greater than 1024 bytes**) in CHALLENGE_AUTH message. | test | 1
300,489 | 25,971,918,452 | IssuesEvent | 2022-12-19 11:59:12 | iotaledger/explorer | https://api.github.com/repos/iotaledger/explorer | opened | [Task]: Improve usability on Statistics page (mobile) | type:enhancement network:testnet network:shimmer | ### Task description
Currently, if we click on a graph (on mobile) the tooltip shows and stays, with no easy way to turn it off.
* Improve usability of "hover tooltip" on Statistics page when on mobile.
### Requirements
N/A
### Acceptance criteria
N/A
### Creation checklist
- [X] I have assigned this task to the correct people
- [X] I have added the most appropriate labels
- [X] I have linked the correct milestone and/or project | test | 1
213,673 | 16,531,170,091 | IssuesEvent | 2021-05-27 06:15:40 | Azure/azure-sdk-for-java | https://api.github.com/repos/Azure/azure-sdk-for-java | closed | Opentelemetry Exporters Azuremonitor Readme issues | Client Docs Monitor - Exporter test-manual-pass | 1.
Section [Link](
https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/monitor/azure-monitor-opentelemetry-exporter#examples):

Suggestion:
Add hyperlink for `[samples]` https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/monitor/azure-monitor-opentelemetry-exporter/src/samples
@jongio for notification
| 1.0 | Opentelemetry Exporters Azuremonitor Readme issues - 1.
Section [Link](
https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/monitor/azure-monitor-opentelemetry-exporter#examples):

Suggestion:
Add hyperlink for `[samples]` https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/monitor/azure-monitor-opentelemetry-exporter/src/samples
@jongio for notification
| test | opentelemetry exporters azuremonitor readme issues section suggestion add hyperlink for jongio for notification | 1 |
287,020 | 24,802,923,680 | IssuesEvent | 2022-10-25 00:06:29 | WordPress/gutenberg | https://api.github.com/repos/WordPress/gutenberg | closed | [Flaky Test] should be able to open the categories panel and create a new main category if the user has the right capabilities | [Status] Stale [Type] Flaky Test | <!-- __META_DATA__:{"failedTimes":1,"totalCommits":1,"baseCommit":"129784128219b75d923fe39b17caac228ac138c6"} -->
**Flaky test detected. This is an auto-generated issue by GitHub Actions. Please do NOT edit this manually.**
## Test title
should be able to open the categories panel and create a new main category if the user has the right capabilities
## Test path
`specs/editor/various/taxonomies.test.js`
## Errors
<!-- __TEST_RESULTS_LIST__ -->
<!-- __TEST_RESULT__ --><time datetime="2021-10-09T07:59:57.467Z"><code>[2021-10-09T07:59:57.467Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/1323027376"><code>mobile/issue/3055-show-disabled-block-reason</code></a>.<!-- /__TEST_RESULT__ -->
<br/>
<!-- __TEST_RESULT__ --><time datetime="2022-05-10T15:53:44.090Z"><code>[2022-05-10T15:53:44.090Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/2301707185"><code>trunk</code></a>.<!-- /__TEST_RESULT__ -->
<br/>
<!-- __TEST_RESULT__ --><time datetime="2022-06-02T16:05:10.604Z"><code>[2022-06-02T16:05:10.604Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/2429308549"><code>edit-visually-edit-html</code></a>.<!-- /__TEST_RESULT__ -->
<br/>
<!-- __TEST_RESULT__ --><time datetime="2022-06-09T12:44:10.155Z"><code>[2022-06-09T12:44:10.155Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/2468263910"><code>refactor/slot-fill-negate</code></a>.<!-- /__TEST_RESULT__ -->
<br/>
<!-- __TEST_RESULT__ --><time datetime="2022-09-04T13:07:14.722Z"><code>[2022-09-04T13:07:14.722Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/2988115112"><code>refactor/components-popover-typescript</code></a>.<!-- /__TEST_RESULT__ -->
<br/>
<!-- __TEST_RESULT__ --><time datetime="2022-09-13T07:03:09.691Z"><code>[2022-09-13T07:03:09.691Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/3042855851"><code>update/patterns-inserter-design</code></a>.<!-- /__TEST_RESULT__ -->
<br/>
<!-- __TEST_RESULT__ --><details>
<summary>
<time datetime="2022-09-19T11:47:43.997Z"><code>[2022-09-19T11:47:43.997Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/3081986077"><code>update/navigation-classic</code></a>.
</summary>
```
● Taxonomies › should be able to open the categories panel and create a new main category if the user has the right capabilities
TimeoutError: waiting for selector `.editor-post-taxonomies__hierarchical-terms-list` failed: timeout 30000ms exceeded
73 | await openSidebarPanelWithTitle( 'Categories' );
74 |
> 75 | await page.waitForSelector(
| ^
76 | '.editor-post-taxonomies__hierarchical-terms-list'
77 | );
78 |
at new WaitTask (../../node_modules/puppeteer-core/src/common/DOMWorld.ts:813:28)
at DOMWorld.waitForSelectorInPage (../../node_modules/puppeteer-core/src/common/DOMWorld.ts:656:22)
at Object.internalHandler.waitFor (../../node_modules/puppeteer-core/src/common/QueryHandler.ts:78:19)
at DOMWorld.waitForSelector (../../node_modules/puppeteer-core/src/common/DOMWorld.ts:511:25)
at Frame.waitForSelector (../../node_modules/puppeteer-core/src/common/FrameManager.ts:1273:47)
at Page.waitForSelector (../../node_modules/puppeteer-core/src/common/Page.ts:3210:29)
at Object.<anonymous> (specs/editor/various/taxonomies.test.js:75:14)
at runMicrotasks (<anonymous>)
```
</details><!-- /__TEST_RESULT__ -->
<!-- /__TEST_RESULTS_LIST__ -->
| 1.0 | [Flaky Test] should be able to open the categories panel and create a new main category if the user has the right capabilities - <!-- __META_DATA__:{"failedTimes":1,"totalCommits":1,"baseCommit":"129784128219b75d923fe39b17caac228ac138c6"} -->
**Flaky test detected. This is an auto-generated issue by GitHub Actions. Please do NOT edit this manually.**
## Test title
should be able to open the categories panel and create a new main category if the user has the right capabilities
## Test path
`specs/editor/various/taxonomies.test.js`
## Errors
<!-- __TEST_RESULTS_LIST__ -->
<!-- __TEST_RESULT__ --><time datetime="2021-10-09T07:59:57.467Z"><code>[2021-10-09T07:59:57.467Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/1323027376"><code>mobile/issue/3055-show-disabled-block-reason</code></a>.<!-- /__TEST_RESULT__ -->
<br/>
<!-- __TEST_RESULT__ --><time datetime="2022-05-10T15:53:44.090Z"><code>[2022-05-10T15:53:44.090Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/2301707185"><code>trunk</code></a>.<!-- /__TEST_RESULT__ -->
<br/>
<!-- __TEST_RESULT__ --><time datetime="2022-06-02T16:05:10.604Z"><code>[2022-06-02T16:05:10.604Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/2429308549"><code>edit-visually-edit-html</code></a>.<!-- /__TEST_RESULT__ -->
<br/>
<!-- __TEST_RESULT__ --><time datetime="2022-06-09T12:44:10.155Z"><code>[2022-06-09T12:44:10.155Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/2468263910"><code>refactor/slot-fill-negate</code></a>.<!-- /__TEST_RESULT__ -->
<br/>
<!-- __TEST_RESULT__ --><time datetime="2022-09-04T13:07:14.722Z"><code>[2022-09-04T13:07:14.722Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/2988115112"><code>refactor/components-popover-typescript</code></a>.<!-- /__TEST_RESULT__ -->
<br/>
<!-- __TEST_RESULT__ --><time datetime="2022-09-13T07:03:09.691Z"><code>[2022-09-13T07:03:09.691Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/3042855851"><code>update/patterns-inserter-design</code></a>.<!-- /__TEST_RESULT__ -->
<br/>
<!-- __TEST_RESULT__ --><details>
<summary>
<time datetime="2022-09-19T11:47:43.997Z"><code>[2022-09-19T11:47:43.997Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/3081986077"><code>update/navigation-classic</code></a>.
</summary>
```
● Taxonomies › should be able to open the categories panel and create a new main category if the user has the right capabilities
TimeoutError: waiting for selector `.editor-post-taxonomies__hierarchical-terms-list` failed: timeout 30000ms exceeded
73 | await openSidebarPanelWithTitle( 'Categories' );
74 |
> 75 | await page.waitForSelector(
| ^
76 | '.editor-post-taxonomies__hierarchical-terms-list'
77 | );
78 |
at new WaitTask (../../node_modules/puppeteer-core/src/common/DOMWorld.ts:813:28)
at DOMWorld.waitForSelectorInPage (../../node_modules/puppeteer-core/src/common/DOMWorld.ts:656:22)
at Object.internalHandler.waitFor (../../node_modules/puppeteer-core/src/common/QueryHandler.ts:78:19)
at DOMWorld.waitForSelector (../../node_modules/puppeteer-core/src/common/DOMWorld.ts:511:25)
at Frame.waitForSelector (../../node_modules/puppeteer-core/src/common/FrameManager.ts:1273:47)
at Page.waitForSelector (../../node_modules/puppeteer-core/src/common/Page.ts:3210:29)
at Object.<anonymous> (specs/editor/various/taxonomies.test.js:75:14)
at runMicrotasks (<anonymous>)
```
</details><!-- /__TEST_RESULT__ -->
<!-- /__TEST_RESULTS_LIST__ -->
| test | should be able to open the categories panel and create a new main category if the user has the right capabilities flaky test detected this is an auto generated issue by github actions please do not edit this manually test title should be able to open the categories panel and create a new main category if the user has the right capabilities test path specs editor various taxonomies test js errors test passed after failed attempt on test passed after failed attempt on test passed after failed attempt on test passed after failed attempt on test passed after failed attempt on test passed after failed attempt on test passed after failed attempt on a href ● taxonomies › should be able to open the categories panel and create a new main category if the user has the right capabilities timeouterror waiting for selector editor post taxonomies hierarchical terms list failed timeout exceeded await opensidebarpanelwithtitle categories await page waitforselector editor post taxonomies hierarchical terms list at new waittask node modules puppeteer core src common domworld ts at domworld waitforselectorinpage node modules puppeteer core src common domworld ts at object internalhandler waitfor node modules puppeteer core src common queryhandler ts at domworld waitforselector node modules puppeteer core src common domworld ts at frame waitforselector node modules puppeteer core src common framemanager ts at page waitforselector node modules puppeteer core src common page ts at object specs editor various taxonomies test js at runmicrotasks | 1 |
110,103 | 9,430,724,173 | IssuesEvent | 2019-04-12 09:42:50 | NickBurneConsulting-GivePanel/givepanel | https://api.github.com/repos/NickBurneConsulting-GivePanel/givepanel | reopened | Birthday Filter throws error on save event | Priority - Level 1 Ready for test Resolved Locally bug | if you select 'Filter Results' on the Fundraisers Tab, then select it to show 'No Birthday Fundraisers'... then subsequently click to edit one of these filtered results, when you click to add a label and it autosaves, it also auto refreshes the list of fundraisers back to the unfiltered results. So you essentially lose the one you were editing amongst the sea of results. | 1.0 | Birthday Filter throws error on save event - if you select 'Filter Results' on the Fundraisers Tab, then select it to show 'No Birthday Fundraisers'... then subsequently click to edit one of these filtered results, when you click to add a label and it autosaves, it also auto refreshes the list of fundraisers back to the unfiltered results. So you essentially lose the one you were editing amongst the sea of results. | test | birthday filter throws error on save event if you select filter results on the fundraisers tab then select it to show no birthday fundraisers then subsequently click to edit one of these filtered results when you click to add a label and it autosaves it also auto refreshes the list of fundraisers back to the unfiltered results so you essentially lose the one you were editing amongst the sea of results | 1 |
229,029 | 18,277,681,400 | IssuesEvent | 2021-10-04 20:59:21 | flutter/flutter | https://api.github.com/repos/flutter/flutter | closed | gradle_jetifier_test failed compiling Java (flake??) | a: tests platform-android tool t: gradle team: flakes passed first triage P4 | I had this failure on an unrelated PR. Maybe some sort of download problem? I don't see anything obvious in the logs to cause this.
```
[gradle_jetifier_test] [STDOUT] stdout: [ +300 ms] > Task :firebase_auth:compileReleaseJavaWithJavac FAILED
[gradle_jetifier_test] [STDOUT] stderr: [ ] /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java:8: error: package android.support.annotation does not exist
[gradle_jetifier_test] [STDOUT] stderr: [ ] import android.support.annotation.NonNull;
[gradle_jetifier_test] [STDOUT] stderr: [ ] ^
[gradle_jetifier_test] [STDOUT] stderr: [ ] /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java:9: error: package android.support.annotation does not exist
[gradle_jetifier_test] [STDOUT] stderr: [ ] import android.support.annotation.Nullable;
[gradle_jetifier_test] [STDOUT] stderr: [ ] ^
[gradle_jetifier_test] [STDOUT] stderr: [ ] /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java:638: error: cannot find symbol
[gradle_jetifier_test] [STDOUT] stderr: [ ] private void reportException(Result result, @Nullable Exception exception) {
[gradle_jetifier_test] [STDOUT] stderr: [ ] ^
[gradle_jetifier_test] [STDOUT] stderr: [ ] symbol: class Nullable
[gradle_jetifier_test] [STDOUT] stderr: [ ] location: class FirebaseAuthPlugin
[gradle_jetifier_test] [STDOUT] stderr: [ ] /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java:550: error: cannot find symbol
[gradle_jetifier_test] [STDOUT] stderr: [ ] public void onComplete(@NonNull Task<AuthResult> task) {
[gradle_jetifier_test] [STDOUT] stderr: [ ] ^
[gradle_jetifier_test] [STDOUT] stderr: [ ] symbol: class NonNull
[gradle_jetifier_test] [STDOUT] stderr: [ ] location: class FirebaseAuthPlugin.SignInCompleteListener
[gradle_jetifier_test] [STDOUT] stderr: [ ] /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java:569: error: cannot find symbol
[gradle_jetifier_test] [STDOUT] stderr: [ ] public void onComplete(@NonNull Task<Void> task) {
[gradle_jetifier_test] [STDOUT] stderr: [ ] ^
[gradle_jetifier_test] [STDOUT] stderr: [ ] symbol: class NonNull
[gradle_jetifier_test] [STDOUT] stderr: [ ] location: class FirebaseAuthPlugin.TaskVoidCompleteListener
[gradle_jetifier_test] [STDOUT] stderr: [ ] /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java:587: error: cannot find symbol
[gradle_jetifier_test] [STDOUT] stderr: [ ] public void onComplete(@NonNull Task<SignInMethodQueryResult> task) {
[gradle_jetifier_test] [STDOUT] stderr: [ ] ^
[gradle_jetifier_test] [STDOUT] stderr: [ ] symbol: class NonNull
[gradle_jetifier_test] [STDOUT] stderr: [ ] location: class FirebaseAuthPlugin.GetSignInMethodsCompleteListener
[gradle_jetifier_test] [STDOUT] stderr: [ ] /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java:186: error: cannot find symbol
[gradle_jetifier_test] [STDOUT] stderr: [ ] public void onComplete(@NonNull Task<AuthResult> task) {
[gradle_jetifier_test] [STDOUT] stderr: [ ] ^
[gradle_jetifier_test] [STDOUT] stderr: [ ] symbol: class NonNull
[gradle_jetifier_test] [STDOUT] stderr: [ ] /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java:445: error: cannot find symbol
[gradle_jetifier_test] [STDOUT] stderr: [ ] public void onComplete(@NonNull Task<GetTokenResult> task) {
[gradle_jetifier_test] [STDOUT] stderr: [ ] ^
[gradle_jetifier_test] [STDOUT] stderr: [ ] symbol: class NonNull
[gradle_jetifier_test] [STDOUT] stderr: [ ] /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java:499: error: cannot find symbol
[gradle_jetifier_test] [STDOUT] stderr: [ ] public void onAuthStateChanged(@NonNull FirebaseAuth firebaseAuth) {
[gradle_jetifier_test] [STDOUT] stderr: [ ] ^
[gradle_jetifier_test] [STDOUT] stderr: [ ] symbol: class NonNull
[gradle_jetifier_test] [STDOUT] stderr: [ ] Note: /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java uses unchecked or unsafe operations.
[gradle_jetifier_test] [STDOUT] stderr: [ ] Note: Recompile with -Xlint:unchecked for details.
[gradle_jetifier_test] [STDOUT] stderr: [ ] 9 errors
[gradle_jetifier_test] [STDOUT] stderr: [ +92 ms] FAILURE: Build failed with an exception.
[gradle_jetifier_test] [STDOUT] stderr: [ ] * What went wrong:
[gradle_jetifier_test] [STDOUT] stderr: [ ] Execution failed for task ':firebase_auth:compileReleaseJavaWithJavac'.
[gradle_jetifier_test] [STDOUT] stderr: [ ] > Compilation failed; see the compiler error output for details.
[gradle_jetifier_test] [STDOUT] stderr: [ ] * Try:
[gradle_jetifier_test] [STDOUT] stderr: [ ] Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
[gradle_jetifier_test] [STDOUT] stderr: [ ] * Get more help at https://help.gradle.org
[gradle_jetifier_test] [STDOUT] stderr: [ ] BUILD FAILED in 1m 22s
[gradle_jetifier_test] [STDOUT] stdout: [ ] Deprecated Gradle features were used in this build, making it incompatible with Gradle 6.0.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Use '--warning-mode all' to show the individual deprecation warnings.
[gradle_jetifier_test] [STDOUT] stdout: [ ] See https://docs.gradle.org/5.6.2/userguide/command_line_interface.html#sec:command_line_warnings
[gradle_jetifier_test] [STDOUT] stdout: [ ] 40 actionable tasks: 39 executed, 1 up-to-date
[gradle_jetifier_test] [STDOUT] stdout: [ +512 ms] Running Gradle task 'assembleRelease'... (completed in 82.7s)
[gradle_jetifier_test] [STDOUT] stdout: [ +1 ms] The built failed likely due to AndroidX incompatibilities in a plugin. The tool is about to try using Jetfier to solve the incompatibility.
[gradle_jetifier_test] [STDOUT] stdout: [ +3 ms] ✏️ Creating `android/settings_aar.gradle`...
[gradle_jetifier_test] [STDOUT] stdout: [ +1 ms] ✏️ Creating `android/settings_aar.gradle`... (completed in 0ms)
[gradle_jetifier_test] [STDOUT] stdout: [ ] [!] Flutter tried to create the file `android/settings_aar.gradle`, but failed.
[gradle_jetifier_test] [STDOUT] stdout: [ ] To manually update `settings.gradle`, follow these steps:
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: 1. Copy `settings.gradle` as `settings_aar.gradle`
[gradle_jetifier_test] [STDOUT] stdout: 2. Remove the following code from `settings_aar.gradle`:
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: def localPropertiesFile = new File(rootProject.projectDir, "local.properties")
[gradle_jetifier_test] [STDOUT] stdout: def properties = new Properties()
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: assert localPropertiesFile.exists()
[gradle_jetifier_test] [STDOUT] stdout: localPropertiesFile.withReader("UTF-8") { reader -> properties.load(reader) }
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: def flutterSdkPath = properties.getProperty("flutter.sdk")
[gradle_jetifier_test] [STDOUT] stdout: assert flutterSdkPath != null, "flutter.sdk not set in local.properties"
[gradle_jetifier_test] [STDOUT] stdout: apply from: "$flutterSdkPath/packages/flutter_tools/gradle/app_plugin_loader.gradle"
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: [ +6 ms] "flutter apk" took 83,150ms.
[gradle_jetifier_test] [STDOUT] stderr: Please create the file and run this command again.
[gradle_jetifier_test] [STDOUT] stderr:
[gradle_jetifier_test] [STDOUT] stderr: #0 throwToolExit (package:flutter_tools/src/base/common.dart:14:3)
[gradle_jetifier_test] [STDOUT] stderr: #1 createSettingsAarGradle (package:flutter_tools/src/android/gradle.dart:197:5)
[gradle_jetifier_test] [STDOUT] stderr: #2 buildGradleApp (package:flutter_tools/src/android/gradle.dart:258:5)
[gradle_jetifier_test] [STDOUT] stderr: #3 buildGradleApp (package:flutter_tools/src/android/gradle.dart:439:19)
[gradle_jetifier_test] [STDOUT] stderr: #4 _rootRunUnary (dart:async/zone.dart:1198:47)
[gradle_jetifier_test] [STDOUT] stderr: #5 _CustomZone.runUnary (dart:async/zone.dart:1100:19)
[gradle_jetifier_test] [STDOUT] stderr: #6 _FutureListener.handleValue (dart:async/future_impl.dart:143:18)
[gradle_jetifier_test] [STDOUT] stderr: #7 Future._propagateToListeners.handleValueCallback (dart:async/future_impl.dart:696:45)
[gradle_jetifier_test] [STDOUT] stderr: #8 Future._propagateToListeners (dart:async/future_impl.dart:725:32)
[gradle_jetifier_test] [STDOUT] stderr: #9 Future._completeWithValue (dart:async/future_impl.dart:529:5)
[gradle_jetifier_test] [STDOUT] stderr: #10 Future._asyncCompleteWithValue.<anonymous closure> (dart:async/future_impl.dart:567:7)
[gradle_jetifier_test] [STDOUT] stderr: #11 _rootRun (dart:async/zone.dart:1190:13)
[gradle_jetifier_test] [STDOUT] stderr: #12 _CustomZone.run (dart:async/zone.dart:1093:19)
[gradle_jetifier_test] [STDOUT] stderr: #13 _CustomZone.runGuarded (dart:async/zone.dart:997:7)
[gradle_jetifier_test] [STDOUT] stderr: #14 _CustomZone.bindCallbackGuarded.<anonymous closure> (dart:async/zone.dart:1037:23)
[gradle_jetifier_test] [STDOUT] stderr: #15 _microtaskLoop (dart:async/schedule_microtask.dart:41:21)
[gradle_jetifier_test] [STDOUT] stderr: #16 _startMicrotaskLoop (dart:async/schedule_microtask.dart:50:5)
[gradle_jetifier_test] [STDOUT] stderr: #17 _runPendingImmediateCallback (dart:isolate-patch/isolate_patch.dart:118:13)
[gradle_jetifier_test] [STDOUT] stderr: #18 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:169:5)
```
<details>
<summary>Full logs</summary>
```
════════════════╡ ••• Running task "gradle_jetifier_test" ••• ╞═════════════════
Executing: /tmp/flutter sdk/bin/cache/dart-sdk/bin/dart --enable-vm-service=0 --no-pause-isolates-on-exit bin/tasks/gradle_jetifier_test.dart in /tmp/flutter sdk/dev/devicelab
[gradle_jetifier_test] [STDOUT] Observatory listening on http://127.0.0.1:38849/EBAgEz-TqAc=/
[gradle_jetifier_test] [STDOUT] Running task.
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT] ══════════════════╡ ••• Checking running Dart processes ••• ╞═══════════════════
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT] Cannot list processes on this system: `ps` not available.
[gradle_jetifier_test] [STDOUT] enabling configs for macOS, Linux, Windows, and Web...
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT] Executing: /tmp/flutter sdk/bin/flutter config --enable-macos-desktop --enable-windows-desktop --enable-linux-desktop --enable-web in /tmp/flutter sdk/dev/devicelab
[gradle_jetifier_test] [STDOUT] stdout: Setting "enable-web" value to "true".
[gradle_jetifier_test] [STDOUT] stdout: Setting "enable-linux-desktop" value to "true".
[gradle_jetifier_test] [STDOUT] stdout: Setting "enable-macos-desktop" value to "true".
[gradle_jetifier_test] [STDOUT] stdout: Setting "enable-windows-desktop" value to "true".
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: You may need to restart any open editors for them to read new settings.
[gradle_jetifier_test] [STDOUT] "/tmp/flutter sdk/bin/flutter" exit code: 0
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT] ═════════════════════════════╡ ••• Find Java ••• ╞══════════════════════════════
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT] Executing: /tmp/flutter sdk/bin/flutter doctor -v in /tmp/flutter sdk/dev/devicelab
[gradle_jetifier_test] [STDOUT] stdout: [✓] Flutter (Channel unknown, 1.19.0-2.0.pre.70, on Linux, locale en_US.UTF-8)
[gradle_jetifier_test] [STDOUT] stdout: • Flutter version 1.19.0-2.0.pre.70 at /tmp/flutter sdk
[gradle_jetifier_test] [STDOUT] stdout: • Framework revision d26585d66b (23 minutes ago), 2020-05-14 17:39:44 -0700
[gradle_jetifier_test] [STDOUT] stdout: • Engine revision 47513a70eb
[gradle_jetifier_test] [STDOUT] stdout: • Dart version 2.9.0 (build 2.9.0-9.0.dev 2676764792)
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: [✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
[gradle_jetifier_test] [STDOUT] stdout: • Android SDK at /opt/android_sdk
[gradle_jetifier_test] [STDOUT] stdout: • Platform android-28, build-tools 28.0.3
[gradle_jetifier_test] [STDOUT] stdout: • ANDROID_HOME = /opt/android_sdk
[gradle_jetifier_test] [STDOUT] stdout: • ANDROID_SDK_ROOT = /opt/android_sdk
[gradle_jetifier_test] [STDOUT] stdout: • Java binary at: /usr/bin/java
[gradle_jetifier_test] [STDOUT] stdout: • Java version OpenJDK Runtime Environment (build 1.8.0_242-8u242-b08-1~deb9u1-b08)
[gradle_jetifier_test] [STDOUT] stdout: • All Android licenses accepted.
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: [✓] Chrome - develop for the web
[gradle_jetifier_test] [STDOUT] stdout: • Chrome at google-chrome
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: [✗] Linux toolchain - develop for Linux desktop
[gradle_jetifier_test] [STDOUT] stdout: ✗ clang++ is not installed
[gradle_jetifier_test] [STDOUT] stdout: • GNU Make 4.1
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: [!] Android Studio (not installed)
[gradle_jetifier_test] [STDOUT] stdout: • Android Studio not found; download from https://developer.android.com/studio/index.html
[gradle_jetifier_test] [STDOUT] stdout: (or visit https://flutter.dev/docs/get-started/install/linux#android-setup for detailed instructions).
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: [✓] Connected device (3 available)
[gradle_jetifier_test] [STDOUT] stdout: • Linux • Linux • linux-x64 • Linux
[gradle_jetifier_test] [STDOUT] stdout: • Web Server • web-server • web-javascript • Flutter Tools
[gradle_jetifier_test] [STDOUT] stdout: • Chrome • chrome • web-javascript • Google Chrome 80.0.3987.149
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: ! Doctor found issues in 2 categories.
[gradle_jetifier_test] [STDOUT] "/tmp/flutter sdk/bin/flutter" exit code: 0
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT] Using JAVA_HOME=/usr
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT] ════════════════╡ ••• Create Flutter AndroidX app project ••• ╞═════════════════
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT] Executing: /tmp/flutter sdk/bin/flutter create --org io.flutter.devicelab hello in /tmp/flutter_module_test.PUXEVH
[gradle_jetifier_test] [STDOUT] stdout: Creating project hello...
[gradle_jetifier_test] [STDOUT] stdout: hello/linux/.gitignore (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/linux/main.cc (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/linux/flutter/.template_version (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/linux/window_configuration.cc (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/linux/app_configuration.mk (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/linux/window_configuration.h (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/linux/Makefile (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/android/app/build.gradle (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/android/app/src/main/kotlin/io/flutter/devicelab/hello/MainActivity.kt (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/android/build.gradle (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/android/hello_android.iml (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/lib/main.dart (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/windows/.gitignore (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/windows/Runner.vcxproj.filters (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/windows/Runner.sln (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/windows/flutter/.template_version (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/windows/AppConfiguration.props (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/windows/FlutterBuild.vcxproj (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/windows/runner/win32_window.h (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/windows/runner/run_loop.cpp (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/windows/runner/runner.exe.manifest (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/windows/runner/Runner.rc (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/windows/runner/win32_window.cpp (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/windows/runner/utils.h (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/windows/runner/flutter_window.h (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/windows/runner/window_configuration.cpp (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/windows/runner/window_configuration.h (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/windows/runner/flutter_window.cpp (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/windows/runner/utils.cpp (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/windows/runner/main.cpp (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/windows/runner/run_loop.h (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/windows/runner/resource.h (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/windows/runner/resources/app_icon.ico (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/windows/Runner.vcxproj (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/windows/scripts/prepare_dependencies.bat (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/windows/scripts/bundle_assets_and_deps.bat (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/test/widget_test.dart (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/.metadata (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/android/.gitignore (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/android/app/src/profile/AndroidManifest.xml (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/android/app/src/main/AndroidManifest.xml (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/android/app/src/main/res/mipmap-mdpi/ic_launcher.png (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/android/app/src/main/res/mipmap-hdpi/ic_launcher.png (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/android/app/src/main/res/mipmap-xxhdpi/ic_launcher.png (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/android/app/src/main/res/mipmap-xxxhdpi/ic_launcher.png (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/android/app/src/main/res/values/styles.xml (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/android/app/src/main/res/drawable/launch_background.xml (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/android/app/src/main/res/mipmap-xhdpi/ic_launcher.png (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/android/app/src/debug/AndroidManifest.xml (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/android/settings.gradle (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/android/gradle.properties (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/android/gradle/wrapper/gradle-wrapper.properties (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/macos/.gitignore (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/macos/Runner.xcworkspace/contents.xcworkspacedata (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/macos/Runner.xcworkspace/xcshareddata/IDEWorkspaceChecks.plist (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/macos/Runner/Info.plist (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/macos/Runner/Release.entitlements (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/macos/Runner/MainFlutterWindow.swift (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/macos/Runner/Assets.xcassets/AppIcon.appiconset/app_icon_512.png (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/macos/Runner/Assets.xcassets/AppIcon.appiconset/app_icon_16.png (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/macos/Runner/Assets.xcassets/AppIcon.appiconset/app_icon_128.png (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/macos/Runner/Assets.xcassets/AppIcon.appiconset/app_icon_1024.png (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/macos/Runner/Assets.xcassets/AppIcon.appiconset/Contents.json (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/macos/Runner/Assets.xcassets/AppIcon.appiconset/app_icon_32.png (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/macos/Runner/Assets.xcassets/AppIcon.appiconset/app_icon_64.png (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/macos/Runner/Assets.xcassets/AppIcon.appiconset/app_icon_256.png (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/macos/Runner/DebugProfile.entitlements (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/macos/Runner/Configs/Release.xcconfig (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/macos/Runner/Configs/Debug.xcconfig (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/macos/Runner/Configs/AppInfo.xcconfig (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/macos/Runner/Configs/Warnings.xcconfig (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/macos/Runner/Base.lproj/MainMenu.xib (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/macos/Runner/AppDelegate.swift (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/macos/Runner.xcodeproj/project.xcworkspace/xcshareddata/IDEWorkspaceChecks.plist (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/macos/Runner.xcodeproj/project.pbxproj (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/macos/Runner.xcodeproj/xcshareddata/xcschemes/Runner.xcscheme (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/macos/Flutter/Flutter-Debug.xcconfig (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/macos/Flutter/Flutter-Release.xcconfig (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/hello.iml (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/.idea/runConfigurations/main_dart.xml (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/.idea/libraries/KotlinJavaRuntime.xml (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/.idea/libraries/Dart_SDK.xml (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/.idea/modules.xml (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/.idea/workspace.xml (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Runner-Bridging-Header.h (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/AppDelegate.swift (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner.xcodeproj/project.pbxproj (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner.xcodeproj/xcshareddata/xcschemes/Runner.xcscheme (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/README.md (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/web/icons/Icon-512.png (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/web/icons/Icon-192.png (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/web/favicon.png (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/web/manifest.json (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/web/index.html (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/.gitignore (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner.xcworkspace/contents.xcworkspacedata (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner.xcworkspace/xcshareddata/IDEWorkspaceChecks.plist (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner.xcworkspace/xcshareddata/WorkspaceSettings.xcsettings (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Info.plist (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/Contents.json (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/LaunchImage.imageset/LaunchImage.png (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/LaunchImage.imageset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/LaunchImage.imageset/README.md (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/LaunchImage.imageset/Contents.json (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/LaunchImage.imageset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Base.lproj/Main.storyboard (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Base.lproj/LaunchScreen.storyboard (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner.xcodeproj/project.xcworkspace/contents.xcworkspacedata (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner.xcodeproj/project.xcworkspace/xcshareddata/IDEWorkspaceChecks.plist (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner.xcodeproj/project.xcworkspace/xcshareddata/WorkspaceSettings.xcsettings (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Flutter/Release.xcconfig (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Flutter/Debug.xcconfig (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Flutter/AppFrameworkInfo.plist (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/.gitignore (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/pubspec.yaml (created)
[gradle_jetifier_test] [STDOUT] stdout: Running "flutter pub get" in hello... 0.8s
[gradle_jetifier_test] [STDOUT] stdout: Wrote 133 files.
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: All done!
[gradle_jetifier_test] [STDOUT] stdout: [✓] Flutter: is fully installed. (Channel unknown, 1.19.0-2.0.pre.70, on Linux, locale en_US.UTF-8)
[gradle_jetifier_test] [STDOUT] stdout: [✓] Android toolchain - develop for Android devices: is fully installed. (Android SDK version 28.0.3)
[gradle_jetifier_test] [STDOUT] stdout: [✓] Chrome - develop for the web: is fully installed.
[gradle_jetifier_test] [STDOUT] stdout: [✗] Linux toolchain - develop for Linux desktop: is not installed.
[gradle_jetifier_test] [STDOUT] stdout: [!] Android Studio: is not available. (not installed)
[gradle_jetifier_test] [STDOUT] stdout: [✓] Connected device: is fully installed. (3 available)
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: Run "flutter doctor" for information about installing additional components.
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: In order to run your application, type:
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: $ cd hello
[gradle_jetifier_test] [STDOUT] stdout: $ flutter run
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: Your application code is in hello/lib/main.dart.
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: WARNING: The Linux tooling and APIs are not yet stable. You will likely need to re-create the "linux" directory after future Flutter updates.
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: WARNING: The Windows tooling and APIs are not yet stable. You will likely need to re-create the "windows" directory after future Flutter updates.
[gradle_jetifier_test] [STDOUT] "/tmp/flutter sdk/bin/flutter" exit code: 0
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT] ═══════════════╡ ••• Add plugin that uses support libraries ••• ╞═══════════════
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT] Executing: /tmp/flutter sdk/bin/flutter packages get in /tmp/flutter_module_test.PUXEVH/hello
[gradle_jetifier_test] [STDOUT] stdout: Running "flutter pub get" in hello... 0.8s
[gradle_jetifier_test] [STDOUT] "/tmp/flutter sdk/bin/flutter" exit code: 0
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT] ═══════════════════════╡ ••• Update proguard rules ••• ╞════════════════════════
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT] ═════════════════════════╡ ••• Build release APK ••• ╞══════════════════════════
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT] Executing: /tmp/flutter sdk/bin/flutter build apk --target-platform android-arm --no-shrink --verbose in /tmp/flutter_module_test.PUXEVH/hello
[gradle_jetifier_test] [STDOUT] stdout: [ +13 ms] executing: [/tmp/flutter sdk/] git -c log.showSignature=false log -n 1 --pretty=format:%H
[gradle_jetifier_test] [STDOUT] stdout: [ +35 ms] Exit code 0 from: git -c log.showSignature=false log -n 1 --pretty=format:%H
[gradle_jetifier_test] [STDOUT] stdout: [ ] d26585d66b98af25147d3420a88408556dcef706
[gradle_jetifier_test] [STDOUT] stdout: [ ] executing: [/tmp/flutter sdk/] git tag --contains HEAD
[gradle_jetifier_test] [STDOUT] stdout: [ +199 ms] Exit code 0 from: git tag --contains HEAD
[gradle_jetifier_test] [STDOUT] stdout: [ +2 ms] executing: [/tmp/flutter sdk/] git describe --match *.*.*-*.*.pre --first-parent --long --tags
[gradle_jetifier_test] [STDOUT] stdout: [ +156 ms] Exit code 0 from: git describe --match *.*.*-*.*.pre --first-parent --long --tags
[gradle_jetifier_test] [STDOUT] stdout: [ ] 1.19.0-1.0.pre-70-gd26585d66
[gradle_jetifier_test] [STDOUT] stdout: [ +13 ms] executing: [/tmp/flutter sdk/] git rev-parse --abbrev-ref --symbolic @{u}
[gradle_jetifier_test] [STDOUT] stdout: [ +4 ms] Exit code 128 from: git rev-parse --abbrev-ref --symbolic @{u}
[gradle_jetifier_test] [STDOUT] stdout: [ ] fatal: HEAD does not point to a branch
[gradle_jetifier_test] [STDOUT] stdout: [ +28 ms] executing: [/tmp/flutter sdk/] git rev-parse --abbrev-ref HEAD
[gradle_jetifier_test] [STDOUT] stdout: [ +4 ms] Exit code 0 from: git rev-parse --abbrev-ref HEAD
[gradle_jetifier_test] [STDOUT] stdout: [ ] HEAD
[gradle_jetifier_test] [STDOUT] stdout: [ +33 ms] Artifact Instance of 'AndroidMavenArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ +2 ms] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'LinuxEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ +8 ms] Artifact Instance of 'MaterialFonts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'GradleWrapper' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'AndroidMavenArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ +3 ms] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'FlutterSdk' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'LinuxEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'FontSubsetArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ +92 ms] Found plugin firebase_auth at /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/
[gradle_jetifier_test] [STDOUT] stdout: [ +5 ms] Found plugin firebase_core at /root/.pub-cache/hosted/pub.dartlang.org/firebase_core-0.2.5+1/
[gradle_jetifier_test] [STDOUT] stdout: [ +70 ms] Found plugin firebase_auth at /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/
[gradle_jetifier_test] [STDOUT] stdout: [ +1 ms] Found plugin firebase_core at /root/.pub-cache/hosted/pub.dartlang.org/firebase_core-0.2.5+1/
[gradle_jetifier_test] [STDOUT] stdout: [ +72 ms] Generating /tmp/flutter_module_test.PUXEVH/hello/android/app/src/main/java/io/flutter/plugins/GeneratedPluginRegistrant.java
[gradle_jetifier_test] [STDOUT] stdout: [ +133 ms] Running Gradle task 'assembleRelease'...
[gradle_jetifier_test] [STDOUT] stdout: [ +2 ms] gradle.properties already sets `android.enableR8`
[gradle_jetifier_test] [STDOUT] stdout: [ +3 ms] Using gradle from /tmp/flutter_module_test.PUXEVH/hello/android/gradlew.
[gradle_jetifier_test] [STDOUT] stdout: [ ] /tmp/flutter_module_test.PUXEVH/hello/android/gradlew mode: 33261 rwxr-xr-x.
[gradle_jetifier_test] [STDOUT] stdout: [ +6 ms] executing: [/tmp/flutter_module_test.PUXEVH/hello/android/] /tmp/flutter_module_test.PUXEVH/hello/android/gradlew -Pverbose=true -Ptarget-platform=android-arm -Ptarget=lib/main.dart -Ptrack-widget-creation=true -Ptree-shake-icons=true assembleRelease
[gradle_jetifier_test] [STDOUT] stdout: [ +716 ms] Starting a Gradle Daemon, 4 stopped Daemons could not be reused, use --status for details
[gradle_jetifier_test] [STDOUT] stdout: [+28480 ms] Checking the license for package Android SDK Platform 27 in /opt/android_sdk/licenses
[gradle_jetifier_test] [STDOUT] stdout: [ ] License for package Android SDK Platform 27 accepted.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Preparing "Install Android SDK Platform 27 (revision: 3)".
[gradle_jetifier_test] [STDOUT] stdout: [+10798 ms] "Install Android SDK Platform 27 (revision: 3)" ready.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Installing Android SDK Platform 27 in /opt/android_sdk/platforms/android-27
[gradle_jetifier_test] [STDOUT] stdout: [ ] "Install Android SDK Platform 27 (revision: 3)" complete.
[gradle_jetifier_test] [STDOUT] stdout: [ +199 ms] "Install Android SDK Platform 27 (revision: 3)" finished.
[gradle_jetifier_test] [STDOUT] stdout: [+9701 ms] > Task :app:compileFlutterBuildRelease
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +10 ms] executing: [/tmp/flutter sdk/] git -c log.showSignature=false log -n 1 --pretty=format:%H
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +37 ms] Exit code 0 from: git -c log.showSignature=false log -n 1 --pretty=format:%H
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] d26585d66b98af25147d3420a88408556dcef706
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] executing: [/tmp/flutter sdk/] git tag --contains HEAD
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +212 ms] Exit code 0 from: git tag --contains HEAD
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +4 ms] executing: [/tmp/flutter sdk/] git describe --match *.*.*-*.*.pre --first-parent --long --tags
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +194 ms] Exit code 0 from: git describe --match *.*.*-*.*.pre --first-parent --long --tags
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] 1.19.0-1.0.pre-70-gd26585d66
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +14 ms] executing: [/tmp/flutter sdk/] git rev-parse --abbrev-ref --symbolic @{u}
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +7 ms] Exit code 128 from: git rev-parse --abbrev-ref --symbolic @{u}
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] fatal: HEAD does not point to a branch
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +35 ms] executing: [/tmp/flutter sdk/] git rev-parse --abbrev-ref HEAD
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +5 ms] Exit code 0 from: git rev-parse --abbrev-ref HEAD
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] HEAD
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +38 ms] Artifact Instance of 'AndroidMavenArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +2 ms] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'LinuxEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +9 ms] Artifact Instance of 'MaterialFonts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'GradleWrapper' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'AndroidMavenArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'FlutterSdk' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'LinuxEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'FontSubsetArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +89 ms] Initializing file store
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +23 ms] kernel_snapshot: Starting due to {}
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +38 ms] /tmp/flutter sdk/bin/cache/dart-sdk/bin/dart /tmp/flutter sdk/bin/cache/artifacts/engine/linux-x64/frontend_server.dart.snapshot --sdk-root /tmp/flutter sdk/bin/cache/artifacts/engine/common/flutter_patched_sdk_product/ --target=flutter -Ddart.developer.causal_async_stacks=false -Ddart.vm.profile=false -Ddart.vm.product=true --bytecode-options=source-positions --aot --tfa --packages /tmp/flutter_module_test.PUXEVH/hello/.packages --output-dill /tmp/flutter_module_test.PUXEVH/hello/.dart_tool/flutter_build/f0c4dd2b86347dd9aef65e5404293900/app.dill --depfile /tmp/flutter_module_test.PUXEVH/hello/.dart_tool/flutter_build/f0c4dd2b86347dd9aef65e5404293900/kernel_snapshot.d package:hello/main.dart
[gradle_jetifier_test] [STDOUT] stdout: [+16085 ms] [+16879 ms] kernel_snapshot: Complete
[gradle_jetifier_test] [STDOUT] stdout: [ +699 ms] [ +629 ms] aot_android_asset_bundle: Starting due to {}
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +10 ms] android_aot_release_android-arm: Starting due to {InvalidatedReason.inputChanged}
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +7 ms] executing: /tmp/flutter sdk/bin/cache/artifacts/engine/android-arm-release/linux-x64/gen_snapshot --deterministic --snapshot_kind=app-aot-elf --elf=/tmp/flutter_module_test.PUXEVH/hello/.dart_tool/flutter_build/f0c4dd2b86347dd9aef65e5404293900/armeabi-v7a/app.so --strip --no-sim-use-hardfp --no-use-integer-division --no-causal-async-stacks --lazy-async-stacks /tmp/flutter_module_test.PUXEVH/hello/.dart_tool/flutter_build/f0c4dd2b86347dd9aef65e5404293900/app.dill
[gradle_jetifier_test] [STDOUT] stdout: [ +99 ms] [ +133 ms] Running command: /tmp/flutter sdk/bin/cache/dart-sdk/bin/dart /tmp/flutter sdk/bin/cache/artifacts/engine/linux-x64/const_finder.dart.snapshot --kernel-file /tmp/flutter_module_test.PUXEVH/hello/.dart_tool/flutter_build/f0c4dd2b86347dd9aef65e5404293900/app.dill --class-library-uri package:flutter/src/widgets/icon_data.dart --class-name IconData
[gradle_jetifier_test] [STDOUT] stdout: [+1100 ms] [+1037 ms] Running font-subset: /tmp/flutter sdk/bin/cache/artifacts/engine/linux-x64/font-subset /tmp/flutter_module_test.PUXEVH/hello/build/app/intermediates/flutter/release/flutter_assets/fonts/MaterialIcons-Regular.ttf /tmp/flutter sdk/bin/cache/artifacts/material_fonts/MaterialIcons-Regular.ttf, using codepoints 59574 58834 58820 58848 58829 57669
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +14 ms] aot_android_asset_bundle: Complete
[gradle_jetifier_test] [STDOUT] stdout: [+6299 ms] [+6304 ms] android_aot_release_android-arm: Complete
[gradle_jetifier_test] [STDOUT] stdout: [ +99 ms] [ +84 ms] android_aot_bundle_release_android-arm: Starting due to {InvalidatedReason.inputChanged}
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +5 ms] android_aot_bundle_release_android-arm: Complete
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +49 ms] Persisting file store
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +8 ms] Done persisting file store
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +4 ms] build succeeded.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +12 ms] "flutter assemble" took 25,359ms.
[gradle_jetifier_test] [STDOUT] stdout: [ +498 ms] > Task :app:packLibsflutterBuildRelease
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :app:preBuild UP-TO-DATE
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :app:preReleaseBuild UP-TO-DATE
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_auth:preBuild UP-TO-DATE
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_auth:preReleaseBuild UP-TO-DATE
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_core:preBuild UP-TO-DATE
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_core:preReleaseBuild UP-TO-DATE
[gradle_jetifier_test] [STDOUT] stdout: [ +198 ms] > Task :firebase_auth:packageReleaseRenderscript NO-SOURCE
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_core:packageReleaseRenderscript NO-SOURCE
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :app:compileReleaseRenderscript NO-SOURCE
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :app:checkReleaseManifest
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :app:generateReleaseBuildConfig
[gradle_jetifier_test] [STDOUT] stdout: [ +299 ms] > Task :app:cleanMergeReleaseAssets UP-TO-DATE
[gradle_jetifier_test] [STDOUT] stdout: [ +99 ms] > Task :firebase_core:compileReleaseAidl NO-SOURCE
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_auth:compileReleaseAidl NO-SOURCE
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :app:compileReleaseAidl NO-SOURCE
[gradle_jetifier_test] [STDOUT] stdout: [ +99 ms] > Task :app:mergeReleaseShaders
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :app:compileReleaseShaders
[gradle_jetifier_test] [STDOUT] stdout: [ +99 ms] > Task :app:generateReleaseAssets
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_auth:mergeReleaseShaders
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_auth:compileReleaseShaders
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_auth:generateReleaseAssets
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_auth:packageReleaseAssets
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_core:mergeReleaseShaders
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_core:compileReleaseShaders
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_core:generateReleaseAssets
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_core:packageReleaseAssets
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :app:mergeReleaseAssets
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :app:copyFlutterAssetsRelease
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :app:mainApkListPersistenceRelease
[gradle_jetifier_test] [STDOUT] stdout: [ +97 ms] > Task :app:generateReleaseResValues
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :app:generateReleaseResources
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_auth:generateReleaseResValues
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_auth:compileReleaseRenderscript NO-SOURCE
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_auth:generateReleaseResources
[gradle_jetifier_test] [STDOUT] stdout: [ +98 ms] > Task :firebase_auth:packageReleaseResources
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_core:generateReleaseResValues
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_core:compileReleaseRenderscript NO-SOURCE
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_core:generateReleaseResources
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_core:packageReleaseResources
[gradle_jetifier_test] [STDOUT] stdout: [+1599 ms] > Task :app:mergeReleaseResources
[gradle_jetifier_test] [STDOUT] stdout: [ +99 ms] > Task :app:createReleaseCompatibleScreenManifests
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_auth:checkReleaseManifest
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_core:checkReleaseManifest
[gradle_jetifier_test] [STDOUT] stdout: [ +199 ms] > Task :firebase_core:processReleaseManifest
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_auth:processReleaseManifest
[gradle_jetifier_test] [STDOUT] stdout: [ +99 ms] > Task :app:processReleaseManifest
[gradle_jetifier_test] [STDOUT] stdout: [ ] /tmp/flutter_module_test.PUXEVH/hello/android/app/src/main/AndroidManifest.xml:24:9-31:50 Warning:
[gradle_jetifier_test] [STDOUT] stdout: [ ] activity#com.google.firebase.auth.internal.FederatedSignInActivity@android:launchMode was tagged at AndroidManifest.xml:24 to replace other declarations but no other declaration present
[gradle_jetifier_test] [STDOUT] stdout: [ +599 ms] > Task :firebase_auth:parseReleaseLibraryResources
[gradle_jetifier_test] [STDOUT] stdout: [ +41 ms] > Task :firebase_core:parseReleaseLibraryResources
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_auth:generateReleaseBuildConfig
[gradle_jetifier_test] [STDOUT] stdout: [ +57 ms] > Task :firebase_core:generateReleaseBuildConfig
[gradle_jetifier_test] [STDOUT] stdout: [ +199 ms] > Task :firebase_core:generateReleaseRFile
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_core:javaPreCompileRelease
[gradle_jetifier_test] [STDOUT] stdout: [+1999 ms] > Task :firebase_core:compileReleaseJavaWithJavac
[gradle_jetifier_test] [STDOUT] stderr: [ +1 ms] Note: /root/.pub-cache/hosted/pub.dartlang.org/firebase_core-0.2.5+1/android/src/main/java/io/flutter/plugins/firebase/core/FirebaseCorePlugin.java uses unchecked or unsafe operations.
[gradle_jetifier_test] [STDOUT] stderr: [ +1 ms] Note: Recompile with -Xlint:unchecked for details.
[gradle_jetifier_test] [STDOUT] stdout: [ +497 ms] > Task :firebase_auth:generateReleaseRFile
[gradle_jetifier_test] [STDOUT] stdout: [ +499 ms] > Task :app:processReleaseResources
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_core:bundleLibCompileRelease
[gradle_jetifier_test] [STDOUT] stdout: [ +99 ms] > Task :firebase_auth:javaPreCompileRelease
[gradle_jetifier_test] [STDOUT] stdout: [ +300 ms] > Task :firebase_auth:compileReleaseJavaWithJavac FAILED
[gradle_jetifier_test] [STDOUT] stderr: [ ] /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java:8: error: package android.support.annotation does not exist
[gradle_jetifier_test] [STDOUT] stderr: [ ] import android.support.annotation.NonNull;
[gradle_jetifier_test] [STDOUT] stderr: [ ] ^
[gradle_jetifier_test] [STDOUT] stderr: [ ] /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java:9: error: package android.support.annotation does not exist
[gradle_jetifier_test] [STDOUT] stderr: [ ] import android.support.annotation.Nullable;
[gradle_jetifier_test] [STDOUT] stderr: [ ] ^
[gradle_jetifier_test] [STDOUT] stderr: [ ] /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java:638: error: cannot find symbol
[gradle_jetifier_test] [STDOUT] stderr: [ ] private void reportException(Result result, @Nullable Exception exception) {
[gradle_jetifier_test] [STDOUT] stderr: [ ] ^
[gradle_jetifier_test] [STDOUT] stderr: [ ] symbol: class Nullable
[gradle_jetifier_test] [STDOUT] stderr: [ ] location: class FirebaseAuthPlugin
[gradle_jetifier_test] [STDOUT] stderr: [ ] /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java:550: error: cannot find symbol
[gradle_jetifier_test] [STDOUT] stderr: [ ] public void onComplete(@NonNull Task<AuthResult> task) {
[gradle_jetifier_test] [STDOUT] stderr: [ ] ^
[gradle_jetifier_test] [STDOUT] stderr: [ ] symbol: class NonNull
[gradle_jetifier_test] [STDOUT] stderr: [ ] location: class FirebaseAuthPlugin.SignInCompleteListener
[gradle_jetifier_test] [STDOUT] stderr: [ ] /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java:569: error: cannot find symbol
[gradle_jetifier_test] [STDOUT] stderr: [ ] public void onComplete(@NonNull Task<Void> task) {
[gradle_jetifier_test] [STDOUT] stderr: [ ] ^
[gradle_jetifier_test] [STDOUT] stderr: [ ] symbol: class NonNull
[gradle_jetifier_test] [STDOUT] stderr: [ ] location: class FirebaseAuthPlugin.TaskVoidCompleteListener
[gradle_jetifier_test] [STDOUT] stderr: [ ] /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java:587: error: cannot find symbol
[gradle_jetifier_test] [STDOUT] stderr: [ ] public void onComplete(@NonNull Task<SignInMethodQueryResult> task) {
[gradle_jetifier_test] [STDOUT] stderr: [ ] ^
[gradle_jetifier_test] [STDOUT] stderr: [ ] symbol: class NonNull
[gradle_jetifier_test] [STDOUT] stderr: [ ] location: class FirebaseAuthPlugin.GetSignInMethodsCompleteListener
[gradle_jetifier_test] [STDOUT] stderr: [ ] /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java:186: error: cannot find symbol
[gradle_jetifier_test] [STDOUT] stderr: [ ] public void onComplete(@NonNull Task<AuthResult> task) {
[gradle_jetifier_test] [STDOUT] stderr: [ ] ^
[gradle_jetifier_test] [STDOUT] stderr: [ ] symbol: class NonNull
[gradle_jetifier_test] [STDOUT] stderr: [ ] /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java:445: error: cannot find symbol
[gradle_jetifier_test] [STDOUT] stderr: [ ] public void onComplete(@NonNull Task<GetTokenResult> task) {
[gradle_jetifier_test] [STDOUT] stderr: [ ] ^
[gradle_jetifier_test] [STDOUT] stderr: [ ] symbol: class NonNull
[gradle_jetifier_test] [STDOUT] stderr: [ ] /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java:499: error: cannot find symbol
[gradle_jetifier_test] [STDOUT] stderr: [ ] public void onAuthStateChanged(@NonNull FirebaseAuth firebaseAuth) {
[gradle_jetifier_test] [STDOUT] stderr: [ ] ^
[gradle_jetifier_test] [STDOUT] stderr: [ ] symbol: class NonNull
[gradle_jetifier_test] [STDOUT] stderr: [ ] Note: /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java uses unchecked or unsafe operations.
[gradle_jetifier_test] [STDOUT] stderr: [ ] Note: Recompile with -Xlint:unchecked for details.
[gradle_jetifier_test] [STDOUT] stderr: [ ] 9 errors
[gradle_jetifier_test] [STDOUT] stderr: [ +92 ms] FAILURE: Build failed with an exception.
[gradle_jetifier_test] [STDOUT] stderr: [ ] * What went wrong:
[gradle_jetifier_test] [STDOUT] stderr: [ ] Execution failed for task ':firebase_auth:compileReleaseJavaWithJavac'.
[gradle_jetifier_test] [STDOUT] stderr: [ ] > Compilation failed; see the compiler error output for details.
[gradle_jetifier_test] [STDOUT] stderr: [ ] * Try:
[gradle_jetifier_test] [STDOUT] stderr: [ ] Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
[gradle_jetifier_test] [STDOUT] stderr: [ ] * Get more help at https://help.gradle.org
[gradle_jetifier_test] [STDOUT] stderr: [ ] BUILD FAILED in 1m 22s
[gradle_jetifier_test] [STDOUT] stdout: [ ] Deprecated Gradle features were used in this build, making it incompatible with Gradle 6.0.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Use '--warning-mode all' to show the individual deprecation warnings.
[gradle_jetifier_test] [STDOUT] stdout: [ ] See https://docs.gradle.org/5.6.2/userguide/command_line_interface.html#sec:command_line_warnings
[gradle_jetifier_test] [STDOUT] stdout: [ ] 40 actionable tasks: 39 executed, 1 up-to-date
[gradle_jetifier_test] [STDOUT] stdout: [ +512 ms] Running Gradle task 'assembleRelease'... (completed in 82.7s)
[gradle_jetifier_test] [STDOUT] stdout: [ +1 ms] The built failed likely due to AndroidX incompatibilities in a plugin. The tool is about to try using Jetfier to solve the incompatibility.
[gradle_jetifier_test] [STDOUT] stdout: [ +3 ms] ✏️ Creating `android/settings_aar.gradle`...
[gradle_jetifier_test] [STDOUT] stdout: [ +1 ms] ✏️ Creating `android/settings_aar.gradle`... (completed in 0ms)
[gradle_jetifier_test] [STDOUT] stdout: [ ] [!] Flutter tried to create the file `android/settings_aar.gradle`, but failed.
[gradle_jetifier_test] [STDOUT] stdout: [ ] To manually update `settings.gradle`, follow these steps:
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: 1. Copy `settings.gradle` as `settings_aar.gradle`
[gradle_jetifier_test] [STDOUT] stdout: 2. Remove the following code from `settings_aar.gradle`:
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: def localPropertiesFile = new File(rootProject.projectDir, "local.properties")
[gradle_jetifier_test] [STDOUT] stdout: def properties = new Properties()
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: assert localPropertiesFile.exists()
[gradle_jetifier_test] [STDOUT] stdout: localPropertiesFile.withReader("UTF-8") { reader -> properties.load(reader) }
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: def flutterSdkPath = properties.getProperty("flutter.sdk")
[gradle_jetifier_test] [STDOUT] stdout: assert flutterSdkPath != null, "flutter.sdk not set in local.properties"
[gradle_jetifier_test] [STDOUT] stdout: apply from: "$flutterSdkPath/packages/flutter_tools/gradle/app_plugin_loader.gradle"
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: [ +6 ms] "flutter apk" took 83,150ms.
[gradle_jetifier_test] [STDOUT] stderr: Please create the file and run this command again.
[gradle_jetifier_test] [STDOUT] stderr:
[gradle_jetifier_test] [STDOUT] stderr: #0 throwToolExit (package:flutter_tools/src/base/common.dart:14:3)
[gradle_jetifier_test] [STDOUT] stderr: #1 createSettingsAarGradle (package:flutter_tools/src/android/gradle.dart:197:5)
[gradle_jetifier_test] [STDOUT] stderr: #2 buildGradleApp (package:flutter_tools/src/android/gradle.dart:258:5)
[gradle_jetifier_test] [STDOUT] stderr: #3 buildGradleApp (package:flutter_tools/src/android/gradle.dart:439:19)
[gradle_jetifier_test] [STDOUT] stderr: #4 _rootRunUnary (dart:async/zone.dart:1198:47)
[gradle_jetifier_test] [STDOUT] stderr: #5 _CustomZone.runUnary (dart:async/zone.dart:1100:19)
[gradle_jetifier_test] [STDOUT] stderr: #6 _FutureListener.handleValue (dart:async/future_impl.dart:143:18)
[gradle_jetifier_test] [STDOUT] stderr: #7 Future._propagateToListeners.handleValueCallback (dart:async/future_impl.dart:696:45)
[gradle_jetifier_test] [STDOUT] stderr: #8 Future._propagateToListeners (dart:async/future_impl.dart:725:32)
[gradle_jetifier_test] [STDOUT] stderr: #9 Future._completeWithValue (dart:async/future_impl.dart:529:5)
[gradle_jetifier_test] [STDOUT] stderr: #10 Future._asyncCompleteWithValue.<anonymous closure> (dart:async/future_impl.dart:567:7)
[gradle_jetifier_test] [STDOUT] stderr: #11 _rootRun (dart:async/zone.dart:1190:13)
[gradle_jetifier_test] [STDOUT] stderr: #12 _CustomZone.run (dart:async/zone.dart:1093:19)
[gradle_jetifier_test] [STDOUT] stderr: #13 _CustomZone.runGuarded (dart:async/zone.dart:997:7)
[gradle_jetifier_test] [STDOUT] stderr: #14 _CustomZone.bindCallbackGuarded.<anonymous closure> (dart:async/zone.dart:1037:23)
[gradle_jetifier_test] [STDOUT] stderr: #15 _microtaskLoop (dart:async/schedule_microtask.dart:41:21)
[gradle_jetifier_test] [STDOUT] stderr: #16 _startMicrotaskLoop (dart:async/schedule_microtask.dart:50:5)
[gradle_jetifier_test] [STDOUT] stderr: #17 _runPendingImmediateCallback (dart:isolate-patch/isolate_patch.dart:118:13)
[gradle_jetifier_test] [STDOUT] stderr: #18 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:169:5)
[gradle_jetifier_test] [STDOUT] stderr:
[gradle_jetifier_test] [STDOUT] stderr:
[gradle_jetifier_test] [STDOUT] "/tmp/flutter sdk/bin/flutter" exit code: 1
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT] ═══════════╡ ••• Checking running Dart processes after task... ••• ╞════════════
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT] Cannot list processes on this system: `ps` not available.
[gradle_jetifier_test] [STDOUT] Cleaning up after task...
"/tmp/flutter sdk/bin/cache/dart-sdk/bin/dart" exit code: 0
Cleaning up system after task...
Executing: /tmp/flutter sdk/bin/flutter doctor -v in /tmp/flutter sdk/dev/devicelab
stdout: [✓] Flutter (Channel unknown, 1.19.0-2.0.pre.70, on Linux, locale en_US.UTF-8)
stdout: • Flutter version 1.19.0-2.0.pre.70 at /tmp/flutter sdk
stdout: • Framework revision d26585d66b (25 minutes ago), 2020-05-14 17:39:44 -0700
stdout: • Engine revision 47513a70eb
stdout: • Dart version 2.9.0 (build 2.9.0-9.0.dev 2676764792)
stdout:
stdout: [✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
stdout: • Android SDK at /opt/android_sdk
stdout: • Platform android-28, build-tools 28.0.3
stdout: • ANDROID_HOME = /opt/android_sdk
stdout: • ANDROID_SDK_ROOT = /opt/android_sdk
stdout: • Java binary at: /usr/bin/java
stdout: • Java version OpenJDK Runtime Environment (build 1.8.0_242-8u242-b08-1~deb9u1-b08)
stdout: • All Android licenses accepted.
stdout:
stdout: [✓] Chrome - develop for the web
stdout: • Chrome at google-chrome
stdout:
stdout: [✗] Linux toolchain - develop for Linux desktop
stdout: ✗ clang++ is not installed
stdout: • GNU Make 4.1
stdout:
stdout: [!] Android Studio (not installed)
stdout: • Android Studio not found; download from https://developer.android.com/studio/index.html
stdout: (or visit https://flutter.dev/docs/get-started/install/linux#android-setup for detailed instructions).
stdout:
stdout: [✓] Connected device (3 available)
stdout: • Linux • Linux • linux-x64 • Linux
stdout: • Web Server • web-server • web-javascript • Flutter Tools
stdout: • Chrome • chrome • web-javascript • Google Chrome 80.0.3987.149
stdout:
stdout: ! Doctor found issues in 2 categories.
"/tmp/flutter sdk/bin/flutter" exit code: 0
Telling Gradle to shut down (JAVA_HOME=/usr)
Executing: chmod a+x /tmp/flutter_devicelab_shutdown_gradle.ZJNLYJ/gradlew in /tmp/flutter sdk/dev/devicelab
"chmod" exit code: 0
Executing: /tmp/flutter_devicelab_shutdown_gradle.ZJNLYJ/gradlew --stop in /tmp/flutter_devicelab_shutdown_gradle.ZJNLYJ with environment {JAVA_HOME: /usr}
stdout: Stopping Daemon(s)
stdout: 1 Daemon stopped
"/tmp/flutter_devicelab_shutdown_gradle.ZJNLYJ/gradlew" exit code: 0
Task result:
{
"success": false,
"reason": "Executable \"/tmp/flutter sdk/bin/flutter\" failed with exit code 1."
}
════════════════╡ ••• Finished task "gradle_jetifier_test" ••• ╞════════════════
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
ERROR: Last command exited with 1 (expected: zero).
Command: ../../bin/cache/dart-sdk/bin/dart bin/run.dart -t gradle_jetifier_test
Relative working directory: dev/devicelab
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</details>
[gradle_jetifier_test] [STDOUT] stdout: hello/README.md (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/web/icons/Icon-512.png (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/web/icons/Icon-192.png (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/web/favicon.png (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/web/manifest.json (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/web/index.html (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/.gitignore (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner.xcworkspace/contents.xcworkspacedata (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner.xcworkspace/xcshareddata/IDEWorkspaceChecks.plist (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner.xcworkspace/xcshareddata/WorkspaceSettings.xcsettings (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Info.plist (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/Contents.json (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/AppIcon.appiconset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/LaunchImage.imageset/LaunchImage.png (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/LaunchImage.imageset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/LaunchImage.imageset/README.md (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/LaunchImage.imageset/Contents.json (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Assets.xcassets/LaunchImage.imageset/[email protected] (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Base.lproj/Main.storyboard (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner/Base.lproj/LaunchScreen.storyboard (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner.xcodeproj/project.xcworkspace/contents.xcworkspacedata (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner.xcodeproj/project.xcworkspace/xcshareddata/IDEWorkspaceChecks.plist (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Runner.xcodeproj/project.xcworkspace/xcshareddata/WorkspaceSettings.xcsettings (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Flutter/Release.xcconfig (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Flutter/Debug.xcconfig (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/ios/Flutter/AppFrameworkInfo.plist (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/.gitignore (created)
[gradle_jetifier_test] [STDOUT] stdout: hello/pubspec.yaml (created)
[gradle_jetifier_test] [STDOUT] stdout: Running "flutter pub get" in hello... 0.8s
[gradle_jetifier_test] [STDOUT] stdout: Wrote 133 files.
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: All done!
[gradle_jetifier_test] [STDOUT] stdout: [✓] Flutter: is fully installed. (Channel unknown, 1.19.0-2.0.pre.70, on Linux, locale en_US.UTF-8)
[gradle_jetifier_test] [STDOUT] stdout: [✓] Android toolchain - develop for Android devices: is fully installed. (Android SDK version 28.0.3)
[gradle_jetifier_test] [STDOUT] stdout: [✓] Chrome - develop for the web: is fully installed.
[gradle_jetifier_test] [STDOUT] stdout: [✗] Linux toolchain - develop for Linux desktop: is not installed.
[gradle_jetifier_test] [STDOUT] stdout: [!] Android Studio: is not available. (not installed)
[gradle_jetifier_test] [STDOUT] stdout: [✓] Connected device: is fully installed. (3 available)
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: Run "flutter doctor" for information about installing additional components.
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: In order to run your application, type:
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: $ cd hello
[gradle_jetifier_test] [STDOUT] stdout: $ flutter run
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: Your application code is in hello/lib/main.dart.
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: WARNING: The Linux tooling and APIs are not yet stable. You will likely need to re-create the "linux" directory after future Flutter updates.
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: WARNING: The Windows tooling and APIs are not yet stable. You will likely need to re-create the "windows" directory after future Flutter updates.
[gradle_jetifier_test] [STDOUT] "/tmp/flutter sdk/bin/flutter" exit code: 0
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT] ═══════════════╡ ••• Add plugin that uses support libraries ••• ╞═══════════════
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT] Executing: /tmp/flutter sdk/bin/flutter packages get in /tmp/flutter_module_test.PUXEVH/hello
[gradle_jetifier_test] [STDOUT] stdout: Running "flutter pub get" in hello... 0.8s
[gradle_jetifier_test] [STDOUT] "/tmp/flutter sdk/bin/flutter" exit code: 0
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT] ═══════════════════════╡ ••• Update proguard rules ••• ╞════════════════════════
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT] ═════════════════════════╡ ••• Build release APK ••• ╞══════════════════════════
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT] Executing: /tmp/flutter sdk/bin/flutter build apk --target-platform android-arm --no-shrink --verbose in /tmp/flutter_module_test.PUXEVH/hello
[gradle_jetifier_test] [STDOUT] stdout: [ +13 ms] executing: [/tmp/flutter sdk/] git -c log.showSignature=false log -n 1 --pretty=format:%H
[gradle_jetifier_test] [STDOUT] stdout: [ +35 ms] Exit code 0 from: git -c log.showSignature=false log -n 1 --pretty=format:%H
[gradle_jetifier_test] [STDOUT] stdout: [ ] d26585d66b98af25147d3420a88408556dcef706
[gradle_jetifier_test] [STDOUT] stdout: [ ] executing: [/tmp/flutter sdk/] git tag --contains HEAD
[gradle_jetifier_test] [STDOUT] stdout: [ +199 ms] Exit code 0 from: git tag --contains HEAD
[gradle_jetifier_test] [STDOUT] stdout: [ +2 ms] executing: [/tmp/flutter sdk/] git describe --match *.*.*-*.*.pre --first-parent --long --tags
[gradle_jetifier_test] [STDOUT] stdout: [ +156 ms] Exit code 0 from: git describe --match *.*.*-*.*.pre --first-parent --long --tags
[gradle_jetifier_test] [STDOUT] stdout: [ ] 1.19.0-1.0.pre-70-gd26585d66
[gradle_jetifier_test] [STDOUT] stdout: [ +13 ms] executing: [/tmp/flutter sdk/] git rev-parse --abbrev-ref --symbolic @{u}
[gradle_jetifier_test] [STDOUT] stdout: [ +4 ms] Exit code 128 from: git rev-parse --abbrev-ref --symbolic @{u}
[gradle_jetifier_test] [STDOUT] stdout: [ ] fatal: HEAD does not point to a branch
[gradle_jetifier_test] [STDOUT] stdout: [ +28 ms] executing: [/tmp/flutter sdk/] git rev-parse --abbrev-ref HEAD
[gradle_jetifier_test] [STDOUT] stdout: [ +4 ms] Exit code 0 from: git rev-parse --abbrev-ref HEAD
[gradle_jetifier_test] [STDOUT] stdout: [ ] HEAD
[gradle_jetifier_test] [STDOUT] stdout: [ +33 ms] Artifact Instance of 'AndroidMavenArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ +2 ms] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'LinuxEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ +8 ms] Artifact Instance of 'MaterialFonts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'GradleWrapper' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'AndroidMavenArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ +3 ms] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'FlutterSdk' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'LinuxEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Artifact Instance of 'FontSubsetArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ +92 ms] Found plugin firebase_auth at /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/
[gradle_jetifier_test] [STDOUT] stdout: [ +5 ms] Found plugin firebase_core at /root/.pub-cache/hosted/pub.dartlang.org/firebase_core-0.2.5+1/
[gradle_jetifier_test] [STDOUT] stdout: [ +70 ms] Found plugin firebase_auth at /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/
[gradle_jetifier_test] [STDOUT] stdout: [ +1 ms] Found plugin firebase_core at /root/.pub-cache/hosted/pub.dartlang.org/firebase_core-0.2.5+1/
[gradle_jetifier_test] [STDOUT] stdout: [ +72 ms] Generating /tmp/flutter_module_test.PUXEVH/hello/android/app/src/main/java/io/flutter/plugins/GeneratedPluginRegistrant.java
[gradle_jetifier_test] [STDOUT] stdout: [ +133 ms] Running Gradle task 'assembleRelease'...
[gradle_jetifier_test] [STDOUT] stdout: [ +2 ms] gradle.properties already sets `android.enableR8`
[gradle_jetifier_test] [STDOUT] stdout: [ +3 ms] Using gradle from /tmp/flutter_module_test.PUXEVH/hello/android/gradlew.
[gradle_jetifier_test] [STDOUT] stdout: [ ] /tmp/flutter_module_test.PUXEVH/hello/android/gradlew mode: 33261 rwxr-xr-x.
[gradle_jetifier_test] [STDOUT] stdout: [ +6 ms] executing: [/tmp/flutter_module_test.PUXEVH/hello/android/] /tmp/flutter_module_test.PUXEVH/hello/android/gradlew -Pverbose=true -Ptarget-platform=android-arm -Ptarget=lib/main.dart -Ptrack-widget-creation=true -Ptree-shake-icons=true assembleRelease
[gradle_jetifier_test] [STDOUT] stdout: [ +716 ms] Starting a Gradle Daemon, 4 stopped Daemons could not be reused, use --status for details
[gradle_jetifier_test] [STDOUT] stdout: [+28480 ms] Checking the license for package Android SDK Platform 27 in /opt/android_sdk/licenses
[gradle_jetifier_test] [STDOUT] stdout: [ ] License for package Android SDK Platform 27 accepted.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Preparing "Install Android SDK Platform 27 (revision: 3)".
[gradle_jetifier_test] [STDOUT] stdout: [+10798 ms] "Install Android SDK Platform 27 (revision: 3)" ready.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Installing Android SDK Platform 27 in /opt/android_sdk/platforms/android-27
[gradle_jetifier_test] [STDOUT] stdout: [ ] "Install Android SDK Platform 27 (revision: 3)" complete.
[gradle_jetifier_test] [STDOUT] stdout: [ +199 ms] "Install Android SDK Platform 27 (revision: 3)" finished.
[gradle_jetifier_test] [STDOUT] stdout: [+9701 ms] > Task :app:compileFlutterBuildRelease
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +10 ms] executing: [/tmp/flutter sdk/] git -c log.showSignature=false log -n 1 --pretty=format:%H
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +37 ms] Exit code 0 from: git -c log.showSignature=false log -n 1 --pretty=format:%H
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] d26585d66b98af25147d3420a88408556dcef706
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] executing: [/tmp/flutter sdk/] git tag --contains HEAD
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +212 ms] Exit code 0 from: git tag --contains HEAD
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +4 ms] executing: [/tmp/flutter sdk/] git describe --match *.*.*-*.*.pre --first-parent --long --tags
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +194 ms] Exit code 0 from: git describe --match *.*.*-*.*.pre --first-parent --long --tags
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] 1.19.0-1.0.pre-70-gd26585d66
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +14 ms] executing: [/tmp/flutter sdk/] git rev-parse --abbrev-ref --symbolic @{u}
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +7 ms] Exit code 128 from: git rev-parse --abbrev-ref --symbolic @{u}
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] fatal: HEAD does not point to a branch
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +35 ms] executing: [/tmp/flutter sdk/] git rev-parse --abbrev-ref HEAD
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +5 ms] Exit code 0 from: git rev-parse --abbrev-ref HEAD
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] HEAD
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +38 ms] Artifact Instance of 'AndroidMavenArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +2 ms] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'LinuxEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +9 ms] Artifact Instance of 'MaterialFonts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'GradleWrapper' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'AndroidMavenArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'FlutterSdk' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'LinuxEngineArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ ] Artifact Instance of 'FontSubsetArtifacts' is not required, skipping update.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +89 ms] Initializing file store
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +23 ms] kernel_snapshot: Starting due to {}
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +38 ms] /tmp/flutter sdk/bin/cache/dart-sdk/bin/dart /tmp/flutter sdk/bin/cache/artifacts/engine/linux-x64/frontend_server.dart.snapshot --sdk-root /tmp/flutter sdk/bin/cache/artifacts/engine/common/flutter_patched_sdk_product/ --target=flutter -Ddart.developer.causal_async_stacks=false -Ddart.vm.profile=false -Ddart.vm.product=true --bytecode-options=source-positions --aot --tfa --packages /tmp/flutter_module_test.PUXEVH/hello/.packages --output-dill /tmp/flutter_module_test.PUXEVH/hello/.dart_tool/flutter_build/f0c4dd2b86347dd9aef65e5404293900/app.dill --depfile /tmp/flutter_module_test.PUXEVH/hello/.dart_tool/flutter_build/f0c4dd2b86347dd9aef65e5404293900/kernel_snapshot.d package:hello/main.dart
[gradle_jetifier_test] [STDOUT] stdout: [+16085 ms] [+16879 ms] kernel_snapshot: Complete
[gradle_jetifier_test] [STDOUT] stdout: [ +699 ms] [ +629 ms] aot_android_asset_bundle: Starting due to {}
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +10 ms] android_aot_release_android-arm: Starting due to {InvalidatedReason.inputChanged}
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +7 ms] executing: /tmp/flutter sdk/bin/cache/artifacts/engine/android-arm-release/linux-x64/gen_snapshot --deterministic --snapshot_kind=app-aot-elf --elf=/tmp/flutter_module_test.PUXEVH/hello/.dart_tool/flutter_build/f0c4dd2b86347dd9aef65e5404293900/armeabi-v7a/app.so --strip --no-sim-use-hardfp --no-use-integer-division --no-causal-async-stacks --lazy-async-stacks /tmp/flutter_module_test.PUXEVH/hello/.dart_tool/flutter_build/f0c4dd2b86347dd9aef65e5404293900/app.dill
[gradle_jetifier_test] [STDOUT] stdout: [ +99 ms] [ +133 ms] Running command: /tmp/flutter sdk/bin/cache/dart-sdk/bin/dart /tmp/flutter sdk/bin/cache/artifacts/engine/linux-x64/const_finder.dart.snapshot --kernel-file /tmp/flutter_module_test.PUXEVH/hello/.dart_tool/flutter_build/f0c4dd2b86347dd9aef65e5404293900/app.dill --class-library-uri package:flutter/src/widgets/icon_data.dart --class-name IconData
[gradle_jetifier_test] [STDOUT] stdout: [+1100 ms] [+1037 ms] Running font-subset: /tmp/flutter sdk/bin/cache/artifacts/engine/linux-x64/font-subset /tmp/flutter_module_test.PUXEVH/hello/build/app/intermediates/flutter/release/flutter_assets/fonts/MaterialIcons-Regular.ttf /tmp/flutter sdk/bin/cache/artifacts/material_fonts/MaterialIcons-Regular.ttf, using codepoints 59574 58834 58820 58848 58829 57669
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +14 ms] aot_android_asset_bundle: Complete
[gradle_jetifier_test] [STDOUT] stdout: [+6299 ms] [+6304 ms] android_aot_release_android-arm: Complete
[gradle_jetifier_test] [STDOUT] stdout: [ +99 ms] [ +84 ms] android_aot_bundle_release_android-arm: Starting due to {InvalidatedReason.inputChanged}
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +5 ms] android_aot_bundle_release_android-arm: Complete
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +49 ms] Persisting file store
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +8 ms] Done persisting file store
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +4 ms] build succeeded.
[gradle_jetifier_test] [STDOUT] stdout: [ ] [ +12 ms] "flutter assemble" took 25,359ms.
[gradle_jetifier_test] [STDOUT] stdout: [ +498 ms] > Task :app:packLibsflutterBuildRelease
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :app:preBuild UP-TO-DATE
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :app:preReleaseBuild UP-TO-DATE
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_auth:preBuild UP-TO-DATE
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_auth:preReleaseBuild UP-TO-DATE
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_core:preBuild UP-TO-DATE
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_core:preReleaseBuild UP-TO-DATE
[gradle_jetifier_test] [STDOUT] stdout: [ +198 ms] > Task :firebase_auth:packageReleaseRenderscript NO-SOURCE
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_core:packageReleaseRenderscript NO-SOURCE
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :app:compileReleaseRenderscript NO-SOURCE
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :app:checkReleaseManifest
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :app:generateReleaseBuildConfig
[gradle_jetifier_test] [STDOUT] stdout: [ +299 ms] > Task :app:cleanMergeReleaseAssets UP-TO-DATE
[gradle_jetifier_test] [STDOUT] stdout: [ +99 ms] > Task :firebase_core:compileReleaseAidl NO-SOURCE
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_auth:compileReleaseAidl NO-SOURCE
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :app:compileReleaseAidl NO-SOURCE
[gradle_jetifier_test] [STDOUT] stdout: [ +99 ms] > Task :app:mergeReleaseShaders
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :app:compileReleaseShaders
[gradle_jetifier_test] [STDOUT] stdout: [ +99 ms] > Task :app:generateReleaseAssets
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_auth:mergeReleaseShaders
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_auth:compileReleaseShaders
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_auth:generateReleaseAssets
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_auth:packageReleaseAssets
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_core:mergeReleaseShaders
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_core:compileReleaseShaders
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_core:generateReleaseAssets
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_core:packageReleaseAssets
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :app:mergeReleaseAssets
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :app:copyFlutterAssetsRelease
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :app:mainApkListPersistenceRelease
[gradle_jetifier_test] [STDOUT] stdout: [ +97 ms] > Task :app:generateReleaseResValues
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :app:generateReleaseResources
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_auth:generateReleaseResValues
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_auth:compileReleaseRenderscript NO-SOURCE
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_auth:generateReleaseResources
[gradle_jetifier_test] [STDOUT] stdout: [ +98 ms] > Task :firebase_auth:packageReleaseResources
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_core:generateReleaseResValues
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_core:compileReleaseRenderscript NO-SOURCE
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_core:generateReleaseResources
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_core:packageReleaseResources
[gradle_jetifier_test] [STDOUT] stdout: [+1599 ms] > Task :app:mergeReleaseResources
[gradle_jetifier_test] [STDOUT] stdout: [ +99 ms] > Task :app:createReleaseCompatibleScreenManifests
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_auth:checkReleaseManifest
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_core:checkReleaseManifest
[gradle_jetifier_test] [STDOUT] stdout: [ +199 ms] > Task :firebase_core:processReleaseManifest
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_auth:processReleaseManifest
[gradle_jetifier_test] [STDOUT] stdout: [ +99 ms] > Task :app:processReleaseManifest
[gradle_jetifier_test] [STDOUT] stdout: [ ] /tmp/flutter_module_test.PUXEVH/hello/android/app/src/main/AndroidManifest.xml:24:9-31:50 Warning:
[gradle_jetifier_test] [STDOUT] stdout: [ ] activity#com.google.firebase.auth.internal.FederatedSignInActivity@android:launchMode was tagged at AndroidManifest.xml:24 to replace other declarations but no other declaration present
[gradle_jetifier_test] [STDOUT] stdout: [ +599 ms] > Task :firebase_auth:parseReleaseLibraryResources
[gradle_jetifier_test] [STDOUT] stdout: [ +41 ms] > Task :firebase_core:parseReleaseLibraryResources
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_auth:generateReleaseBuildConfig
[gradle_jetifier_test] [STDOUT] stdout: [ +57 ms] > Task :firebase_core:generateReleaseBuildConfig
[gradle_jetifier_test] [STDOUT] stdout: [ +199 ms] > Task :firebase_core:generateReleaseRFile
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_core:javaPreCompileRelease
[gradle_jetifier_test] [STDOUT] stdout: [+1999 ms] > Task :firebase_core:compileReleaseJavaWithJavac
[gradle_jetifier_test] [STDOUT] stderr: [ +1 ms] Note: /root/.pub-cache/hosted/pub.dartlang.org/firebase_core-0.2.5+1/android/src/main/java/io/flutter/plugins/firebase/core/FirebaseCorePlugin.java uses unchecked or unsafe operations.
[gradle_jetifier_test] [STDOUT] stderr: [ +1 ms] Note: Recompile with -Xlint:unchecked for details.
[gradle_jetifier_test] [STDOUT] stdout: [ +497 ms] > Task :firebase_auth:generateReleaseRFile
[gradle_jetifier_test] [STDOUT] stdout: [ +499 ms] > Task :app:processReleaseResources
[gradle_jetifier_test] [STDOUT] stdout: [ ] > Task :firebase_core:bundleLibCompileRelease
[gradle_jetifier_test] [STDOUT] stdout: [ +99 ms] > Task :firebase_auth:javaPreCompileRelease
[gradle_jetifier_test] [STDOUT] stdout: [ +300 ms] > Task :firebase_auth:compileReleaseJavaWithJavac FAILED
[gradle_jetifier_test] [STDOUT] stderr: [ ] /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java:8: error: package android.support.annotation does not exist
[gradle_jetifier_test] [STDOUT] stderr: [ ] import android.support.annotation.NonNull;
[gradle_jetifier_test] [STDOUT] stderr: [ ] ^
[gradle_jetifier_test] [STDOUT] stderr: [ ] /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java:9: error: package android.support.annotation does not exist
[gradle_jetifier_test] [STDOUT] stderr: [ ] import android.support.annotation.Nullable;
[gradle_jetifier_test] [STDOUT] stderr: [ ] ^
[gradle_jetifier_test] [STDOUT] stderr: [ ] /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java:638: error: cannot find symbol
[gradle_jetifier_test] [STDOUT] stderr: [ ] private void reportException(Result result, @Nullable Exception exception) {
[gradle_jetifier_test] [STDOUT] stderr: [ ] ^
[gradle_jetifier_test] [STDOUT] stderr: [ ] symbol: class Nullable
[gradle_jetifier_test] [STDOUT] stderr: [ ] location: class FirebaseAuthPlugin
[gradle_jetifier_test] [STDOUT] stderr: [ ] /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java:550: error: cannot find symbol
[gradle_jetifier_test] [STDOUT] stderr: [ ] public void onComplete(@NonNull Task<AuthResult> task) {
[gradle_jetifier_test] [STDOUT] stderr: [ ] ^
[gradle_jetifier_test] [STDOUT] stderr: [ ] symbol: class NonNull
[gradle_jetifier_test] [STDOUT] stderr: [ ] location: class FirebaseAuthPlugin.SignInCompleteListener
[gradle_jetifier_test] [STDOUT] stderr: [ ] /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java:569: error: cannot find symbol
[gradle_jetifier_test] [STDOUT] stderr: [ ] public void onComplete(@NonNull Task<Void> task) {
[gradle_jetifier_test] [STDOUT] stderr: [ ] ^
[gradle_jetifier_test] [STDOUT] stderr: [ ] symbol: class NonNull
[gradle_jetifier_test] [STDOUT] stderr: [ ] location: class FirebaseAuthPlugin.TaskVoidCompleteListener
[gradle_jetifier_test] [STDOUT] stderr: [ ] /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java:587: error: cannot find symbol
[gradle_jetifier_test] [STDOUT] stderr: [ ] public void onComplete(@NonNull Task<SignInMethodQueryResult> task) {
[gradle_jetifier_test] [STDOUT] stderr: [ ] ^
[gradle_jetifier_test] [STDOUT] stderr: [ ] symbol: class NonNull
[gradle_jetifier_test] [STDOUT] stderr: [ ] location: class FirebaseAuthPlugin.GetSignInMethodsCompleteListener
[gradle_jetifier_test] [STDOUT] stderr: [ ] /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java:186: error: cannot find symbol
[gradle_jetifier_test] [STDOUT] stderr: [ ] public void onComplete(@NonNull Task<AuthResult> task) {
[gradle_jetifier_test] [STDOUT] stderr: [ ] ^
[gradle_jetifier_test] [STDOUT] stderr: [ ] symbol: class NonNull
[gradle_jetifier_test] [STDOUT] stderr: [ ] /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java:445: error: cannot find symbol
[gradle_jetifier_test] [STDOUT] stderr: [ ] public void onComplete(@NonNull Task<GetTokenResult> task) {
[gradle_jetifier_test] [STDOUT] stderr: [ ] ^
[gradle_jetifier_test] [STDOUT] stderr: [ ] symbol: class NonNull
[gradle_jetifier_test] [STDOUT] stderr: [ ] /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java:499: error: cannot find symbol
[gradle_jetifier_test] [STDOUT] stderr: [ ] public void onAuthStateChanged(@NonNull FirebaseAuth firebaseAuth) {
[gradle_jetifier_test] [STDOUT] stderr: [ ] ^
[gradle_jetifier_test] [STDOUT] stderr: [ ] symbol: class NonNull
[gradle_jetifier_test] [STDOUT] stderr: [ ] Note: /root/.pub-cache/hosted/pub.dartlang.org/firebase_auth-0.7.0/android/src/main/java/io/flutter/plugins/firebaseauth/FirebaseAuthPlugin.java uses unchecked or unsafe operations.
[gradle_jetifier_test] [STDOUT] stderr: [ ] Note: Recompile with -Xlint:unchecked for details.
[gradle_jetifier_test] [STDOUT] stderr: [ ] 9 errors
[gradle_jetifier_test] [STDOUT] stderr: [ +92 ms] FAILURE: Build failed with an exception.
[gradle_jetifier_test] [STDOUT] stderr: [ ] * What went wrong:
[gradle_jetifier_test] [STDOUT] stderr: [ ] Execution failed for task ':firebase_auth:compileReleaseJavaWithJavac'.
[gradle_jetifier_test] [STDOUT] stderr: [ ] > Compilation failed; see the compiler error output for details.
[gradle_jetifier_test] [STDOUT] stderr: [ ] * Try:
[gradle_jetifier_test] [STDOUT] stderr: [ ] Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
[gradle_jetifier_test] [STDOUT] stderr: [ ] * Get more help at https://help.gradle.org
[gradle_jetifier_test] [STDOUT] stderr: [ ] BUILD FAILED in 1m 22s
[gradle_jetifier_test] [STDOUT] stdout: [ ] Deprecated Gradle features were used in this build, making it incompatible with Gradle 6.0.
[gradle_jetifier_test] [STDOUT] stdout: [ ] Use '--warning-mode all' to show the individual deprecation warnings.
[gradle_jetifier_test] [STDOUT] stdout: [ ] See https://docs.gradle.org/5.6.2/userguide/command_line_interface.html#sec:command_line_warnings
[gradle_jetifier_test] [STDOUT] stdout: [ ] 40 actionable tasks: 39 executed, 1 up-to-date
[gradle_jetifier_test] [STDOUT] stdout: [ +512 ms] Running Gradle task 'assembleRelease'... (completed in 82.7s)
[gradle_jetifier_test] [STDOUT] stdout: [ +1 ms] The built failed likely due to AndroidX incompatibilities in a plugin. The tool is about to try using Jetfier to solve the incompatibility.
[gradle_jetifier_test] [STDOUT] stdout: [ +3 ms] ✏️ Creating `android/settings_aar.gradle`...
[gradle_jetifier_test] [STDOUT] stdout: [ +1 ms] ✏️ Creating `android/settings_aar.gradle`... (completed in 0ms)
[gradle_jetifier_test] [STDOUT] stdout: [ ] [!] Flutter tried to create the file `android/settings_aar.gradle`, but failed.
[gradle_jetifier_test] [STDOUT] stdout: [ ] To manually update `settings.gradle`, follow these steps:
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: 1. Copy `settings.gradle` as `settings_aar.gradle`
[gradle_jetifier_test] [STDOUT] stdout: 2. Remove the following code from `settings_aar.gradle`:
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: def localPropertiesFile = new File(rootProject.projectDir, "local.properties")
[gradle_jetifier_test] [STDOUT] stdout: def properties = new Properties()
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: assert localPropertiesFile.exists()
[gradle_jetifier_test] [STDOUT] stdout: localPropertiesFile.withReader("UTF-8") { reader -> properties.load(reader) }
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: def flutterSdkPath = properties.getProperty("flutter.sdk")
[gradle_jetifier_test] [STDOUT] stdout: assert flutterSdkPath != null, "flutter.sdk not set in local.properties"
[gradle_jetifier_test] [STDOUT] stdout: apply from: "$flutterSdkPath/packages/flutter_tools/gradle/app_plugin_loader.gradle"
[gradle_jetifier_test] [STDOUT] stdout:
[gradle_jetifier_test] [STDOUT] stdout: [ +6 ms] "flutter apk" took 83,150ms.
[gradle_jetifier_test] [STDOUT] stderr: Please create the file and run this command again.
[gradle_jetifier_test] [STDOUT] stderr:
[gradle_jetifier_test] [STDOUT] stderr: #0 throwToolExit (package:flutter_tools/src/base/common.dart:14:3)
[gradle_jetifier_test] [STDOUT] stderr: #1 createSettingsAarGradle (package:flutter_tools/src/android/gradle.dart:197:5)
[gradle_jetifier_test] [STDOUT] stderr: #2 buildGradleApp (package:flutter_tools/src/android/gradle.dart:258:5)
[gradle_jetifier_test] [STDOUT] stderr: #3 buildGradleApp (package:flutter_tools/src/android/gradle.dart:439:19)
[gradle_jetifier_test] [STDOUT] stderr: #4 _rootRunUnary (dart:async/zone.dart:1198:47)
[gradle_jetifier_test] [STDOUT] stderr: #5 _CustomZone.runUnary (dart:async/zone.dart:1100:19)
[gradle_jetifier_test] [STDOUT] stderr: #6 _FutureListener.handleValue (dart:async/future_impl.dart:143:18)
[gradle_jetifier_test] [STDOUT] stderr: #7 Future._propagateToListeners.handleValueCallback (dart:async/future_impl.dart:696:45)
[gradle_jetifier_test] [STDOUT] stderr: #8 Future._propagateToListeners (dart:async/future_impl.dart:725:32)
[gradle_jetifier_test] [STDOUT] stderr: #9 Future._completeWithValue (dart:async/future_impl.dart:529:5)
[gradle_jetifier_test] [STDOUT] stderr: #10 Future._asyncCompleteWithValue.<anonymous closure> (dart:async/future_impl.dart:567:7)
[gradle_jetifier_test] [STDOUT] stderr: #11 _rootRun (dart:async/zone.dart:1190:13)
[gradle_jetifier_test] [STDOUT] stderr: #12 _CustomZone.run (dart:async/zone.dart:1093:19)
[gradle_jetifier_test] [STDOUT] stderr: #13 _CustomZone.runGuarded (dart:async/zone.dart:997:7)
[gradle_jetifier_test] [STDOUT] stderr: #14 _CustomZone.bindCallbackGuarded.<anonymous closure> (dart:async/zone.dart:1037:23)
[gradle_jetifier_test] [STDOUT] stderr: #15 _microtaskLoop (dart:async/schedule_microtask.dart:41:21)
[gradle_jetifier_test] [STDOUT] stderr: #16 _startMicrotaskLoop (dart:async/schedule_microtask.dart:50:5)
[gradle_jetifier_test] [STDOUT] stderr: #17 _runPendingImmediateCallback (dart:isolate-patch/isolate_patch.dart:118:13)
[gradle_jetifier_test] [STDOUT] stderr: #18 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:169:5)
[gradle_jetifier_test] [STDOUT] stderr:
[gradle_jetifier_test] [STDOUT] stderr:
[gradle_jetifier_test] [STDOUT] "/tmp/flutter sdk/bin/flutter" exit code: 1
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT] ═══════════╡ ••• Checking running Dart processes after task... ••• ╞════════════
[gradle_jetifier_test] [STDOUT]
[gradle_jetifier_test] [STDOUT] Cannot list processes on this system: `ps` not available.
[gradle_jetifier_test] [STDOUT] Cleaning up after task...
"/tmp/flutter sdk/bin/cache/dart-sdk/bin/dart" exit code: 0
Cleaning up system after task...
Executing: /tmp/flutter sdk/bin/flutter doctor -v in /tmp/flutter sdk/dev/devicelab
stdout: [✓] Flutter (Channel unknown, 1.19.0-2.0.pre.70, on Linux, locale en_US.UTF-8)
stdout: • Flutter version 1.19.0-2.0.pre.70 at /tmp/flutter sdk
stdout: • Framework revision d26585d66b (25 minutes ago), 2020-05-14 17:39:44 -0700
stdout: • Engine revision 47513a70eb
stdout: • Dart version 2.9.0 (build 2.9.0-9.0.dev 2676764792)
stdout:
stdout: [✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
stdout: • Android SDK at /opt/android_sdk
stdout: • Platform android-28, build-tools 28.0.3
stdout: • ANDROID_HOME = /opt/android_sdk
stdout: • ANDROID_SDK_ROOT = /opt/android_sdk
stdout: • Java binary at: /usr/bin/java
stdout: • Java version OpenJDK Runtime Environment (build 1.8.0_242-8u242-b08-1~deb9u1-b08)
stdout: • All Android licenses accepted.
stdout:
stdout: [✓] Chrome - develop for the web
stdout: • Chrome at google-chrome
stdout:
stdout: [✗] Linux toolchain - develop for Linux desktop
stdout: ✗ clang++ is not installed
stdout: • GNU Make 4.1
stdout:
stdout: [!] Android Studio (not installed)
stdout: • Android Studio not found; download from https://developer.android.com/studio/index.html
stdout: (or visit https://flutter.dev/docs/get-started/install/linux#android-setup for detailed instructions).
stdout:
stdout: [✓] Connected device (3 available)
stdout: • Linux • Linux • linux-x64 • Linux
stdout: • Web Server • web-server • web-javascript • Flutter Tools
stdout: • Chrome • chrome • web-javascript • Google Chrome 80.0.3987.149
stdout:
stdout: ! Doctor found issues in 2 categories.
"/tmp/flutter sdk/bin/flutter" exit code: 0
Telling Gradle to shut down (JAVA_HOME=/usr)
Executing: chmod a+x /tmp/flutter_devicelab_shutdown_gradle.ZJNLYJ/gradlew in /tmp/flutter sdk/dev/devicelab
"chmod" exit code: 0
Executing: /tmp/flutter_devicelab_shutdown_gradle.ZJNLYJ/gradlew --stop in /tmp/flutter_devicelab_shutdown_gradle.ZJNLYJ with environment {JAVA_HOME: /usr}
stdout: Stopping Daemon(s)
stdout: 1 Daemon stopped
"/tmp/flutter_devicelab_shutdown_gradle.ZJNLYJ/gradlew" exit code: 0
Task result:
{
"success": false,
"reason": "Executable \"/tmp/flutter sdk/bin/flutter\" failed with exit code 1."
}
════════════════╡ ••• Finished task "gradle_jetifier_test" ••• ╞════════════════
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
ERROR: Last command exited with 1 (expected: zero).
Command: ../../bin/cache/dart-sdk/bin/dart bin/run.dart -t gradle_jetifier_test
Relative working directory: dev/devicelab
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</details>
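For reference, the manual workaround the tool prints ("Copy `settings.gradle` as `settings_aar.gradle`, then remove the plugin-loader block") reduces to a very small file. This is a minimal sketch only, assuming the project still has the default template `settings.gradle` whose only other content is the `local.properties` / `app_plugin_loader.gradle` block quoted in the log above; if `settings.gradle` was customized, copy it and delete just that block instead.

```groovy
// android/settings_aar.gradle — hypothetical result of following the
// log's steps 1–2 on an unmodified template settings.gradle.
// Step 1 copies settings.gradle; step 2 removes the localPropertiesFile /
// flutterSdkPath / app_plugin_loader.gradle lines, leaving only:
include ':app'
```

After creating the file, re-running `flutter build apk` should let the tool retry the AAR/Jetifier path instead of failing at `createSettingsAarGradle`.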
getsigninmethodscompletelistener stderr root pub cache hosted pub dartlang org firebase auth android src main java io flutter plugins firebaseauth firebaseauthplugin java error cannot find symbol stderr public void oncomplete nonnull task task stderr stderr symbol class nonnull stderr root pub cache hosted pub dartlang org firebase auth android src main java io flutter plugins firebaseauth firebaseauthplugin java error cannot find symbol stderr public void oncomplete nonnull task task stderr stderr symbol class nonnull stderr root pub cache hosted pub dartlang org firebase auth android src main java io flutter plugins firebaseauth firebaseauthplugin java error cannot find symbol stderr public void onauthstatechanged nonnull firebaseauth firebaseauth stderr stderr symbol class nonnull stderr note root pub cache hosted pub dartlang org firebase auth android src main java io flutter plugins firebaseauth firebaseauthplugin java uses unchecked or unsafe operations stderr note recompile with xlint unchecked for details stderr errors stderr failure build failed with an exception stderr what went wrong stderr execution failed for task firebase auth compilereleasejavawithjavac stderr compilation failed see the compiler error output for details stderr try stderr run with stacktrace option to get the stack trace run with info or debug option to get more log output run with scan to get full insights stderr get more help at stderr build failed in stdout deprecated gradle features were used in this build making it incompatible with gradle stdout use warning mode all to show the individual deprecation warnings stdout see stdout actionable tasks executed up to date stdout running gradle task assemblerelease completed in stdout the built failed likely due to androidx incompatibilities in a plugin the tool is about to try using jetfier to solve the incompatibility stdout ✏️ creating android settings aar gradle stdout ✏️ creating android settings aar gradle completed in stdout 
flutter tried to create the file android settings aar gradle but failed stdout to manually update settings gradle follow these steps stdout stdout copy settings gradle as settings aar gradle stdout remove the following code from settings aar gradle stdout stdout def localpropertiesfile new file rootproject projectdir local properties stdout def properties new properties stdout stdout assert localpropertiesfile exists stdout localpropertiesfile withreader utf reader properties load reader stdout stdout def fluttersdkpath properties getproperty flutter sdk stdout assert fluttersdkpath null flutter sdk not set in local properties stdout apply from fluttersdkpath packages flutter tools gradle app plugin loader gradle stdout stdout flutter apk took stderr please create the file and run this command again stderr stderr throwtoolexit package flutter tools src base common dart stderr createsettingsaargradle package flutter tools src android gradle dart stderr buildgradleapp package flutter tools src android gradle dart stderr buildgradleapp package flutter tools src android gradle dart stderr rootrununary dart async zone dart stderr customzone rununary dart async zone dart stderr futurelistener handlevalue dart async future impl dart stderr future propagatetolisteners handlevaluecallback dart async future impl dart stderr future propagatetolisteners dart async future impl dart stderr future completewithvalue dart async future impl dart stderr future asynccompletewithvalue dart async future impl dart stderr rootrun dart async zone dart stderr customzone run dart async zone dart stderr customzone runguarded dart async zone dart stderr customzone bindcallbackguarded dart async zone dart stderr microtaskloop dart async schedule microtask dart stderr startmicrotaskloop dart async schedule microtask dart stderr runpendingimmediatecallback dart isolate patch isolate patch dart stderr rawreceiveportimpl handlemessage dart isolate patch isolate patch dart full logs 
════════════════╡ ••• running task gradle jetifier test ••• ╞═════════════════ executing tmp flutter sdk bin cache dart sdk bin dart enable vm service no pause isolates on exit bin tasks gradle jetifier test dart in tmp flutter sdk dev devicelab observatory listening on running task ══════════════════╡ ••• checking running dart processes ••• ╞═══════════════════ cannot list processes on this system ps not available enabling configs for macos linux windows and web executing tmp flutter sdk bin flutter config enable macos desktop enable windows desktop enable linux desktop enable web in tmp flutter sdk dev devicelab stdout setting enable web value to true stdout setting enable linux desktop value to true stdout setting enable macos desktop value to true stdout setting enable windows desktop value to true stdout stdout you may need to restart any open editors for them to read new settings tmp flutter sdk bin flutter exit code ═════════════════════════════╡ ••• find java ••• ╞══════════════════════════════ executing tmp flutter sdk bin flutter doctor v in tmp flutter sdk dev devicelab stdout flutter channel unknown pre on linux locale en us utf stdout • flutter version pre at tmp flutter sdk stdout • framework revision minutes ago stdout • engine revision stdout • dart version build dev stdout stdout android toolchain develop for android devices android sdk version stdout • android sdk at opt android sdk stdout • platform android build tools stdout • android home opt android sdk stdout • android sdk root opt android sdk stdout • java binary at usr bin java stdout • java version openjdk runtime environment build stdout • all android licenses accepted stdout stdout chrome develop for the web stdout • chrome at google chrome stdout stdout linux toolchain develop for linux desktop stdout ✗ clang is not installed stdout • gnu make stdout stdout android studio not installed stdout • android studio not found download from stdout or visit for detailed instructions stdout 
stdout connected device available stdout • linux • linux • linux • linux stdout • web server • web server • web javascript • flutter tools stdout • chrome • chrome • web javascript • google chrome stdout stdout doctor found issues in categories tmp flutter sdk bin flutter exit code using java home usr ════════════════╡ ••• create flutter androidx app project ••• ╞═════════════════ executing tmp flutter sdk bin flutter create org io flutter devicelab hello in tmp flutter module test puxevh stdout creating project hello stdout hello linux gitignore created stdout hello linux main cc created stdout hello linux flutter template version created stdout hello linux window configuration cc created stdout hello linux app configuration mk created stdout hello linux window configuration h created stdout hello linux makefile created stdout hello android app build gradle created stdout hello android app src main kotlin io flutter devicelab hello mainactivity kt created stdout hello android build gradle created stdout hello android hello android iml created stdout hello lib main dart created stdout hello windows gitignore created stdout hello windows runner vcxproj filters created stdout hello windows runner sln created stdout hello windows flutter template version created stdout hello windows appconfiguration props created stdout hello windows flutterbuild vcxproj created stdout hello windows runner window h created stdout hello windows runner run loop cpp created stdout hello windows runner runner exe manifest created stdout hello windows runner runner rc created stdout hello windows runner window cpp created stdout hello windows runner utils h created stdout hello windows runner flutter window h created stdout hello windows runner window configuration cpp created stdout hello windows runner window configuration h created stdout hello windows runner flutter window cpp created stdout hello windows runner utils cpp created stdout hello windows runner main cpp created stdout 
hello windows runner run loop h created stdout hello windows runner resource h created stdout hello windows runner resources app icon ico created stdout hello windows runner vcxproj created stdout hello windows scripts prepare dependencies bat created stdout hello windows scripts bundle assets and deps bat created stdout hello test widget test dart created stdout hello metadata created stdout hello android gitignore created stdout hello android app src profile androidmanifest xml created stdout hello android app src main androidmanifest xml created stdout hello android app src main res mipmap mdpi ic launcher png created stdout hello android app src main res mipmap hdpi ic launcher png created stdout hello android app src main res mipmap xxhdpi ic launcher png created stdout hello android app src main res mipmap xxxhdpi ic launcher png created stdout hello android app src main res values styles xml created stdout hello android app src main res drawable launch background xml created stdout hello android app src main res mipmap xhdpi ic launcher png created stdout hello android app src debug androidmanifest xml created stdout hello android settings gradle created stdout hello android gradle properties created stdout hello android gradle wrapper gradle wrapper properties created stdout hello macos gitignore created stdout hello macos runner xcworkspace contents xcworkspacedata created stdout hello macos runner xcworkspace xcshareddata ideworkspacechecks plist created stdout hello macos runner info plist created stdout hello macos runner release entitlements created stdout hello macos runner mainflutterwindow swift created stdout hello macos runner assets xcassets appicon appiconset app icon png created stdout hello macos runner assets xcassets appicon appiconset app icon png created stdout hello macos runner assets xcassets appicon appiconset app icon png created stdout hello macos runner assets xcassets appicon appiconset app icon png created stdout hello macos 
runner assets xcassets appicon appiconset contents json created stdout hello macos runner assets xcassets appicon appiconset app icon png created stdout hello macos runner assets xcassets appicon appiconset app icon png created stdout hello macos runner assets xcassets appicon appiconset app icon png created stdout hello macos runner debugprofile entitlements created stdout hello macos runner configs release xcconfig created stdout hello macos runner configs debug xcconfig created stdout hello macos runner configs appinfo xcconfig created stdout hello macos runner configs warnings xcconfig created stdout hello macos runner base lproj mainmenu xib created stdout hello macos runner appdelegate swift created stdout hello macos runner xcodeproj project xcworkspace xcshareddata ideworkspacechecks plist created stdout hello macos runner xcodeproj project pbxproj created stdout hello macos runner xcodeproj xcshareddata xcschemes runner xcscheme created stdout hello macos flutter flutter debug xcconfig created stdout hello macos flutter flutter release xcconfig created stdout hello hello iml created stdout hello idea runconfigurations main dart xml created stdout hello idea libraries kotlinjavaruntime xml created stdout hello idea libraries dart sdk xml created stdout hello idea modules xml created stdout hello idea workspace xml created stdout hello ios runner runner bridging header h created stdout hello ios runner appdelegate swift created stdout hello ios runner xcodeproj project pbxproj created stdout hello ios runner xcodeproj xcshareddata xcschemes runner xcscheme created stdout hello readme md created stdout hello web icons icon png created stdout hello web icons icon png created stdout hello web favicon png created stdout hello web manifest json created stdout hello web index html created stdout hello ios gitignore created stdout hello ios runner xcworkspace contents xcworkspacedata created stdout hello ios runner xcworkspace xcshareddata ideworkspacechecks plist 
created stdout hello ios runner xcworkspace xcshareddata workspacesettings xcsettings created stdout hello ios runner info plist created stdout hello ios runner assets xcassets appicon appiconset icon app png created stdout hello ios runner assets xcassets appicon appiconset icon app png created stdout hello ios runner assets xcassets appicon appiconset icon app png created stdout hello ios runner assets xcassets appicon appiconset icon app png created stdout hello ios runner assets xcassets appicon appiconset icon app png created stdout hello ios runner assets xcassets appicon appiconset icon app png created stdout hello ios runner assets xcassets appicon appiconset icon app png created stdout hello ios runner assets xcassets appicon appiconset icon app png created stdout hello ios runner assets xcassets appicon appiconset icon app png created stdout hello ios runner assets xcassets appicon appiconset icon app png created stdout hello ios runner assets xcassets appicon appiconset contents json created stdout hello ios runner assets xcassets appicon appiconset icon app png created stdout hello ios runner assets xcassets appicon appiconset icon app png created stdout hello ios runner assets xcassets appicon appiconset icon app png created stdout hello ios runner assets xcassets appicon appiconset icon app png created stdout hello ios runner assets xcassets appicon appiconset icon app png created stdout hello ios runner assets xcassets launchimage imageset launchimage png created stdout hello ios runner assets xcassets launchimage imageset launchimage png created stdout hello ios runner assets xcassets launchimage imageset readme md created stdout hello ios runner assets xcassets launchimage imageset contents json created stdout hello ios runner assets xcassets launchimage imageset launchimage png created stdout hello ios runner base lproj main storyboard created stdout hello ios runner base lproj launchscreen storyboard created stdout hello ios runner xcodeproj 
project xcworkspace contents xcworkspacedata created stdout hello ios runner xcodeproj project xcworkspace xcshareddata ideworkspacechecks plist created stdout hello ios runner xcodeproj project xcworkspace xcshareddata workspacesettings xcsettings created stdout hello ios flutter release xcconfig created stdout hello ios flutter debug xcconfig created stdout hello ios flutter appframeworkinfo plist created stdout hello gitignore created stdout hello pubspec yaml created stdout running flutter pub get in hello stdout wrote files stdout stdout all done stdout flutter is fully installed channel unknown pre on linux locale en us utf stdout android toolchain develop for android devices is fully installed android sdk version stdout chrome develop for the web is fully installed stdout linux toolchain develop for linux desktop is not installed stdout android studio is not available not installed stdout connected device is fully installed available stdout stdout run flutter doctor for information about installing additional components stdout stdout in order to run your application type stdout stdout cd hello stdout flutter run stdout stdout your application code is in hello lib main dart stdout stdout stdout warning the linux tooling and apis are not yet stable you will likely need to re create the linux directory after future flutter updates stdout stdout warning the windows tooling and apis are not yet stable you will likely need to re create the windows directory after future flutter updates tmp flutter sdk bin flutter exit code ═══════════════╡ ••• add plugin that uses support libraries ••• ╞═══════════════ executing tmp flutter sdk bin flutter packages get in tmp flutter module test puxevh hello stdout running flutter pub get in hello tmp flutter sdk bin flutter exit code ═══════════════════════╡ ••• update proguard rules ••• ╞════════════════════════ ═════════════════════════╡ ••• build release apk ••• ╞══════════════════════════ executing tmp flutter sdk bin flutter 
build apk target platform android arm no shrink verbose in tmp flutter module test puxevh hello stdout executing git c log showsignature false log n pretty format h stdout exit code from git c log showsignature false log n pretty format h stdout stdout executing git tag contains head stdout exit code from git tag contains head stdout executing git describe match pre first parent long tags stdout exit code from git describe match pre first parent long tags stdout pre stdout executing git rev parse abbrev ref symbolic u stdout exit code from git rev parse abbrev ref symbolic u stdout fatal head does not point to a branch stdout executing git rev parse abbrev ref head stdout exit code from git rev parse abbrev ref head stdout head stdout artifact instance of androidmavenartifacts is not required skipping update stdout artifact instance of androidgensnapshotartifacts is not required skipping update stdout artifact instance of androidinternalbuildartifacts is not required skipping update stdout artifact instance of iosengineartifacts is not required skipping update stdout artifact instance of flutterwebsdk is not required skipping update stdout artifact instance of windowsengineartifacts is not required skipping update stdout artifact instance of macosengineartifacts is not required skipping update stdout artifact instance of linuxengineartifacts is not required skipping update stdout artifact instance of linuxfuchsiasdkartifacts is not required skipping update stdout artifact instance of macosfuchsiasdkartifacts is not required skipping update stdout artifact instance of flutterrunnersdkartifacts is not required skipping update stdout artifact instance of flutterrunnerdebugsymbols is not required skipping update stdout artifact instance of materialfonts is not required skipping update stdout artifact instance of gradlewrapper is not required skipping update stdout artifact instance of androidmavenartifacts is not required skipping update stdout artifact instance of 
androidinternalbuildartifacts is not required skipping update stdout artifact instance of iosengineartifacts is not required skipping update stdout artifact instance of flutterwebsdk is not required skipping update stdout artifact instance of fluttersdk is not required skipping update stdout artifact instance of windowsengineartifacts is not required skipping update stdout artifact instance of macosengineartifacts is not required skipping update stdout artifact instance of linuxengineartifacts is not required skipping update stdout artifact instance of linuxfuchsiasdkartifacts is not required skipping update stdout artifact instance of macosfuchsiasdkartifacts is not required skipping update stdout artifact instance of flutterrunnersdkartifacts is not required skipping update stdout artifact instance of flutterrunnerdebugsymbols is not required skipping update stdout artifact instance of iosusbartifacts is not required skipping update stdout artifact instance of iosusbartifacts is not required skipping update stdout artifact instance of iosusbartifacts is not required skipping update stdout artifact instance of iosusbartifacts is not required skipping update stdout artifact instance of iosusbartifacts is not required skipping update stdout artifact instance of fontsubsetartifacts is not required skipping update stdout found plugin firebase auth at root pub cache hosted pub dartlang org firebase auth stdout found plugin firebase core at root pub cache hosted pub dartlang org firebase core stdout found plugin firebase auth at root pub cache hosted pub dartlang org firebase auth stdout found plugin firebase core at root pub cache hosted pub dartlang org firebase core stdout generating tmp flutter module test puxevh hello android app src main java io flutter plugins generatedpluginregistrant java stdout running gradle task assemblerelease stdout gradle properties already sets android stdout using gradle from tmp flutter module test puxevh hello android gradlew stdout 
tmp flutter module test puxevh hello android gradlew mode rwxr xr x stdout executing tmp flutter module test puxevh hello android gradlew pverbose true ptarget platform android arm ptarget lib main dart ptrack widget creation true ptree shake icons true assemblerelease stdout starting a gradle daemon stopped daemons could not be reused use status for details stdout checking the license for package android sdk platform in opt android sdk licenses stdout license for package android sdk platform accepted stdout preparing install android sdk platform revision stdout install android sdk platform revision ready stdout installing android sdk platform in opt android sdk platforms android stdout install android sdk platform revision complete stdout install android sdk platform revision finished stdout task app compileflutterbuildrelease stdout executing git c log showsignature false log n pretty format h stdout exit code from git c log showsignature false log n pretty format h stdout stdout executing git tag contains head stdout exit code from git tag contains head stdout executing git describe match pre first parent long tags stdout exit code from git describe match pre first parent long tags stdout pre stdout executing git rev parse abbrev ref symbolic u stdout exit code from git rev parse abbrev ref symbolic u stdout fatal head does not point to a branch stdout executing git rev parse abbrev ref head stdout exit code from git rev parse abbrev ref head stdout head stdout artifact instance of androidmavenartifacts is not required skipping update stdout artifact instance of androidgensnapshotartifacts is not required skipping update stdout artifact instance of androidinternalbuildartifacts is not required skipping update stdout artifact instance of iosengineartifacts is not required skipping update stdout artifact instance of flutterwebsdk is not required skipping update stdout artifact instance of windowsengineartifacts is not required skipping update stdout artifact 
instance of macosengineartifacts is not required skipping update stdout artifact instance of linuxengineartifacts is not required skipping update stdout artifact instance of linuxfuchsiasdkartifacts is not required skipping update stdout artifact instance of macosfuchsiasdkartifacts is not required skipping update stdout artifact instance of flutterrunnersdkartifacts is not required skipping update stdout artifact instance of flutterrunnerdebugsymbols is not required skipping update stdout artifact instance of materialfonts is not required skipping update stdout artifact instance of gradlewrapper is not required skipping update stdout artifact instance of androidmavenartifacts is not required skipping update stdout artifact instance of androidgensnapshotartifacts is not required skipping update stdout artifact instance of androidinternalbuildartifacts is not required skipping update stdout artifact instance of iosengineartifacts is not required skipping update stdout artifact instance of flutterwebsdk is not required skipping update stdout artifact instance of fluttersdk is not required skipping update stdout artifact instance of windowsengineartifacts is not required skipping update stdout artifact instance of macosengineartifacts is not required skipping update stdout artifact instance of linuxengineartifacts is not required skipping update stdout artifact instance of linuxfuchsiasdkartifacts is not required skipping update stdout artifact instance of macosfuchsiasdkartifacts is not required skipping update stdout artifact instance of flutterrunnersdkartifacts is not required skipping update stdout artifact instance of flutterrunnerdebugsymbols is not required skipping update stdout artifact instance of iosusbartifacts is not required skipping update stdout artifact instance of iosusbartifacts is not required skipping update stdout artifact instance of iosusbartifacts is not required skipping update stdout artifact instance of iosusbartifacts is not required 
skipping update stdout artifact instance of iosusbartifacts is not required skipping update stdout artifact instance of fontsubsetartifacts is not required skipping update stdout initializing file store stdout kernel snapshot starting due to stdout tmp flutter sdk bin cache dart sdk bin dart tmp flutter sdk bin cache artifacts engine linux frontend server dart snapshot sdk root tmp flutter sdk bin cache artifacts engine common flutter patched sdk product target flutter ddart developer causal async stacks false ddart vm profile false ddart vm product true bytecode options source positions aot tfa packages tmp flutter module test puxevh hello packages output dill tmp flutter module test puxevh hello dart tool flutter build app dill depfile tmp flutter module test puxevh hello dart tool flutter build kernel snapshot d package hello main dart stdout kernel snapshot complete stdout aot android asset bundle starting due to stdout android aot release android arm starting due to invalidatedreason inputchanged stdout executing tmp flutter sdk bin cache artifacts engine android arm release linux gen snapshot deterministic snapshot kind app aot elf elf tmp flutter module test puxevh hello dart tool flutter build armeabi app so strip no sim use hardfp no use integer division no causal async stacks lazy async stacks tmp flutter module test puxevh hello dart tool flutter build app dill stdout running command tmp flutter sdk bin cache dart sdk bin dart tmp flutter sdk bin cache artifacts engine linux const finder dart snapshot kernel file tmp flutter module test puxevh hello dart tool flutter build app dill class library uri package flutter src widgets icon data dart class name icondata stdout running font subset tmp flutter sdk bin cache artifacts engine linux font subset tmp flutter module test puxevh hello build app intermediates flutter release flutter assets fonts materialicons regular ttf tmp flutter sdk bin cache artifacts material fonts materialicons regular ttf using 
codepoints stdout aot android asset bundle complete stdout android aot release android arm complete stdout android aot bundle release android arm starting due to invalidatedreason inputchanged stdout android aot bundle release android arm complete stdout persisting file store stdout done persisting file store stdout build succeeded stdout flutter assemble took stdout task app packlibsflutterbuildrelease stdout task app prebuild up to date stdout task app prereleasebuild up to date stdout task firebase auth prebuild up to date stdout task firebase auth prereleasebuild up to date stdout task firebase core prebuild up to date stdout task firebase core prereleasebuild up to date stdout task firebase auth packagereleaserenderscript no source stdout task firebase core packagereleaserenderscript no source stdout task app compilereleaserenderscript no source stdout task app checkreleasemanifest stdout task app generatereleasebuildconfig stdout task app cleanmergereleaseassets up to date stdout task firebase core compilereleaseaidl no source stdout task firebase auth compilereleaseaidl no source stdout task app compilereleaseaidl no source stdout task app mergereleaseshaders stdout task app compilereleaseshaders stdout task app generatereleaseassets stdout task firebase auth mergereleaseshaders stdout task firebase auth compilereleaseshaders stdout task firebase auth generatereleaseassets stdout task firebase auth packagereleaseassets stdout task firebase core mergereleaseshaders stdout task firebase core compilereleaseshaders stdout task firebase core generatereleaseassets stdout task firebase core packagereleaseassets stdout task app mergereleaseassets stdout task app copyflutterassetsrelease stdout task app mainapklistpersistencerelease stdout task app generatereleaseresvalues stdout task app generatereleaseresources stdout task firebase auth generatereleaseresvalues stdout task firebase auth compilereleaserenderscript no source stdout task firebase auth 
generatereleaseresources stdout task firebase auth packagereleaseresources stdout task firebase core generatereleaseresvalues stdout task firebase core compilereleaserenderscript no source stdout task firebase core generatereleaseresources stdout task firebase core packagereleaseresources stdout task app mergereleaseresources stdout task app createreleasecompatiblescreenmanifests stdout task firebase auth checkreleasemanifest stdout task firebase core checkreleasemanifest stdout task firebase core processreleasemanifest stdout task firebase auth processreleasemanifest stdout task app processreleasemanifest stdout tmp flutter module test puxevh hello android app src main androidmanifest xml warning stdout activity com google firebase auth internal federatedsigninactivity android launchmode was tagged at androidmanifest xml to replace other declarations but no other declaration present stdout task firebase auth parsereleaselibraryresources stdout task firebase core parsereleaselibraryresources stdout task firebase auth generatereleasebuildconfig stdout task firebase core generatereleasebuildconfig stdout task firebase core generatereleaserfile stdout task firebase core javaprecompilerelease stdout task firebase core compilereleasejavawithjavac stderr note root pub cache hosted pub dartlang org firebase core android src main java io flutter plugins firebase core firebasecoreplugin java uses unchecked or unsafe operations stderr note recompile with xlint unchecked for details stdout task firebase auth generatereleaserfile stdout task app processreleaseresources stdout task firebase core bundlelibcompilerelease stdout task firebase auth javaprecompilerelease stdout task firebase auth compilereleasejavawithjavac failed stderr root pub cache hosted pub dartlang org firebase auth android src main java io flutter plugins firebaseauth firebaseauthplugin java error package android support annotation does not exist stderr import android support annotation nonnull stderr stderr 
root pub cache hosted pub dartlang org firebase auth android src main java io flutter plugins firebaseauth firebaseauthplugin java error package android support annotation does not exist stderr import android support annotation nullable stderr stderr root pub cache hosted pub dartlang org firebase auth android src main java io flutter plugins firebaseauth firebaseauthplugin java error cannot find symbol stderr private void reportexception result result nullable exception exception stderr stderr symbol class nullable stderr location class firebaseauthplugin stderr root pub cache hosted pub dartlang org firebase auth android src main java io flutter plugins firebaseauth firebaseauthplugin java error cannot find symbol stderr public void oncomplete nonnull task task stderr stderr symbol class nonnull stderr location class firebaseauthplugin signincompletelistener stderr root pub cache hosted pub dartlang org firebase auth android src main java io flutter plugins firebaseauth firebaseauthplugin java error cannot find symbol stderr public void oncomplete nonnull task task stderr stderr symbol class nonnull stderr location class firebaseauthplugin taskvoidcompletelistener stderr root pub cache hosted pub dartlang org firebase auth android src main java io flutter plugins firebaseauth firebaseauthplugin java error cannot find symbol stderr public void oncomplete nonnull task task stderr stderr symbol class nonnull stderr location class firebaseauthplugin getsigninmethodscompletelistener stderr root pub cache hosted pub dartlang org firebase auth android src main java io flutter plugins firebaseauth firebaseauthplugin java error cannot find symbol stderr public void oncomplete nonnull task task stderr stderr symbol class nonnull stderr root pub cache hosted pub dartlang org firebase auth android src main java io flutter plugins firebaseauth firebaseauthplugin java error cannot find symbol stderr public void oncomplete nonnull task task stderr stderr symbol class nonnull 
stderr root pub cache hosted pub dartlang org firebase auth android src main java io flutter plugins firebaseauth firebaseauthplugin java error cannot find symbol stderr public void onauthstatechanged nonnull firebaseauth firebaseauth stderr stderr symbol class nonnull stderr note root pub cache hosted pub dartlang org firebase auth android src main java io flutter plugins firebaseauth firebaseauthplugin java uses unchecked or unsafe operations stderr note recompile with xlint unchecked for details stderr errors stderr failure build failed with an exception stderr what went wrong stderr execution failed for task firebase auth compilereleasejavawithjavac stderr compilation failed see the compiler error output for details stderr try stderr run with stacktrace option to get the stack trace run with info or debug option to get more log output run with scan to get full insights stderr get more help at stderr build failed in stdout deprecated gradle features were used in this build making it incompatible with gradle stdout use warning mode all to show the individual deprecation warnings stdout see stdout actionable tasks executed up to date stdout running gradle task assemblerelease completed in stdout the built failed likely due to androidx incompatibilities in a plugin the tool is about to try using jetfier to solve the incompatibility stdout ✏️ creating android settings aar gradle stdout ✏️ creating android settings aar gradle completed in stdout flutter tried to create the file android settings aar gradle but failed stdout to manually update settings gradle follow these steps stdout stdout copy settings gradle as settings aar gradle stdout remove the following code from settings aar gradle stdout stdout def localpropertiesfile new file rootproject projectdir local properties stdout def properties new properties stdout stdout assert localpropertiesfile exists stdout localpropertiesfile withreader utf reader properties load reader stdout stdout def fluttersdkpath 
properties getproperty flutter sdk stdout assert fluttersdkpath null flutter sdk not set in local properties stdout apply from fluttersdkpath packages flutter tools gradle app plugin loader gradle stdout stdout flutter apk took stderr please create the file and run this command again stderr stderr throwtoolexit package flutter tools src base common dart stderr createsettingsaargradle package flutter tools src android gradle dart stderr buildgradleapp package flutter tools src android gradle dart stderr buildgradleapp package flutter tools src android gradle dart stderr rootrununary dart async zone dart stderr customzone rununary dart async zone dart stderr futurelistener handlevalue dart async future impl dart stderr future propagatetolisteners handlevaluecallback dart async future impl dart stderr future propagatetolisteners dart async future impl dart stderr future completewithvalue dart async future impl dart stderr future asynccompletewithvalue dart async future impl dart stderr rootrun dart async zone dart stderr customzone run dart async zone dart stderr customzone runguarded dart async zone dart stderr customzone bindcallbackguarded dart async zone dart stderr microtaskloop dart async schedule microtask dart stderr startmicrotaskloop dart async schedule microtask dart stderr runpendingimmediatecallback dart isolate patch isolate patch dart stderr rawreceiveportimpl handlemessage dart isolate patch isolate patch dart stderr stderr tmp flutter sdk bin flutter exit code ═══════════╡ ••• checking running dart processes after task ••• ╞════════════ cannot list processes on this system ps not available cleaning up after task tmp flutter sdk bin cache dart sdk bin dart exit code cleaning up system after task executing tmp flutter sdk bin flutter doctor v in tmp flutter sdk dev devicelab stdout flutter channel unknown pre on linux locale en us utf stdout • flutter version pre at tmp flutter sdk stdout • framework revision minutes ago stdout • engine revision stdout 
• dart version build dev stdout stdout android toolchain develop for android devices android sdk version stdout • android sdk at opt android sdk stdout • platform android build tools stdout • android home opt android sdk stdout • android sdk root opt android sdk stdout • java binary at usr bin java stdout • java version openjdk runtime environment build stdout • all android licenses accepted stdout stdout chrome develop for the web stdout • chrome at google chrome stdout stdout linux toolchain develop for linux desktop stdout ✗ clang is not installed stdout • gnu make stdout stdout android studio not installed stdout • android studio not found download from stdout or visit for detailed instructions stdout stdout connected device available stdout • linux • linux • linux • linux stdout • web server • web server • web javascript • flutter tools stdout • chrome • chrome • web javascript • google chrome stdout stdout doctor found issues in categories tmp flutter sdk bin flutter exit code telling gradle to shut down java home usr executing chmod a x tmp flutter devicelab shutdown gradle zjnlyj gradlew in tmp flutter sdk dev devicelab chmod exit code executing tmp flutter devicelab shutdown gradle zjnlyj gradlew stop in tmp flutter devicelab shutdown gradle zjnlyj with environment java home usr stdout stopping daemon s stdout daemon stopped tmp flutter devicelab shutdown gradle zjnlyj gradlew exit code task result success false reason executable tmp flutter sdk bin flutter failed with exit code ════════════════╡ ••• finished task gradle jetifier test ••• ╞════════════════ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ error last command exited with expected zero command bin cache dart sdk bin dart bin run dart t gradle jetifier test relative working directory dev devicelab ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ | 1 |
499,980 | 14,483,665,917 | IssuesEvent | 2020-12-10 15:25:33 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.disneyplus.com - see bug description | browser-focus-geckoview engine-gecko priority-important | <!-- @browser: Firefox Mobile 79.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:79.0) Gecko/79.0 Firefox/79.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/63387 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://www.disneyplus.com/en-gb/sign-up?type=standard
**Browser / Version**: Firefox Mobile 79.0
**Operating System**: Android 8.0.0
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: nothing is appearing. screen is totally black.
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.disneyplus.com - see bug description - <!-- @browser: Firefox Mobile 79.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:79.0) Gecko/79.0 Firefox/79.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/63387 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://www.disneyplus.com/en-gb/sign-up?type=standard
**Browser / Version**: Firefox Mobile 79.0
**Operating System**: Android 8.0.0
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: nothing is appearing. screen is totally black.
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_test | see bug description url browser version firefox mobile operating system android tested another browser yes chrome problem type something else description nothing is appearing screen is totally black steps to reproduce browser configuration none from with ❤️ | 0 |
174,372 | 21,263,648,437 | IssuesEvent | 2022-04-13 07:50:20 | m-ookouchi/bulletinborad | https://api.github.com/repos/m-ookouchi/bulletinborad | closed | jquery-3.3.1.slim.min.js: 1 vulnerabilities (highest severity is: 6.1) | security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-3.3.1.slim.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.slim.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.slim.min.js</a></p>
<p>Path to dependency file: /templates/base.html</p>
<p>Path to vulnerable library: /templates/base.html</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/m-ookouchi/bulletinborad/commit/d535348fa26c63e3ce2970995724bc792dced692">d535348fa26c63e3ce2970995724bc792dced692</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2020-11023](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | jquery-3.3.1.slim.min.js | Direct | jquery - 3.5.0;jquery-rails - 4.4.0 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-11023</summary>
### Vulnerable Library - <b>jquery-3.3.1.slim.min.js</b></p>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.slim.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.slim.min.js</a></p>
<p>Path to dependency file: /templates/base.html</p>
<p>Path to vulnerable library: /templates/base.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.3.1.slim.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/m-ookouchi/bulletinborad/commit/d535348fa26c63e3ce2970995724bc792dced692">d535348fa26c63e3ce2970995724bc792dced692</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440">https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jquery - 3.5.0;jquery-rails - 4.4.0</p>
</p>
<p></p>
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
<!-- <REMEDIATE>[{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"3.3.1","packageFilePaths":["/templates/base.html"],"isTransitiveDependency":false,"dependencyTree":"jquery:3.3.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jquery - 3.5.0;jquery-rails - 4.4.0","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2020-11023","vulnerabilityDetails":"In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing \u003coption\u003e elements from untrusted sources - even after sanitizing it - to one of jQuery\u0027s DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}]</REMEDIATE> --> | True | jquery-3.3.1.slim.min.js: 1 vulnerabilities (highest severity is: 6.1) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-3.3.1.slim.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.slim.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.slim.min.js</a></p>
<p>Path to dependency file: /templates/base.html</p>
<p>Path to vulnerable library: /templates/base.html</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/m-ookouchi/bulletinborad/commit/d535348fa26c63e3ce2970995724bc792dced692">d535348fa26c63e3ce2970995724bc792dced692</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2020-11023](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | jquery-3.3.1.slim.min.js | Direct | jquery - 3.5.0;jquery-rails - 4.4.0 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-11023</summary>
### Vulnerable Library - <b>jquery-3.3.1.slim.min.js</b></p>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.slim.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.slim.min.js</a></p>
<p>Path to dependency file: /templates/base.html</p>
<p>Path to vulnerable library: /templates/base.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.3.1.slim.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/m-ookouchi/bulletinborad/commit/d535348fa26c63e3ce2970995724bc792dced692">d535348fa26c63e3ce2970995724bc792dced692</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440">https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jquery - 3.5.0;jquery-rails - 4.4.0</p>
</p>
<p></p>
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
<!-- <REMEDIATE>[{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"3.3.1","packageFilePaths":["/templates/base.html"],"isTransitiveDependency":false,"dependencyTree":"jquery:3.3.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jquery - 3.5.0;jquery-rails - 4.4.0","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2020-11023","vulnerabilityDetails":"In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing \u003coption\u003e elements from untrusted sources - even after sanitizing it - to one of jQuery\u0027s DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}]</REMEDIATE> --> | non_test | jquery slim min js vulnerabilities highest severity is vulnerable library jquery slim min js javascript library for dom operations library home page a href path to dependency file templates base html path to vulnerable library templates base html found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available medium jquery slim min js direct jquery jquery rails details cve vulnerable library jquery slim min js javascript library for dom operations library home page a href path to dependency file templates base html path to vulnerable library templates base html dependency hierarchy x jquery slim min js vulnerable library found in head commit a href found in base branch main vulnerability details in jquery versions greater than or equal to and before passing html containing elements from untrusted sources even after sanitizing 
it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery jquery rails step up your open source security game with whitesource istransitivedependency false dependencytree jquery isminimumfixversionavailable true minimumfixversion jquery jquery rails isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails in jquery versions greater than or equal to and before passing html containing elements from untrusted sources even after sanitizing it to one of jquery dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery vulnerabilityurl | 0 |
770,396 | 27,038,747,012 | IssuesEvent | 2023-02-13 02:05:00 | CanberraOceanRacingClub/namadgi3 | https://api.github.com/repos/CanberraOceanRacingClub/namadgi3 | closed | Port chart plotter pedestal | Priority_1 | Crew from the Sydney to Opua crossing have reported...
> Good morning Steven
>
> Attached are photos of the damage. The chart plotter has been removed, connections taped and stub covered with a plastic bag, reinforced by tape.
>
> There is some movement in the starboard chart plotter suggesting it may also be on the way out.
>
> Consider that if a stainless steel fabricator can be identified locally, both plates should be replaced.
>
> Kind regards
>
> Paul


@pauljones17 @mrmrmartin will be interested | 1.0 | Port chart plotter pedestal - Crew from the Sydney to Opua crossing have reported...
> Good morning Steven
>
> Attached are photos of the damage. The chart plotter has been removed, connections taped and stub covered with a plastic bag, reinforced by tape.
>
> There is some movement in the starboard chart plotter suggesting it may also be on the way out.
>
> Consider that if a stainless steel fabricator can be identified locally, both plates should be replaced.
>
> Kind regards
>
> Paul


@pauljones17 @mrmrmartin will be interested | non_test | port chart plotter pedestal crew from the sydney to opua crossing have reported good morning steven attached are photos of the damage the chart plotter has been removed connections taped and stub covered with a plastic bag reinforced by tape there is some movement in the starboard chart plotter suggesting it may also be on the way out consider that if a stainless steel fabricator can be identified locally both plates should be replaced kind regards paul mrmrmartin will be interested | 0 |
170,426 | 13,186,920,410 | IssuesEvent | 2020-08-13 01:45:43 | kubernetes/test-infra | https://api.github.com/repos/kubernetes/test-infra | closed | Make critical jobs Guaranteed Pod QOS: pull-kubernetes-e2e-gce-ubuntu-containerd | area/jobs area/release-eng kind/cleanup sig/release sig/testing | **What should be cleaned up or changed**:
This is part of https://github.com/kubernetes/test-infra/issues/18530
The following jobs should be Guaranteed Pod QOS, meaning they should have CPU and memory resource limits, and matching resource requests:
- pull-kubernetes-e2e-gce-ubuntu-containerd
These jobs run on (google.com only) k8s-prow-build, so @spiffxp has provided the following guess:
- suggest 4 cpu, slightly above 13 Gi (14?) mem
General steps to follow:
- update the job definitions in [/config/jobs](/config/jobs) to have matching CPU and memory limits and requests
- e.g. ([ci-kubernetes-e2e-gci-gce has a `resources:` field with matching entries](https://github.com/kubernetes/test-infra/blob/2eac54f721ed58479a3126e4f9ca5cbfcc73821b/config/jobs/kubernetes/sig-cloud-provider/gcp/gcp-gce.yaml#L357-L394))
- open a pull request, include a link to this issue and cc `@kubernetes/ci-signal` in the description
- keep an eye on the jobs for the next few days, and if they start failing more than usual, open followup pull requests to raise resources
- can look at recent prow.k8s.io runs (e.g. https://prow.k8s.io/?job=ci-kubernetes-e2e-gci-gce)
- can look at job history (e.g. https://prow.k8s.io/job-history/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce)
- can look at testgrid dashboards (e.g. https://testgrid.k8s.io/sig-release-master-blocking#gce-cos-master-default)
- after the job has remained stable for a few days, declare victory
/sig testing
/sig release
/area jobs
/area release-eng | 1.0 | Make critical jobs Guaranteed Pod QOS: pull-kubernetes-e2e-gce-ubuntu-containerd - **What should be cleaned up or changed**:
This is part of https://github.com/kubernetes/test-infra/issues/18530
The following jobs should be Guaranteed Pod QOS, meaning they should have CPU and memory resource limits, and matching resource requests:
- pull-kubernetes-e2e-gce-ubuntu-containerd
These jobs run on (google.com only) k8s-prow-build, so @spiffxp has provided the following guess:
- suggest 4 cpu, slightly above 13 Gi (14?) mem
General steps to follow:
- update the job definitions in [/config/jobs](/config/jobs) to have matching CPU and memory limits and requests
- e.g. ([ci-kubernetes-e2e-gci-gce has a `resources:` field with matching entries](https://github.com/kubernetes/test-infra/blob/2eac54f721ed58479a3126e4f9ca5cbfcc73821b/config/jobs/kubernetes/sig-cloud-provider/gcp/gcp-gce.yaml#L357-L394))
- open a pull request, include a link to this issue and cc `@kubernetes/ci-signal` in the description
- keep an eye on the jobs for the next few days, and if they start failing more than usual, open followup pull requests to raise resources
- can look at recent prow.k8s.io runs (e.g. https://prow.k8s.io/?job=ci-kubernetes-e2e-gci-gce)
- can look at job history (e.g. https://prow.k8s.io/job-history/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce)
- can look at testgrid dashboards (e.g. https://testgrid.k8s.io/sig-release-master-blocking#gce-cos-master-default)
- after the job has remained stable for a few days, declare victory
/sig testing
/sig release
/area jobs
/area release-eng | test | make critical jobs guaranteed pod qos pull kubernetes gce ubuntu containerd what should be cleaned up or changed this is part of the following jobs should be guaranteed pod qos meaning they should have cpu and memory resource limits and matching resource requests pull kubernetes gce ubuntu containerd these jobs run on google com only prow build so spiffxp has provided the following guess suggest cpu slightly above gi mem general steps to follow update the job definitions in config jobs to have matching cpu and memory limits and requests e g open a pull request include a link to this issue and cc kubernetes ci signal in the description keep an eye on the jobs for the next few days and if they start failing more than usual open followup pull requests to raise resources can look at recent prow io runs e g can look at job history e g can look at testgrid dashboards e g after the job has remained stable for a few days declare victory sig testing sig release area jobs area release eng | 1 |
297,145 | 25,602,811,561 | IssuesEvent | 2022-12-01 21:54:48 | bcgov/performance | https://api.github.com/repos/bcgov/performance | closed | SEE TICKET 794 - My Team > Team Members: add ability to edit/delete "excused" status | enhancement test failed | ### This ticket has been replaced by ticket 794
Currently a supervisor can excuse an employee for a specified date range. However, they can't edit or delete that date range once created. They need to have this ability added. | 1.0 | SEE TICKET 794 - My Team > Team Members: add ability to edit/delete "excused" status - ### This ticket has been replaced by ticket 794
Currently a supervisor can excuse an employee for a specified date range. However, they can't edit or delete that date range once created. They need to have this ability added. | test | see ticket my team team members add ability to edit delete excused status this ticket has been replaced by ticket currently a supervisor can excuse an employee for a specified date range however they can t edit or delete that date range once created they need to have this ability added | 1 |
549,738 | 16,099,182,497 | IssuesEvent | 2021-04-27 07:02:55 | kubernetes-sigs/azuredisk-csi-driver | https://api.github.com/repos/kubernetes-sigs/azuredisk-csi-driver | closed | support attacher v3.0.0 | do-not-merge/hold kind/feature lifecycle/stale priority/important-longterm | **Is your feature request related to a problem?/Why is this needed**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
**Describe the solution you'd like in detail**
<!-- A clear and concise description of what you want to happen. -->
I found minimum k8s version for csi-attacher v3.0.0 is 1.17: https://github.com/kubernetes-csi/external-attacher/releases/tag/v3.0.0
Why do we want to upgrade this version? Our CSI drivers still supports 1.14+ versions
And minimum supported k8s version for resizer v1.0.0 is 1.16: https://github.com/kubernetes-csi/external-resizer/releases/tag/v1.0.0
**Describe alternatives you've considered**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->
| 1.0 | support attacher v3.0.0 - **Is your feature request related to a problem?/Why is this needed**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
**Describe the solution you'd like in detail**
<!-- A clear and concise description of what you want to happen. -->
I found minimum k8s version for csi-attacher v3.0.0 is 1.17: https://github.com/kubernetes-csi/external-attacher/releases/tag/v3.0.0
Why do we want to upgrade this version? Our CSI drivers still supports 1.14+ versions
And minimum supported k8s version for resizer v1.0.0 is 1.16: https://github.com/kubernetes-csi/external-resizer/releases/tag/v1.0.0
**Describe alternatives you've considered**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->
| non_test | support attacher is your feature request related to a problem why is this needed describe the solution you d like in detail i found minimum version for csi attacher is why do we want to upgrade this version our csi drivers still supports versions and minimum supported version for resizer is describe alternatives you ve considered additional context | 0 |
194,010 | 14,667,214,784 | IssuesEvent | 2020-12-29 18:04:28 | github-vet/rangeloop-pointer-findings | https://api.github.com/repos/github-vet/rangeloop-pointer-findings | closed | itsivareddy/terrafrom-Oci: oci/autoscaling_auto_scaling_configuration_test.go; 14 LoC | fresh small test |
Found a possible issue in [itsivareddy/terrafrom-Oci](https://www.github.com/itsivareddy/terrafrom-Oci) at [oci/autoscaling_auto_scaling_configuration_test.go](https://github.com/itsivareddy/terrafrom-Oci/blob/075608a9e201ee0e32484da68d5ba5370dfde1be/oci/autoscaling_auto_scaling_configuration_test.go#L501-L514)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.
> reference to autoScalingConfigurationId is reassigned at line 505
[Click here to see the code in its original context.](https://github.com/itsivareddy/terrafrom-Oci/blob/075608a9e201ee0e32484da68d5ba5370dfde1be/oci/autoscaling_auto_scaling_configuration_test.go#L501-L514)
<details>
<summary>Click here to show the 14 line(s) of Go which triggered the analyzer.</summary>
```go
for _, autoScalingConfigurationId := range autoScalingConfigurationIds {
if ok := SweeperDefaultResourceId[autoScalingConfigurationId]; !ok {
deleteAutoScalingConfigurationRequest := oci_auto_scaling.DeleteAutoScalingConfigurationRequest{}
deleteAutoScalingConfigurationRequest.AutoScalingConfigurationId = &autoScalingConfigurationId
deleteAutoScalingConfigurationRequest.RequestMetadata.RetryPolicy = getRetryPolicy(true, "auto_scaling")
_, error := autoScalingClient.DeleteAutoScalingConfiguration(context.Background(), deleteAutoScalingConfigurationRequest)
if error != nil {
fmt.Printf("Error deleting AutoScalingConfiguration %s %s, It is possible that the resource is already deleted. Please verify manually \n", autoScalingConfigurationId, error)
continue
}
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 075608a9e201ee0e32484da68d5ba5370dfde1be
| 1.0 | itsivareddy/terrafrom-Oci: oci/autoscaling_auto_scaling_configuration_test.go; 14 LoC -
Found a possible issue in [itsivareddy/terrafrom-Oci](https://www.github.com/itsivareddy/terrafrom-Oci) at [oci/autoscaling_auto_scaling_configuration_test.go](https://github.com/itsivareddy/terrafrom-Oci/blob/075608a9e201ee0e32484da68d5ba5370dfde1be/oci/autoscaling_auto_scaling_configuration_test.go#L501-L514)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.
> reference to autoScalingConfigurationId is reassigned at line 505
[Click here to see the code in its original context.](https://github.com/itsivareddy/terrafrom-Oci/blob/075608a9e201ee0e32484da68d5ba5370dfde1be/oci/autoscaling_auto_scaling_configuration_test.go#L501-L514)
<details>
<summary>Click here to show the 14 line(s) of Go which triggered the analyzer.</summary>
```go
for _, autoScalingConfigurationId := range autoScalingConfigurationIds {
if ok := SweeperDefaultResourceId[autoScalingConfigurationId]; !ok {
deleteAutoScalingConfigurationRequest := oci_auto_scaling.DeleteAutoScalingConfigurationRequest{}
deleteAutoScalingConfigurationRequest.AutoScalingConfigurationId = &autoScalingConfigurationId
deleteAutoScalingConfigurationRequest.RequestMetadata.RetryPolicy = getRetryPolicy(true, "auto_scaling")
_, error := autoScalingClient.DeleteAutoScalingConfiguration(context.Background(), deleteAutoScalingConfigurationRequest)
if error != nil {
fmt.Printf("Error deleting AutoScalingConfiguration %s %s, It is possible that the resource is already deleted. Please verify manually \n", autoScalingConfigurationId, error)
continue
}
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 075608a9e201ee0e32484da68d5ba5370dfde1be
| test | itsivareddy terrafrom oci oci autoscaling auto scaling configuration test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message reference to autoscalingconfigurationid is reassigned at line click here to show the line s of go which triggered the analyzer go for autoscalingconfigurationid range autoscalingconfigurationids if ok sweeperdefaultresourceid ok deleteautoscalingconfigurationrequest oci auto scaling deleteautoscalingconfigurationrequest deleteautoscalingconfigurationrequest autoscalingconfigurationid autoscalingconfigurationid deleteautoscalingconfigurationrequest requestmetadata retrypolicy getretrypolicy true auto scaling error autoscalingclient deleteautoscalingconfiguration context background deleteautoscalingconfigurationrequest if error nil fmt printf error deleting autoscalingconfiguration s s it is possible that the resource is already deleted please verify manually n autoscalingconfigurationid error continue leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id | 1 |
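The pattern the analyzer flags above — taking the address of a `range` loop variable — can be shown in isolation. The snippet below is a generic sketch (the slice contents and names are invented, not taken from the flagged Terraform provider code): before Go 1.22 the loop variable is a single reused location, so storing `&id` directly would leave every stored pointer aliasing the final element; copying into a fresh local per iteration gives each pointer its own value.

```go
package main

import "fmt"

// collectPtrs returns a pointer to each element of ids. Copying the
// range variable into a fresh local before taking its address avoids
// the aliasing bug the analyzer warns about: prior to Go 1.22 the
// loop variable is one reused location that is reassigned on every
// iteration.
func collectPtrs(ids []string) []*string {
	var ptrs []*string
	for _, id := range ids {
		id := id // fresh variable per iteration; dropping this line reproduces the bug on Go < 1.22
		ptrs = append(ptrs, &id)
	}
	return ptrs
}

func main() {
	for _, p := range collectPtrs([]string{"ocid1", "ocid2", "ocid3"}) {
		fmt.Println(*p)
	}
}
```

The analogous fix in the flagged snippet would be copying `autoScalingConfigurationId` into a per-iteration local before assigning its address to the delete request.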
10,499 | 3,121,122,394 | IssuesEvent | 2015-09-05 10:20:52 | systemd/systemd | https://api.github.com/repos/systemd/systemd | closed | "make check" broken since unified cgroup rework if cgroupfs is not mounted | cgroups tests | Since commit efdb02375beb ("core: unified cgroup hierarchy support") I am seeing 4 test case failures during "make check", in `test-engine`, `test-path`, `test-sched-prio`, and `test-bus-creds`. They all fail for the same reason:
```
$ SYSTEMD_LOG_LEVEL=debug ./test-path
Cannot determine cgroup we are running in: Exec format error
Assertion 'r >= 0' failed at src/test/test-path.c:47, function setup_test(). Aborting.
Aborted (core dumped)
```
This negative `r` result comes from `manager_new(MANAGER_USER, true, &tmp);`
This only happens in a build chroot, the test programs run fine in my "real" system. Once I bind-mount `/sys/fs/cgroup` into the chroot it works again, but introducing this requirement would mean that we have to stop running these tests on package build as our production buildds don't do this. Was introducing this requirement deliberate, or at least unavoidable? If so I'm okay with disabling the tests, but I'd like to ask first. | 1.0 | "make check" broken since unified cgroup rework if cgroupfs is not mounted - Since commit efdb02375beb ("core: unified cgroup hierarchy support") I am seeing 4 test case failures during "make check", in `test-engine`, `test-path`, `test-sched-prio`, and `test-bus-creds`. They all fail for the same reason:
```
$ SYSTEMD_LOG_LEVEL=debug ./test-path
Cannot determine cgroup we are running in: Exec format error
Assertion 'r >= 0' failed at src/test/test-path.c:47, function setup_test(). Aborting.
Aborted (core dumped)
```
This negative `r` result comes from `manager_new(MANAGER_USER, true, &tmp);`
This only happens in a build chroot, the test programs run fine in my "real" system. Once I bind-mount `/sys/fs/cgroup` into the chroot it works again, but introducing this requirement would mean that we have to stop running these tests on package build as our production buildds don't do this. Was introducing this requirement deliberate, or at least unavoidable? If so I'm okay with disabling the tests, but I'd like to ask first. | test | make check broken since unified cgroup rework if cgroupfs is not mounted since commit core unified cgroup hierarchy support i am seeing test case failures during make check in test engine test path test sched prio and test bus creds they all fail for the same reason systemd log level debug test path cannot determine cgroup we are running in exec format error assertion r failed at src test test path c function setup test aborting aborted core dumped this negative r result comes from manager new manager user true tmp this only happens in a build chroot the test programs run fine in my real system once i bind mount sys fs cgroup into the chroot it works again but introducing this requirement would mean that we have to stop running these tests on package build as our production buildds don t do this was introducing this requirement deliberate or at least unavoidable if so i m okay with disabling the tests but i d like to ask first | 1 |
310,407 | 26,715,005,681 | IssuesEvent | 2023-01-28 11:41:12 | prgrms-be-devcourse/BE-03-BlackDogBucks | https://api.github.com/repos/prgrms-be-devcourse/BE-03-BlackDogBucks | closed | [Payment] The charge-card balance can be topped up. | :sparkles: feat :white_check_mark: test | ### Purpose
Top up the balance of a charge card the user holds.
### Tasks
- [x] The charge-card balance is updated.
- [x] A coupon is issued when 50,000 KRW or more is topped up.
### Completion criteria
- Create an API that tops up the card balance
- Write test code | 1.0 | [Payment] The charge-card balance can be topped up. - ### Purpose
Top up the balance of a charge card the user holds.
### Tasks
- [x] The charge-card balance is updated.
- [x] A coupon is issued when 50,000 KRW or more is topped up.
### Completion criteria
- Create an API that tops up the card balance
- Write test code | test | the charge card balance can be topped up purpose top up the balance of a charge card the user holds tasks the charge card balance is updated a coupon is issued when topping up or more completion criteria create an api that tops up the card balance write test code | 1
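The completion criteria in the record above (originally in Korean) ask for a top-up API that updates a charge card's balance and issues a coupon for top-ups of 50,000 KRW or more. A minimal Go sketch of that rule follows — every name, type, and signature here is invented for illustration, since the ticket shows none of the project's actual code:

```go
package main

import (
	"errors"
	"fmt"
)

// couponThreshold is the top-up amount (in KRW) at or above which a
// coupon is issued, per the ticket's acceptance criteria.
const couponThreshold = 50_000

// ChargeCard holds a balance in KRW. Hypothetical type for this sketch.
type ChargeCard struct {
	Balance int64
}

// TopUp adds amount to the card balance and reports whether the
// top-up qualifies for a coupon (50,000 KRW or more).
func TopUp(c *ChargeCard, amount int64) (bool, error) {
	if amount <= 0 {
		return false, errors.New("top-up amount must be positive")
	}
	c.Balance += amount
	return amount >= couponThreshold, nil
}

func main() {
	card := &ChargeCard{}
	issued, err := TopUp(card, 50_000)
	if err != nil {
		panic(err)
	}
	fmt.Println(card.Balance, issued) // 50000 true
}
```

In a real service this check would sit behind the charge API endpoint the ticket asks for, with the coupon issuance handled as a separate side effect.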
134,511 | 30,036,348,802 | IssuesEvent | 2023-06-27 13:00:21 | sourcegraph/sourcegraph | https://api.github.com/repos/sourcegraph/sourcegraph | closed | Allow people to scroll up during Cody response generation | cody/vscode | Feedback from @steveyegge in Slack: https://sourcegraph.slack.com/archives/C05AGQYD528/p1686981156410759
> Some feedback on auto-scrolling in the chat window: Often when I ask Cody a question, I’m interested in reading its answer while it’s still generating.
> But by default we keep it scrolled to the bottom, even if you are trying to scroll up to read from the top. | 1.0 | Allow people to scroll up during Cody response generation - Feedback from @steveyegge in Slack: https://sourcegraph.slack.com/archives/C05AGQYD528/p1686981156410759
> Some feedback on auto-scrolling in the chat window: Often when I ask Cody a question, I’m interested in reading its answer while it’s still generating.
> But by default we keep it scrolled to the bottom, even if you are trying to scroll up to read from the top. | non_test | allow people to scroll up during cody response generation feedback from steveyegge in slack some feedback on auto scrolling in the chat window often when i ask cody a question i’m interested in reading its answer while it’s still generating but by default we keep it scrolled to the bottom even if you are trying to scroll up to read from the top | 0 |
306,365 | 26,461,939,123 | IssuesEvent | 2023-01-16 18:31:08 | Azure/ResourceModules | https://api.github.com/repos/Azure/ResourceModules | closed | [PSRule] Run PSRule pre-flight validation on the whole library (scheduled pipeline) | enhancement [cat] pipelines [cat] testing [cat] github | This issue is about implementing a scheduled pipeline to run PSRule validation checks on the whole repository.
This is useful to provide an overview of all failing checks per module over time (if any).
A status badge should also be added to the repo home page.
Note: the same has been planned for Linter and broken link checks as commented in issue #2363
PR https://github.com/Azure/ResourceModules/pull/2094 hosts a PoC triggered on pull request. | 1.0 | [PSRule] Run PSRule pre-flight validation on the whole library (scheduled pipeline) - This issue is about implementing a scheduled pipeline to run PSRule validation checks on the whole repository.
This is useful to provide an overview of all failing checks per module over time (if any).
A status badge should also be added to the repo home page.
Note: the same has been planned for Linter and broken link checks as commented in issue #2363
PR https://github.com/Azure/ResourceModules/pull/2094 hosts a PoC triggered on pull request. | test | run psrule pre flight validation on the whole library scheduled pipeline this issue is about implementing a scheduled pipeline to run psrule validation checks on the whole repository this is useful to provide an overview of all failing checks per module over time if any a status badge should also be added to the repo home page note the same has been planned for linter and broken link checks as commented in issue pr hosts a poc triggered on pull request | 1
309,322 | 26,660,340,305 | IssuesEvent | 2023-01-25 20:29:19 | hypha-dao/dho-web-client | https://api.github.com/repos/hypha-dao/dho-web-client | closed | Set up Staging | Testing | **AC**
- [ ] set up staging domain: staging-dao.hypha.earth/hypha
- [ ] create workflow for automatically upload the build to URL
- [ ] get keys from backend | 1.0 | Set up Staging - **AC**
- [ ] set up staging domain: staging-dao.hypha.earth/hypha
- [ ] create workflow for automatically upload the build to URL
- [ ] get keys from backend | test | set up staging ac set up staging domain staging dao hypha earth hypha create workflow for automatically upload the build to url get keys from backend | 1 |
66,174 | 8,886,457,137 | IssuesEvent | 2019-01-15 00:39:02 | NSLS-II/ophyd | https://api.github.com/repos/NSLS-II/ophyd | closed | add documentation: hints, kind, and labels | documentation | As part of the ALS Hackathon, add documentation for the ``hints``, ``kind``, and ``labels`` attributes.
TODO
* [x] add ``hints`` subsection under Device to device-overview.rst
* [x] add links to ``hints`` from other sections in ophyd and bluesky
* [x] add ``kind`` section to signals.rst (but that doc is very limited!)
* [ ] add links to ``kind`` from other sections in ophyd and bluesky
* [x] in ophyd.device.Device, ``Kind`` needs a link to its class
* [x] add ``labels`` section to signals.rst (but that doc is very limited!)
* [ ] add links to ``labels`` from other sections in ophyd and bluesky
Take into account that ``labels`` might be deprecated in the near future. | 1.0 | add documentation: hints, kind, and labels - As part of the ALS Hackathon, add documentation for the ``hints``, ``kind``, and ``labels`` attributes.
TODO
* [x] add ``hints`` subsection under Device to device-overview.rst
* [x] add links to ``hints`` from other sections in ophyd and bluesky
* [x] add ``kind`` section to signals.rst (but that doc is very limited!)
* [ ] add links to ``kind`` from other sections in ophyd and bluesky
* [x] in ophyd.device.Device, ``Kind`` needs a link to its class
* [x] add ``labels`` section to signals.rst (but that doc is very limited!)
* [ ] add links to ``labels`` from other sections in ophyd and bluesky
Take into account that ``labels`` might be deprecated in the near future. | non_test | add documentation hints kind and labels as part of the als hackathon add documentation for the hints kind and labels attributes todo add hints subsection under device to device overview rst add links to hints from other sections in ophyd and bluesky add kind section to signals rst but that doc is very limited add links to kind from other sections in ophyd and bluesky in ophyd device device kind needs a link to its class add labels section to signals rst but that doc is very limited add links to labels from other sections in ophyd and bluesky take into account that labels might be deprecated in the near future | 0 |
136,460 | 11,049,188,626 | IssuesEvent | 2019-12-09 22:58:11 | MangopearUK/European-Boating-Association--Theme | https://api.github.com/repos/MangopearUK/European-Boating-Association--Theme | closed | Test & audit post: CEVNI 5 Published | Testing: second round | Page URL: https://eba.eu.com/2015/07/cevni-5-published/
## Table of contents
- [x] **Task 1:** Perform automated audits _(10 tasks)_
- [x] **Task 2:** Manual standards & accessibility tests _(61 tasks)_
- [x] **Task 3:** Breakpoint testing _(15 tasks)_
- [x] **Task 4:** Re-run automated audits _(10 tasks)_
## 1: Perform automated audits _(10 tasks)_
### Lighthouse:
- [x] Run "Accessibility" audit in lighthouse _(using incognito tab)_
- [x] Run "Performance" audit in lighthouse _(using incognito tab)_
- [x] Run "Best practices" audit in lighthouse _(using incognito tab)_
- [x] Run "SEO" audit in lighthouse _(using incognito tab)_
- [x] Run "PWA" audit in lighthouse _(using incognito tab)_
### Pingdom
- [x] Run a full audit of the page's performance in Pingdom
### Browser's console
- [x] Check Chrome's console for errors
### Log results of audits
- [x] Screenshot snapshot of the lighthouse audits
- [x] Upload PDF of detailed lighthouse reports
- [x] Provide a screenshot of any console errors
## 2: Manual standards & accessibility tests _(61 tasks)_
### Forms
- [x] Give all form elements permanently visible labels
- [x] Place labels above form elements
- [x] Mark invalid fields clearly and provide associated error messages
- [x] Make forms as short as possible; offer shortcuts like autocompleting the address using the postcode
- [x] Ensure all form fields have the correct required state
- [x] Provide status and error messages as WAI-ARIA live regions
### Readability of content
- [x] Ensure page has good grammar
- [x] Ensure page content has been spell-checked
- [x] Make sure headings are in logical order
- [x] Ensure the same content is available across different devices and platforms
- [x] Begin long, multi-section documents with a table of contents
### Presentation
- [x] Make sure all content is formatted correctly
- [x] Avoid all-caps text
- [x] Make sure data tables wider than their container can be scrolled horizontally
- [x] Use the same design patterns to solve the same problems
- [x] Do not mark up subheadings/straplines with separate heading elements
### Links & buttons
#### Links
- [x] Check all links to ensure they work
- [x] Check all links to third party websites use `rel="noopener"`
- [x] Make sure the purpose of a link is clearly described: "read more" vs. "read more about accessibility"
- [x] Provide a skip link if necessary
- [x] Underline links — at least in body copy
- [x] Warn users of links that have unusual behaviors, like linking off-site, or loading a new tab (i.e. aria-label)
#### Buttons
- [x] Ensure primary calls to action are easy to recognize and reach
- [x] Provide clear, unambiguous focus styles
- [x] Ensure states (pressed, expanded, invalid, etc) are communicated to assistive software
- [x] Ensure disabled controls are not focusable
- [x] Make sure controls within hidden content are not focusable
- [x] Provide large touch "targets" for interactive elements
- [x] Make controls look like controls; give them strong perceived affordance
- [x] Use well-established, therefore recognizable, icons and symbols
### Assistive technology
- [x] Ensure content is not obscured through zooming
- [x] Support Windows high contrast mode (use images, not background images)
- [x] Provide alternative text for salient images
- [x] Make scrollable elements focusable for keyboard users
- [x] Ensure keyboard focus order is logical regarding visual layout
- [x] Match semantics to behavior for assistive technology users
- [x] Provide a default language and use lang="[ISO code]" for subsections in different languages
- [x] Inform the user when there are important changes to the application state
- [x] Do not hijack standard scrolling behavior
- [x] Do not instate "infinite scroll" by default; provide buttons to load more items
### General accessibility
- [x] Make sure text and background colors contrast sufficiently
- [x] Do not rely on color for differentiation of visual elements
- [x] Avoid images of text — text that cannot be translated, selected, or understood by assistive tech
- [x] Provide a print stylesheet
- [x] Honour requests to remove animation via the prefers-reduced-motion media query
### SEO
- [x] Ensure all pages have appropriate title
- [x] Ensure all pages have meta descriptions
- [x] Make content easier to find and improve search results with structured data [Read more](https://developers.google.com/search/docs/guides/prototype)
- [x] Check whether page should be appearing in sitemap
- [x] Make sure page has Facebook and Twitter large image previews set correctly
- [x] Check canonical links for page
- [x] Mark as cornerstone content?
### Performance
- [x] Ensure all CSS assets are minified and concatenated
- [x] Ensure all JS assets are minified and concatenated
- [x] Ensure all images are compressed
- [x] Where possible, remove redundant code
- [x] Ensure all SVG assets have been optimised
- [x] Make sure styles and scripts are not render blocking
- [x] Ensure large image assets are lazy loaded
### Other
- [x] Make sure all content belongs to a landmark element
- [x] Provide a manifest.json file for identifiable homescreen entries
## 3: Breakpoint testing _(15 tasks)_
### Desktop
- [x] Provide a full screenshot of **1920px** wide page
- [x] Provide a full screenshot of **1500px** wide page
- [x] Provide a full screenshot of **1280px** wide page
- [x] Provide a full screenshot of **1024px** wide page
### Tablet
- [x] Provide a full screenshot of **960px** wide page
- [x] Provide a full screenshot of **800px** wide page
- [x] Provide a full screenshot of **760px** wide page
- [x] Provide a full screenshot of **650px** wide page
### Mobile
- [x] Provide a full screenshot of **600px** wide page
- [x] Provide a full screenshot of **500px** wide page
- [x] Provide a full screenshot of **450px** wide page
- [x] Provide a full screenshot of **380px** wide page
- [x] Provide a full screenshot of **320px** wide page
- [x] Provide a full screenshot of **280px** wide page
- [x] Provide a full screenshot of **250px** wide page
## 4: Re-run automated audits _(10 tasks)_
### Lighthouse:
- [x] Run "Accessibility" audit in lighthouse _(using incognito tab)_
- [x] Run "Performance" audit in lighthouse _(using incognito tab)_
- [x] Run "Best practices" audit in lighthouse _(using incognito tab)_
- [x] Run "SEO" audit in lighthouse _(using incognito tab)_
- [x] Run "PWA" audit in lighthouse _(using incognito tab)_
### Pingdom
- [x] Run full audit of the page's performance in Pingdom
### Browser's console
- [x] Check Chrome's console for errors
### Log results of audits
- [x] Screenshot snapshot of the lighthouse audits
- [x] Upload PDF of detailed lighthouse reports
- [x] Provide a screenshot of any console errors | 1.0 | Test & audit post: CEVNI 5 Published - Page URL: https://eba.eu.com/2015/07/cevni-5-published/
## Table of contents
- [x] **Task 1:** Perform automated audits _(10 tasks)_
- [x] **Task 2:** Manual standards & accessibility tests _(61 tasks)_
- [x] **Task 3:** Breakpoint testing _(15 tasks)_
- [x] **Task 4:** Re-run automated audits _(10 tasks)_
## 1: Perform automated audits _(10 tasks)_
### Lighthouse:
- [x] Run "Accessibility" audit in lighthouse _(using incognito tab)_
- [x] Run "Performance" audit in lighthouse _(using incognito tab)_
- [x] Run "Best practices" audit in lighthouse _(using incognito tab)_
- [x] Run "SEO" audit in lighthouse _(using incognito tab)_
- [x] Run "PWA" audit in lighthouse _(using incognito tab)_
### Pingdom
- [x] Run full audit of the page's performance in Pingdom
### Browser's console
- [x] Check Chrome's console for errors
### Log results of audits
- [x] Screenshot snapshot of the lighthouse audits
- [x] Upload PDF of detailed lighthouse reports
- [x] Provide a screenshot of any console errors
## 2: Manual standards & accessibility tests _(61 tasks)_
### Forms
- [x] Give all form elements permanently visible labels
- [x] Place labels above form elements
- [x] Mark invalid fields clearly and provide associated error messages
- [x] Make forms as short as possible; offer shortcuts like autocompleting the address using the postcode
- [x] Ensure all form fields have the correct required state
- [x] Provide status and error messages as WAI-ARIA live regions
### Readability of content
- [x] Ensure page has good grammar
- [x] Ensure page content has been spell-checked
- [x] Make sure headings are in logical order
- [x] Ensure the same content is available across different devices and platforms
- [x] Begin long, multi-section documents with a table of contents
### Presentation
- [x] Make sure all content is formatted correctly
- [x] Avoid all-caps text
- [x] Make sure data tables wider than their container can be scrolled horizontally
- [x] Use the same design patterns to solve the same problems
- [x] Do not mark up subheadings/straplines with separate heading elements
### Links & buttons
#### Links
- [x] Check all links to ensure they work
- [x] Check all links to third party websites use `rel="noopener"`
- [x] Make sure the purpose of a link is clearly described: "read more" vs. "read more about accessibility"
- [x] Provide a skip link if necessary
- [x] Underline links — at least in body copy
- [x] Warn users of links that have unusual behaviors, like linking off-site, or loading a new tab (i.e. aria-label)
#### Buttons
- [x] Ensure primary calls to action are easy to recognize and reach
- [x] Provide clear, unambiguous focus styles
- [x] Ensure states (pressed, expanded, invalid, etc) are communicated to assistive software
- [x] Ensure disabled controls are not focusable
- [x] Make sure controls within hidden content are not focusable
- [x] Provide large touch "targets" for interactive elements
- [x] Make controls look like controls; give them strong perceived affordance
- [x] Use well-established, therefore recognizable, icons and symbols
### Assistive technology
- [x] Ensure content is not obscured through zooming
- [x] Support Windows high contrast mode (use images, not background images)
- [x] Provide alternative text for salient images
- [x] Make scrollable elements focusable for keyboard users
- [x] Ensure keyboard focus order is logical regarding visual layout
- [x] Match semantics to behavior for assistive technology users
- [x] Provide a default language and use lang="[ISO code]" for subsections in different languages
- [x] Inform the user when there are important changes to the application state
- [x] Do not hijack standard scrolling behavior
- [x] Do not instate "infinite scroll" by default; provide buttons to load more items
### General accessibility
- [x] Make sure text and background colors contrast sufficiently
- [x] Do not rely on color for differentiation of visual elements
- [x] Avoid images of text — text that cannot be translated, selected, or understood by assistive tech
- [x] Provide a print stylesheet
- [x] Honour requests to remove animation via the prefers-reduced-motion media query
### SEO
- [x] Ensure all pages have appropriate title
- [x] Ensure all pages have meta descriptions
- [x] Make content easier to find and improve search results with structured data [Read more](https://developers.google.com/search/docs/guides/prototype)
- [x] Check whether page should be appearing in sitemap
- [x] Make sure page has Facebook and Twitter large image previews set correctly
- [x] Check canonical links for page
- [x] Mark as cornerstone content?
### Performance
- [x] Ensure all CSS assets are minified and concatenated
- [x] Ensure all JS assets are minified and concatenated
- [x] Ensure all images are compressed
- [x] Where possible, remove redundant code
- [x] Ensure all SVG assets have been optimised
- [x] Make sure styles and scripts are not render blocking
- [x] Ensure large image assets are lazy loaded
### Other
- [x] Make sure all content belongs to a landmark element
- [x] Provide a manifest.json file for identifiable homescreen entries
## 3: Breakpoint testing _(15 tasks)_
### Desktop
- [x] Provide a full screenshot of **1920px** wide page
- [x] Provide a full screenshot of **1500px** wide page
- [x] Provide a full screenshot of **1280px** wide page
- [x] Provide a full screenshot of **1024px** wide page
### Tablet
- [x] Provide a full screenshot of **960px** wide page
- [x] Provide a full screenshot of **800px** wide page
- [x] Provide a full screenshot of **760px** wide page
- [x] Provide a full screenshot of **650px** wide page
### Mobile
- [x] Provide a full screenshot of **600px** wide page
- [x] Provide a full screenshot of **500px** wide page
- [x] Provide a full screenshot of **450px** wide page
- [x] Provide a full screenshot of **380px** wide page
- [x] Provide a full screenshot of **320px** wide page
- [x] Provide a full screenshot of **280px** wide page
- [x] Provide a full screenshot of **250px** wide page
## 4: Re-run automated audits _(10 tasks)_
### Lighthouse:
- [x] Run "Accessibility" audit in lighthouse _(using incognito tab)_
- [x] Run "Performance" audit in lighthouse _(using incognito tab)_
- [x] Run "Best practices" audit in lighthouse _(using incognito tab)_
- [x] Run "SEO" audit in lighthouse _(using incognito tab)_
- [x] Run "PWA" audit in lighthouse _(using incognito tab)_
### Pingdom
- [x] Run full audit of the page's performance in Pingdom
### Browser's console
- [x] Check Chrome's console for errors
### Log results of audits
- [x] Screenshot snapshot of the lighthouse audits
- [x] Upload PDF of detailed lighthouse reports
- [x] Provide a screenshot of any console errors | test | test audit post cevni published page url table of contents task perform automated audits tasks task manual standards accessibility tests tasks task breakpoint testing tasks task re run automated audits tasks perform automated audits tasks lighthouse run accessibility audit in lighthouse using incognito tab run performance audit in lighthouse using incognito tab run best practices audit in lighthouse using incognito tab run seo audit in lighthouse using incognito tab run pwa audit in lighthouse using incognito tab pingdom run full audit of the the page s performance in pingdom browser s console check chrome s console for errors log results of audits screenshot snapshot of the lighthouse audits upload pdf of detailed lighthouse reports provide a screenshot of any console errors manual standards accessibility tests tasks forms give all form elements permanently visible labels place labels above form elements mark invalid fields clearly and provide associated error messages make forms as short as possible offer shortcuts like autocompleting the address using the postcode ensure all form fields have the correct requried state provide status and error messages as wai aria live regions readability of content ensure page has good grammar ensure page content has been spell checked make sure headings are in logical order ensure the same content is available across different devices and platforms begin long multi section documents with a table of contents presentation make sure all content is formatted correctly avoid all caps text make sure data tables wider than their container can be scrolled horizontally use the same design patterns to solve the same problems do not mark up subheadings straplines with separate heading elements links buttons links check all links to ensure they work check all links to third party websites use rel noopener make sure the purpose of a link is clearly described read more vs read more 
about accessibility provide a skip link if necessary underline links — at least in body copy warn users of links that have unusual behaviors like linking off site or loading a new tab i e aria label buttons ensure primary calls to action are easy to recognize and reach provide clear unambiguous focus styles ensure states pressed expanded invalid etc are communicated to assistive software ensure disabled controls are not focusable make sure controls within hidden content are not focusable provide large touch targets for interactive elements make controls look like controls give them strong perceived affordance use well established therefore recognizable icons and symbols assistive technology ensure content is not obscured through zooming support windows high contrast mode use images not background images provide alternative text for salient images make scrollable elements focusable for keyboard users ensure keyboard focus order is logical regarding visual layout match semantics to behavior for assistive technology users provide a default language and use lang for subsections in different languages inform the user when there are important changes to the application state do not hijack standard scrolling behavior do not instate infinite scroll by default provide buttons to load more items general accessibility make sure text and background colors contrast sufficiently do not rely on color for differentiation of visual elements avoid images of text — text that cannot be translated selected or understood by assistive tech provide a print stylesheet honour requests to remove animation via the prefers reduced motion media query seo ensure all pages have appropriate title ensure all pages have meta descriptions make content easier to find and improve search results with structured data check whether page should be appearing in sitemap make sure page has facebook and twitter large image previews set correctly check canonical links for page mark as cornerstone content 
performance ensure all css assets are minified and concatenated ensure all js assets are minified and concatenated ensure all images are compressed where possible remove redundant code ensure all svg assets have been optimised make sure styles and scripts are not render blocking ensure large image assets are lazy loaded other make sure all content belongs to a landmark element provide a manifest json file for identifiable homescreen entries breakpoint testing tasks desktop provide a full screenshot of wide page provide a full screenshot of wide page provide a full screenshot of wide page provide a full screenshot of wide page tablet provide a full screenshot of wide page provide a full screenshot of wide page provide a full screenshot of wide page provide a full screenshot of wide page mobile provide a full screenshot of wide page provide a full screenshot of wide page provide a full screenshot of wide page provide a full screenshot of wide page provide a full screenshot of wide page provide a full screenshot of wide page provide a full screenshot of wide page re run automated audits tasks lighthouse run accessibility audit in lighthouse using incognito tab run performance audit in lighthouse using incognito tab run best practices audit in lighthouse using incognito tab run seo audit in lighthouse using incognito tab run pwa audit in lighthouse using incognito tab pingdom run full audit of the the page s performance in pingdom browser s console check chrome s console for errors log results of audits screenshot snapshot of the lighthouse audits upload pdf of detailed lighthouse reports provide a screenshot of any console errors | 1 |
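The five Lighthouse audits in the checklist above are usually run by hand from DevTools, but they can also be scripted. The sketch below only builds the command lines; it assumes the `lighthouse` npm CLI is installed and on `PATH`, and that the flags (`--only-categories`, `--output`, `--output-path`, `--chrome-flags`) behave as documented. Treat it as a starting point, not a turnkey audit runner.

```python
# Sketch: build the Lighthouse CLI invocations behind the audit checklist above.
# Assumes the `lighthouse` npm CLI is installed; flag names follow the documented
# CLI options.

CATEGORIES = ["accessibility", "performance", "best-practices", "seo", "pwa"]

def lighthouse_command(url, category):
    """Return the argv list that audits one category of a page."""
    return [
        "lighthouse",
        url,
        "--only-categories=" + category,
        "--output=json",
        "--output-path=./" + category + "-report.json",
        "--chrome-flags=--headless",  # scripted stand-in for "use an incognito tab"
    ]

if __name__ == "__main__":
    import subprocess  # only needed if the run line below is uncommented
    for cat in CATEGORIES:
        cmd = lighthouse_command("https://eba.eu.com/2015/07/cevni-5-published/", cat)
        print(" ".join(cmd))
        # subprocess.run(cmd, check=True)  # uncomment to actually execute the audit
```

The Pingdom and console checks in the list have no comparable scripted form here, so those steps stay manual.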
632,125 | 20,173,669,937 | IssuesEvent | 2022-02-10 12:43:29 | NTBBloodbath/doom-nvim | https://api.github.com/repos/NTBBloodbath/doom-nvim | closed | [BUG] Lua lsp doesn't install | scope: bug good first issue priority: high | <!--
Note: Please search to see if an issue already exists for the bug you encountered.
-->
### Current Behavior:
<!-- A concise description of what you're experiencing. -->
The sumneko_lua lsp fails to install.
Output of :LspInstallInfo:

### Expected Behavior:
<!-- A concise description of what you expected to happen. -->
sumneko_lua installs.
### Steps To Reproduce:
1. Enable the +lsp flag of lua.
### Anything else:
<!--
Links? References? Screenshots? Anything that will give us more context about the issue that you are encountering!
-->
The issue is [already fixed](https://github.com/williamboman/nvim-lsp-installer/blob/2c80c2ffbfd28e1e2eec4993ba6aea411a0a52d6/lua/nvim-lsp-installer/servers/sumneko_lua/init.lua#L19) in nvim-lsp-installer but since the packer dependency is pinned it is using a version that doesn't have the fix.
| 1.0 | [BUG] Lua lsp doesn't install - <!--
Note: Please search to see if an issue already exists for the bug you encountered.
-->
### Current Behavior:
<!-- A concise description of what you're experiencing. -->
The sumneko_lua lsp fails to install.
Output of :LspInstallInfo:

### Expected Behavior:
<!-- A concise description of what you expected to happen. -->
sumneko_lua installs.
### Steps To Reproduce:
1. Enable the +lsp flag of lua.
### Anything else:
<!--
Links? References? Screenshots? Anything that will give us more context about the issue that you are encountering!
-->
The issue is [already fixed](https://github.com/williamboman/nvim-lsp-installer/blob/2c80c2ffbfd28e1e2eec4993ba6aea411a0a52d6/lua/nvim-lsp-installer/servers/sumneko_lua/init.lua#L19) in nvim-lsp-installer but since the packer dependency is pinned it is using a version that doesn't have the fix.
| non_test | lua lsp doesn t install note please search to see if an issue already exists for the bug you encountered current behavior the sumneko lua lsp fails to install output of lspinstallinfo expected behavior sumneko lua installs steps to reproduce enable the lsp flag of lua anything else links references screenshots anything that will give us more context about the issue that you are encountering the issue is in nvim lsp installer but since the packer dependency is pinned it is using a version that doesn t have the fix | 0 |
381,145 | 11,274,023,654 | IssuesEvent | 2020-01-14 17:40:46 | googleapis/google-cloud-python | https://api.github.com/repos/googleapis/google-cloud-python | closed | Synthesis failed for speech | api: speech autosynth failure priority: p1 type: bug | Hello! Autosynth couldn't regenerate speech. :broken_heart:
Here's the output from running `synth.py`:
```
Cloning into 'working_repo'...
Switched to branch 'autosynth-speech'
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 256, in <module>
main()
File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 196, in main
last_synth_commit_hash = get_last_metadata_commit(args.metadata_path)
File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 149, in get_last_metadata_commit
text=True,
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py", line 403, in run
with Popen(*popenargs, **kwargs) as process:
TypeError: __init__() got an unexpected keyword argument 'text'
```
Google internal developers can see the full log [here](https://sponge/264076a6-41d9-4ca2-9fc2-c9d0a8a92def).
| 1.0 | Synthesis failed for speech - Hello! Autosynth couldn't regenerate speech. :broken_heart:
Here's the output from running `synth.py`:
```
Cloning into 'working_repo'...
Switched to branch 'autosynth-speech'
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 256, in <module>
main()
File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 196, in main
last_synth_commit_hash = get_last_metadata_commit(args.metadata_path)
File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 149, in get_last_metadata_commit
text=True,
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py", line 403, in run
with Popen(*popenargs, **kwargs) as process:
TypeError: __init__() got an unexpected keyword argument 'text'
```
Google internal developers can see the full log [here](https://sponge/264076a6-41d9-4ca2-9fc2-c9d0a8a92def).
| non_test | synthesis failed for speech hello autosynth couldn t regenerate speech broken heart here s the output from running synth py cloning into working repo switched to branch autosynth speech traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src git autosynth autosynth synth py line in main file tmpfs src git autosynth autosynth synth py line in main last synth commit hash get last metadata commit args metadata path file tmpfs src git autosynth autosynth synth py line in get last metadata commit text true file home kbuilder pyenv versions lib subprocess py line in run with popen popenargs kwargs as process typeerror init got an unexpected keyword argument text google internal developers can see the full log | 0 |
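The `TypeError` in the traceback above is a Python-version mismatch: the `text=` keyword of `subprocess.run()` was added in Python 3.7, while the paths in the log show Python 3.6.1. On 3.6 the same behaviour is spelled `universal_newlines=True`. A minimal compatibility shim (the `run_text` helper name is mine, not part of autosynth):

```python
import subprocess
import sys

def run_text(cmd, **kwargs):
    """Run a command with text-mode output on both Python 3.6 and 3.7+.

    `text=` was added to subprocess in 3.7; `universal_newlines=` is the
    older spelling of the same switch and works on every Python 3.x.
    """
    kwargs.setdefault("stdout", subprocess.PIPE)
    if sys.version_info >= (3, 7):
        kwargs.setdefault("text", True)
    else:
        kwargs.setdefault("universal_newlines", True)
    return subprocess.run(cmd, **kwargs)

# Example: capture command output as str rather than bytes,
# mirroring the failing call site in autosynth.
result = run_text([sys.executable, "-c", "print('hello')"])
# result.stdout == "hello\n"
```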
77,864 | 15,569,899,986 | IssuesEvent | 2021-03-17 01:15:16 | Sam-Marx/anti_nude_bot | https://api.github.com/repos/Sam-Marx/anti_nude_bot | opened | CVE-2019-16865 (High) detected in Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl | security vulnerability | ## CVE-2019-16865 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/b6/4b/5adc1109908266554fb978154c797c7d71aba43dd15508d8c1565648f6bc/Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/b6/4b/5adc1109908266554fb978154c797c7d71aba43dd15508d8c1565648f6bc/Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: /anti_nude_bot/requirements.txt</p>
<p>Path to vulnerable library: teSource-ArchiveExtractor_0c4fd107-566e-4a98-973e-bda8edd30ae2/20190703163800_95826/20190703163719_depth_0/Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64/PIL</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in Pillow before 6.2.0. When reading specially crafted invalid image files, the library can either allocate very large amounts of memory or take an extremely long period of time to process the image.
<p>Publish Date: 2019-10-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16865>CVE-2019-16865</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16865">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16865</a></p>
<p>Release Date: 2019-10-04</p>
<p>Fix Resolution: 6.2.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-16865 (High) detected in Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl - ## CVE-2019-16865 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/b6/4b/5adc1109908266554fb978154c797c7d71aba43dd15508d8c1565648f6bc/Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/b6/4b/5adc1109908266554fb978154c797c7d71aba43dd15508d8c1565648f6bc/Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: /anti_nude_bot/requirements.txt</p>
<p>Path to vulnerable library: teSource-ArchiveExtractor_0c4fd107-566e-4a98-973e-bda8edd30ae2/20190703163800_95826/20190703163719_depth_0/Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64/PIL</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in Pillow before 6.2.0. When reading specially crafted invalid image files, the library can either allocate very large amounts of memory or take an extremely long period of time to process the image.
<p>Publish Date: 2019-10-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16865>CVE-2019-16865</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16865">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16865</a></p>
<p>Release Date: 2019-10-04</p>
<p>Fix Resolution: 6.2.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in pillow whl cve high severity vulnerability vulnerable library pillow whl python imaging library fork library home page a href path to dependency file anti nude bot requirements txt path to vulnerable library tesource archiveextractor depth pillow pil dependency hierarchy x pillow whl vulnerable library vulnerability details an issue was discovered in pillow before when reading specially crafted invalid image files the library can either allocate very large amounts of memory or take an extremely long period of time to process the image publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
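For context on the advisory above: CVE-2019-16865 is a resource-exhaustion issue, so upgrading to Pillow 6.2.0 or later is the actual fix. The sketch below is a standalone illustration of the related "pixel budget" defence Pillow applies to untrusted images; the helper and exception class are mine, while the default limit and the warn-at-1x, error-at-2x behaviour follow Pillow's documented `MAX_IMAGE_PIXELS` handling.

```python
# Standalone sketch of the "pixel budget" check Pillow applies to untrusted
# images. 89_478_485 is Pillow's documented default MAX_IMAGE_PIXELS; the
# helper itself is illustrative, not part of the PIL API.

DEFAULT_MAX_PIXELS = 89_478_485

class PixelBudgetExceeded(ValueError):
    pass

def check_pixel_budget(width, height, max_pixels=DEFAULT_MAX_PIXELS):
    """Raise before decoding if the declared size would blow the memory budget.

    Returns True when the size is suspicious (over the budget but under the
    hard limit), False when it is clearly fine.
    """
    pixels = width * height
    if pixels > 2 * max_pixels:  # Pillow raises DecompressionBombError past 2x...
        raise PixelBudgetExceeded(
            "%d pixels exceeds hard limit %d" % (pixels, 2 * max_pixels)
        )
    return pixels > max_pixels   # ...and only warns between 1x and 2x

# A 100000 x 100000 "image" header (10 gigapixels) is rejected before any decode:
try:
    check_pixel_budget(100_000, 100_000)
except PixelBudgetExceeded as exc:
    print("rejected:", exc)
```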
454,058 | 13,094,194,732 | IssuesEvent | 2020-08-03 11:56:17 | larray-project/larray | https://api.github.com/repos/larray-project/larray | opened | add some axes info in error message on invalid label | difficulty: low enhancement priority: high size: small | Having something **similar** to AxisCollection.info added to the "%r is not a valid label for any axis" message would be very helpful. I have actually tried hacking my version to do just that and found it very helpful. I would only avoid displaying the first line with the array shape. | 1.0 | add some axes info in error message on invalid label - Having something **similar** to AxisCollection.info added to the "%r is not a valid label for any axis" message would be very helpful. I have actually tried hacking my version to do just that and found it very helpful. I would only avoid displaying the first line with the array shape. | non_test | add some axes info in error message on invalid label having something similar to axiscollection info added to the r is not a valid label for any axis message would be very helpful i have actually tried hacking my version to do just that and found it very helpful i would only avoid displaying the first line with the array shape | 0
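A minimal sketch of what this larray request asks for: appending per-axis label information to the "%r is not a valid label for any axis" error. The dict-of-lists stand-in for `AxisCollection` and the message layout are assumptions for illustration, not larray's real implementation:

```python
def find_axis_for_label(label, axes, max_shown=5):
    """Look up which axis contains `label`.

    `axes` is {axis_name: [labels]}, a toy stand-in for larray's
    AxisCollection. On failure, the error lists each axis with a truncated
    preview of its labels, deliberately omitting the leading "shape" line
    that AxisCollection.info would print.
    """
    for name, labels in axes.items():
        if label in labels:
            return name
    lines = []
    for name, labels in axes.items():
        preview = ", ".join(map(str, labels[:max_shown]))
        suffix = ", ..." if len(labels) > max_shown else ""
        lines.append("  %s [%d]: %s%s" % (name, len(labels), preview, suffix))
    raise ValueError(
        "%r is not a valid label for any axis\n" % (label,) + "\n".join(lines)
    )

axes = {"sex": ["M", "F"], "age": list(range(101))}
print(find_axis_for_label("F", axes))  # prints: sex
```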
11,226 | 8,320,877,362 | IssuesEvent | 2018-09-25 21:36:32 | GetDKAN/dkan | https://api.github.com/repos/GetDKAN/dkan | closed | Web Browser XSS Protection Not Enabled | Security | Risk identified by Zap scan (see https://github.com/NuCivic/healthdata/issues/1010 ):
Web Browser XSS Protection is not enabled, or is disabled by the configuration of the 'X-XSS-Protection' HTTP response header on the web server | True | Web Browser XSS Protection Not Enabled - Risk identified by Zap scan (see https://github.com/NuCivic/healthdata/issues/1010 ):
Web Browser XSS Protection is not enabled, or is disabled by the configuration of the 'X-XSS-Protection' HTTP response header on the web server | non_test | web browser xss protection not enabled risk identified by zap scan see web browser xss protection is not enabled or is disabled by the configuration of the x xss protection http response header on the web server | 0 |
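For context on the ZAP finding above: the header is normally added server-side, either in Apache/Nginx configuration or by application middleware. Below is a hedged WSGI sketch of the middleware approach; it is illustrative only (DKAN itself is Drupal/PHP, where the header would be set in server or site configuration), and note that current browsers have largely retired the XSS auditor this header controls in favour of Content-Security-Policy, so treat it as legacy hardening.

```python
# Minimal WSGI middleware that injects the header ZAP flags as missing.
# Illustrative only; not DKAN's actual fix.

SECURITY_HEADERS = [
    ("X-XSS-Protection", "1; mode=block"),
    ("X-Content-Type-Options", "nosniff"),  # commonly paired hardening header
]

def with_security_headers(app):
    """Wrap a WSGI app so responses carry the security headers above."""
    def middleware(environ, start_response):
        def patched_start_response(status, headers, exc_info=None):
            present = {name.lower() for name, _ in headers}
            extra = [h for h in SECURITY_HEADERS if h[0].lower() not in present]
            return start_response(status, list(headers) + extra, exc_info)
        return app(environ, patched_start_response)
    return middleware

# Tiny demo app to show the header being injected.
def demo_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

wrapped = with_security_headers(demo_app)
```

Existing headers win: if the application already sets `X-XSS-Protection`, the middleware leaves it alone rather than duplicating it.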
275,460 | 23,917,181,293 | IssuesEvent | 2022-09-09 13:42:21 | ossf/scorecard-action | https://api.github.com/repos/ossf/scorecard-action | closed | Failing e2e tests - scorecard-latest-release on ossf-tests/scorecard-action | e2e automated-tests | Matrix: {
"results_format": "json",
"publish_results": false,
"upload_result": false
}
Repo: https://github.com/ossf-tests/scorecard-action/tree/main
Run: https://github.com/ossf-tests/scorecard-action/actions/runs/3020412602
Workflow name: scorecard-latest-release
Workflow file: https://github.com/ossf-tests/scorecard-action/tree/main/.github/workflows/scorecards-latest-release.yml
Trigger: push
Branch: main | 1.0 | Failing e2e tests - scorecard-latest-release on ossf-tests/scorecard-action - Matrix: {
"results_format": "json",
"publish_results": false,
"upload_result": false
}
Repo: https://github.com/ossf-tests/scorecard-action/tree/main
Run: https://github.com/ossf-tests/scorecard-action/actions/runs/3020412602
Workflow name: scorecard-latest-release
Workflow file: https://github.com/ossf-tests/scorecard-action/tree/main/.github/workflows/scorecards-latest-release.yml
Trigger: push
Branch: main | test | failing tests scorecard latest release on ossf tests scorecard action matrix results format json publish results false upload result false repo run workflow name scorecard latest release workflow file trigger push branch main | 1 |
308,994 | 26,644,436,083 | IssuesEvent | 2023-01-25 08:51:21 | microsoft/vscode | https://api.github.com/repos/microsoft/vscode | closed | Test extension contributed keyboard shortcuts in the keyboard shortcuts editor | testplan-item | Refs: https://github.com/microsoft/vscode/issues/121673
- [x] anyOS @meganrogge
- [x] anyOS @aeschli
Complexity: 3
[Create Issue](https://github.com/microsoft/vscode/issues/new?body=Testing+%23172018%0A%0A&assignees=sandy081)
---
To test:
- Install an extension that contributes keyboard shortcuts Eg: `GitHub.vscode-pull-request-github`
- Verify that you can filter keyboard shortcuts contributed by that extension using the action **Extension Keyboard Shortcuts** that is available in the gear context menu (in extension editor and in extension view). This action opens Keyboard shortcuts contributed by this extension in the Keyboard Shortcuts editor
- Verify that you can see the extension in the source column that contributed keyboard shortcuts
- Verify that clicking on the extension opens the extension editor for that extension
- In the keyboard shortcuts editor overflow menu in the title bar, there is an action to see all keyboard shortcuts contributed by extensions **Show Extension Keyboard Shortcuts**. Make sure this works as expected
| 1.0 | Test extension contributed keyboard shortcuts in the keyboard shortcuts editor - Refs: https://github.com/microsoft/vscode/issues/121673
- [x] anyOS @meganrogge
- [x] anyOS @aeschli
Complexity: 3
[Create Issue](https://github.com/microsoft/vscode/issues/new?body=Testing+%23172018%0A%0A&assignees=sandy081)
---
To test:
- Install an extension that contributes keyboard shortcuts Eg: `GitHub.vscode-pull-request-github`
- Verify that you can filter keyboard shortcuts contributed by that extension using the action **Extension Keyboard Shortcuts** that is available in the gear context menu (in extension editor and in extension view). This action opens Keyboard shortcuts contributed by this extension in the Keyboard Shortcuts editor
- Verify that you can see the extension in the source column that contributed keyboard shortcuts
- Verify that clicking on the extension opens the extension editor for that extension
- In the keyboard shortcuts editor overflow menu in the title bar, there is an action to see all keyboard shortcuts contributed by extensions **Show Extension Keyboard Shortcuts**. Make sure this work as expected
| test | test extension contributed keyboard shortcuts in the keyboard shortcuts editor refs anyos meganrogge anyos aeschli complexity to test install an extension that contributes keyboard shortcuts eg github vscode pull request github verify that you can filter keyboard shortcuts contributed by that extension using the action extension keyboard shortcuts that is available in the gear context menu in extension editor and in extension view this action opens keyboard shortcuts contributed by this extension in the keyboard shortcuts editor verify that you can see the extension in the source column that contributed keyboard shortcuts verify that clicking on the extension opens the extension editor for that extension in the keyboard shortcuts editor overflow menu in the title bar there is an action to see all keyboard shortcuts contributed by extensions show extension keyboard shortcuts make sure this work as expected | 1 |
129,270 | 10,568,329,655 | IssuesEvent | 2019-10-06 12:15:24 | harenber/ptc-go | https://api.github.com/repos/harenber/ptc-go | closed | Abort/disconnect on call to Close() | tested | Hi Torsten,
this package currently catches interrupt signals to implement abort/disconnect.
https://github.com/harenber/ptc-go/blob/master/ptc/pmodem.go#L331
A more idiomatic approach would be to allow for concurrent call to `Close()`, so that Pat (and other consumers) can communicate to the package that it should disconnect. This will also ensure that any application importing ptc-go can control the signal handling as they see fit.
Can we do without the signal handler in ptc-go? | 1.0 | Abort/disconnect on call to Close() - Hi Torsten,
this package currently catches interrupt signals to implement abort/disconnect.
https://github.com/harenber/ptc-go/blob/master/ptc/pmodem.go#L331
A more idiomatic approach would be to allow for concurrent call to `Close()`, so that Pat (and other consumers) can communicate to the package that it should disconnect. This will also ensure that any application importing ptc-go can control the signal handling as they see fit.
Can we do without the signal handler in ptc-go? | test | abort disconnect on call to close hi torsten this package currently catches interrupt signals to implement abort disconnect a more idiomatic approach would be to allow for concurrent call to close so that pat and other consumers can communicate to the package that it should disconnect this will also ensure that any application importing ptc go can control the signal handling as they see fit can we do without the signal handler in ptc go | 1 |
78,887 | 7,680,918,200 | IssuesEvent | 2018-05-16 04:42:38 | adobe/brackets | https://api.github.com/repos/adobe/brackets | closed | [Brackets auto-update Windows] In the update notification dialogue, get it now button should change to Install Now. | Testing | ### Description
In the update notification dialogue, get it now button should change to Install Now.
### Steps to Reproduce
1. Launch brackets 1.13.
2. Click on Update Notification Button.
3. Click on Get it now button.
4. Click on later button in the update info bar at the bottom.
5. Again click on update notification button.
6. The get it now button should now change to install update button.
**Expected behavior:** Once user has clicked on get it now button after that when ever user opens update notification dialogue in the same session then user must see the install update button.
**Actual behavior:** In the update notification dialogue, the get it now button does not change to install update button.
### Versions
Windows 10 64 Bit
Release 1.13 build 1.13.0-17639 | 1.0 | [Brackets auto-update Windows] In the update notification dialogue, get it now button should change to Install Now. - ### Description
In the update notification dialogue, get it now button should change to Install Now.
### Steps to Reproduce
1. Launch brackets 1.13.
2. Click on Update Notification Button.
3. Click on Get it now button.
4. Click on later button in the update info bar at the bottom.
5. Again click on update notification button.
6. The get it now button should now change to install update button.
**Expected behavior:** Once user has clicked on get it now button after that when ever user opens update notification dialogue in the same session then user must see the install update button.
**Actual behavior:** In the update notification dialogue, the get it now button does not change to install update button.
### Versions
Windows 10 64 Bit
Release 1.13 build 1.13.0-17639 | test | in the update notification dialogue get it now button should change to install now description in the update notification dialogue get it now button should change to install now steps to reproduce launch brackets click on update notification button click on get it now button click on later button in the update info bar at the bottom again click on update notification button the get it now button should now change to install update button expected behavior once user has clicked on get it now button after that when ever user opens update notification dialogue in the same session then user must see the install update button actual behavior in the update notification dialogue the get it now button does not change to install update button versions windows bit release build | 1 |
217,354 | 16,853,941,419 | IssuesEvent | 2021-06-21 02:00:41 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | opened | [CI] move distributed and RPC tests into its own CI jobs. | module: ci module: tests oncall: distributed | See discussion in #60136
What we found in the previous issue was that distributed tests were ran against 3 different backends and 2 different init methods when running CI (on almost all eligible variance of the CI configuration)
There are several recent flakiness discovered on these distributed tests. We were wondering if we should move these tests into its own CI jobs. Some considerations
1. reduce the combination of distributed test configurations for PRs (move some to master-only)
2. reduce number of different configurations for distributed test ran on master (for example can we run them on cuda11 only instead of in both cuda10 & cuda11?)
3. Creating a separate distributed CI jobs to host `test/distributed/*` test cases (reduce flakiness in distributed test affect on other CI jobs since we terminate with the first test module exit non-zero)
| 1.0 | [CI] move distributed and RPC tests into its own CI jobs. - See discussion in #60136
What we found in the previous issue was that distributed tests were ran against 3 different backends and 2 different init methods when running CI (on almost all eligible variance of the CI configuration)
There are several recent flakiness discovered on these distributed tests. We were wondering if we should move these tests into its own CI jobs. Some considerations
1. reduce the combination of distributed test configurations for PRs (move some to master-only)
2. reduce number of different configurations for distributed test ran on master (for example can we run them on cuda11 only instead of in both cuda10 & cuda11?)
3. Creating a separate distributed CI jobs to host `test/distributed/*` test cases (reduce flakiness in distributed test affect on other CI jobs since we terminate with the first test module exit non-zero)
| test | move distributed and rpc tests into its own ci jobs see discussion in what we found in the previous issue was that distributed tests were ran against different backends and different init methods when running ci on almost all eligible variance of the ci configuration there are several recent flakiness discovered on these distributed tests we were wondering if we should move these tests into its own ci jobs some considerations reduce the combination of distributed test configurations for prs move some to master only reduce number of different configurations for distributed test ran on master for example can we run them on only instead of in both creating a separate distributed ci jobs to host test distributed test cases reduce flakiness in distributed test affect on other ci jobs since we terminate with the first test module exit non zero | 1 |
79,790 | 7,725,109,667 | IssuesEvent | 2018-05-24 16:54:58 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | roachtest: drop/tpcc/w=100,nodes=9 failed on release-2.0 | C-test-failure O-robot | SHA: https://github.com/cockroachdb/cockroach/commits/b24536e74fdd58fa35f59e14d087f21dfa249905
Parameters:
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=676306&tab=buildLog
```
cluster.go:594,drop.go:37,drop.go:167: /home/agent/work/.go/bin/roachprod put teamcity-676306-drop-tpcc-w-100-nodes-9:1-9 /home/agent/work/.go/src/github.com/cockroachdb/cockroach/cockroach-linux-2.6.32-gnu-amd64 ./cockroach: exit status 1
``` | 1.0 | roachtest: drop/tpcc/w=100,nodes=9 failed on release-2.0 - SHA: https://github.com/cockroachdb/cockroach/commits/b24536e74fdd58fa35f59e14d087f21dfa249905
Parameters:
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=676306&tab=buildLog
```
cluster.go:594,drop.go:37,drop.go:167: /home/agent/work/.go/bin/roachprod put teamcity-676306-drop-tpcc-w-100-nodes-9:1-9 /home/agent/work/.go/src/github.com/cockroachdb/cockroach/cockroach-linux-2.6.32-gnu-amd64 ./cockroach: exit status 1
``` | test | roachtest drop tpcc w nodes failed on release sha parameters failed test cluster go drop go drop go home agent work go bin roachprod put teamcity drop tpcc w nodes home agent work go src github com cockroachdb cockroach cockroach linux gnu cockroach exit status | 1 |
104,852 | 13,130,765,366 | IssuesEvent | 2020-08-06 15:51:48 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | closed | Request Design QA - SCO Migration | bah-sco-migration design product support | #### Context
We were tasked with migrating three pages of content from the School Resources pages to a single page, a “one-stop shop” for all the information School Certifying Officials (SCOs) and school administrators need to do their work. The challenge was amplified when senior stakeholders reacted to the length of our first iteration, "This page is still too long!" The work you'll review is a solution which delights our users and effectively meets our stakeholders expectations.
This work also is content for a non-Veteran audience so serves as a test case for Tier 2 content migration efforts.
#### URL
Prototype https://bahdigital.invisionapp.com/share/HKIAC3863ST#/screens
Staging https://staging.va.gov/school-administrators/
#### Design Challenges
_The use of accordions within spokes_
One of the client's goals was to create a new page that felt fairly short and light without seeming cluttered. Incorporating accordions within the page spokes provided a viable solution for organizing and presenting a large amount of content into compact spaces and not extending the page's vertical real estate.
In addition to helping condense the content, the accordions provide a helpful navigation structure for the resources that live within each spoke. Users can view content by relevant and helpful categories.
_Inclusion of Announcements_
One of the project requirements was a prominent announcements area that would allow for updates and automatically expiring content. Stakeholders wanted this content to be highly visible to visitors to the page, and for the page to always display fresh content.
We used the light blue, featured content component with select modifications to accommodate the nature of the content, essentially a list of headlines with dates. This announcements area was placed below the jump links and the “Key Resources for SCOs” areas in order to maintain the appropriate information hierarchy.
_Key Resources for SCOs_
This section comprises the most valuable resources for School Certifying Officials, which were identified during discovery interviews with SCOs. These are links the SCOs can access quickly as arrive on the page, reducing challenges they may have had finding what they need when they need it. Since the links are small and concise, SCOs can immediately see what is available to them. During usability testing, SCOs expressed how helpful these resources were to them, and they appreciated how prominent they are on the page.
_Resources to support students_
The final section of the School Resources page is dedicated to a collection of Veteran-facing resources SCOs use to support military-connected students. The styling for this section is based on the Related Links pattern found at the end of many of VA.gov’s hub pages. Despite being the last section on the page, SCOs easily navigated to this content during testing, and they expressed how helpful these links would be to them, in addition to the rest of the content on the page. Since this proved so valuable to SCOs during testing, we included a jump link to it at the top of the page to ensure ease of access. | 1.0 | Request Design QA - SCO Migration - #### Context
We were tasked with migrating three pages of content from the School Resources pages to a single page, a “one-stop shop” for all the information School Certifying Officials (SCOs) and school administrators need to do their work. The challenge was amplified when senior stakeholders reacted to the length of our first iteration, "This page is still too long!" The work you'll review is a solution which delights our users and effectively meets our stakeholders expectations.
This work also is content for a non-Veteran audience so serves as a test case for Tier 2 content migration efforts.
#### URL
Prototype https://bahdigital.invisionapp.com/share/HKIAC3863ST#/screens
Staging https://staging.va.gov/school-administrators/
#### Design Challenges
_The use of accordions within spokes_
One of the client's goals was to create a new page that felt fairly short and light without seeming cluttered. Incorporating accordions within the page spokes provided a viable solution for organizing and presenting a large amount of content into compact spaces and not extending the page's vertical real estate.
In addition to helping condense the content, the accordions provide a helpful navigation structure for the resources that live within each spoke. Users can view content by relevant and helpful categories.
_Inclusion of Announcements_
One of the project requirements was a prominent announcements area that would allow for updates and automatically expiring content. Stakeholders wanted this content to be highly visible to visitors to the page, and for the page to always display fresh content.
We used the light blue, featured content component with select modifications to accommodate the nature of the content, essentially a list of headlines with dates. This announcements area was placed below the jump links and the “Key Resources for SCOs” areas in order to maintain the appropriate information hierarchy.
_Key Resources for SCOs_
This section comprises the most valuable resources for School Certifying Officials, which were identified during discovery interviews with SCOs. These are links the SCOs can access quickly as arrive on the page, reducing challenges they may have had finding what they need when they need it. Since the links are small and concise, SCOs can immediately see what is available to them. During usability testing, SCOs expressed how helpful these resources were to them, and they appreciated how prominent they are on the page.
_Resources to support students_
The final section of the School Resources page is dedicated to a collection of Veteran-facing resources SCOs use to support military-connected students. The styling for this section is based on the Related Links pattern found at the end of many of VA.gov’s hub pages. Despite being the last section on the page, SCOs easily navigated to this content during testing, and they expressed how helpful these links would be to them, in addition to the rest of the content on the page. Since this proved so valuable to SCOs during testing, we included a jump link to it at the top of the page to ensure ease of access. | non_test | request design qa sco migration context we were tasked with migrating three pages of content from the school resources pages to a single page a “one stop shop” for all the information school certifying officials scos and school administrators need to do their work the challenge was amplified when senior stakeholders reacted to the length of our first iteration this page is still too long the work you ll review is a solution which delights our users and effectively meets our stakeholders expectations this work also is content for a non veteran audience so serves as a test case for tier content migration efforts url prototype staging design challenges the use of accordions within spokes one of the client s goals was to create a new page that felt fairly short and light without seeming cluttered incorporating accordions within the page spokes provided a viable solution for organizing and presenting a large amount of content into compact spaces and not extending the page s vertical real estate in addition to helping condense the content the accordions provide a helpful navigation structure for the resources that live within each spoke users can view content by relevant and helpful categories inclusion of announcements one of the project requirements was a prominent announcements area that would allow for updates and automatically expiring content 
stakeholders wanted this content to be highly visible to visitors to the page and for the page to always display fresh content we used the light blue featured content component with select modifications to accommodate the nature of the content essentially a list of headlines with dates this announcements area was placed below the jump links and the “key resources for scos” areas in order to maintain the appropriate information hierarchy key resources for scos this section comprises the most valuable resources for school certifying officials which were identified during discovery interviews with scos these are links the scos can access quickly as arrive on the page reducing challenges they may have had finding what they need when they need it since the links are small and concise scos can immediately see what is available to them during usability testing scos expressed how helpful these resources were to them and they appreciated how prominent they are on the page resources to support students the final section of the school resources page is dedicated to a collection of veteran facing resources scos use to support military connected students the styling for this section is based on the related links pattern found at the end of many of va gov’s hub pages despite being the last section on the page scos easily navigated to this content during testing and they expressed how helpful these links would be to them in addition to the rest of the content on the page since this proved so valuable to scos during testing we included a jump link to it at the top of the page to ensure ease of access | 0 |
130,319 | 18,155,748,289 | IssuesEvent | 2021-09-27 01:10:21 | dreamboy9/spark | https://api.github.com/repos/dreamboy9/spark | opened | CVE-2021-38153 (Medium) detected in kafka-clients-2.8.0.jar | security vulnerability | ## CVE-2021-38153 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>kafka-clients-2.8.0.jar</b></p></summary>
<p></p>
<p>Library home page: <a href="https://kafka.apache.org">https://kafka.apache.org</a></p>
<p>Path to dependency file: spark/external/kafka-0-10-assembly/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/kafka/kafka-clients/2.8.0/kafka-clients-2.8.0.jar,canner/.m2/repository/org/apache/kafka/kafka-clients/2.8.0/kafka-clients-2.8.0.jar,canner/.m2/repository/org/apache/kafka/kafka-clients/2.8.0/kafka-clients-2.8.0.jar,/home/wss-scanner/.m2/repository/org/apache/kafka/kafka-clients/2.8.0/kafka-clients-2.8.0.jar,canner/.m2/repository/org/apache/kafka/kafka-clients/2.8.0/kafka-clients-2.8.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **kafka-clients-2.8.0.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Some components in Apache Kafka use `Arrays.equals` to validate a password or key, which is vulnerable to timing attacks that make brute force attacks for such credentials more likely to be successful. Users should upgrade to 2.8.1 or higher, or 3.0.0 or higher where this vulnerability has been fixed. The affected versions include Apache Kafka 2.0.0, 2.0.1, 2.1.0, 2.1.1, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.1, 2.4.0, 2.4.1, 2.5.0, 2.5.1, 2.6.0, 2.6.1, 2.6.2, 2.7.0, 2.7.1, and 2.8.0.
<p>Publish Date: 2021-09-22
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-38153>CVE-2021-38153</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-38153">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-38153</a></p>
<p>Release Date: 2021-09-22</p>
<p>Fix Resolution: org.apache.kafka:kafka-clients:2.8.1,3.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-38153 (Medium) detected in kafka-clients-2.8.0.jar - ## CVE-2021-38153 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>kafka-clients-2.8.0.jar</b></p></summary>
<p></p>
<p>Library home page: <a href="https://kafka.apache.org">https://kafka.apache.org</a></p>
<p>Path to dependency file: spark/external/kafka-0-10-assembly/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/kafka/kafka-clients/2.8.0/kafka-clients-2.8.0.jar,canner/.m2/repository/org/apache/kafka/kafka-clients/2.8.0/kafka-clients-2.8.0.jar,canner/.m2/repository/org/apache/kafka/kafka-clients/2.8.0/kafka-clients-2.8.0.jar,/home/wss-scanner/.m2/repository/org/apache/kafka/kafka-clients/2.8.0/kafka-clients-2.8.0.jar,canner/.m2/repository/org/apache/kafka/kafka-clients/2.8.0/kafka-clients-2.8.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **kafka-clients-2.8.0.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Some components in Apache Kafka use `Arrays.equals` to validate a password or key, which is vulnerable to timing attacks that make brute force attacks for such credentials more likely to be successful. Users should upgrade to 2.8.1 or higher, or 3.0.0 or higher where this vulnerability has been fixed. The affected versions include Apache Kafka 2.0.0, 2.0.1, 2.1.0, 2.1.1, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.1, 2.4.0, 2.4.1, 2.5.0, 2.5.1, 2.6.0, 2.6.1, 2.6.2, 2.7.0, 2.7.1, and 2.8.0.
<p>Publish Date: 2021-09-22
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-38153>CVE-2021-38153</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-38153">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-38153</a></p>
<p>Release Date: 2021-09-22</p>
<p>Fix Resolution: org.apache.kafka:kafka-clients:2.8.1,3.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve medium detected in kafka clients jar cve medium severity vulnerability vulnerable library kafka clients jar library home page a href path to dependency file spark external kafka assembly pom xml path to vulnerable library home wss scanner repository org apache kafka kafka clients kafka clients jar canner repository org apache kafka kafka clients kafka clients jar canner repository org apache kafka kafka clients kafka clients jar home wss scanner repository org apache kafka kafka clients kafka clients jar canner repository org apache kafka kafka clients kafka clients jar dependency hierarchy x kafka clients jar vulnerable library found in base branch master vulnerability details some components in apache kafka use arrays equals to validate a password or key which is vulnerable to timing attacks that make brute force attacks for such credentials more likely to be successful users should upgrade to or higher or or higher where this vulnerability has been fixed the affected versions include apache kafka and publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache kafka kafka clients step up your open source security game with whitesource | 0 |
296,595 | 9,122,408,488 | IssuesEvent | 2019-02-23 07:39:25 | ChrisCScott/forecaster | https://api.github.com/repos/ChrisCScott/forecaster | closed | Align timing of income and living expenses transactions | IncomeForecast WithdrawalForecast medium priority | Currently we assume that income is received whenever dictated by `Person.payment_frequency` (and we assume it's received in the middle of each period), whereas living expenses are incurred monthly at the start of each month. These don't line up, which leads to odd behaviour.
Ideally, living expenses would be recorded as being incurred at the same time that employment income is received. Consider adding living expenses at the timings dictated by each person's `payment_frequency`, proportionately to each person's `net_income`. | 1.0 | Align timing of income and living expenses transactions - Currently we assume that income is received whenever dictated by `Person.payment_frequency` (and we assume it's received in the middle of each period), whereas living expenses are incurred monthly at the start of each month. These don't line up, which leads to odd behaviour.
Ideally, living expenses would be recorded as being incurred at the same time that employment income is received. Consider adding living expenses at the timings dictated by each person's `payment_frequency`, proportionately to each person's `net_income`. | non_test | align timing of income and living expenses transactions currently we assume that income is received whenever dictated by person payment frequency and we assume it s received in the middle of each period whereas living expenses are incurred monthly at the start of each month these don t line up which leads to odd behaviour ideally living expenses would be recorded as being incurred at the same time that employment income is received consider adding living expenses at the timings dictated by each person s payment frequency proportionately to each person s net income | 0 |
12,556 | 4,489,821,710 | IssuesEvent | 2016-08-30 12:30:30 | Lokiedu/libertysoil-site | https://api.github.com/repos/Lokiedu/libertysoil-site | closed | Tags which were liked by other users are not displayed in News Feed | Step 7 1/2 CODE REVIEW | http://alpha.libertysoil.org/
Windows 7, Chrome 49
Precondition: need two users which follow each other
1.Like any tag (first user)
2.Notice that this action is displayed in first user's news feed
3.Go to second user's news feed
4.Notice that this action does not appear in news feed | 1.0 | Tags which were liked by other users are not displayed in News Feed - http://alpha.libertysoil.org/
Windows 7, Chrome 49
Precondition: need two users which follow each other
1.Like any tag (first user)
2.Notice that this action is displayed in first user's news feed
3.Go to second user's news feed
4.Notice that this action does not appear in news feed | non_test | tags which were liked by other users are not displayed in news feed windows chrome precondition need two users which follow each other like any tag first user notice that this action is displayed in first user s news feed go to second user s news feed notice that this action does not appear in news feed | 0 |
26,030 | 11,254,006,122 | IssuesEvent | 2020-01-11 20:15:38 | elastic/kibana | https://api.github.com/repos/elastic/kibana | opened | Forbid using `elasticsearch.ssl.certificate` without `elasticsearch.ssl.key` and vice versa | Team:Security chore | Starting in 8.0, we should prevent Kibana from starting if `elasticsearch.ssl.certificate` without `elasticsearch.ssl.key` and vice versa. This configuration will not enable TLS client authentication to Elasticsearch, and is unsupported.
Starting in 7.6, we're warning the user via deprecation logs (see #54392), so we are safe to enforce this in 8.0 after properly documenting it as a breaking change. | True | Forbid using `elasticsearch.ssl.certificate` without `elasticsearch.ssl.key` and vice versa - Starting in 8.0, we should prevent Kibana from starting if `elasticsearch.ssl.certificate` without `elasticsearch.ssl.key` and vice versa. This configuration will not enable TLS client authentication to Elasticsearch, and is unsupported.
Starting in 7.6, we're warning the user via deprecation logs (see #54392), so we are safe to enforce this in 8.0 after properly documenting it as a breaking change. | non_test | forbid using elasticsearch ssl certificate without elasticsearch ssl key and vice versa starting in we should prevent kibana from starting if elasticsearch ssl certificate without elasticsearch ssl key and vice versa this configuration will not enable tls client authentication to elasticsearch and is unsupported starting in we re warning the user via deprecation logs see so we are safe to enforce this in after properly documenting it as a breaking change | 0 |