Unnamed: 0 | id | type | created_at | repo | repo_url | action | title | labels | body | index | text_combine | label | text | binary_label
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
518,570 | 15,030,511,912 | IssuesEvent | 2021-02-02 07:34:31 | red-hat-storage/ocs-ci | https://api.github.com/repos/red-hat-storage/ocs-ci | closed | use new url for downloading openshift binaries | Medium Priority | use new url foe downloading openshift binaries
amd64.ocp.releases.ci.openshift.org | 1.0 | use new url for downloading openshift binaries - use new url foe downloading openshift binaries
amd64.ocp.releases.ci.openshift.org | non_test | use new url for downloading openshift binaries use new url foe downloading openshift binaries ocp releases ci openshift org | 0 |
294,996 | 25,446,798,746 | IssuesEvent | 2022-11-24 07:02:23 | Sexy-Sisters/TOJ-server-v2 | https://api.github.com/repos/Sexy-Sisters/TOJ-server-v2 | closed | Sign In | ⚡️FEAT ✅ TEST | ## 📑 describe
> Users can sign in.
## ✅ Things to do
- [x] test
- [x] logic
- [x] API
## 🙋🏿What I want to say
> nothing
| 1.0 | Sign In - ## 📑 describe
> Users can sign in.
## ✅ Things to do
- [x] test
- [x] logic
- [x] API
## 🙋🏿What I want to say
> nothing
| test | sign in 📑 describe 사용자는 로그인할 수 있다 ✅ things to do test logic api 🙋🏿what i want to say nothing | 1 |
12,069 | 3,251,698,825 | IssuesEvent | 2015-10-19 11:20:03 | e-government-ua/i | https://api.github.com/repos/e-government-ua/i | closed | On the main portal, implement in the markers object (motion sub-object) support for a new scenario "assigning a writability flag to fields" (writible=true) | active test _central-js | name it: "WritableFieldsOnCondition"
implement it along the lines of "ShowFieldsOnCondition_"
(i.e. possible variation names: WritableFieldsOnCondition_1, WritableFieldsOnCondition_Tax, WritableFieldsOnCondition_temp1)
Implement the ability to dynamically set/remove the required-to-fill flag on fields
Scenario examples:
if condition 1 is met - field1 must be Writable
if condition 2 is met - field2 must be Writable
if condition 3 is met - fields 1 and 2 are not Writable
if condition 4 is met - field 1 is Not Writable for filling when printing in the dashboard, field 2 is required for filling when printing.
at the moment only the Writable attribute of the form fields is taken into account; this method should, when Writable=false, allow treating it as true
Example:
```json
{
"motion": {
"WritableFieldsOnCondition_1": {
"aField_ID": ["info1", "file1"],
"asID_Field": {
"sClient": "client"
},
"sCondition": "[sClient] == 'attr1'"
}
}
}
``` | 1.0 | On the main portal, implement in the markers object (motion sub-object) support for a new scenario "assigning a writability flag to fields" (writible=true) - name it: "WritableFieldsOnCondition"
implement it along the lines of "ShowFieldsOnCondition_"
(i.e. possible variation names: WritableFieldsOnCondition_1, WritableFieldsOnCondition_Tax, WritableFieldsOnCondition_temp1)
Implement the ability to dynamically set/remove the required-to-fill flag on fields
Scenario examples:
if condition 1 is met - field1 must be Writable
if condition 2 is met - field2 must be Writable
if condition 3 is met - fields 1 and 2 are not Writable
if condition 4 is met - field 1 is Not Writable for filling when printing in the dashboard, field 2 is required for filling when printing.
at the moment only the Writable attribute of the form fields is taken into account; this method should, when Writable=false, allow treating it as true
Example:
```json
{
"motion": {
"WritableFieldsOnCondition_1": {
"aField_ID": ["info1", "file1"],
"asID_Field": {
"sClient": "client"
},
"sCondition": "[sClient] == 'attr1'"
}
}
}
``` | test | на главном портале реализовать в обїекте markers подобьекте motion поддержку нового сценария присваивания признака записываемости полей writible true назвать его writablefieldsoncondition реализовать по образу и подобию showfieldsoncondition т е возможные названия вариаций writablefieldsoncondition writablefieldsoncondition tax writablefieldsoncondition реализовать возможность динамически присваивать убирать признак обязательности к заполнению полей примеры сценария если выполняется условие writable должно быть если выполняется условие writable должно быть если выполняется условие поля и not writable если выполняется условие поле not writable для заполнения при печати в дашборде поле обязательное для заполнения при печати на текущий момент учитывается только атрибут writable в полях формы а этот метод должен если writable false позволять считать его как true пример json motion writablefieldsoncondition afield id asid field sclient client scondition | 1 |
39,224 | 12,643,923,604 | IssuesEvent | 2020-06-16 10:39:16 | Ndh-31/AAIBHApp | https://api.github.com/repos/Ndh-31/AAIBHApp | opened | CVE-2009-4269 (Low) detected in derby-10.5.3.0.jar | security vulnerability | ## CVE-2009-4269 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>derby-10.5.3.0.jar</b></p></summary>
<p>Contains the core Apache Derby database engine, which also includes the embedded JDBC driver.</p>
<p>Path to vulnerable library: /AAIBHApp/AAIBHApp/aaibh-ear/bin/bin/derby-10.5.3.0_1.jar</p>
<p>
Dependency Hierarchy:
- :x: **derby-10.5.3.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Ndh-31/AAIBHApp/commit/169eb8259db4f54489525becfdeb2745d697365e">169eb8259db4f54489525becfdeb2745d697365e</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The password hash generation algorithm in the BUILTIN authentication functionality for Apache Derby before 10.6.1.0 performs a transformation that reduces the size of the set of inputs to SHA-1, which produces a small search space that makes it easier for local and possibly remote attackers to crack passwords by generating hash collisions, related to password substitution.
<p>Publish Date: 2010-08-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2009-4269>CVE-2009-4269</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>2.1</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-4269">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-4269</a></p>
<p>Release Date: 2010-08-16</p>
<p>Fix Resolution: 10.6.1.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2009-4269 (Low) detected in derby-10.5.3.0.jar - ## CVE-2009-4269 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>derby-10.5.3.0.jar</b></p></summary>
<p>Contains the core Apache Derby database engine, which also includes the embedded JDBC driver.</p>
<p>Path to vulnerable library: /AAIBHApp/AAIBHApp/aaibh-ear/bin/bin/derby-10.5.3.0_1.jar</p>
<p>
Dependency Hierarchy:
- :x: **derby-10.5.3.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Ndh-31/AAIBHApp/commit/169eb8259db4f54489525becfdeb2745d697365e">169eb8259db4f54489525becfdeb2745d697365e</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The password hash generation algorithm in the BUILTIN authentication functionality for Apache Derby before 10.6.1.0 performs a transformation that reduces the size of the set of inputs to SHA-1, which produces a small search space that makes it easier for local and possibly remote attackers to crack passwords by generating hash collisions, related to password substitution.
<p>Publish Date: 2010-08-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2009-4269>CVE-2009-4269</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>2.1</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-4269">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-4269</a></p>
<p>Release Date: 2010-08-16</p>
<p>Fix Resolution: 10.6.1.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve low detected in derby jar cve low severity vulnerability vulnerable library derby jar contains the core apache derby database engine which also includes the embedded jdbc driver path to vulnerable library aaibhapp aaibhapp aaibh ear bin bin derby jar dependency hierarchy x derby jar vulnerable library found in head commit a href vulnerability details the password hash generation algorithm in the builtin authentication functionality for apache derby before performs a transformation that reduces the size of the set of inputs to sha which produces a small search space that makes it easier for local and possibly remote attackers to crack passwords by generating hash collisions related to password substitution publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
328,381 | 28,117,209,316 | IssuesEvent | 2023-03-31 11:43:19 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | Failing test: Jest Integration Tests.src/plugins/files/server/routes/integration_tests - File HTTP API find names | failed-test needs-team | A test failed on a tracked branch
```
Error: Unable to read snapshot manifest: Internal Server Error
<?xml version='1.0' encoding='UTF-8'?><Error><Code>InternalError</Code><Message>We encountered an internal error. Please try again.</Message><Details>AMJxxAvIufxAUmzh9LCkPQpCGg3lQXdMox3AA81kG4a5ilXZeowi+sALreC8QCDxraq4GUu5/iw6CVsFGT+Fq9wHjsNVWb0sUN2FWMrNtS2fK8lJJekqWCVE86zQbUWJ8CvyTIitBsFd</Details></Error>
at getArtifactSpecForSnapshot (/var/lib/buildkite-agent/builds/kb-n2-4-spot-a56a6fe843560bb5/elastic/kibana-on-merge/kibana/packages/kbn-es/src/artifact.ts:151:11)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at Function.getSnapshot (/var/lib/buildkite-agent/builds/kb-n2-4-spot-a56a6fe843560bb5/elastic/kibana-on-merge/kibana/packages/kbn-es/src/artifact.ts:194:26)
at downloadSnapshot (/var/lib/buildkite-agent/builds/kb-n2-4-spot-a56a6fe843560bb5/elastic/kibana-on-merge/kibana/packages/kbn-es/src/install/install_snapshot.ts:43:20)
at installSnapshot (/var/lib/buildkite-agent/builds/kb-n2-4-spot-a56a6fe843560bb5/elastic/kibana-on-merge/kibana/packages/kbn-es/src/install/install_snapshot.ts:70:28)
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-a56a6fe843560bb5/elastic/kibana-on-merge/kibana/packages/kbn-es/src/cluster.js:101:31
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-a56a6fe843560bb5/elastic/kibana-on-merge/kibana/packages/kbn-tooling-log/src/tooling_log.ts:84:18
at Cluster.installSnapshot (/var/lib/buildkite-agent/builds/kb-n2-4-spot-a56a6fe843560bb5/elastic/kibana-on-merge/kibana/packages/kbn-es/src/cluster.js:100:12)
at TestCluster.start (/var/lib/buildkite-agent/builds/kb-n2-4-spot-a56a6fe843560bb5/elastic/kibana-on-merge/kibana/packages/kbn-test/src/es/test_es_cluster.ts:220:24)
at startES (/var/lib/buildkite-agent/builds/kb-n2-4-spot-a56a6fe843560bb5/elastic/kibana-on-merge/kibana/packages/core/test-helpers/core-test-helpers-kbn-server/src/create_root.ts:268:7)
at setupIntegrationEnvironment (/var/lib/buildkite-agent/builds/kb-n2-4-spot-a56a6fe843560bb5/elastic/kibana-on-merge/kibana/src/plugins/files/server/test_utils/setup_integration_environment.ts:85:20)
at Object.<anonymous> (/var/lib/buildkite-agent/builds/kb-n2-4-spot-a56a6fe843560bb5/elastic/kibana-on-merge/kibana/src/plugins/files/server/routes/integration_tests/routes.test.ts:22:19)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/28476#01872f0c-2836-4113-ad6e-954d8d472f5c)
<!-- kibanaCiData = {"failed-test":{"test.class":"Jest Integration Tests.src/plugins/files/server/routes/integration_tests","test.name":"File HTTP API find names","test.failCount":1}} --> | 1.0 | Failing test: Jest Integration Tests.src/plugins/files/server/routes/integration_tests - File HTTP API find names - A test failed on a tracked branch
```
Error: Unable to read snapshot manifest: Internal Server Error
<?xml version='1.0' encoding='UTF-8'?><Error><Code>InternalError</Code><Message>We encountered an internal error. Please try again.</Message><Details>AMJxxAvIufxAUmzh9LCkPQpCGg3lQXdMox3AA81kG4a5ilXZeowi+sALreC8QCDxraq4GUu5/iw6CVsFGT+Fq9wHjsNVWb0sUN2FWMrNtS2fK8lJJekqWCVE86zQbUWJ8CvyTIitBsFd</Details></Error>
at getArtifactSpecForSnapshot (/var/lib/buildkite-agent/builds/kb-n2-4-spot-a56a6fe843560bb5/elastic/kibana-on-merge/kibana/packages/kbn-es/src/artifact.ts:151:11)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at Function.getSnapshot (/var/lib/buildkite-agent/builds/kb-n2-4-spot-a56a6fe843560bb5/elastic/kibana-on-merge/kibana/packages/kbn-es/src/artifact.ts:194:26)
at downloadSnapshot (/var/lib/buildkite-agent/builds/kb-n2-4-spot-a56a6fe843560bb5/elastic/kibana-on-merge/kibana/packages/kbn-es/src/install/install_snapshot.ts:43:20)
at installSnapshot (/var/lib/buildkite-agent/builds/kb-n2-4-spot-a56a6fe843560bb5/elastic/kibana-on-merge/kibana/packages/kbn-es/src/install/install_snapshot.ts:70:28)
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-a56a6fe843560bb5/elastic/kibana-on-merge/kibana/packages/kbn-es/src/cluster.js:101:31
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-a56a6fe843560bb5/elastic/kibana-on-merge/kibana/packages/kbn-tooling-log/src/tooling_log.ts:84:18
at Cluster.installSnapshot (/var/lib/buildkite-agent/builds/kb-n2-4-spot-a56a6fe843560bb5/elastic/kibana-on-merge/kibana/packages/kbn-es/src/cluster.js:100:12)
at TestCluster.start (/var/lib/buildkite-agent/builds/kb-n2-4-spot-a56a6fe843560bb5/elastic/kibana-on-merge/kibana/packages/kbn-test/src/es/test_es_cluster.ts:220:24)
at startES (/var/lib/buildkite-agent/builds/kb-n2-4-spot-a56a6fe843560bb5/elastic/kibana-on-merge/kibana/packages/core/test-helpers/core-test-helpers-kbn-server/src/create_root.ts:268:7)
at setupIntegrationEnvironment (/var/lib/buildkite-agent/builds/kb-n2-4-spot-a56a6fe843560bb5/elastic/kibana-on-merge/kibana/src/plugins/files/server/test_utils/setup_integration_environment.ts:85:20)
at Object.<anonymous> (/var/lib/buildkite-agent/builds/kb-n2-4-spot-a56a6fe843560bb5/elastic/kibana-on-merge/kibana/src/plugins/files/server/routes/integration_tests/routes.test.ts:22:19)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/28476#01872f0c-2836-4113-ad6e-954d8d472f5c)
<!-- kibanaCiData = {"failed-test":{"test.class":"Jest Integration Tests.src/plugins/files/server/routes/integration_tests","test.name":"File HTTP API find names","test.failCount":1}} --> | test | failing test jest integration tests src plugins files server routes integration tests file http api find names a test failed on a tracked branch error unable to read snapshot manifest internal server error internalerror we encountered an internal error please try again at getartifactspecforsnapshot var lib buildkite agent builds kb spot elastic kibana on merge kibana packages kbn es src artifact ts at runmicrotasks at processticksandrejections node internal process task queues at function getsnapshot var lib buildkite agent builds kb spot elastic kibana on merge kibana packages kbn es src artifact ts at downloadsnapshot var lib buildkite agent builds kb spot elastic kibana on merge kibana packages kbn es src install install snapshot ts at installsnapshot var lib buildkite agent builds kb spot elastic kibana on merge kibana packages kbn es src install install snapshot ts at var lib buildkite agent builds kb spot elastic kibana on merge kibana packages kbn es src cluster js at var lib buildkite agent builds kb spot elastic kibana on merge kibana packages kbn tooling log src tooling log ts at cluster installsnapshot var lib buildkite agent builds kb spot elastic kibana on merge kibana packages kbn es src cluster js at testcluster start var lib buildkite agent builds kb spot elastic kibana on merge kibana packages kbn test src es test es cluster ts at startes var lib buildkite agent builds kb spot elastic kibana on merge kibana packages core test helpers core test helpers kbn server src create root ts at setupintegrationenvironment var lib buildkite agent builds kb spot elastic kibana on merge kibana src plugins files server test utils setup integration environment ts at object var lib buildkite agent builds kb spot elastic kibana on merge kibana src plugins files server 
routes integration tests routes test ts first failure | 1 |
14,238 | 2,795,282,160 | IssuesEvent | 2015-05-11 21:09:29 | STEllAR-GROUP/hpx | https://api.github.com/repos/STEllAR-GROUP/hpx | closed | boost::filesystem::exists throws unexpected exception | category: init type: defect | `boost::filesystem::exists` throw an exception if the passed path can not be read due to missing permissions. This might lead to problems at startup during the loading of component ini data. | 1.0 | boost::filesystem::exists throws unexpected exception - `boost::filesystem::exists` throw an exception if the passed path can not be read due to missing permissions. This might lead to problems at startup during the loading of component ini data. | non_test | boost filesystem exists throws unexpected exception boost filesystem exists throw an exception if the passed path can not be read due to missing permissions this might lead to problems at startup during the loading of component ini data | 0 |
282,820 | 24,498,365,473 | IssuesEvent | 2022-10-10 10:38:38 | dzhw/zofar | https://api.github.com/repos/dzhw/zofar | closed | calendar: special characters in open questions | status: testing | When respondents put in special characters in open questions that will put in JSON-Array a message appears that unallowed characters encountered.
`Encountered " <ILLEGAL_CHARACTER> "\' "" at line 1, column 5.`
We need zofar functions for escape/unescape and include those in convienience methods for load and put in JSON-Array. We also need to pay attention on our usual functions/methods for saving variables. | 1.0 | calendar: special characters in open questions - When respondents put in special characters in open questions that will put in JSON-Array a message appears that unallowed characters encountered.
`Encountered " <ILLEGAL_CHARACTER> "\' "" at line 1, column 5.`
We need zofar functions for escape/unescape and include those in convienience methods for load and put in JSON-Array. We also need to pay attention on our usual functions/methods for saving variables. | test | calendar special characters in open questions when respondents put in special characters in open questions that will put in json array a message appears that unallowed characters encountered encountered at line column we need zofar functions for escape unescape and include those in convienience methods for load and put in json array we also need to pay attention on our usual functions methods for saving variables | 1 |
186,873 | 6,743,322,897 | IssuesEvent | 2017-10-20 11:33:48 | status-im/status-react | https://api.github.com/repos/status-im/status-react | closed | Wallets: grey screen without Main Wallet is shown | bug intermediate medium-priority wallet wontfix | ### Description
[comment]: # (Feature or Bug? i.e Type: Bug)
*Type*: Bug
[comment]: # (Describe the feature you would like, or briefly summarise the bug and what you did, what you expected to happen, and what actually happens. Sections below)
*Summary*: Grey screen without Main Wallet "card" is shown.
#### Expected behavior
[comment]: # (Describe what you expected to happen.)
Wallets screen with Wallet "card" is shown

#### Actual behavior
[comment]: # (Describe what actually happened.)
Grey screen with no Main Wallet "card" is shown

### Reproduction
[comment]: # (Describe how we can replicate the bug step by step.)
Screencast on Android 6.0.1 (Samsung Galaxy S6): https://www.screencast.com/t/MWH44Yeh
Screencast on Android 7.0 (Samsung Galaxy S7): https://www.screencast.com/t/Ssyc4yHSZdw
- Open Status and login
- In Chats tap on Wallet -> ok Wallets screen with Main Wallet card is shown
- Tap on device Back button to return to Chats
- Tap on console
- In console tap on "commands" button
- Tap on /faucet command
- Tap on "Status Testnet Faucet" and tap on send icon
- Tap on "<" to return to Chats
- Tap on Wallet
### Additional Information
[comment]: # (Please do your best to fill this out.)
* Status version: 0.9.4
* Operating System: Android 6.0.1 (Samsung Galaxy S6) and Android 7.0 (Samsung Galaxy S7)
Note that there is Instabug report on iOS too, however I can't find the steps to reproduce it on iOS.
#### Logs
* Android 7.0 (Samsung Galaxy S7): [20172903192135-logcat.txt](https://github.com/status-im/status-react/files/880045/20172903192135-logcat.txt)
* Android 6.0.1 (Samsung Galaxy S6): [20172903191951-logcat.txt](https://github.com/status-im/status-react/files/880046/20172903191951-logcat.txt)
#### Instabug reports:
iOS:
Not sure how to reproduce issue on iOS
* 0.9.4 (grey screen) https://dashboard.instabug.com/applications/status/beta/bugs/166
Android:
* 0.9.4 (grey screen) https://dashboard.instabug.com/applications/status-10bb8cbc-15f6-44f0-8c7a-db478dc34d7d/beta/bugs/327
* 0.9.4 (grey screen) https://dashboard.instabug.com/applications/status-10bb8cbc-15f6-44f0-8c7a-db478dc34d7d/beta/bugs/312
* 0.9.2 (grey screen) https://dashboard.instabug.com/applications/status-10bb8cbc-15f6-44f0-8c7a-db478dc34d7d/beta/bugs/245
* 0.9.2 (grey screen) https://dashboard.instabug.com/applications/status-10bb8cbc-15f6-44f0-8c7a-db478dc34d7d/beta/bugs/234
* 0.9.2 (grey screen) https://dashboard.instabug.com/applications/status-10bb8cbc-15f6-44f0-8c7a-db478dc34d7d/beta/bugs/217
* 0.9.1 (grey screen) https://dashboard.instabug.com/applications/status-10bb8cbc-15f6-44f0-8c7a-db478dc34d7d/beta/bugs/146 | 1.0 | Wallets: grey screen without Main Wallet is shown - ### Description
[comment]: # (Feature or Bug? i.e Type: Bug)
*Type*: Bug
[comment]: # (Describe the feature you would like, or briefly summarise the bug and what you did, what you expected to happen, and what actually happens. Sections below)
*Summary*: Grey screen without Main Wallet "card" is shown.
#### Expected behavior
[comment]: # (Describe what you expected to happen.)
Wallets screen with Wallet "card" is shown

#### Actual behavior
[comment]: # (Describe what actually happened.)
Grey screen with no Main Wallet "card" is shown

### Reproduction
[comment]: # (Describe how we can replicate the bug step by step.)
Screencast on Android 6.0.1 (Samsung Galaxy S6): https://www.screencast.com/t/MWH44Yeh
Screencast on Android 7.0 (Samsung Galaxy S7): https://www.screencast.com/t/Ssyc4yHSZdw
- Open Status and login
- In Chats tap on Wallet -> ok Wallets screen with Main Wallet card is shown
- Tap on device Back button to return to Chats
- Tap on console
- In console tap on "commands" button
- Tap on /faucet command
- Tap on "Status Testnet Faucet" and tap on send icon
- Tap on "<" to return to Chats
- Tap on Wallet
### Additional Information
[comment]: # (Please do your best to fill this out.)
* Status version: 0.9.4
* Operating System: Android 6.0.1 (Samsung Galaxy S6) and Android 7.0 (Samsung Galaxy S7)
Note that there is Instabug report on iOS too, however I can't find the steps to reproduce it on iOS.
#### Logs
* Android 7.0 (Samsung Galaxy S7): [20172903192135-logcat.txt](https://github.com/status-im/status-react/files/880045/20172903192135-logcat.txt)
* Android 6.0.1 (Samsung Galaxy S6): [20172903191951-logcat.txt](https://github.com/status-im/status-react/files/880046/20172903191951-logcat.txt)
#### Instabug reports:
iOS:
Not sure how to reproduce issue on iOS
* 0.9.4 (grey screen) https://dashboard.instabug.com/applications/status/beta/bugs/166
Android:
* 0.9.4 (grey screen) https://dashboard.instabug.com/applications/status-10bb8cbc-15f6-44f0-8c7a-db478dc34d7d/beta/bugs/327
* 0.9.4 (grey screen) https://dashboard.instabug.com/applications/status-10bb8cbc-15f6-44f0-8c7a-db478dc34d7d/beta/bugs/312
* 0.9.2 (grey screen) https://dashboard.instabug.com/applications/status-10bb8cbc-15f6-44f0-8c7a-db478dc34d7d/beta/bugs/245
* 0.9.2 (grey screen) https://dashboard.instabug.com/applications/status-10bb8cbc-15f6-44f0-8c7a-db478dc34d7d/beta/bugs/234
* 0.9.2 (grey screen) https://dashboard.instabug.com/applications/status-10bb8cbc-15f6-44f0-8c7a-db478dc34d7d/beta/bugs/217
* 0.9.1 (grey screen) https://dashboard.instabug.com/applications/status-10bb8cbc-15f6-44f0-8c7a-db478dc34d7d/beta/bugs/146 | non_test | wallets grey screen without main wallet is shown description feature or bug i e type bug type bug describe the feature you would like or briefly summarise the bug and what you did what you expected to happen and what actually happens sections below summary grey screen without main wallet card is shown expected behavior describe what you expected to happen wallets screen with wallet card is shown actual behavior describe what actually happened grey screen with no main wallet card is shown reproduction describe how we can replicate the bug step by step screencast on android samsung galaxy screencast on android samsung galaxy open status and login in chats tap on wallet ok wallets screen with main wallet card is shown tap on device back button to return to chats tap on console in console tap on commands button tap on faucet command tap on status testnet faucet and tap on send icon tap on to return to chats tap on wallet additional information please do your best to fill this out status version operating system android samsung galaxy and android samsung galaxy note that there is instabug report on ios too however i can t find the steps to reproduce it on ios logs android samsung galaxy android samsung galaxy instabug reports ios not sure how to reproduce issue on ios grey screen android grey screen grey screen grey screen grey screen grey screen grey screen | 0 |
290,200 | 21,871,438,276 | IssuesEvent | 2022-05-19 05:54:48 | numpy/numpy | https://api.github.com/repos/numpy/numpy | opened | DOC: The definition of `numpy.sinc` in the docstring should be valid for the whole domain | 04 - Documentation | ### Issue with current documentation:
Currently, the second line of the docstring for `numpy.sinc` reads:
> The sinc function is :math:`\sin(\pi x)/(\pi x)`.
Later, in the notes, it is mentioned that
> ``sinc(0)`` is the limit value 1.
Since the analytic character of the sinc function is important in applications and also affects the implementation, I would suggest that the short description be changed to
> The sinc function is :math:`\lim_{t\to x} \sin(\pi t)/(\pi t)`.
I came across this (admittedly very minor and nitpicky) issue through `jax.numpy`, which [inherits the first part of NumPy's docstring](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.sinc.html). In the case of JAX the behavior at 0 is more crucial for compatibility with their automatic differentiation engine.
### Idea or request for content:
_No response_ | 1.0 | DOC: The definition of `numpy.sinc` in the docstring should be valid for the whole domain - ### Issue with current documentation:
Currently, the second line of the docstring for `numpy.sinc` reads:
> The sinc function is :math:`\sin(\pi x)/(\pi x)`.
Later, in the notes, it is mentioned that
> ``sinc(0)`` is the limit value 1.
Since the analytic character of the sinc function is important in applications and also affects the implementation, I would suggest that the short description be changed to
> The sinc function is :math:`\lim_{t\to x} \sin(\pi t)/(\pi t)`.
I came across this (admittedly very minor and nitpicky) issue through `jax.numpy`, which [inherits the first part of NumPy's docstring](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.sinc.html). In the case of JAX the behavior at 0 is more crucial for compatibility with their automatic differentiation engine.
### Idea or request for content:
_No response_ | non_test | doc the definition of numpy sinc in the docstring should be valid for the whole domain issue with current documentation currently the second line of the docstring for numpy sinc reads the sinc function is math sin pi x pi x later in the notes it is mentioned that sinc is the limit value since the analytic character of the sinc function is important in applications and also affects the implementation i would suggest that the short description be changed to the sinc function is math lim t to x sin pi t sin pi t i came across this admittedly very minor and nitpicky issue through jax numpy which in the case of jax the behavior at is more crucial for compatibility with their automatic differentiation engine idea or request for content no response | 0 |
163,795 | 13,927,533,818 | IssuesEvent | 2020-10-21 19:59:19 | getsentry/sentry | https://api.github.com/repos/getsentry/sentry | closed | Trailing slash in URL breaks U2F. | Component: Documentation | Testing latest master (I believe, listed as 8.23.0.dev0) if we run with a trailing slash in the configured URL, then U2F fails thinking it's an URL mismatch:
(company-internal domain replaced by `our.site` below, and some IPs changed)
```
ValueError: Invalid facet! Was: u'https://our.site', expecting one of: [u'https://our.site/']
```
Notice that the only difference is the trailing slash.
For more context:
```
17:59:34 [INFO] sentry.superuser: superuser.request (user_id=1 url=u'https://our.site/api/0/users/me/authenticators/u2f/enroll/' method=u'POST' ip_address=u'10.123.123.123')
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/sentry-8.23.0.dev0-py2.7.egg/sentry/api/base.py", line 87, in handle_exception
response = super(Endpoint, self).handle_exception(exc)
File "/usr/local/lib/python2.7/site-packages/sentry-8.23.0.dev0-py2.7.egg/sentry/api/base.py", line 170, in dispatch
response = handler(request, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/sentry-8.23.0.dev0-py2.7.egg/sentry/api/decorators.py", line 28, in wrapped
return func(self, request, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/sentry-8.23.0.dev0-py2.7.egg/sentry/api/endpoints/user_authenticator_enroll.py", line 182, in post
serializer.data['deviceName']
File "/usr/local/lib/python2.7/site-packages/sentry-8.23.0.dev0-py2.7.egg/sentry/models/authenticator.py", line 554, in try_enroll
binding, cert = u2f.complete_register(enrollment_data, response_data, self.u2f_facets)
File "/usr/local/lib/python2.7/site-packages/u2flib_server/u2f.py", line 64, in complete_register
valid_facets)
File "/usr/local/lib/python2.7/site-packages/u2flib_server/u2f_v2.py", line 203, in complete_register
"navigator.id.finishEnrollment", valid_facets)
File "/usr/local/lib/python2.7/site-packages/u2flib_server/u2f_v2.py", line 184, in _validate_client_data
client_data.origin, valid_facets))
ValueError: Invalid facet! Was: u'https://our.site', expecting one of: [u'https://our.site/']
10.123.123.123 - - [05/Jun/2018:17:59:34 +0000] "POST /api/0/users/me/authenticators/u2f/enroll/ HTTP/1.1" 500 576 "https://our.site/settings/account/security/u2f/enroll/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36"
```
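For context on why the exact string comparison matters: `python-u2flib-server` compares the client origin against the configured facets verbatim, so `https://our.site` and `https://our.site/` never match. A hypothetical normalization step that would make the two forms compare equal (illustrative names only, not Sentry's actual fix):

```python
from urllib.parse import urlsplit, urlunsplit

def normalize_facet(url):
    """Collapse 'https://our.site/' and 'https://our.site' to one canonical form."""
    parts = urlsplit(url.rstrip("/"))
    # Keep only scheme and host; origins have no path/query/fragment.
    return urlunsplit((parts.scheme.lower(), parts.netloc.lower(), "", "", ""))

def validate_origin(client_origin, valid_facets):
    """Raise ValueError unless the client origin matches a configured facet."""
    allowed = {normalize_facet(f) for f in valid_facets}
    if normalize_facet(client_origin) not in allowed:
        raise ValueError("Invalid facet! Was: %r, expecting one of: %r"
                         % (client_origin, sorted(allowed)))
```

With this normalization, the enrollment in the traceback above would succeed regardless of the trailing slash, while a genuinely different origin would still be rejected.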
Changing the URL through the admin interface and restarting the web server resolved the issue. It's an easy fix, but it's not obvious what's going on until checking the logs. | 1.0 | non_test | 0
109,250 | 23,743,789,751 | IssuesEvent | 2022-08-31 14:26:16 | sourcegraph/sourcegraph | https://api.github.com/repos/sourcegraph/sourcegraph | closed | Decide on metrics showing auto-indexing performance / correctness | team/code-intelligence team/language-platform-and-navigation | Decide on metrics showing auto-indexing performance/correctness, implement alerting & write Cloud playbook.
See [RFC 723](https://docs.google.com/document/d/1Ss7ZffP82rV6nl4i5gBmICfuMK1xvSFI4Y5sLPXyDZc/edit#) for details | 1.0 | non_test | 0
124,732 | 10,321,571,399 | IssuesEvent | 2019-08-31 03:22:31 | istio/istio | https://api.github.com/repos/istio/istio | closed | Create 1.3.0-rc.1 pre-release | area/test and release do-not-merge/work-in-progress | - [x] Prepare `istio/cni`.
- [x] Sync the fork of the `istio-iptables.sh` script.
```
git clone [email protected]:istio/cni.git
pushd cni
git checkout release-1.3
git checkout -b update-istio-iptables-release-1.3
make -f tools/istio-cni-docker.mk update-istio-iptables.sh
git add tools/packaging/common/istio-iptables.sh
git commit -m 'Update istio-iptables.sh'
git push --set-upstream origin update-istio-iptables-release-1.3
popd
```
- [x] Check whether the commit SHAs in `istio/istio` and `istio/proxy` are consistent.
- [x] Check whether the same `istio/api` commit is used in `istio/istio` and `istio/proxy`.
```
git clone [email protected]:istio/istio.git
pushd istio
git checkout release-1.3
export ISTIO_API_SHA_ISTIO=$(sed -n 's,^.*istio\.io/api v0.0.0-[^-]*-,,p' go.mod)
export ISTIO_PROXY_SHA_ISTIO=$(jq -r '.[] | select(.repoName == "proxy") | .lastStableSHA' < istio.deps)
export GO_CONTROL_PLANE_SHA=$(sed -n 's,^.*github.com/envoyproxy/go-control-plane ,,p' go.mod)
popd
git clone [email protected]:istio/proxy.git
pushd proxy
git checkout $ISTIO_PROXY_SHA_ISTIO
export ISTIO_API_SHA_PROXY=$(git rev-parse --short=12 $(sed -n 's,^ISTIO_API = "\([^"]*\)".*$,\1,p' < repositories.bzl))
export ENVOY_SHA_PROXY=$(sed -n 's,^ENVOY_SHA = "\([^"]*\)".*$,\1,p' < WORKSPACE)
popd
test "$ISTIO_API_SHA_ISTIO" != "$ISTIO_API_SHA_PROXY" && echo "different istio/api commits in istio/istio and istio/proxy"
```
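The `sed`/`jq` one-liners above can also be sketched as small Python helpers for scripting the same extractions (hypothetical helper names; the regexes mirror the patterns above, and the Go pseudo-version format is `v0.0.0-<14-digit timestamp>-<12-char SHA>`):

```python
import json
import re

def istio_api_sha_from_go_mod(go_mod_text):
    """Extract the 12-char istio/api commit from a go.mod pseudo-version line."""
    m = re.search(r'istio\.io/api v0\.0\.0-\d{14}-([0-9a-f]{12})', go_mod_text)
    return m.group(1) if m else None

def proxy_sha_from_istio_deps(istio_deps_text):
    """Extract lastStableSHA for the 'proxy' entry, like the jq filter above."""
    for entry in json.loads(istio_deps_text):
        if entry.get("repoName") == "proxy":
            return entry.get("lastStableSHA")
    return None
```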
- [x] Check whether the version of Envoy from which the go-control-plane's API has been extracted from is no more than a few days older to the version of Envoy used in `istio/proxy`.
```
git clone [email protected]:envoyproxy/go-control-plane.git
pushd go-control-plane
git checkout $GO_CONTROL_PLANE_SHA
export DATA_PLANE_API_SHA=$(git submodule | sed -n 's,^-\([^ ]*\) data-plane-api.*$,\1,p')
popd
git clone [email protected]:envoyproxy/data-plane-api.git
pushd data-plane-api
git checkout $DATA_PLANE_API_SHA
export ENVOY_SHA_DATA_PLANE_API=$(git log -1 | sed -n 's,^.*Mirrored from.*envoy @ ,,p')
popd
echo "Envoy SHA used in istio/proxy: https://github.com/istio/envoy/commit/$ENVOY_SHA_PROXY"
echo "Envoy SHA used in envoyproxy/go-control-plane: https://github.com/envoyproxy/envoy/commit/$ENVOY_SHA_DATA_PLANE_API"
```
If the versions are different, you'll have to update the version used in `istio/istio` or `istio/proxy` to use the most recent version in both repos.
- [x] Check whether the API commit SHA is the same in both `repositories.bzl` and `istio.deps` in `istio/proxy`.
```
pushd proxy
git checkout $ISTIO_PROXY_SHA
export API_COMMIT_IN_REPOSITORIES=$(sed -n 's,^ISTIO_API = "\([^"]*\)".*$,\1,p' < repositories.bzl)
export API_COMMIT_IN_DEPS=$(jq -r '.[] | select(.repoName == "api") | .lastStableSHA' < istio.deps)
test "$API_COMMIT_IN_REPOSITORIES" != "$API_COMMIT_IN_DEPS" && echo "different API commits"
popd
```
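A hypothetical self-contained version of this consistency check, mirroring the `sed` and `jq` calls above:

```python
import json
import re

def api_commits_consistent(repositories_bzl, istio_deps_text):
    """True when ISTIO_API in repositories.bzl matches istio.deps' api entry."""
    m = re.search(r'ISTIO_API\s*=\s*"([^"]+)"', repositories_bzl)
    bzl_sha = m.group(1) if m else None
    deps_sha = next((e.get("lastStableSHA") for e in json.loads(istio_deps_text)
                     if e.get("repoName") == "api"), None)
    return bzl_sha is not None and bzl_sha == deps_sha
```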
- [x] Update the SHAs in `istio/proxy`. All SHAs should ideally be updated in a single PR since it takes ~1.5 hours to build an image after it is merged.
- [x] If necessary, update the version of Envoy used in `istio/proxy`. Update `WORKSPACE` and `istio.deps` with the same SHA. The SHA must exist in the [istio/envoy:release-1.3](https://github.com/istio/envoy/tree/release-1.3) branch. ✅
- [x] If the `istio/api` SHAs are different within `istio/proxy` between `repositories.bzl` and `istio.deps`, fix it in `istio.deps`. ✅
- [x] If the `istio/api` SHA is behind the one used in `istio/istio`, replace the SHA in both `repositories.bzl` and `istio.deps` to use the same as in `istio/istio`. ✅
- [x] Update the SHAs in `istio/istio`. All SHAs should ideally be updated in a single PR to ensure consistency.
- [x] If the version of API used in the `go-control-plane` version is too far behind the Envoy version used in `istio/proxy`: ✅
- [x] If necessary, first create a new release of [go-control-plane](https://github.com/envoyproxy/go-control-plane) that uses the same API version as the Envoy version used in `istio/proxy`.
- [x] Update the `go-control-plane` version in `istio/istio` in `go.mod`.
- [x] If necessary, update the `istio/proxy` SHA in `istio/istio` in `istio.deps`. [`istio.deps`](https://github.com/istio/istio/pull/16664/files#diff-3180d69f98f2fa18068a603520e6026d)
- [x] Update the `istio/cni` SHA in `istio/istio` in `istio.deps`. This SHA must be the `HEAD` of the `release-1.3` branch. The release script pulls the `HEAD` of the branch, not this specific SHA, but the SHA is printed out in the release manifest, so it's better to keep it consistent. ✅
- [x] Build the release by updating the `monthly/release_params.sh` parameters to use the SHA of the `HEAD` of `istio.istio:release-1.3`, and the `1.3.0-rc.1` version.
```
git clone [email protected]:istio/istio.git
pushd istio
git checkout release-1.3
export ISTIO_SHA=$(git rev-parse HEAD)
popd
git clone [email protected]:istio-releases/pipeline.git
pushd pipeline
git checkout release-1.3
git checkout -b create-1.3.0-rc.1-release
sed -e 's/CB_BRANCH=.*$/CB_BRANCH=release-1.3/' \
-e 's/CB_COMMIT=.*$/CB_COMMIT='$ISTIO_SHA'/' \
-e 's/CB_VERSION=.*$/CB_VERSION=1.3.0-rc.1/' \
-i monthly/release_params.sh
git add monthly/release_params.sh
git commit -m 'Build release 1.3.0-rc.1'
git push --set-upstream origin create-1.3.0-rc.1-release
popd
```
Wait until the build finishes and is available in https://gcsweb.istio.io/gcs/istio-prerelease/prerelease/1.3.0-rc.1/ and in draft state in https://github.com/istio/istio/releases.
- [x] Run the long-running tests on the build. (owners: @mandarjog, @howardjohn, @JimmyCYJ)
- [x] Publish the release in the GitHub UI. Use the `edit` button on the draft release in https://github.com/istio/istio/releases. Make sure the `pre-release` box is checked. Remove the `RELEASE NOTES` link from the description. | 1.0 | test | 1
23,532 | 10,894,904,567 | IssuesEvent | 2019-11-19 09:38:01 | elikkatzgit/quantumsim | https://api.github.com/repos/elikkatzgit/quantumsim | closed | CVE-2013-7459 (High) detected in pycrypto-2.6.1.tar.gz | security vulnerability | ## CVE-2013-7459 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>pycrypto-2.6.1.tar.gz</b></p></summary>
<p>Cryptographic modules for Python.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/60/db/645aa9af249f059cc3a368b118de33889219e0362141e75d4eaf6f80f163/pycrypto-2.6.1.tar.gz">https://files.pythonhosted.org/packages/60/db/645aa9af249f059cc3a368b118de33889219e0362141e75d4eaf6f80f163/pycrypto-2.6.1.tar.gz</a></p>
<p>Path to dependency file: /tmp/ws-scm/quantumsim/requirements.txt</p>
<p>Path to vulnerable library: /quantumsim/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **pycrypto-2.6.1.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/elikkatzgit/quantumsim/commit/d6624156203bb0fc439915ed3fc47432b9cbbeb5">d6624156203bb0fc439915ed3fc47432b9cbbeb5</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Heap-based buffer overflow in the ALGnew function in block_templace.c in Python Cryptography Toolkit (aka pycrypto) allows remote attackers to execute arbitrary code as demonstrated by a crafted iv parameter to cryptmsg.py.
<p>Publish Date: 2017-02-15
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-7459>CVE-2013-7459</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://security.gentoo.org/glsa/201702-14">https://security.gentoo.org/glsa/201702-14</a></p>
<p>Release Date: 2017-02-20</p>
<p>Fix Resolution: All PyCrypto users should upgrade to the latest version >= pycrypto-2.6.1-r2
</p>
</p>
</details>
<p></p>
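While upgrading, a defensive runtime gate could refuse to run against a vulnerable install. This is only a sketch; the version semantics (Gentoo's `-rN` revision suffix, fix landing in `2.6.1-r2`) are taken from the advisory above:

```python
def pycrypto_is_vulnerable(version):
    """CVE-2013-7459 affects PyCrypto up to 2.6.1; Gentoo's 2.6.1-r2 is fixed."""
    base, _, revision = version.partition("-r")
    parts = tuple(int(p) for p in base.split("."))
    if parts > (2, 6, 1):
        return False
    if parts == (2, 6, 1) and revision and int(revision) >= 2:
        return False
    return True
```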
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"pycrypto","packageVersion":"2.6.1","isTransitiveDependency":false,"dependencyTree":"pycrypto:2.6.1","isMinimumFixVersionAvailable":false}],"vulnerabilityIdentifier":"CVE-2013-7459","vulnerabilityDetails":"Heap-based buffer overflow in the ALGnew function in block_templace.c in Python Cryptography Toolkit (aka pycrypto) allows remote attackers to execute arbitrary code as demonstrated by a crafted iv parameter to cryptmsg.py.","vulnerabilityUrl":"https://cve.mitre.org/cgi-bin/cvename.cgi?name\u003dCVE-2013-7459","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | non_test | 0
26,560 | 20,251,623,515 | IssuesEvent | 2022-02-14 18:28:13 | google/clspv | https://api.github.com/repos/google/clspv | opened | Switch to opaque pointers | infrastructure | LLVM is moving towards using opaque pointers generally. The information that used to be in the pointer element type is available on GEPs and load/store instructions. Clspv needs to switch uses of `getNonOpaquePointerElementType()` to more robust ways of accessing the desired type. | 1.0 | non_test | 0
333,392 | 29,580,664,823 | IssuesEvent | 2023-06-07 05:22:18 | backdrop/backdrop-issues | https://api.github.com/repos/backdrop/backdrop-issues | closed | PR sandboxes provisioned with PHP 7.2 instead of 8.1 | type - bug report status - has pull request pr - needs testing pr - ready to be committed | We switched our PR sandboxes to be using PHP 8.1 in #5926. However, for reasons that seem upstream, our PR sandboxes are being provisioned with PHP 7.2 when using the `tugboatqa/php:8.1-apache` image.
I have tested this in a "dummy" PR. and switching to `tugboatqa/php:8.0-apache` or `tugboatqa/php:7.4-apache` provisions the sandboxes with the respective image 👍🏼 ...I have also checked https://github.com/TugboatQA/dockerfiles/blob/main/php/TAGS.md and `8.1-apache` is a valid tag, however testing showed that it simply doesn't work.
We had a discussion in Zulip about this with @BWPanda, @indigoxela and @larsdesigns: https://backdrop.zulipchat.com/#narrow/stream/218635-Backdrop/topic/Core.20PR.20Sandboxes/near/362303731
Testing has confirmed that using one of the image tags that include the OS in their name works, so we should use those instead. | 1.0 | PR sandboxes provisioned with PHP 7.2 instead of 8.1 - We switched our PR sandboxes to be using PHP 8.1 in #5926. However, for reasons that seem upstream, our PR sandboxes are being provisioned with PHP 7.2 when using the `tugboatqa/php:8.1-apache` image.
I have tested this in a "dummy" PR. and switching to `tugboatqa/php:8.0-apache` or `tugboatqa/php:7.4-apache` provisions the sandboxes with the respective image 👍🏼 ...I have also checked https://github.com/TugboatQA/dockerfiles/blob/main/php/TAGS.md and `8.1-apache` is a valid tag, however testing showed that it simply doesn't work.
We had a discussion in Zulip about this with @BWPanda, @indigoxela and @larsdesigns: https://backdrop.zulipchat.com/#narrow/stream/218635-Backdrop/topic/Core.20PR.20Sandboxes/near/362303731
Testing has confirmed that using one of the image tags that include the OS in their name works, so we should use those instead. | test | pr sandboxes provisioned with php instead of we switched our pr sandboxes to be using php in however for reasons that seem upstream our pr sandboxes are being provisioned with php when using the tugboatqa php apache image i have tested this in a dummy pr and switching to tugboatqa php apache or tugboatqa php apache provisions the sandboxes with the respective image 👍🏼 i have also checked and apache is a valid tag however testing showed that it simply doesn t work we had a discussion in zulip about this with bwpanda indigoxela and larsdesigns testing has confirmed that using one of the image tags that include the os in their name works so we should use those instead | 1 |
426,381 | 12,371,705,875 | IssuesEvent | 2020-05-18 19:02:49 | Automattic/abacus | https://api.github.com/repos/Automattic/abacus | closed | Check Resolved ESLint Rules | [!priority] medium [component] experimenter interface [type] enhancement | We are using a bunch of ESLint plugins, many of which have their own recommended configuration. This makes it hard to know what rules are actually in place. One could look at the source code of each plugin and merge them together to figure out the ESLint configuration being used. Fortunately, ESLint comes with the ability to print the configuration for a given file. Just run ESLint with the --print-config flag and a file for which you want to know the configuration.
```
eslint --print-config file.js
```
Let's make sure `eslint-plugin-react`, `eslint-plugin-react-hooks`, and `eslint-plugin-jsx-a11y` are in place and configured. If not, add them and configure them. | 1.0 | Check Resolved ESLint Rules - We are using a bunch of ESLint plugins, many of which have their own recommended configuration. This makes it hard to know what rules are actually in place. One could look at the source code of each plugin and merge them together to figure out the ESLint configuration being used. Fortunately, ESLint comes with the ability to print the configuration for a given file. Just run ESLint with the --print-config flag and a file for which you want to know the configuration.
```
eslint --print-config file.js
```
Let's make sure `eslint-plugin-react`, `eslint-plugin-react-hooks`, and `eslint-plugin-jsx-a11y` are in place and configured. If not, add them and configure them. | non_test | check resolved eslint rules we are using a bunch of eslint plugins many of which have their own recommended configuration this makes it hard to know what rules are actually in place one could look at the source code of each plugin and merge them together to figure out the eslint configuration being used fortunately eslint comes with the ability to print the configuration for a given file just run eslint with the print config flag and a file for which you want to know the configuration eslint print config file js let s make sure eslint plugin react eslint plugin react hooks and eslint plugin jsx are in place and configured if not add them and configure them | 0 |
274,206 | 29,929,677,072 | IssuesEvent | 2023-06-22 08:36:08 | amaybaum-local/vprofile-project8 | https://api.github.com/repos/amaybaum-local/vprofile-project8 | closed | CVE-2018-1257 (Medium, reachable) detected in spring-messaging-4.3.7.RELEASE.jar - autoclosed | Mend: dependency security vulnerability | ## CVE-2018-1257 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-messaging-4.3.7.RELEASE.jar</b></p></summary>
<p>Spring Messaging</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /Users/alexmaybaum/.m2/repository/org/springframework/spring-messaging/4.3.7.RELEASE/spring-messaging-4.3.7.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-rabbit-1.7.1.RELEASE.jar (Root Library)
- :x: **spring-messaging-4.3.7.RELEASE.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/amaybaum-local/vprofile-project8/commit/5eec2b68c36038bc971648668da557114972e048">5eec2b68c36038bc971648668da557114972e048</a></p>
<p>Found in base branch: <b>vp-rem</b></p>
</p>
</details>
<p></p>
<details><summary> <img src='https://whitesource-resources.whitesourcesoftware.com/viaRed.png' width=19 height=20> Reachability Analysis</summary>
<p>
This vulnerability is potentially used
```
com.visualpathit.account.validator.UserValidator (Application)
-> org.springframework.messaging.simp.config.AbstractMessageBrokerConfiguration$1 (Extension)
-> org.springframework.messaging.simp.config.AbstractMessageBrokerConfiguration (Extension)
-> ❌ org.springframework.messaging.simp.broker.SimpleBrokerMessageHandler (Vulnerable Component)
```
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Spring Framework, versions 5.0.x prior to 5.0.6, versions 4.3.x prior to 4.3.17, and older unsupported versions allows applications to expose STOMP over WebSocket endpoints with a simple, in-memory STOMP broker through the spring-messaging module. A malicious user (or attacker) can craft a message to the broker that can lead to a regular expression, denial of service attack.
<p>Publish Date: 2018-05-11
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-1257>CVE-2018-1257</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-1257">https://nvd.nist.gov/vuln/detail/CVE-2018-1257</a></p>
<p>Release Date: 2018-05-11</p>
<p>Fix Resolution: 5.0.6,4.3.17</p>
</p>
</details>
<p></p>
| True | CVE-2018-1257 (Medium, reachable) detected in spring-messaging-4.3.7.RELEASE.jar - autoclosed - ## CVE-2018-1257 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-messaging-4.3.7.RELEASE.jar</b></p></summary>
<p>Spring Messaging</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /Users/alexmaybaum/.m2/repository/org/springframework/spring-messaging/4.3.7.RELEASE/spring-messaging-4.3.7.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-rabbit-1.7.1.RELEASE.jar (Root Library)
- :x: **spring-messaging-4.3.7.RELEASE.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/amaybaum-local/vprofile-project8/commit/5eec2b68c36038bc971648668da557114972e048">5eec2b68c36038bc971648668da557114972e048</a></p>
<p>Found in base branch: <b>vp-rem</b></p>
</p>
</details>
<p></p>
<details><summary> <img src='https://whitesource-resources.whitesourcesoftware.com/viaRed.png' width=19 height=20> Reachability Analysis</summary>
<p>
This vulnerability is potentially used
```
com.visualpathit.account.validator.UserValidator (Application)
-> org.springframework.messaging.simp.config.AbstractMessageBrokerConfiguration$1 (Extension)
-> org.springframework.messaging.simp.config.AbstractMessageBrokerConfiguration (Extension)
-> ❌ org.springframework.messaging.simp.broker.SimpleBrokerMessageHandler (Vulnerable Component)
```
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Spring Framework, versions 5.0.x prior to 5.0.6, versions 4.3.x prior to 4.3.17, and older unsupported versions allows applications to expose STOMP over WebSocket endpoints with a simple, in-memory STOMP broker through the spring-messaging module. A malicious user (or attacker) can craft a message to the broker that can lead to a regular expression, denial of service attack.
<p>Publish Date: 2018-05-11
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-1257>CVE-2018-1257</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-1257">https://nvd.nist.gov/vuln/detail/CVE-2018-1257</a></p>
<p>Release Date: 2018-05-11</p>
<p>Fix Resolution: 5.0.6,4.3.17</p>
</p>
</details>
<p></p>
| non_test | cve medium reachable detected in spring messaging release jar autoclosed cve medium severity vulnerability vulnerable library spring messaging release jar spring messaging library home page a href path to dependency file pom xml path to vulnerable library users alexmaybaum repository org springframework spring messaging release spring messaging release jar dependency hierarchy spring rabbit release jar root library x spring messaging release jar vulnerable library found in head commit a href found in base branch vp rem reachability analysis this vulnerability is potentially used com visualpathit account validator uservalidator application org springframework messaging simp config abstractmessagebrokerconfiguration extension org springframework messaging simp config abstractmessagebrokerconfiguration extension ❌ org springframework messaging simp broker simplebrokermessagehandler vulnerable component vulnerability details spring framework versions x prior to versions x prior to and older unsupported versions allows applications to expose stomp over websocket endpoints with a simple in memory stomp broker through the spring messaging module a malicious user or attacker can craft a message to the broker that can lead to a regular expression denial of service attack publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution | 0 |
122,226 | 26,105,759,503 | IssuesEvent | 2022-12-27 13:01:12 | Clueless-Community/seamless-ui | https://api.github.com/repos/Clueless-Community/seamless-ui | closed | Improve footer white 3 | codepeak 22 issue:1 | 4) `footer-white-3` Replace social media icons
#### Existing icons

#### Required Icons

| 1.0 | Improve footer white 3 - 4) `footer-white-3` Replace social media icons
#### Existing icons

#### Required Icons

| non_test | improve footer white footer white replace social media icons existing icons required icons | 0 |
373,646 | 11,046,862,159 | IssuesEvent | 2019-12-09 17:42:44 | kubernetes/release | https://api.github.com/repos/kubernetes/release | closed | Running Release Manager tasks within a container image | area/release-eng kind/feature priority/important-soon sig/release | @hoegaarden [mentioned](https://github.com/kubernetes/release/issues/958#issuecomment-563239639) that he uses a container image to run the Release Manager tasks.
The [Cutting v1.15.0-alpha.2](https://docs.google.com/document/d/1Xv5w_eNvLvD-nNinMNqQAh0qlzee8btqAyHyFFMz3z4/edit?usp=sharing) doc mentioned in the Branch Manager handbook also references using a container image for tasks.
It would be great to have one of these Dockerfiles (or an amalgam of the two) put up in the repo for other Release Managers to consider using.
/assign @hoegaarden
/sig release
/area release-eng
/milestone v1.18
/priority important-soon
/kind feature | 1.0 | Running Release Manager tasks within a container image - @hoegaarden [mentioned](https://github.com/kubernetes/release/issues/958#issuecomment-563239639) that he uses a container image to run the Release Manager tasks.
The [Cutting v1.15.0-alpha.2](https://docs.google.com/document/d/1Xv5w_eNvLvD-nNinMNqQAh0qlzee8btqAyHyFFMz3z4/edit?usp=sharing) doc mentioned in the Branch Manager handbook also references using a container image for tasks.
It would be great to have one of these Dockerfiles (or an amalgam of the two) put up in the repo for other Release Managers to consider using.
/assign @hoegaarden
/sig release
/area release-eng
/milestone v1.18
/priority important-soon
/kind feature | non_test | running release manager tasks within a container image hoegaarden that he uses a container image to run the release manager tasks doc mentioned in the branch manager handbook also references using as a container image for tasks it would be great to have one of these dockerfiles or an amalgam of the two put up in the repo for other release managers to consider using assign hoegaarden sig release area release eng milestone priority important soon kind feature | 0 |
261,973 | 22,783,718,987 | IssuesEvent | 2022-07-09 00:33:25 | marcel099/ignite-node-d08-d09-d10-testes-unitarios | https://api.github.com/repos/marcel099/ignite-node-d08-d09-d10-testes-unitarios | closed | Consider transfer-type statements when fetching the balance | enhancement unit test integration test | - Use case `getBalance`
- [x] Change the `useCase` and `controller` files to consider transfer statements in the balance calculation
- [x] Change the `useCase` and `controller` files to return transfer statements with their information
- [x] Create new unit tests and adapt old ones if necessary
- [x] Create new integration tests and adapt old ones if necessary | 2.0 | Consider transfer-type statements when fetching the balance - - Use case `getBalance`
- [x] Change the `useCase` and `controller` files to consider transfer statements in the balance calculation
- [x] Change the `useCase` and `controller` files to return transfer statements with their information
- [x] Create new unit tests and adapt old ones if necessary
- [x] Create new integration tests and adapt old ones if necessary | test | consider transfer type statements when fetching the balance use case getbalance change the usecase and controller files to consider transfer statements in the balance calculation change the usecase and controller files to return transfer statements with their information create new unit tests and adapt old ones if necessary create new integration tests and adapt old ones if necessary | 1 |
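The `getBalance` change tracked in this record lends itself to a small sketch. The following is a hypothetical Python illustration (the project itself is a Node.js course exercise), with invented field names (`type`, `amount`, `receiver_id`) rather than the project's real schema:

```python
# Hypothetical sketch of the change described above: include "transfer"
# statements when computing a user's balance. Field names are
# illustrative, not the project's actual statement schema.
def get_balance(statements, user_id):
    balance = 0.0
    for s in statements:
        if s["type"] == "deposit":
            balance += s["amount"]
        elif s["type"] == "withdraw":
            balance -= s["amount"]
        elif s["type"] == "transfer":
            # Incoming transfers add to the balance, outgoing ones subtract.
            if s.get("receiver_id") == user_id:
                balance += s["amount"]
            else:
                balance -= s["amount"]
    return balance
```

An incoming transfer raises the balance and an outgoing one lowers it, which is exactly the case that deposit/withdraw-only logic misses.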
531,991 | 15,528,291,754 | IssuesEvent | 2021-03-13 10:15:51 | googleapis/python-datalabeling | https://api.github.com/repos/googleapis/python-datalabeling | opened | samples.snippets.create_annotation_spec_set_test: test_create_annotation_spec_set failed | flakybot: issue priority: p1 type: bug | This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 9b489d7558b91c0cfc2e81b28498c2539553167b
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/5ce7f2fa-3ee8-4f7b-b988-24fa54b77a7f), [Sponge](http://sponge2/5ce7f2fa-3ee8-4f7b-b988-24fa54b77a7f)
status: failed
<details><summary>Test output</summary><br><pre>args = (parent: "projects/python-docs-samples-tests"
annotation_spec_set {
display_name: "YOUR_ANNOTATION_SPEC_SET_DISPLAY_...bel_description_1"
}
annotation_specs {
display_name: "label_2"
description: "label_description_2"
}
}
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/python-docs-samples-tests'), ('x-goog-api-client', 'gl-python/3.6.13 grpc/1.36.1 gax/1.26.1 gapic/1.0.0')]}
@six.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py:73:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <grpc._channel._UnaryUnaryMultiCallable object at 0x7fd48c9b3588>
request = parent: "projects/python-docs-samples-tests"
annotation_spec_set {
display_name: "YOUR_ANNOTATION_SPEC_SET_DISPLAY_N...label_description_1"
}
annotation_specs {
display_name: "label_2"
description: "label_description_2"
}
}
timeout = None
metadata = [('x-goog-request-params', 'parent=projects/python-docs-samples-tests'), ('x-goog-api-client', 'gl-python/3.6.13 grpc/1.36.1 gax/1.26.1 gapic/1.0.0')]
credentials = None, wait_for_ready = None, compression = None
def __call__(self,
request,
timeout=None,
metadata=None,
credentials=None,
wait_for_ready=None,
compression=None):
state, call, = self._blocking(request, timeout, metadata, credentials,
wait_for_ready, compression)
> return _end_unary_response_blocking(state, call, False, None)
.nox/py-3-6/lib/python3.6/site-packages/grpc/_channel.py:923:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state = <grpc._channel._RPCState object at 0x7fd48cb0e240>
call = <grpc._cython.cygrpc.SegregatedCall object at 0x7fd48cae9e88>
with_call = False, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.UNAVAILABLE
E details = "502:Bad Gateway"
E debug_error_string = "{"created":"@1615630105.120790631","description":"Error received from peer ipv4:74.125.142.81:443","file":"src/core/lib/surface/call.cc","file_line":1067,"grpc_message":"502:Bad Gateway","grpc_status":14}"
E >
.nox/py-3-6/lib/python3.6/site-packages/grpc/_channel.py:826: _InactiveRpcError
The above exception was the direct cause of the following exception:
cleaner = [], capsys = <_pytest.capture.CaptureFixture object at 0x7fd48caf9668>
def test_create_annotation_spec_set(cleaner, capsys):
@backoff.on_exception(
backoff.expo, ServerError, max_time=testing_lib.RETRY_DEADLINE
)
def run_sample():
return create_annotation_spec_set.create_annotation_spec_set(PROJECT_ID)
> response = run_sample()
create_annotation_spec_set_test.py:47:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/py-3-6/lib/python3.6/site-packages/backoff/_sync.py:94: in retry
ret = target(*args, **kwargs)
create_annotation_spec_set_test.py:45: in run_sample
return create_annotation_spec_set.create_annotation_spec_set(PROJECT_ID)
create_annotation_spec_set.py:56: in create_annotation_spec_set
request={"parent": project_path, "annotation_spec_set": annotation_spec_set}
../../google/cloud/datalabeling_v1beta1/services/data_labeling_service/client.py:1781: in create_annotation_spec_set
response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/gapic_v1/method.py:145: in __call__
return wrapped_func(*args, **kwargs)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py:75: in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
value = None
from_value = <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "502:Bad Gateway"
debug_e...42.81:443","file":"src/core/lib/surface/call.cc","file_line":1067,"grpc_message":"502:Bad Gateway","grpc_status":14}"
>
> ???
E google.api_core.exceptions.ServiceUnavailable: 503 502:Bad Gateway
<string>:3: ServiceUnavailable</pre></details> | 1.0 | samples.snippets.create_annotation_spec_set_test: test_create_annotation_spec_set failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 9b489d7558b91c0cfc2e81b28498c2539553167b
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/5ce7f2fa-3ee8-4f7b-b988-24fa54b77a7f), [Sponge](http://sponge2/5ce7f2fa-3ee8-4f7b-b988-24fa54b77a7f)
status: failed
<details><summary>Test output</summary><br><pre>args = (parent: "projects/python-docs-samples-tests"
annotation_spec_set {
display_name: "YOUR_ANNOTATION_SPEC_SET_DISPLAY_...bel_description_1"
}
annotation_specs {
display_name: "label_2"
description: "label_description_2"
}
}
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/python-docs-samples-tests'), ('x-goog-api-client', 'gl-python/3.6.13 grpc/1.36.1 gax/1.26.1 gapic/1.0.0')]}
@six.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py:73:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <grpc._channel._UnaryUnaryMultiCallable object at 0x7fd48c9b3588>
request = parent: "projects/python-docs-samples-tests"
annotation_spec_set {
display_name: "YOUR_ANNOTATION_SPEC_SET_DISPLAY_N...label_description_1"
}
annotation_specs {
display_name: "label_2"
description: "label_description_2"
}
}
timeout = None
metadata = [('x-goog-request-params', 'parent=projects/python-docs-samples-tests'), ('x-goog-api-client', 'gl-python/3.6.13 grpc/1.36.1 gax/1.26.1 gapic/1.0.0')]
credentials = None, wait_for_ready = None, compression = None
def __call__(self,
request,
timeout=None,
metadata=None,
credentials=None,
wait_for_ready=None,
compression=None):
state, call, = self._blocking(request, timeout, metadata, credentials,
wait_for_ready, compression)
> return _end_unary_response_blocking(state, call, False, None)
.nox/py-3-6/lib/python3.6/site-packages/grpc/_channel.py:923:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state = <grpc._channel._RPCState object at 0x7fd48cb0e240>
call = <grpc._cython.cygrpc.SegregatedCall object at 0x7fd48cae9e88>
with_call = False, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.UNAVAILABLE
E details = "502:Bad Gateway"
E debug_error_string = "{"created":"@1615630105.120790631","description":"Error received from peer ipv4:74.125.142.81:443","file":"src/core/lib/surface/call.cc","file_line":1067,"grpc_message":"502:Bad Gateway","grpc_status":14}"
E >
.nox/py-3-6/lib/python3.6/site-packages/grpc/_channel.py:826: _InactiveRpcError
The above exception was the direct cause of the following exception:
cleaner = [], capsys = <_pytest.capture.CaptureFixture object at 0x7fd48caf9668>
def test_create_annotation_spec_set(cleaner, capsys):
@backoff.on_exception(
backoff.expo, ServerError, max_time=testing_lib.RETRY_DEADLINE
)
def run_sample():
return create_annotation_spec_set.create_annotation_spec_set(PROJECT_ID)
> response = run_sample()
create_annotation_spec_set_test.py:47:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/py-3-6/lib/python3.6/site-packages/backoff/_sync.py:94: in retry
ret = target(*args, **kwargs)
create_annotation_spec_set_test.py:45: in run_sample
return create_annotation_spec_set.create_annotation_spec_set(PROJECT_ID)
create_annotation_spec_set.py:56: in create_annotation_spec_set
request={"parent": project_path, "annotation_spec_set": annotation_spec_set}
../../google/cloud/datalabeling_v1beta1/services/data_labeling_service/client.py:1781: in create_annotation_spec_set
response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/gapic_v1/method.py:145: in __call__
return wrapped_func(*args, **kwargs)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py:75: in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
value = None
from_value = <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "502:Bad Gateway"
debug_e...42.81:443","file":"src/core/lib/surface/call.cc","file_line":1067,"grpc_message":"502:Bad Gateway","grpc_status":14}"
>
> ???
E google.api_core.exceptions.ServiceUnavailable: 503 502:Bad Gateway
<string>:3: ServiceUnavailable</pre></details> | non_test | samples snippets create annotation spec set test test create annotation spec set failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output args parent projects python docs samples tests annotation spec set display name your annotation spec set display bel description annotation specs display name label description label description kwargs metadata six wraps callable def error remapped callable args kwargs try return callable args kwargs nox py lib site packages google api core grpc helpers py self request parent projects python docs samples tests annotation spec set display name your annotation spec set display n label description annotation specs display name label description label description timeout none metadata credentials none wait for ready none compression none def call self request timeout none metadata none credentials none wait for ready none compression none state call self blocking request timeout metadata credentials wait for ready compression return end unary response blocking state call false none nox py lib site packages grpc channel py state call with call false deadline none def end unary response blocking state call with call deadline if state code is grpc statuscode ok if with call rendezvous multithreadedrendezvous state call none deadline return state response rendezvous else return state response else raise inactiverpcerror state e grpc channel inactiverpcerror inactiverpcerror of rpc that terminated with e status statuscode unavailable e details bad gateway e debug error string created description error received from peer file src core lib surface call cc file line grpc message bad gateway grpc status e nox py lib site packages grpc channel py inactiverpcerror the above exception was the direct cause of the following exception cleaner capsys def 
test create annotation spec set cleaner capsys backoff on exception backoff expo servererror max time testing lib retry deadline def run sample return create annotation spec set create annotation spec set project id response run sample create annotation spec set test py nox py lib site packages backoff sync py in retry ret target args kwargs create annotation spec set test py in run sample return create annotation spec set create annotation spec set project id create annotation spec set py in create annotation spec set request parent project path annotation spec set annotation spec set google cloud datalabeling services data labeling service client py in create annotation spec set response rpc request retry retry timeout timeout metadata metadata nox py lib site packages google api core gapic method py in call return wrapped func args kwargs nox py lib site packages google api core grpc helpers py in error remapped callable six raise from exceptions from grpc error exc exc value none from value inactiverpcerror of rpc that terminated with status statuscode unavailable details bad gateway debug e file src core lib surface call cc file line grpc message bad gateway grpc status e google api core exceptions serviceunavailable bad gateway serviceunavailable | 0 |
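The traceback in this record shows a `ServiceUnavailable` (503) escaping a retry decorator that only catches `ServerError`. As background, here is a minimal, dependency-free sketch of the exponential-backoff pattern, a stand-in for `backoff.on_exception(backoff.expo, ...)` rather than the actual library:

```python
import time

def retry_on(exc_types, max_tries=4, base_delay=0.01):
    """Retry a function on the given exception types with exponential backoff.

    Simplified stand-in for backoff.on_exception(backoff.expo, ...);
    real code should also cap total elapsed time and add jitter.
    """
    def decorator(func):
        def wrapper(*args, **kwargs):
            delay = base_delay
            for attempt in range(1, max_tries + 1):
                try:
                    return func(*args, **kwargs)
                except exc_types:
                    if attempt == max_tries:
                        raise
                    time.sleep(delay)
                    delay *= 2
        return wrapper
    return decorator

class ServiceUnavailable(Exception):
    # Stand-in for google.api_core.exceptions.ServiceUnavailable.
    pass

calls = {"n": 0}

@retry_on((ServiceUnavailable,), max_tries=5)
def flaky_rpc():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ServiceUnavailable("503 502:Bad Gateway")
    return "ok"
```

If `exc_types` listed only a `ServerError` class, the `ServiceUnavailable` raised here would propagate exactly as in the log; widening the caught exception types is the usual remedy.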
137,715 | 5,315,287,808 | IssuesEvent | 2017-02-13 16:57:59 | prometheus/prometheus | https://api.github.com/repos/prometheus/prometheus | closed | Make Prometheus tolerate dynamic values | kind/enhancement priority/Pmaybe | In https://github.com/mwitkow/go-grpc-prometheus/issues/5 @brian-brazil brought up as an issue the discoverability of label values when all labels can't be pre-populated.
I believe that this is better handled through metric metadata (already such a concept exists, but is not utilized). An additional annotation alongside the `help` string that lists "predefined" values for a given label could then be scraped and stored by Prometheus.
| 1.0 | Make Prometheus tolerate dynamic values - In https://github.com/mwitkow/go-grpc-prometheus/issues/5 @brian-brazil brought up as an issue the discoverability of label values when all labels can't be pre-populated.
I believe that this is better handled through metric metadata (already such a concept exists, but is not utilized). An additional annotation alongside the `help` string that lists "predefined" values for a given label could then be scraped and stored by Prometheus.
| non_test | make prometheus tolerate dynamic values in brian brazil brought up as an issue the discoverability of label values when all labels can t be pre populated i believe that this is better handled through metric metadata already such a concept exists but is not utilized an additional annotation alongside the help string that lists predefined values for a given label could then be scraped and stored by prometheus | 0 |
132,944 | 10,774,088,619 | IssuesEvent | 2019-11-03 01:57:01 | pandas-dev/pandas | https://api.github.com/repos/pandas-dev/pandas | closed | pd.Series.map never maps NAs through a dictionary | Needs Tests good first issue | #### Code Sample
```python
import pandas as pd, numpy as np
print(pd.Series([1, 2, np.nan]).map({1: "a", 2: "b", np.nan: "c"}))
```
This prints:
0 a
1 b
2 NaN
dtype: object
#### Problem description
The parameter `na_action` of `pd.Series.map` is ignored if the first argument of `map` is a dictionary. Rather than mapping NAs if `na_action` is `None` and passing them through unaltered if it's `"ignore"`, pandas always passes NAs through unaltered, as if `na_action` were `"ignore"`. (The default is `None`.)
#### Expected Output
0 a
1 b
2 c
dtype: object
#### Output of ``pd.show_versions()``
<details>
commit: None
python: 3.6.1.final.0
python-bits: 64
OS: Linux
OS-release: 4.10.0-33-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.20.3
pytest: 3.0.7
pip: 9.0.1
setuptools: 36.0.1
Cython: 0.25.2
numpy: 1.13.1
scipy: 0.19.1
xarray: None
IPython: None
sphinx: None
patsy: 0.4.1
dateutil: 2.6.1
pytz: 2017.2
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: 2.0.2
openpyxl: None
xlrd: 1.0.0
xlwt: None
xlsxwriter: None
lxml: None
bs4: 4.6.0
html5lib: None
sqlalchemy: 1.1.9
pymysql: None
psycopg2: None
jinja2: 2.9.5
s3fs: None
pandas_gbq: None
pandas_datareader: None
</details>
| 1.0 | pd.Series.map never maps NAs through a dictionary - #### Code Sample
```python
import pandas as pd, numpy as np
print(pd.Series([1, 2, np.nan]).map({1: "a", 2: "b", np.nan: "c"}))
```
This prints:
0 a
1 b
2 NaN
dtype: object
#### Problem description
The parameter `na_action` of `pd.Series.map` is ignored if the first argument of `map` is a dictionary. Rather than mapping NAs if `na_action` is `None` and passing them through unaltered if it's `"ignore"`, pandas always passes NAs through unaltered, as if `na_action` were `"ignore"`. (The default is `None`.)
#### Expected Output
0 a
1 b
2 c
dtype: object
#### Output of ``pd.show_versions()``
<details>
commit: None
python: 3.6.1.final.0
python-bits: 64
OS: Linux
OS-release: 4.10.0-33-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.20.3
pytest: 3.0.7
pip: 9.0.1
setuptools: 36.0.1
Cython: 0.25.2
numpy: 1.13.1
scipy: 0.19.1
xarray: None
IPython: None
sphinx: None
patsy: 0.4.1
dateutil: 2.6.1
pytz: 2017.2
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: 2.0.2
openpyxl: None
xlrd: 1.0.0
xlwt: None
xlsxwriter: None
lxml: None
bs4: 4.6.0
html5lib: None
sqlalchemy: 1.1.9
pymysql: None
psycopg2: None
jinja2: 2.9.5
s3fs: None
pandas_gbq: None
pandas_datareader: None
</details>
| test | pd series map never maps nas through a dictionary code sample python import pandas as pd numpy as np print pd series map a b np nan c this prints a b nan dtype object problem description the parameter na action of pd series map is ignored if the first argument of map is a dictionary rather than mapping nas if na action is none and passing them through unaltered if it s ignore pandas always passes nas through unaltered as if na action were ignore the default is none expected output a b c dtype object output of pd show versions commit none python final python bits os linux os release generic machine processor byteorder little lc all none lang en us utf locale en us utf pandas pytest pip setuptools cython numpy scipy xarray none ipython none sphinx none patsy dateutil pytz blosc none bottleneck none tables none numexpr none feather none matplotlib openpyxl none xlrd xlwt none xlsxwriter none lxml none none sqlalchemy pymysql none none none pandas gbq none pandas datareader none | 1 |
271,939 | 23,642,063,151 | IssuesEvent | 2022-08-25 18:03:29 | Kong/gateway-operator | https://api.github.com/repos/Kong/gateway-operator | opened | Scalability Testing | area/tests priority/medium area/scalability | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Problem Statement
We have [3 scaling modes](https://github.com/Kong/gateway-operator/pull/234) we're intending to support for GA:
- #233
- #8, #171
- #229
In order to properly support these modes we should have automated performance testing which can help us understand our limits and upper bounds for component utilization.
### Proposed Solution
- [ ] provide testing for the `DataPlane` with low/medium/high resource availability: the test should be able to emit performance metrics which help to characterize what kind of performance can be expected at these levels
- [ ] provide testing for the `ControlPlane`: how many `DataPlanes` it can effectively serve simultaneously and emit performance metrics to help characterize the limits on the number of `DataPlane` replicas which can be served by a single `ControlPlane`
### Additional information
_No response_
### Acceptance Criteria
- [ ] as an end-user I have some kong-provided performance data that can help me to better understand the limits of my production setup using the Kong Gateway Operator | 1.0 | Scalability Testing - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Problem Statement
We have [3 scaling modes](https://github.com/Kong/gateway-operator/pull/234) we're intending to support for GA:
- #233
- #8, #171
- #229
In order to properly support these modes we should have automated performance testing which can help us understand our limits and upper bounds for component utilization.
### Proposed Solution
- [ ] provide testing for the `DataPlane` with low/medium/high resource availability: the test should be able to emit performance metrics which help to characterize what kind of performance can be expected at these levels
- [ ] provide testing for the `ControlPlane`: how many `DataPlanes` it can effectively serve simultaneously and emit performance metrics to help characterize the limits on the number of `DataPlane` replicas which can be served by a single `ControlPlane`
### Additional information
_No response_
### Acceptance Criteria
- [ ] as an end-user I have some kong-provided performance data that can help me to better understand the limits of my production setup using the Kong Gateway Operator | test | scalability testing is there an existing issue for this i have searched the existing issues problem statement we have we re intending to support for ga in order to properly support these modes we should have automated performance testing which can help us understand our limits and upper bounds for component utilization proposed solution provide testing for the dataplane with low medium high resource availability the test should be able to emit performance metrics which help to characterize what kind of performance can be expected at these levels provide testing for the controlplane how many dataplanes it can effectively serve simultaneously and emit performance metrics ot help characterize the limits on the number of dataplane replicas which can be served by a single controlplane additional information no response acceptance criteria as an end user i have some kong provided performance data that can help me to better understand the limits of my production setup using the kong gateway operator | 1 |
640,711 | 20,796,997,609 | IssuesEvent | 2022-03-17 10:15:46 | gbv/cocoda | https://api.github.com/repos/gbv/cocoda | closed | BARTOC integration: Will not work properly as soon as there are more than 100 vocabularies | bug low priority | Currently, there are 77 vocabularies in BARTOC's coli-conc KOS registry. However, I've found out that if there are more than 100, only the first 100 will be loaded properly. I'm actually not sure why yet because it worked previously with DANTE, but it should be fixed in any case. | 1.0 | BARTOC integration: Will not work properly as soon as there are more than 100 vocabularies - Currently, there are 77 vocabularies in BARTOC's coli-conc KOS registry. However, I've found out that if there are more than 100, only the first 100 will be loaded properly. I'm actually not sure why yet because it worked previously with DANTE, but it should be fixed in any case. | non_test | bartoc integration will not work properly as soon as there are more than vocabularies currently there are vocabularies in bartoc s coli conc kos registry however i ve found out that if there are more than only the first will be loaded properly i m actually not sure why yet because it worked previously with dante but it should be fixed in any case | 0 |
16,031 | 10,519,076,129 | IssuesEvent | 2019-09-29 15:26:50 | CoFH/cofh.github.io | https://api.github.com/repos/CoFH/cofh.github.io | closed | Rework downloads page to have a section for each Minecraft version | content usability | This way it is more clear which Minecraft version has which mods | True | Rework downloads page to have a section for each Minecraft version - This way it is more clear which Minecraft version has which mods | non_test | rework downloads page to have a section for each minecraft version this way it is more clear which minecraft version has which mods | 0 |
11,287 | 8,356,772,360 | IssuesEvent | 2018-10-02 19:29:59 | goharbor/harbor | https://api.github.com/repos/goharbor/harbor | closed | [xss report] XSS report for Security about bootstrap 4.0.0-alpha. | area/ui security | There is XSS risk in github security report about bootstrap 4.0.0-alpha. This is caused by Clarity dependency on bootstrap.
| True | [xss report] XSS report for Security about bootstrap 4.0.0-alpha. - There is XSS risk in github security report about bootstrap 4.0.0-alpha. This is caused by Clarity dependency on bootstrap.
| non_test | xss report for security about bootstrap alpha there is xss risk in github security report about bootstrap alpha this is caused by clarity dependency on bootstrap | 0 |
277,976 | 24,116,015,519 | IssuesEvent | 2022-09-20 14:47:23 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | roachtest: unoptimized-query-oracle/disable-rules=all failed | C-test-failure O-robot O-roachtest branch-release-22.2 | roachtest.unoptimized-query-oracle/disable-rules=all [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/6530788?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/6530788?buildTab=artifacts#/unoptimized-query-oracle/disable-rules=all) on release-22.2 @ [edeefa9120e86739808ca43fc52d4cbac813fab3](https://github.com/cockroachdb/cockroach/commits/edeefa9120e86739808ca43fc52d4cbac813fab3):
```
-- stack trace:
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*queryComparisonHelper).makeError
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/query_comparison_util.go:349
| [...repeated from below...]
Wraps: (2) . 2657 statements run
Wraps: (3) attached stack trace
-- stack trace:
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runUnoptimizedQueryOracleImpl
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/unoptimized_query_oracle.go:168
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerUnoptimizedQueryOracle.func1.1
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/unoptimized_query_oracle.go:57
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runOneRoundQueryComparison
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/query_comparison_util.go:240
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runQueryComparison
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/query_comparison_util.go:69
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerUnoptimizedQueryOracle.func1
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/unoptimized_query_oracle.go:54
| main.(*testRunner).runTest.func2
| main/pkg/cmd/roachtest/test_runner.go:908
| runtime.goexit
| GOROOT/src/runtime/asm_amd64.s:1594
Wraps: (4) expected unoptimized and optimized results to be equal
| []string{
| ... // 119 identical elements
| "0000-01-01 12:59:56.816725 +0000 UTC,JldA0\x12',64.5437593688300616"...,
| "0000-01-01 12:59:56.816725 +0000 UTC,JldA0\x12',7.13442027782918569"...,
| strings.Join({
| "0000-01-01 12:59:56.816725 +0000 UTC,JldA0\x12',74.9900538536399065",
| "0,,0,NULL,-75213199179428.4647461463600935",
| + "0",
| ",0.7536297312447724",
| }, ""),
| "0000-01-01 12:59:56.816725 +0000 UTC,JldA0\x12',8.57651433500717140"...,
| "0000-01-01 12:59:56.816725 +0000 UTC,JldA0\x12',8.95171183756047877"...,
| ... // 19 identical elements
| }
| sql: SELECT
| '12:59:56.816725':::TIME AS col_6810,
| tab_2238.col5_16 AS col_6811,
| tab_2238.col5_4 AS col_6812,
| tab_2238.col5_15 AS col_6813,
| 0:::OID AS col_6814,
| NULL AS col_6815,
| tab_2238.col5_12 AS col_6816,
| 0.7536297312447724:::FLOAT8 AS col_6817
| FROM
| defaultdb.public.table5@[0] AS tab_2238
| ORDER BY
| tab_2238.col5_1 DESC, tab_2238.col5_4 DESC, tab_2238.col5_1
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.leafError
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*unoptimized-query-oracle/disable-rules=all.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| 2.0 | roachtest: unoptimized-query-oracle/disable-rules=all failed - roachtest.unoptimized-query-oracle/disable-rules=all [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/6530788?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/6530788?buildTab=artifacts#/unoptimized-query-oracle/disable-rules=all) on release-22.2 @ [edeefa9120e86739808ca43fc52d4cbac813fab3](https://github.com/cockroachdb/cockroach/commits/edeefa9120e86739808ca43fc52d4cbac813fab3):
```
-- stack trace:
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*queryComparisonHelper).makeError
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/query_comparison_util.go:349
| [...repeated from below...]
Wraps: (2) . 2657 statements run
Wraps: (3) attached stack trace
-- stack trace:
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runUnoptimizedQueryOracleImpl
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/unoptimized_query_oracle.go:168
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerUnoptimizedQueryOracle.func1.1
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/unoptimized_query_oracle.go:57
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runOneRoundQueryComparison
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/query_comparison_util.go:240
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runQueryComparison
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/query_comparison_util.go:69
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerUnoptimizedQueryOracle.func1
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/unoptimized_query_oracle.go:54
| main.(*testRunner).runTest.func2
| main/pkg/cmd/roachtest/test_runner.go:908
| runtime.goexit
| GOROOT/src/runtime/asm_amd64.s:1594
Wraps: (4) expected unoptimized and optimized results to be equal
| []string{
| ... // 119 identical elements
| "0000-01-01 12:59:56.816725 +0000 UTC,JldA0\x12',64.5437593688300616"...,
| "0000-01-01 12:59:56.816725 +0000 UTC,JldA0\x12',7.13442027782918569"...,
| strings.Join({
| "0000-01-01 12:59:56.816725 +0000 UTC,JldA0\x12',74.9900538536399065",
| "0,,0,NULL,-75213199179428.4647461463600935",
| + "0",
| ",0.7536297312447724",
| }, ""),
| "0000-01-01 12:59:56.816725 +0000 UTC,JldA0\x12',8.57651433500717140"...,
| "0000-01-01 12:59:56.816725 +0000 UTC,JldA0\x12',8.95171183756047877"...,
| ... // 19 identical elements
| }
| sql: SELECT
| '12:59:56.816725':::TIME AS col_6810,
| tab_2238.col5_16 AS col_6811,
| tab_2238.col5_4 AS col_6812,
| tab_2238.col5_15 AS col_6813,
| 0:::OID AS col_6814,
| NULL AS col_6815,
| tab_2238.col5_12 AS col_6816,
| 0.7536297312447724:::FLOAT8 AS col_6817
| FROM
| defaultdb.public.table5@[0] AS tab_2238
| ORDER BY
| tab_2238.col5_1 DESC, tab_2238.col5_4 DESC, tab_2238.col5_1
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.leafError
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*unoptimized-query-oracle/disable-rules=all.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| test | roachtest unoptimized query oracle disable rules all failed roachtest unoptimized query oracle disable rules all with on release stack trace github com cockroachdb cockroach pkg cmd roachtest tests querycomparisonhelper makeerror github com cockroachdb cockroach pkg cmd roachtest tests query comparison util go wraps statements run wraps attached stack trace stack trace github com cockroachdb cockroach pkg cmd roachtest tests rununoptimizedqueryoracleimpl github com cockroachdb cockroach pkg cmd roachtest tests unoptimized query oracle go github com cockroachdb cockroach pkg cmd roachtest tests registerunoptimizedqueryoracle github com cockroachdb cockroach pkg cmd roachtest tests unoptimized query oracle go github com cockroachdb cockroach pkg cmd roachtest tests runoneroundquerycomparison github com cockroachdb cockroach pkg cmd roachtest tests query comparison util go github com cockroachdb cockroach pkg cmd roachtest tests runquerycomparison github com cockroachdb cockroach pkg cmd roachtest tests query comparison util go github com cockroachdb cockroach pkg cmd roachtest tests registerunoptimizedqueryoracle github com cockroachdb cockroach pkg cmd roachtest tests unoptimized query oracle go main testrunner runtest main pkg cmd roachtest test runner go runtime goexit goroot src runtime asm s wraps expected unoptimized and optimized results to be equal string identical elements utc utc strings join utc null utc utc identical elements sql select time as col tab as col tab as col tab as col oid as col null as col tab as col as col from defaultdb public as tab order by tab desc tab desc tab error types withstack withstack errutil withprefix withstack withstack errutil leaferror parameters roachtest cloud gce roachtest cpu roachtest ssd help see see cc cockroachdb sql queries | 1 |
18,780 | 6,641,352,691 | IssuesEvent | 2017-09-27 00:42:21 | habitat-sh/habitat | https://api.github.com/repos/habitat-sh/habitat | closed | [Builder-Web] `latest stable` in UI is pointing to latest package | A-builder C-bug | The sidebar on the package view is currently showing the _latest package_ instead of the _latest stable package_

| 1.0 | [Builder-Web] `latest stable` in UI is pointing to latest package - The sidebar on the package view is currently showing the _latest package_ instead of the _latest stable package_

| non_test | latest stable in ui is pointing to latest package the sidebar on the package view is currently showing the latest package instead of the latest stable package | 0 |
32,404 | 7,531,108,904 | IssuesEvent | 2018-04-15 00:44:33 | dahall/TaskScheduler | https://api.github.com/repos/dahall/TaskScheduler | closed | HRESULT: 0x8007007E | codeplex-disc |
Retrieving the COM class factory for component with CLSID {0F87369F-A4E5-4CFC-BD3E-73E6154572DD} failed due to the following error: 8007007e The specified module could not be found. (Exception from HRESULT: 0x8007007E).
The exception occurred at Microsoft.Win32.TaskScheduler.TaskService.Connect() in C:\Users\dahall\Documents\Visual Studio 2010\Projects\TaskService\TaskService.cs:line 802
Any idea what could be happening here? I'm assuming that the Task Service is broken on that system but in what way? Is there a way to repair it?
Originally posted: 2016-08-26T08:37:52 | 1.0 | HRESULT: 0x8007007E -
Retrieving the COM class factory for component with CLSID {0F87369F-A4E5-4CFC-BD3E-73E6154572DD} failed due to the following error: 8007007e The specified module could not be found. (Exception from HRESULT: 0x8007007E).
The exception occurred at Microsoft.Win32.TaskScheduler.TaskService.Connect() in C:\Users\dahall\Documents\Visual Studio 2010\Projects\TaskService\TaskService.cs:line 802
Any idea what could be happening here? I'm assuming that the Task Service is broken on that system but in what way? Is there a way to repair it?
Originally posted: 2016-08-26T08:37:52 | non_test | hresult retrieving the com class factory for component with clsid failed due to the following error the specified module could not be found exception from hresult the exception occurred at microsoft taskscheduler taskservice connect in c users dahall documents visual studio projects taskservice taskservice cs line any idea what could be happening here i m assuming that the task service is broken on that system but in what way is there a way to repair it originally posted | 0 |
247,076 | 20,956,186,837 | IssuesEvent | 2022-03-27 05:48:48 | pywbem/pywbemtools | https://api.github.com/repos/pywbem/pywbemtools | closed | New issues reported by Pylint 2.13 | type: bug area: test resolution: fixed roll back/forward done | From test run https://github.com/pywbem/pywbemtools/runs/5707140141?check_suite_focus=true#step:16:20
```
tests/unit/utils.py:431:19: E0601: Using variable 'i' before assignment (used-before-assignment)
tests/unit/utils.py:431:49: E0601: Using variable 'start_line' before assignment (used-before-assignment)
``` | 1.0 | New issues reported by Pylint 2.13 - From test run https://github.com/pywbem/pywbemtools/runs/5707140141?check_suite_focus=true#step:16:20
```
tests/unit/utils.py:431:19: E0601: Using variable 'i' before assignment (used-before-assignment)
tests/unit/utils.py:431:49: E0601: Using variable 'start_line' before assignment (used-before-assignment)
``` | test | new issues reported by pylint from test run tests unit utils py using variable i before assignment used before assignment tests unit utils py using variable start line before assignment used before assignment | 1 |
139,366 | 11,259,673,265 | IssuesEvent | 2020-01-13 08:55:08 | ovotech/data-mocks | https://api.github.com/repos/ovotech/data-mocks | closed | Test data-mocks with ie11 | testing todo | There have been some issues with data-mocks and Android, which has led to the question of whether the library works properly in IE11. It hasn't been tested yet on IE11 so let's do that :) | 1.0 | Test data-mocks with ie11 - There have been some issues with data-mocks and Android, which has led to the question of whether the library works properly in IE11. It hasn't been tested yet on IE11 so let's do that :) | test | test data mocks with there have been some issues with data mocks and android which has led to the question of whether the library works properly in it hasn t been tested yet on so let s do that | 1 |
63,208 | 6,829,325,598 | IssuesEvent | 2017-11-08 23:54:23 | bitcoin/bitcoin | https://api.github.com/repos/bitcoin/bitcoin | closed | How to create src/test/data/script_tests.json JSON file? | Tests | Hi!
I'd like to know how to create src/test/data/script_tests.json JSON file?
Thanks. | 1.0 | How to create src/test/data/script_tests.json JSON file? - Hi!
I'd like to know how to create src/test/data/script_tests.json JSON file?
Thanks. | test | how to create src test data script tests json json file hi i d like to know how to create src test data script tests json json file thanks | 1 |
22,559 | 3,963,184,258 | IssuesEvent | 2016-05-02 19:30:47 | vignek/workcollabration | https://api.github.com/repos/vignek/workcollabration | opened | no validation for password | TestObject | There is no validation on the password. A one-letter password also works
### Reporter
Vignesh Kumar
### App Version Under Test
Name from APK: Work Collabration
Version from APK: 1.0
Version code: 1
Your version name: 1.0
Package: com.sheikbro.onlinechat
Uploaded: April 28, 2016 — 04:48 AM
TestObject ID: 2
### Device
Name: Samsung Google Nexus 10 P8110
Android version: 5.1.1
API level: 22
Resolution: 1600 x 2560 (xhdpi)
Screen size: 10.1"
CPU: ARM | dual core | 1700 MHz
RAM: 2048 MB
Internal storage: 16384 MB
Model number: P8110
Detailed specification: http://www.gsmarena.com/results.php3?sName=P8110
TestObject ID: Samsung_Google_Nexus_10_P8110_real
TestObject Manual Testing: https://app.testobject.com/#/vignek/work-collabration/manual/viewer?device=Samsung_Google_Nexus_10_P8110_real
### Issue on TestObject
https://app.testobject.com/#/vignek/work-collabration/issues/31 | 1.0 | no validation for password - There are no validation on password. One letter password also works
### Reporter
Vignesh Kumar
### App Version Under Test
Name from APK: Work Collabration
Version from APK: 1.0
Version code: 1
Your version name: 1.0
Package: com.sheikbro.onlinechat
Uploaded: April 28, 2016 — 04:48 AM
TestObject ID: 2
### Device
Name: Samsung Google Nexus 10 P8110
Android version: 5.1.1
API level: 22
Resolution: 1600 x 2560 (xhdpi)
Screen size: 10.1"
CPU: ARM | dual core | 1700 MHz
RAM: 2048 MB
Internal storage: 16384 MB
Model number: P8110
Detailed specification: http://www.gsmarena.com/results.php3?sName=P8110
TestObject ID: Samsung_Google_Nexus_10_P8110_real
TestObject Manual Testing: https://app.testobject.com/#/vignek/work-collabration/manual/viewer?device=Samsung_Google_Nexus_10_P8110_real
### Issue on TestObject
https://app.testobject.com/#/vignek/work-collabration/issues/31 | test | no validation for password there are no validation on password one letter password also works reporter vignesh kumar app version under test name from apk work collabration version from apk version code your version name package com sheikbro onlinechat uploaded april — am testobject id device name samsung google nexus android version api level resolution x xhdpi screen size cpu arm dual core mhz ram mb internal storage mb model number detailed specification testobject id samsung google nexus real testobject manual testing issue on testobject | 1 |
263,267 | 23,045,210,346 | IssuesEvent | 2022-07-23 19:52:13 | astropy/astropy | https://api.github.com/repos/astropy/astropy | closed | TST: test_angle_arrays failed with ValueError: setting an array element with a sequence | testing coordinates numpy-dev | Looks like something in numpy-dev "nightly wheel" is incompatible with this Angle test. Example log: https://github.com/astropy/astropy/runs/7454435493?check_suite_focus=true
```
Numpy: 1.24.0.dev0+569.gf1e50253a
...
______________________________ test_angle_arrays _______________________________
def test_angle_arrays():
"""
Test arrays values with Angle objects.
"""
# Tests incomplete
a1 = Angle([0, 45, 90, 180, 270, 360, 720.], unit=u.degree)
npt.assert_almost_equal([0., 45., 90., 180., 270., 360., 720.], a1.value)
a2 = Angle(np.array([-90, -45, 0, 45, 90, 180, 270, 360]), unit=u.degree)
npt.assert_almost_equal([-90, -45, 0, 45, 90, 180, 270, 360],
a2.value)
a3 = Angle(["12 degrees", "3 hours", "5 deg", "4rad"])
npt.assert_almost_equal([12., 45., 5., 229.18311805],
a3.value)
assert a3.unit == u.degree
a4 = Angle(["12 degrees", "3 hours", "5 deg", "4rad"], u.radian)
npt.assert_almost_equal(a4.degree, a3.value)
assert a4.unit == u.radian
a5 = Angle([0, 45, 90, 180, 270, 360], unit=u.degree)
a6 = a5.sum()
npt.assert_almost_equal(a6.value, 945.0)
assert a6.unit is u.degree
with ExitStack() as stack:
stack.enter_context(pytest.raises(TypeError))
# Arrays where the elements are Angle objects are not supported -- it's
# really tricky to do correctly, if at all, due to the possibility of
# nesting.
if not NUMPY_LT_1_19:
stack.enter_context(
pytest.warns(DeprecationWarning,
match='automatic object dtype is deprecated'))
> a7 = Angle([a1, a2, a3], unit=u.degree)
../../.tox/py38-test-devdeps/lib/python3.8/site-packages/astropy/coordinates/tests/test_arrays.py:55:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../.tox/py38-test-devdeps/lib/python3.8/site-packages/astropy/coordinates/angles.py:141: in __new__
return super().__new__(cls, angle, unit, dtype=dtype, copy=copy,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class 'astropy.coordinates.angles.Angle'>
value = [array([ 0., 45., 90., 180., 270., 360., 720.]), array([-90., -45., 0., 45., 90., 180., 270., 360.]), array([ 12. , 45. , 5. , 229.18311805])]
unit = Unit("deg"), dtype = None, copy = True, order = None, subok = False
ndmin = 0
def __new__(cls, value, unit=None, dtype=np.inexact, copy=True, order=None,
subok=False, ndmin=0):
if unit is not None:
# convert unit first, to avoid multiple string->unit conversions
unit = Unit(unit)
# inexact -> upcast to float dtype
float_default = dtype is np.inexact
if float_default:
dtype = None
# optimize speed for Quantity with no dtype given, copy=False
if isinstance(value, Quantity):
if unit is not None and unit is not value.unit:
value = value.to(unit)
# the above already makes a copy (with float dtype)
copy = False
if type(value) is not cls and not (subok and
isinstance(value, cls)):
value = value.view(cls)
if float_default and value.dtype.kind in 'iu':
dtype = float
return np.array(value, dtype=dtype, copy=copy, order=order,
subok=True, ndmin=ndmin)
# Maybe str, or list/tuple of Quantity? If so, this may set value_unit.
# To ensure array remains fast, we short-circuit it.
value_unit = None
if not isinstance(value, np.ndarray):
if isinstance(value, str):
# The first part of the regex string matches any integer/float;
# the second parts adds possible trailing .+-, which will break
# the float function below and ensure things like 1.2.3deg
# will not work.
pattern = (r'\s*[+-]?'
r'((\d+\.?\d*)|(\.\d+)|([nN][aA][nN])|'
r'([iI][nN][fF]([iI][nN][iI][tT][yY]){0,1}))'
r'([eE][+-]?\d+)?'
r'[.+-]?')
v = re.match(pattern, value)
unit_string = None
try:
value = float(v.group())
except Exception:
raise TypeError('Cannot parse "{}" as a {}. It does not '
'start with a number.'
.format(value, cls.__name__))
unit_string = v.string[v.end():].strip()
if unit_string:
value_unit = Unit(unit_string)
if unit is None:
unit = value_unit # signal no conversion needed below.
elif isiterable(value) and len(value) > 0:
# Iterables like lists and tuples.
if all(isinstance(v, Quantity) for v in value):
# If a list/tuple containing only quantities, convert all
# to the same unit.
if unit is None:
unit = value[0].unit
value = [q.to_value(unit) for q in value]
value_unit = unit # signal below that conversion has been done
elif (dtype is None and not hasattr(value, 'dtype')
and isinstance(unit, StructuredUnit)):
# Special case for list/tuple of values and a structured unit:
# ``np.array(value, dtype=None)`` would treat tuples as lower
# levels of the array, rather than as elements of a structured
# array, so we use the structure of the unit to help infer the
# structured dtype of the value.
dtype = unit._recursively_get_dtype(value)
if value_unit is None:
# If the value has a `unit` attribute and if not None
# (for Columns with uninitialized unit), treat it like a quantity.
value_unit = getattr(value, 'unit', None)
if value_unit is None:
# Default to dimensionless for no (initialized) unit attribute.
if unit is None:
unit = cls._default_unit
value_unit = unit # signal below that no conversion is needed
else:
try:
value_unit = Unit(value_unit)
except Exception as exc:
raise TypeError("The unit attribute {!r} of the input could "
"not be parsed as an astropy Unit, raising "
"the following exception:\n{}"
.format(value.unit, exc))
if unit is None:
unit = value_unit
elif unit is not value_unit:
copy = False # copy will be made in conversion at end
> value = np.array(value, dtype=dtype, copy=copy, order=order,
subok=True, ndmin=ndmin)
E ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (3,) + inhomogeneous part.
../../.tox/py38-test-devdeps/lib/python3.8/site-packages/astropy/units/quantity.py:508: ValueError
```
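For context, the failure in the record above can be reproduced without astropy at all: recent NumPy releases reject ragged nested sequences in `np.array` unless `dtype=object` is passed explicitly, while earlier releases only emitted a deprecation warning. A minimal sketch (the behavior is version-dependent, so both paths are handled):

```python
import warnings

import numpy as np

ragged = [np.arange(7.0), np.arange(8.0), np.arange(4.0)]

# NumPy >= 1.24 raises ValueError ("inhomogeneous shape") for ragged input
# without dtype=object; earlier releases built an object array and emitted
# a VisibleDeprecationWarning instead.
with warnings.catch_warnings():
    warnings.simplefilter("error")  # turn the old warning path into an error too
    try:
        np.array(ragged)
        outcome = "accepted"
    except (ValueError, Warning):
        outcome = "rejected"
print(outcome)  # "rejected" on any NumPy recent enough to deprecate ragged input

# The supported spelling is explicit:
arr = np.array(ragged, dtype=object)
print(arr.shape)  # (3,)
```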
@mhvk , any advise? 🙏 | 1.0 | TST: test_angle_arrays failed with ValueError: setting an array element with a sequence - Looks like something in numpy-dev "nightly wheel" is incompatible with this Angle test. Example log: https://github.com/astropy/astropy/runs/7454435493?check_suite_focus=true
```
Numpy: 1.24.0.dev0+569.gf1e50253a
...
______________________________ test_angle_arrays _______________________________
def test_angle_arrays():
"""
Test arrays values with Angle objects.
"""
# Tests incomplete
a1 = Angle([0, 45, 90, 180, 270, 360, 720.], unit=u.degree)
npt.assert_almost_equal([0., 45., 90., 180., 270., 360., 720.], a1.value)
a2 = Angle(np.array([-90, -45, 0, 45, 90, 180, 270, 360]), unit=u.degree)
npt.assert_almost_equal([-90, -45, 0, 45, 90, 180, 270, 360],
a2.value)
a3 = Angle(["12 degrees", "3 hours", "5 deg", "4rad"])
npt.assert_almost_equal([12., 45., 5., 229.18311805],
a3.value)
assert a3.unit == u.degree
a4 = Angle(["12 degrees", "3 hours", "5 deg", "4rad"], u.radian)
npt.assert_almost_equal(a4.degree, a3.value)
assert a4.unit == u.radian
a5 = Angle([0, 45, 90, 180, 270, 360], unit=u.degree)
a6 = a5.sum()
npt.assert_almost_equal(a6.value, 945.0)
assert a6.unit is u.degree
with ExitStack() as stack:
stack.enter_context(pytest.raises(TypeError))
# Arrays where the elements are Angle objects are not supported -- it's
# really tricky to do correctly, if at all, due to the possibility of
# nesting.
if not NUMPY_LT_1_19:
stack.enter_context(
pytest.warns(DeprecationWarning,
match='automatic object dtype is deprecated'))
> a7 = Angle([a1, a2, a3], unit=u.degree)
../../.tox/py38-test-devdeps/lib/python3.8/site-packages/astropy/coordinates/tests/test_arrays.py:55:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../.tox/py38-test-devdeps/lib/python3.8/site-packages/astropy/coordinates/angles.py:141: in __new__
return super().__new__(cls, angle, unit, dtype=dtype, copy=copy,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class 'astropy.coordinates.angles.Angle'>
value = [array([ 0., 45., 90., 180., 270., 360., 720.]), array([-90., -45., 0., 45., 90., 180., 270., 360.]), array([ 12. , 45. , 5. , 229.18311805])]
unit = Unit("deg"), dtype = None, copy = True, order = None, subok = False
ndmin = 0
def __new__(cls, value, unit=None, dtype=np.inexact, copy=True, order=None,
subok=False, ndmin=0):
if unit is not None:
# convert unit first, to avoid multiple string->unit conversions
unit = Unit(unit)
# inexact -> upcast to float dtype
float_default = dtype is np.inexact
if float_default:
dtype = None
# optimize speed for Quantity with no dtype given, copy=False
if isinstance(value, Quantity):
if unit is not None and unit is not value.unit:
value = value.to(unit)
# the above already makes a copy (with float dtype)
copy = False
if type(value) is not cls and not (subok and
isinstance(value, cls)):
value = value.view(cls)
if float_default and value.dtype.kind in 'iu':
dtype = float
return np.array(value, dtype=dtype, copy=copy, order=order,
subok=True, ndmin=ndmin)
# Maybe str, or list/tuple of Quantity? If so, this may set value_unit.
# To ensure array remains fast, we short-circuit it.
value_unit = None
if not isinstance(value, np.ndarray):
if isinstance(value, str):
# The first part of the regex string matches any integer/float;
# the second parts adds possible trailing .+-, which will break
# the float function below and ensure things like 1.2.3deg
# will not work.
pattern = (r'\s*[+-]?'
r'((\d+\.?\d*)|(\.\d+)|([nN][aA][nN])|'
r'([iI][nN][fF]([iI][nN][iI][tT][yY]){0,1}))'
r'([eE][+-]?\d+)?'
r'[.+-]?')
v = re.match(pattern, value)
unit_string = None
try:
value = float(v.group())
except Exception:
raise TypeError('Cannot parse "{}" as a {}. It does not '
'start with a number.'
.format(value, cls.__name__))
unit_string = v.string[v.end():].strip()
if unit_string:
value_unit = Unit(unit_string)
if unit is None:
unit = value_unit # signal no conversion needed below.
elif isiterable(value) and len(value) > 0:
# Iterables like lists and tuples.
if all(isinstance(v, Quantity) for v in value):
# If a list/tuple containing only quantities, convert all
# to the same unit.
if unit is None:
unit = value[0].unit
value = [q.to_value(unit) for q in value]
value_unit = unit # signal below that conversion has been done
elif (dtype is None and not hasattr(value, 'dtype')
and isinstance(unit, StructuredUnit)):
# Special case for list/tuple of values and a structured unit:
# ``np.array(value, dtype=None)`` would treat tuples as lower
# levels of the array, rather than as elements of a structured
# array, so we use the structure of the unit to help infer the
# structured dtype of the value.
dtype = unit._recursively_get_dtype(value)
if value_unit is None:
# If the value has a `unit` attribute and if not None
# (for Columns with uninitialized unit), treat it like a quantity.
value_unit = getattr(value, 'unit', None)
if value_unit is None:
# Default to dimensionless for no (initialized) unit attribute.
if unit is None:
unit = cls._default_unit
value_unit = unit # signal below that no conversion is needed
else:
try:
value_unit = Unit(value_unit)
except Exception as exc:
raise TypeError("The unit attribute {!r} of the input could "
"not be parsed as an astropy Unit, raising "
"the following exception:\n{}"
.format(value.unit, exc))
if unit is None:
unit = value_unit
elif unit is not value_unit:
copy = False # copy will be made in conversion at end
> value = np.array(value, dtype=dtype, copy=copy, order=order,
subok=True, ndmin=ndmin)
E ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (3,) + inhomogeneous part.
../../.tox/py38-test-devdeps/lib/python3.8/site-packages/astropy/units/quantity.py:508: ValueError
```
@mhvk , any advise? 🙏 | test | tst test angle arrays failed with valueerror setting an array element with a sequence looks like something in numpy dev nightly wheel is incompatible with this angle test example log numpy test angle arrays def test angle arrays test arrays values with angle objects tests incomplete angle unit u degree npt assert almost equal value angle np array unit u degree npt assert almost equal value angle npt assert almost equal value assert unit u degree angle u radian npt assert almost equal degree value assert unit u radian angle unit u degree sum npt assert almost equal value assert unit is u degree with exitstack as stack stack enter context pytest raises typeerror arrays where the elements are angle objects are not supported it s really tricky to do correctly if at all due to the possibility of nesting if not numpy lt stack enter context pytest warns deprecationwarning match automatic object dtype is deprecated angle unit u degree tox test devdeps lib site packages astropy coordinates tests test arrays py tox test devdeps lib site packages astropy coordinates angles py in new return super new cls angle unit dtype dtype copy copy cls value array array unit unit deg dtype none copy true order none subok false ndmin def new cls value unit none dtype np inexact copy true order none subok false ndmin if unit is not none convert unit first to avoid multiple string unit conversions unit unit unit inexact upcast to float dtype float default dtype is np inexact if float default dtype none optimize speed for quantity with no dtype given copy false if isinstance value quantity if unit is not none and unit is not value unit value value to unit the above already makes a copy with float dtype copy false if type value is not cls and not subok and isinstance value cls value value view cls if float default and value dtype kind in iu dtype float return np array value dtype dtype copy copy order order subok true ndmin ndmin maybe str or list tuple of 
quantity if so this may set value unit to ensure array remains fast we short circuit it value unit none if not isinstance value np ndarray if isinstance value str the first part of the regex string matches any integer float the second parts adds possible trailing which will break the float function below and ensure things like will not work pattern r s r d d d r r d r v re match pattern value unit string none try value float v group except exception raise typeerror cannot parse as a it does not start with a number format value cls name unit string v string strip if unit string value unit unit unit string if unit is none unit value unit signal no conversion needed below elif isiterable value and len value iterables like lists and tuples if all isinstance v quantity for v in value if a list tuple containing only quantities convert all to the same unit if unit is none unit value unit value value unit unit signal below that conversion has been done elif dtype is none and not hasattr value dtype and isinstance unit structuredunit special case for list tuple of values and a structured unit np array value dtype none would treat tuples as lower levels of the array rather than as elements of a structured array so we use the structure of the unit to help infer the structured dtype of the value dtype unit recursively get dtype value if value unit is none if the value has a unit attribute and if not none for columns with uninitialized unit treat it like a quantity value unit getattr value unit none if value unit is none default to dimensionless for no initialized unit attribute if unit is none unit cls default unit value unit unit signal below that no conversion is needed else try value unit unit value unit except exception as exc raise typeerror the unit attribute r of the input could not be parsed as an astropy unit raising the following exception n format value unit exc if unit is none unit value unit elif unit is not value unit copy false copy will be made in conversion at 
end value np array value dtype dtype copy copy order order subok true ndmin ndmin e valueerror setting an array element with a sequence the requested array has an inhomogeneous shape after dimensions the detected shape was inhomogeneous part tox test devdeps lib site packages astropy units quantity py valueerror mhvk any advise 🙏 | 1 |
192,888 | 14,631,893,035 | IssuesEvent | 2020-12-23 20:58:16 | IntellectualSites/FastAsyncWorldEdit | https://api.github.com/repos/IntellectualSites/FastAsyncWorldEdit | opened | FAWE (Is it up to date?) | Requires Testing | <!-- ⚠️⚠️ Do Not Delete This! You must follow this template. ⚠️⚠️ -->
<!--- Incomplete reports will be marked as invalid, and closed, with few exceptions.-->
<!--- If you are using 1.14 or 1.15 consider updating to 1.16.3 before raising an issue -->
<!--- The priority lays on 1.16 right now, so issues reported for or 1.15 will be fixed for the 1.16 versions -->
**[REQUIRED] FastAsyncWorldEdit Configuration Files**
<!--- Issue /fawe debugpaste in game or in your console and copy the supplied URL here --> https://haste.athion.net/qupidadeqo.cs (Plugin doesn't even load the commands...)
<!--- If you cannot perform the above, we require logs/latest.log; config.yml and config-legacy.yml --> https://haste.athion.net/ewobafefez.md
<!--- Please provide this information by using a paste service such as https://haste.athion.net -->
<!--- If you are unwilling to supply the information we need, we reserve the right to not assist you. Redact IP addresses if you need to. -->
**/fawe debugpaste**:
**Required Information**
- FAWE Version Number (`/version FastAsyncWorldEdit`): FastAsyncWorldEdit-1.16-488
- Spigot/Paper Version Number (`/version`): git-Paper-342 (MC: 1.16.4)
- Minecraft Version: 1.16.4
**Describe the bug**
The plugin "start" with an error named (Is it up to date?). It load the config but then when I restart the server it says the same error... and the plugin doesn't work... I tried to remove the config to generate a new one... it generated a new one but the same error ocurred (IS IT UP TO DATE?). I've deleted the file... generates a new one... but doesn't work...
**To Reproduce**
Steps to reproduce the behavior: "downloading" the plugin '-' idk
1. Run to 'console'
2. Click on 'restart the server'
3. See error on console
**Plugins being used on the server**
<!--- Optional but recommended - issue "/plugins" in-game or in console and copy/paste the list --> Vault (green), FastAsyncWorldEdit (IN RED)
**Checklist**:
<!--- Make sure you've completed the following steps (put an "X" between of brackets): -->
- [X] I included all information required in the sections above
- [] I made sure there are no duplicates of this report [(Use Search)](https://github.com/IntellectualSites/FastAsyncWorldEdit/issues?q=is%3Aissue)
- [X] I made sure I am using an up-to-date version of [FastAsyncWorldEdit for 1.16.4](https://ci.athion.net/job/FastAsyncWorldEdit-1.16/)
- [X] I made sure the bug/error is not caused by any other plugin
| 1.0 | FAWE (Is it up to date?) - <!-- ⚠️⚠️ Do Not Delete This! You must follow this template. ⚠️⚠️ -->
<!--- Incomplete reports will be marked as invalid, and closed, with few exceptions.-->
<!--- If you are using 1.14 or 1.15 consider updating to 1.16.3 before raising an issue -->
<!--- The priority lays on 1.16 right now, so issues reported for or 1.15 will be fixed for the 1.16 versions -->
**[REQUIRED] FastAsyncWorldEdit Configuration Files**
<!--- Issue /fawe debugpaste in game or in your console and copy the supplied URL here --> https://haste.athion.net/qupidadeqo.cs (Plugin doesn't even load the commands...)
<!--- If you cannot perform the above, we require logs/latest.log; config.yml and config-legacy.yml --> https://haste.athion.net/ewobafefez.md
<!--- Please provide this information by using a paste service such as https://haste.athion.net -->
<!--- If you are unwilling to supply the information we need, we reserve the right to not assist you. Redact IP addresses if you need to. -->
**/fawe debugpaste**:
**Required Information**
- FAWE Version Number (`/version FastAsyncWorldEdit`): FastAsyncWorldEdit-1.16-488
- Spigot/Paper Version Number (`/version`): git-Paper-342 (MC: 1.16.4)
- Minecraft Version: 1.16.4
**Describe the bug**
The plugin "start" with an error named (Is it up to date?). It load the config but then when I restart the server it says the same error... and the plugin doesn't work... I tried to remove the config to generate a new one... it generated a new one but the same error ocurred (IS IT UP TO DATE?). I've deleted the file... generates a new one... but doesn't work...
**To Reproduce**
Steps to reproduce the behavior: "downloading" the plugin '-' idk
1. Run to 'console'
2. Click on 'restart the server'
3. See error on console
**Plugins being used on the server**
<!--- Optional but recommended - issue "/plugins" in-game or in console and copy/paste the list --> Vault (green), FastAsyncWorldEdit (IN RED)
**Checklist**:
<!--- Make sure you've completed the following steps (put an "X" between of brackets): -->
- [X] I included all information required in the sections above
- [] I made sure there are no duplicates of this report [(Use Search)](https://github.com/IntellectualSites/FastAsyncWorldEdit/issues?q=is%3Aissue)
- [X] I made sure I am using an up-to-date version of [FastAsyncWorldEdit for 1.16.4](https://ci.athion.net/job/FastAsyncWorldEdit-1.16/)
- [X] I made sure the bug/error is not caused by any other plugin
| test | fawe is it up to date fastasyncworldedit configuration files plugin doesn t even load the commands fawe debugpaste required information fawe version number version fastasyncworldedit fastasyncworldedit spigot paper version number version git paper mc minecraft version describe the bug the plugin start with an error named is it up to date it load the config but then when i restart the server it says the same error and the plugin doesn t work i tried to remove the config to generate a new one it generated a new one but the same error ocurred is it up to date i ve deleted the file generates a new one but doesn t work to reproduce steps to reproduce the behavior downloading the plugin idk run to console click on restart the server see error on console plugins being used on the server vault green fastasyncworldedit in red checklist i included all information required in the sections above i made sure there are no duplicates of this report i made sure i am using an up to date version of i made sure the bug error is not caused by any other plugin | 1 |
1,888 | 3,418,147,453 | IssuesEvent | 2015-12-08 00:10:58 | WP-API/WP-API | https://api.github.com/repos/WP-API/WP-API | closed | Return collections as an object and include pagination, links | Bug Discussion Infrastructure | Rather than returning a collection of objects as an array, we need to return the collection response as an object with the array of data under `data` attribute or similar. This will let us add `_links` for the top-level collection, as well as move our existing pagination headers into the response body.
Related #1136, #865 | 1.0 | Return collections as an object and include pagination, links - Rather than returning a collection of objects as an array, we need to return the collection response as an object with the array of data under `data` attribute or similar. This will let us add `_links` for the top-level collection, as well as move our existing pagination headers into the response body.
Related #1136, #865 | non_test | return collections as an object and include pagination links rather than returning a collection of objects as an array we need to return the collection response as an object with the array of data under data attribute or similar this will let us add links for the top level collection as well as move our existing pagination headers into the response body related | 0 |
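The envelope the WP-API record above argues for — the collection wrapped in an object carrying links and pagination alongside the `data` array — might look like the following sketch (field names and URLs here are illustrative only, not the schema the project ultimately shipped):

```python
import json


def paginate(items, page, per_page, total):
    """Wrap a collection in an envelope carrying pagination info and links."""
    total_pages = -(-total // per_page)  # ceiling division
    return {
        "data": items,
        "_links": {
            "self": f"/wp/v2/posts?page={page}",
            "next": f"/wp/v2/posts?page={page + 1}" if page < total_pages else None,
        },
        "total": total,
        "total_pages": total_pages,
    }


body = paginate([{"id": 1, "title": "Hello"}], page=1, per_page=10, total=1)
print(json.dumps(body["_links"]["next"]))  # null — only one page
print(body["total_pages"])                 # 1
```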
513,872 | 14,927,720,610 | IssuesEvent | 2021-01-24 16:31:03 | bounswe/bounswe2020group5 | https://api.github.com/repos/bounswe/bounswe2020group5 | opened | Deletion of unused/unnecessary branches. | Priority: Low backend frontend mobile | It would be better if the unused branches are deleted. | 1.0 | Deletion of unused/unnecessary branches. - It would be better if the unused branches are deleted. | non_test | deletion of unused unnecessary branches it would be better if the unused branches are deleted | 0 |
590,081 | 17,770,396,683 | IssuesEvent | 2021-08-30 13:02:55 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.roblox.com - design is broken | priority-important browser-fenix engine-gecko | <!-- @browser: Apple Mail 605.1.15 -->
<!-- @ua_header: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/9B176 ROBLOX iOS App 2.445.410643 Hybrid RobloxApp/2.445.410643 (GlobalDist;AppleAppStore -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/85122 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.roblox.com/home?nu=true
**Browser / Version**: Apple Mail 605.1.15
**Operating System**: Mac OS X 10.15.4
**Tested Another Browser**: Yes Other
**Problem type**: Design is broken
**Description**: Items not fully visible
**Steps to Reproduce**:
It wont show the whole menu like friends and stuff
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/8/f3fce784-d072-4d32-bb3c-c11499ff892a.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20210825095400</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2021/8/9d18d00b-9f50-4351-87c2-004476d4f075)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.roblox.com - design is broken - <!-- @browser: Apple Mail 605.1.15 -->
<!-- @ua_header: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/9B176 ROBLOX iOS App 2.445.410643 Hybrid RobloxApp/2.445.410643 (GlobalDist;AppleAppStore -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/85122 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.roblox.com/home?nu=true
**Browser / Version**: Apple Mail 605.1.15
**Operating System**: Mac OS X 10.15.4
**Tested Another Browser**: Yes Other
**Problem type**: Design is broken
**Description**: Items not fully visible
**Steps to Reproduce**:
It wont show the whole menu like friends and stuff
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/8/f3fce784-d072-4d32-bb3c-c11499ff892a.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20210825095400</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2021/8/9d18d00b-9f50-4351-87c2-004476d4f075)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_test | design is broken url browser version apple mail operating system mac os x tested another browser yes other problem type design is broken description items not fully visible steps to reproduce it wont show the whole menu like friends and stuff view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel nightly hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ | 0 |
514,629 | 14,941,823,331 | IssuesEvent | 2021-01-25 20:20:31 | GoogleCloudPlatform/golang-samples | https://api.github.com/repos/GoogleCloudPlatform/golang-samples | closed | firestore/firestore_snippets: TestListenChanges failed | api: firestore flakybot: flaky flakybot: issue priority: p2 samples type: bug | This test failed!
To configure my behavior, see [the Build Cop Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/buildcop).
If I'm commenting on this issue too often, add the `buildcop: quiet` label and
I will stop commenting.
---
commit: ae6c8fa9115c91be5f54bcb162e3bd014eac5d05
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/9255b763-517f-4701-a697-0e8e31eadb46), [Sponge](http://sponge2/9255b763-517f-4701-a697-0e8e31eadb46)
status: failed
<details><summary>Test output</summary><br><pre>listen_test.go:125: listenChanges got
----
New city: map[name:Los Angeles state:CA]
New city: map[state:CA name:San Francisco]
----
Want to contain:
----
population:3900000
----</pre></details> | 1.0 | firestore/firestore_snippets: TestListenChanges failed - This test failed!
To configure my behavior, see [the Build Cop Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/buildcop).
If I'm commenting on this issue too often, add the `buildcop: quiet` label and
I will stop commenting.
---
commit: ae6c8fa9115c91be5f54bcb162e3bd014eac5d05
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/9255b763-517f-4701-a697-0e8e31eadb46), [Sponge](http://sponge2/9255b763-517f-4701-a697-0e8e31eadb46)
status: failed
<details><summary>Test output</summary><br><pre>listen_test.go:125: listenChanges got
----
New city: map[name:Los Angeles state:CA]
New city: map[state:CA name:San Francisco]
----
Want to contain:
----
population:3900000
----</pre></details> | non_test | firestore firestore snippets testlistenchanges failed this test failed to configure my behavior see if i m commenting on this issue too often add the buildcop quiet label and i will stop commenting commit buildurl status failed test output listen test go listenchanges got new city map new city map want to contain population | 0 |
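The firestore failure above is a classic listener race: the assertion ran before the update snapshot arrived, so only the initial documents were captured. A common mitigation — sketched generically in Python, not the repository's actual Go test — is to poll for the expected output with a deadline instead of asserting immediately:

```python
import time


def wait_for(predicate, timeout=5.0, interval=0.05):
    """Poll predicate() until it is truthy or the deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False


captured = []  # stands in for output collected from a snapshot listener
captured.append("New city: map[name:Los Angeles state:CA]")
captured.append("population:3900000")  # the late-arriving update

assert wait_for(lambda: any("population:3900000" in line for line in captured))
print("update observed")
```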
159,822 | 12,491,620,881 | IssuesEvent | 2020-06-01 04:47:30 | ryanheise/audio_service | https://api.github.com/repos/ryanheise/audio_service | closed | PROPOSAL: New playback state model | 3 testing enhancement | The current playback state model is based on a protocol that allows the state of audio playback to be shared with peripheral devices like Android Auto, smart watches, and headsets. The states are:
```dart
enum BasicPlaybackState {
none,
stopped,
paused,
playing,
fastForwarding,
rewinding,
buffering,
error,
connecting,
skippingToPrevious,
skippingToNext,
skippingToQueueItem,
}
```
There are several limitations with this model. One is that people have requested additional states to be added, such as `completed` (similar to the corresponding state in `just_audio`). Another is that while in states such as `buffering`, `skippingToNext`, and some others, there is no way to determine whether we entered these states from `paused` or `playing` which is useful to know in the UI to determine which controls to display.
The proposal is to divide the current playback state into two orthogonal states, similar to ExoPlayer:
```dart
bool playing;
AudioProcessingState processingState;
```
Where the processing state is this enum:
```dart
enum AudioProcessingState {
none,
connecting,
ready,
buffering,
fastForwarding,
rewinding,
skippingToPrevious,
skippingToNext,
skippingToQueueItem,
completed,
stopped,
error,
}
```
So, we can represent combinations such as `{playing=true, processingState=buffering}` as well as `{playing=false, processingState=buffering}`, along with various other combinations. In principle, the idea is that we should hear audio playing only when `{playing=true, processingState=ready}`, although maybe applications could take some liberties here, and continue playing the current track while `skippingToNext` is in progress. It is likely that I will make a similar change to the `just_audio` plugin.
I see two options:
Option 1:
Store the two orthogonal states directly in the existing class `PlaybackState`:
```dart
class PlaybackState {
final bool playing;
final AudioProcessingState processingState;
final Set<MediaAction> actions;
final int position;
final double speed;
final int updateTime;
}
```
Option 2
Have a new class that encapsulates just the combination of these two new orthogonal states, and store that in a single field in `PlaybackState`:
```dart
class PlaybackState {
final PlayingState playingState;
final Set<MediaAction> actions;
final int position;
final double speed;
final int updateTime;
}
class PlayingState {
final bool playing;
final AudioProcessingState processingState;
}
```
Opinions? Preferences? I'll withhold my own for the moment.
On the Android side, I should add that we still need to conform to the standard protocol, but I should be able to create a mapping function that takes this new state representation and converts it to the Android standard before broadcasting that state to other peripherals.
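That mapping can be sketched as a small pure function — shown here in Python purely for illustration (the plugin's real implementation lives in platform code; the legacy names come from the `BasicPlaybackState` enum above, and the exact collapsing rules are assumptions):

```python
from enum import Enum


class AudioProcessingState(Enum):
    NONE = "none"
    CONNECTING = "connecting"
    READY = "ready"
    BUFFERING = "buffering"
    FAST_FORWARDING = "fastForwarding"
    REWINDING = "rewinding"
    SKIPPING_TO_PREVIOUS = "skippingToPrevious"
    SKIPPING_TO_NEXT = "skippingToNext"
    SKIPPING_TO_QUEUE_ITEM = "skippingToQueueItem"
    COMPLETED = "completed"
    STOPPED = "stopped"
    ERROR = "error"


# States that already imply a single legacy value regardless of `playing`.
_DIRECT = {
    state: state.value
    for state in AudioProcessingState
    if state not in (AudioProcessingState.READY, AudioProcessingState.COMPLETED)
}


def to_legacy_state(playing: bool, state: AudioProcessingState) -> str:
    """Collapse the orthogonal (playing, processingState) pair back to one
    BasicPlaybackState-style name for broadcasting to peripherals."""
    if state is AudioProcessingState.READY:
        return "playing" if playing else "paused"
    if state is AudioProcessingState.COMPLETED:
        # The legacy enum has no `completed`; report it as stopped (an assumption).
        return "stopped"
    return _DIRECT[state]


print(to_legacy_state(True, AudioProcessingState.READY))      # playing
print(to_legacy_state(False, AudioProcessingState.READY))     # paused
print(to_legacy_state(True, AudioProcessingState.BUFFERING))  # buffering
```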
(Note: One other benefit is that we can capture enough state information to drive the entering and exiting of the Android service's foreground state through these playback state transitions rather than the current approach which has to make some assumptions and guesses about the current state.) | 1.0 | PROPOSAL: New playback state model - The current playback state model is based on a protocol that allows the state of audio playback to be shared with peripheral devices like Android Auto, smart watches, and headsets. The states are:
```dart
enum BasicPlaybackState {
none,
stopped,
paused,
playing,
fastForwarding,
rewinding,
buffering,
error,
connecting,
skippingToPrevious,
skippingToNext,
skippingToQueueItem,
}
```
There are several limitations with this model. One is that people have requested additional states to be added, such as `completed` (similar to the corresponding state in `just_audio`). Another is that while in states such as `buffering`, `skippingToNext`, and some others, there is no way to determine whether we entered these states from `paused` or `playing` which is useful to know in the UI to determine which controls to display.
The proposal is to divide the current playback state into two orthogonal states, similar to ExoPlayer:
```dart
bool playing;
AudioProcessingState processingState;
```
Where the processing state is this enum:
```dart
enum AudioProcessingState {
none,
connecting,
ready,
buffering,
fastForwarding,
rewinding,
skippingToPrevious,
skippingToNext,
skippingToQueueItem,
completed,
stopped,
error,
}
```
So, we can represent combinations such as `{playing=true, processingState=buffering}` as well as `{playing=false, processingState=buffering}`, along with various other combinations. In principle, the idea is that we should hear audio playing only when `{playing=true, processingState=ready}`, although maybe applications could take some liberties here, and continue playing the current track while `skippingToNext` is in progress. It is likely that I will make a similar change to the `just_audio` plugin.
I see two options:
Option 1:
Store the two orthogonal states directly in the existing class `PlaybackState`:
```dart
class PlaybackState {
final bool playing;
final AudioProcessingState processingState;
final Set<MediaAction> actions;
final int position;
final double speed;
final int updateTime;
}
```
Option 2
Have a new class that encapsulates just the combination of these two new orthogonal states, and store that in a single field in `PlaybackState`:
```dart
class PlaybackState {
final PlayingState playingState;
final Set<MediaAction> actions;
final int position;
final double speed;
final int updateTime;
}
class PlayingState {
final bool playing;
final AudioProcessingState processingState;
}
```
Opinions? Preferences? I'll withhold my own for the moment.
On the Android side, I should add that we still need to conform to the standard protocol, but I should be able to create a mapping function that takes this new state representation and converts it to the Android standard before broadcasting that state to other peripherals.
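Such a mapping function can be sketched as follows — a language-agnostic illustration in Python rather than the plugin's actual Dart code. The enum members mirror the proposal above, and `BasicPlaybackState` stands in for the current single-state model:

```python
from enum import Enum, auto

class AudioProcessingState(Enum):
    none = auto(); connecting = auto(); ready = auto(); buffering = auto()
    fastForwarding = auto(); rewinding = auto(); skippingToPrevious = auto()
    skippingToNext = auto(); skippingToQueueItem = auto(); completed = auto()
    stopped = auto(); error = auto()

# Legacy single-state model (the current BasicPlaybackState).
class BasicPlaybackState(Enum):
    none = auto(); stopped = auto(); paused = auto(); playing = auto()
    fastForwarding = auto(); rewinding = auto(); buffering = auto()
    error = auto(); connecting = auto(); skippingToPrevious = auto()
    skippingToNext = auto(); skippingToQueueItem = auto()

def to_basic_state(playing: bool,
                   processing: AudioProcessingState) -> BasicPlaybackState:
    """Collapse the two orthogonal states into the legacy single state."""
    # States that exist verbatim in the legacy model pass straight through.
    passthrough = {
        AudioProcessingState.none, AudioProcessingState.connecting,
        AudioProcessingState.buffering, AudioProcessingState.fastForwarding,
        AudioProcessingState.rewinding, AudioProcessingState.skippingToPrevious,
        AudioProcessingState.skippingToNext, AudioProcessingState.skippingToQueueItem,
        AudioProcessingState.stopped, AudioProcessingState.error,
    }
    if processing in passthrough:
        return BasicPlaybackState[processing.name]
    if processing is AudioProcessingState.completed:
        return BasicPlaybackState.stopped  # legacy model has no 'completed'
    # processing is ready: the playing flag decides paused vs playing.
    return BasicPlaybackState.playing if playing else BasicPlaybackState.paused
```

Note that `{playing=true, processingState=buffering}` and `{playing=false, processingState=buffering}` both collapse to the single `buffering` value — exactly the information the legacy model cannot represent.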
(Note: One other benefit is that we can capture enough state information to drive the entering and exiting of the Android service's foreground state through these playback state transitions rather than the current approach which has to make some assumptions and guesses about the current state.)
2,768 | 3,006,655,791 | IssuesEvent | 2015-07-27 11:58:53 | gimli-org/gimli | https://api.github.com/repos/gimli-org/gimli | closed | Provide support for newer compilers via CastXML | building and distribution | GCCXML is not supporting newer compiler versions and is replaced by CastXML (https://github.com/CastXML/CastXML). If py++ is ready to work with CastXML (https://github.com/gccxml/pygccxml/issues/19), we should think about switching.
46,517 | 2,958,508,593 | IssuesEvent | 2015-07-08 21:45:24 | duckduckgo/zeroclickinfo-fathead | https://api.github.com/repos/duckduckgo/zeroclickinfo-fathead | closed | PyPi: module info out of date | Bug Priority: Medium | The module information for the pypi instant answer is out of date. The Latest Version is incorrect and also the 'More at Python Package Index' points at the location of the old package.
In case of [certifi](https://duckduckgo.com/?q=python+certifi&ia=about) the package points to version https://pypi.python.org/pypi/certifi/14.05.14 while the latest version is a year newer. This is not only the case with the _certifi_ module; I tried other modules as well. Try [python requests](https://duckduckgo.com/?q=python+requests&ia=about) or [python panda](https://duckduckgo.com/?q=python+panda&ia=about)

------
IA Page: http://duck.co/ia/view/py_pi
222,311 | 17,406,776,374 | IssuesEvent | 2021-08-03 07:12:27 | theislab/scvelo | https://api.github.com/repos/theislab/scvelo | closed | Add `varm` to AnnData strategy | enhancement testing | <!-- What kind of feature would you like to request? -->
## Description
The strategy to generate `AnnData` objects for the unit tests does not yet add entries to `adata.varm`.
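Conceptually, the fix is to extend the generator so that every produced object also carries `varm` entries whose first dimension matches the number of variables. The sketch below is illustrative only — it uses plain Python structures in place of scvelo's actual hypothesis-based strategy and real `AnnData` objects, and the `PCs` key is just an example name:

```python
import random

def generate_adata_dict(n_obs: int, n_vars: int, seed: int = 0) -> dict:
    """Toy stand-in for an AnnData-generating test strategy."""
    rng = random.Random(seed)
    adata = {
        # Data matrix: n_obs rows, n_vars columns.
        "X": [[rng.random() for _ in range(n_vars)] for _ in range(n_obs)],
        "varm": {},
    }
    # The missing piece: populate varm with per-variable annotation arrays.
    # Each entry must have n_vars rows (here a 2-column embedding-like array).
    adata["varm"]["PCs"] = [[rng.random(), rng.random()] for _ in range(n_vars)]
    return adata

adata = generate_adata_dict(n_obs=5, n_vars=3)
assert len(adata["varm"]["PCs"]) == 3  # first dimension == number of variables
```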
261,913 | 27,828,543,524 | IssuesEvent | 2023-03-20 01:09:47 | ARUMAIS/deployment | https://api.github.com/repos/ARUMAIS/deployment | opened | CVE-2021-46877 (Medium) detected in jackson-databind-2.11.3.jar | Mend: dependency security vulnerability | ## CVE-2021-46877 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.11.3.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.11.3/jackson-databind-2.11.3.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.4.0.jar (Root Library)
- spring-boot-starter-json-2.4.0.jar
- :x: **jackson-databind-2.11.3.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/L00163425/deployment/commit/f4289bcaaf9f9ff877252959a5601ddce3986f3c">f4289bcaaf9f9ff877252959a5601ddce3986f3c</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jackson-databind 2.10.x through 2.12.x before 2.12.6 and 2.13.x before 2.13.1 allows attackers to cause a denial of service (2 GB transient heap usage per read) in uncommon situations involving JsonNode JDK serialization.
<p>Publish Date: 2023-03-18
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-46877>CVE-2021-46877</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2021-46877">https://www.cve.org/CVERecord?id=CVE-2021-46877</a></p>
<p>Release Date: 2023-03-18</p>
<p>Fix Resolution (com.fasterxml.jackson.core:jackson-databind): 2.12.6</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.5.8</p>
</p>
</details>
<p></p>
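For reference, the suggested fix usually comes down to a one-line version override in a Spring Boot Maven project. The snippet below is a sketch, assuming the standard `jackson-bom.version` property that Spring Boot's dependency management exposes:

```xml
<!-- pom.xml: override the Jackson version managed by spring-boot-dependencies.
     2.12.6 is the first 2.12.x release containing the fix for CVE-2021-46877. -->
<properties>
  <jackson-bom.version>2.12.6</jackson-bom.version>
</properties>
```

Alternatively, bumping the Spring Boot parent to 2.5.8, as the direct-dependency resolution above suggests, pulls in a fixed `jackson-databind` transitively.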
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
167,437 | 13,025,269,577 | IssuesEvent | 2020-07-27 13:17:41 | repobee/repobee | https://api.github.com/repos/repobee/repobee | closed | Fix integration test failure | testing | For some reason, `test_assign_one_review` is failing, and I can't figure out why. Haven't been able to reproduce it locally.
729,286 | 25,117,396,933 | IssuesEvent | 2022-11-09 04:02:30 | dotnet/wcf | https://api.github.com/repos/dotnet/wcf | closed | Client can not be generated | bug tooling priority 1 triaged | Hello.
We are trying to move from `Web References` to .NET Standard and `Service references`. Most of the services we use could be regenerated with `dotnet-svcutil`, but for the following one it doesn't work:
- https://secure.umweltbundesamt.at/ebsws/FetchService?wsdl
The generated file has only 5 types (of over 100 in the Web References), and no client is generated.
```powershell
dotnet svcutil https://secure.umweltbundesamt.at/ebsws/FetchService?wsdl -n "*,Services.eRAS.Delivery" -o "DeliveryService" -nl -mc -i -d "ServiceReferences/Delivery"
```
During the generation I get the following output:
```bash
PS C:\src\PROJECT\src\Services.eRAS> dotnet svcutil https://secure.umweltbundesamt.at/ebsws/FetchService?wsdl -n "*,Services.eRAS.Delivery" -o "DeliveryService" -nl -mc -i -d "ServiceReferences/Delivery"
Resolving project references ...
Attempting to download metadata from 'https://secure.umweltbundesamt.at/ebsws/FetchService?wsdl' using WS-Metadata Exchange and HttpGet.
Warning: There was a validation error on a schema generated during export:
Source:
Line: 2 Column: 150
Validation Error: The 'http://edm.umweltbundesamt/ebsmws/datatypes:EBSConsignmentNoteNotification' element is not declared.
Warning: There was a validation error on a schema generated during export:
Source:
Line: 2 Column: 175
Validation Error: Type 'http://edm.umweltbundesamt/ebsmws/datatypes:FEAAPartyGLNUsgAIdentifierType' is not declared.
Warning: There was a validation error on a schema generated during export:
Source:
Line: 2 Column: 176
Validation Error: Type 'http://edm.umweltbundesamt/ebsmws/datatypes:EBSConsignmentNoteType' is not declared.
Warning: There was a validation error on a schema generated during export:
Source:
Line: 2 Column: 282
Validation Error: Type 'http://edm.umweltbundesamt/ebsmws/datatypes:FEAAPartyGLNUsgAIdentifierType' is not declared.
Warning: There was a validation error on a schema generated during export:
Source:
Line: 2 Column: 283
Validation Error: Type 'http://edm.umweltbundesamt/ebsmws/datatypes:EBSConsignmentNoteType' is not declared.
Warning: There was a validation error on a schema generated during export:
Source:
Line: 2 Column: 225
Validation Error: Type 'http://edm.umweltbundesamt/ebsmws/datatypes:AustrianMaterialMovementIdentifierType' is not declared.
Warning: There was a validation error on a schema generated during export:
Source:
Line: 2 Column: 258
Validation Error: Type 'http://edm.umweltbundesamt/ebsmws/datatypes:FEAAPartyGLNUsgAIdentifierType' is not declared.
Warning: There was a validation error on a schema generated during export:
Source:
Line: 2 Column: 279
Validation Error: Type 'http://edm.umweltbundesamt/ebsmws/datatypes:FEAAOperatingSiteGLNUsgAIdentifierType' is not declared.
Warning: There was a validation error on a schema generated during export:
Source:
Line: 2 Column: 277
Validation Error: Type 'http://edm.umweltbundesamt/ebsmws/datatypes:FEAAInstallationGLNUsgAIdentifierType' is not declared.
Warning: There was a validation error on a schema generated during export:
Source:
Line: 2 Column: 253
Validation Error: Type 'http://edm.umweltbundesamt/ebsmws/datatypes:AustrianMaterialMovementIdentifierType' is not declared.
Warning: There was a validation error on a schema generated during export:
Source:
Line: 2 Column: 286
Validation Error: Type 'http://edm.umweltbundesamt/ebsmws/datatypes:FEAAPartyGLNUsgAIdentifierType' is not declared.
Warning: There was a validation error on a schema generated during export:
Source:
Line: 2 Column: 309
Validation Error: Type 'http://edm.umweltbundesamt/ebsmws/datatypes:FEAAOperatingSiteGLNUsgAIdentifierType' is not declared.
Warning: There was a validation error on a schema generated during export:
Source:
Line: 2 Column: 307
Validation Error: Type 'http://edm.umweltbundesamt/ebsmws/datatypes:FEAAInstallationGLNUsgAIdentifierType' is not declared.
Warning: Cannot import wsdl:portType
Detail: An exception was thrown while running a WSDL import extension: System.ServiceModel.Description.XmlSerializerMessageContractImporter
Error: The element 'http://edm.umweltbundesamt/ebsmws/datatypes:EBSConsignmentNoteNotification' is missing.
XPath to Error Source: //wsdl:definitions[@targetNamespace='http://edm.umweltbundesamt.at/ebsmws']/wsdl:portType[@name='DeliveryEndpoint']
Warning: Cannot import wsdl:binding
Detail: There was an error importing a wsdl:portType that the wsdl:binding is dependent on.
XPath to wsdl:portType: //wsdl:definitions[@targetNamespace='http://edm.umweltbundesamt.at/ebsmws']/wsdl:portType[@name='DeliveryEndpoint']
XPath to Error Source: //wsdl:definitions[@targetNamespace='http://edm.umweltbundesamt.at/ebsmws']/wsdl:binding[@name='DeliveryBinding']
Warning: Cannot import wsdl:port
Detail:
XPath to Error Source: //wsdl:definitions[@targetNamespace='http://edm.umweltbundesamt.at/ebsmws']/wsdl:service[@name='DeliveryService']/wsdl:port[@name='DeliveryServicePort']
Warning: Cannot import wsdl:portType
Detail: An exception was thrown while running a WSDL import extension: System.ServiceModel.Description.XmlSerializerMessageContractImporter
Error: The datatype 'http://edm.umweltbundesamt/ebsmws/datatypes:AustrianMaterialMovementIdentifierType' is missing.
XPath to Error Source: //wsdl:definitions[@targetNamespace='http://edm.umweltbundesamt.at/ebsmws']/wsdl:portType[@name='FetchEndpoint']
Warning: Cannot import wsdl:binding
Detail: There was an error importing a wsdl:portType that the wsdl:binding is dependent on.
XPath to wsdl:portType: //wsdl:definitions[@targetNamespace='http://edm.umweltbundesamt.at/ebsmws']/wsdl:portType[@name='FetchEndpoint']
XPath to Error Source: //wsdl:definitions[@targetNamespace='http://edm.umweltbundesamt.at/ebsmws']/wsdl:binding[@name='FetchBinding']
Warning: Cannot import wsdl:port
Detail:
XPath to Error Source: //wsdl:definitions[@targetNamespace='http://edm.umweltbundesamt.at/ebsmws']/wsdl:service[@name='FetchService']/wsdl:port[@name='FetchServicePort']
Warning: Cannot import wsdl:portType
Detail: An exception was thrown while running a WSDL import extension: System.ServiceModel.Description.XmlSerializerMessageContractImporter
Error: The datatype 'http://edm.umweltbundesamt/ebsmws/datatypes:FEAAPartyGLNUsgAIdentifierType' is missing.
XPath to Error Source: //wsdl:definitions[@targetNamespace='http://edm.umweltbundesamt.at/ebsmws']/wsdl:portType[@name='PaperlessDeliveryEndpoint']
Warning: Cannot import wsdl:binding
Detail: There was an error importing a wsdl:portType that the wsdl:binding is dependent on.
XPath to wsdl:portType: //wsdl:definitions[@targetNamespace='http://edm.umweltbundesamt.at/ebsmws']/wsdl:portType[@name='PaperlessDeliveryEndpoint']
XPath to Error Source: //wsdl:definitions[@targetNamespace='http://edm.umweltbundesamt.at/ebsmws']/wsdl:binding[@name='PaperlessDeliveryBinding']
Warning: Cannot import wsdl:port
Detail:
XPath to Error Source: //wsdl:definitions[@targetNamespace='http://edm.umweltbundesamt.at/ebsmws']/wsdl:service[@name='PaperlessDeliveryService']/wsdl:port[@name='PaperlessDeliveryServicePort']
Warning: Cannot import wsdl:portType
Detail: An exception was thrown while running a WSDL import extension: System.ServiceModel.Description.XmlSerializerMessageContractImporter
Error: Object reference not set to an instance of an object.
XPath to Error Source: //wsdl:definitions[@targetNamespace='http://edm.umweltbundesamt.at/ebsmws']/wsdl:portType[@name='PaperlessFetchEndpoint']
Warning: Cannot import wsdl:binding
Detail: There was an error importing a wsdl:portType that the wsdl:binding is dependent on.
XPath to wsdl:portType: //wsdl:definitions[@targetNamespace='http://edm.umweltbundesamt.at/ebsmws']/wsdl:portType[@name='PaperlessFetchEndpoint']
XPath to Error Source: //wsdl:definitions[@targetNamespace='http://edm.umweltbundesamt.at/ebsmws']/wsdl:binding[@name='PaperlessFetchBinding']
Warning: Cannot import wsdl:port
Detail:
XPath to Error Source: //wsdl:definitions[@targetNamespace='http://edm.umweltbundesamt.at/ebsmws']/wsdl:service[@name='PaperlessFetchService']/wsdl:port[@name='PaperlessFetchServicePort']
Warning: No endpoints compatible with .Net Core apps were found.
Generating files...
"C:\src\PROJECT\src\Services.eRAS\ServiceReferences\Delivery\DeliveryService.cs"
```
I think the problem is the data-type import inside the WSDL file, which is not respected by the generator:
```xml
<wsdl:import namespace="http://edm.umweltbundesamt/ebsmws/datatypes" location="https://secure.umweltbundesamt.at:443/ebsws/DeliveryService?xsd=1" />
```
during export source line column validation error type is not declared warning there was a validation error on a schema generated during export source line column validation error type is not declared warning there was a validation error on a schema generated during export source line column validation error type is not declared warning there was a validation error on a schema generated during export source line column validation error type is not declared warning cannot import wsdl porttype detail an exception was thrown while running a wsdl import extension system servicemodel description xmlserializermessagecontractimporter error the element is missing xpath to error source wsdl definitions wsdl porttype warning cannot import wsdl binding detail there was an error importing a wsdl porttype that the wsdl binding is dependent on xpath to wsdl porttype wsdl definitions wsdl porttype xpath to error source wsdl definitions wsdl binding warning cannot import wsdl port detail xpath to error source wsdl definitions wsdl service wsdl port warning cannot import wsdl porttype detail an exception was thrown while running a wsdl import extension system servicemodel description xmlserializermessagecontractimporter error the datatype is missing xpath to error source wsdl definitions wsdl porttype warning cannot import wsdl binding detail there was an error importing a wsdl porttype that the wsdl binding is dependent on xpath to wsdl porttype wsdl definitions wsdl porttype xpath to error source wsdl definitions wsdl binding warning cannot import wsdl port detail xpath to error source wsdl definitions wsdl service wsdl port warning cannot import wsdl porttype detail an exception was thrown while running a wsdl import extension system servicemodel description xmlserializermessagecontractimporter error the datatype is missing xpath to error source wsdl definitions wsdl porttype warning cannot import wsdl binding detail there was an error importing a wsdl porttype that the wsdl 
binding is dependent on xpath to wsdl porttype wsdl definitions wsdl porttype xpath to error source wsdl definitions wsdl binding warning cannot import wsdl port detail xpath to error source wsdl definitions wsdl service wsdl port warning cannot import wsdl porttype detail an exception was thrown while running a wsdl import extension system servicemodel description xmlserializermessagecontractimporter error object reference not set to an instance of an object xpath to error source wsdl definitions wsdl porttype warning cannot import wsdl binding detail there was an error importing a wsdl porttype that the wsdl binding is dependent on xpath to wsdl porttype wsdl definitions wsdl porttype xpath to error source wsdl definitions wsdl binding warning cannot import wsdl port detail xpath to error source wsdl definitions wsdl service wsdl port warning no endpoints compatible with net core apps were found generating files c src project src services eras servicereferences delivery deliveryservice cs i think the problem is in the data type import inside the wsdl file that is not respected by the generator xml | 0 |
159,522 | 12,478,578,452 | IssuesEvent | 2020-05-29 16:42:55 | spel-uchile/SUCHAI-Flight-Software | https://api.github.com/repos/spel-uchile/SUCHAI-Flight-Software | closed | Sequence of 5 commands throws exit code -11 | Fuzz-Testing bug | The sequence is:
tm_parse_status 168850784532074374035732101058055169116 12826105913956939644 E0@2Ni*@ -162817515606114395225612514469063087299 -48916261179497125227303848440736194118 -6386859017823808887 r@F5uu -8152784359493193565
gssb_set_burn_config -285600039855062482800991264458719794343 -3143019222374379877 10778628039574652351 -219709885077994447318495468478095151127 FZG -322088856487707042825536300309208787826 w
drp_set_deployed
fp_del_cmd_unix ^TyCU& -243018170100445935 "4n3Kr/hr 80211873877148678249755549121640936724 1175049179
com_get_config h -7060649124262353519 2063750755 | 1.0 | Sequence of 5 commands throws exit code -11 - The sequence is:
tm_parse_status 168850784532074374035732101058055169116 12826105913956939644 E0@2Ni*@ -162817515606114395225612514469063087299 -48916261179497125227303848440736194118 -6386859017823808887 r@F5uu -8152784359493193565
gssb_set_burn_config -285600039855062482800991264458719794343 -3143019222374379877 10778628039574652351 -219709885077994447318495468478095151127 FZG -322088856487707042825536300309208787826 w
drp_set_deployed
fp_del_cmd_unix ^TyCU& -243018170100445935 "4n3Kr/hr 80211873877148678249755549121640936724 1175049179
com_get_config h -7060649124262353519 2063750755 | test | sequence of commands throws exit code the sequence is tm parse status r gssb set burn config fzg w drp set deployed fp del cmd unix tycu hr com get config h | 1 |
86,732 | 10,515,882,546 | IssuesEvent | 2019-09-28 13:28:14 | BentoBoxWorld/BentoBox | https://api.github.com/repos/BentoBoxWorld/BentoBox | closed | [Developer Issue] PluginManager cannot find any events for addons | Type: Documentation Type: Not a bug | ### Description
#### Describe the bug
<!-- A clear and concise description of the problem you're encountering. -->
<!-- /!\ Leaving this section blank will result in your ticket being closed without further explanation. -->
<!-- Please type below this line. -->
If you try to register a listener for any events that come from addons, you will not be able to do so due to an error
#### Steps to reproduce the behavior
**Code in ListenBSkyBlock.class**
```java
@EventHandler(priority=EventPriority.MONITOR, ignoreCancelled=true)
public void onCompleteChallenge(ChallengeCompletedEvent e) {
UUID uuid = e.getPlayerUUID();
OfflinePlayer offline = Bukkit.getOfflinePlayer(uuid);
if(!offline.isOnline()) return;
Player player = offline.getPlayer();
player.closeInventory();
}
```
**Code in main plugin class:**
```java
if(Bukkit.getPluginManager().isPluginEnabled("BentoBox")) {
Bukkit.getPluginManager().registerEvents(new ListenBSkyBlock(), this);
}
```
#### Related Log Section
```
[15:16:02] [Craft Scheduler Thread - 15/INFO]: [Vault] Checking for Updates ...
[15:16:02] [Server thread/INFO]: [BentoBox] Loaded Blueprint Bundle 'default' for BSkyBlock
[15:16:02] [Server thread/INFO]: [BentoBox] Loaded Blueprint Bundle 'double_island' for BSkyBlock
[15:16:02] [Server thread/INFO]: [BentoBox] Loaded Blueprint Bundle 'harder_island' for BSkyBlock
[15:16:02] [Server thread/INFO]: [BentoBox] Loaded blueprint 'double' for BSkyBlock
[15:16:02] [Server thread/INFO]: [BentoBox] Loaded blueprint 'end-island' for BSkyBlock
[15:16:02] [Server thread/INFO]: [BentoBox] Loaded blueprint 'harder' for BSkyBlock
[15:16:02] [Server thread/INFO]: [BentoBox] Loaded blueprint 'island' for BSkyBlock
[15:16:02] [Server thread/INFO]: [BentoBox] Loaded blueprint 'nether-island' for BSkyBlock
[15:16:02] [Server thread/INFO]: [BentoBox] Enabling BSkyBlock...
[15:16:02] [Server thread/INFO]: [BentoBox] Loading biomes...
[15:16:02] [Server thread/INFO]: [BentoBox] Enabling Biomes...
[15:16:02] [Server thread/INFO]: [BentoBox] Loading challenges...
[15:16:02] [Server thread/INFO]: [BentoBox] Enabling Challenges...
[15:16:02] [Server thread/WARN]: [BentoBox] Unknown material (SIGN) in config.yml blocks section. Skipping...
[15:16:02] [Server thread/WARN]: [BentoBox] Unknown material (WALL_SIGN) in config.yml blocks section. Skipping...
[15:16:02] [Server thread/INFO]: [BentoBox] [Level] Level hooking into BSkyBlock
[15:16:02] [Server thread/INFO]: [BentoBox] Enabling Level...
[15:16:02] [Server thread/INFO]: [BentoBox] Addons successfully enabled.
[15:16:02] [Server thread/INFO]:
____ _ ____
| _ \ | | | _ \ by tastybento and Poslovitch
| |_) | ___ _ __ | |_ ___ | |_) | _____ __ 2017 - 2019
| _ < / _ \ '_ \| __/ _ \| _ < / _ \ \/ /
| |_) | __/ | | | || (_) | |_) | (_) > < v1.5.3
|____/ \___|_| |_|\__\___/|____/ \___/_/\_\ Loaded in 1107ms.
[15:16:02] [Server thread/INFO]: [CS-CoreLib - Protection] Loaded Protection Module "WorldGuard"
[15:16:02] [Server thread/INFO]: [CS-CoreLib - Protection] Loaded Protection Module "BentoBox"
[15:16:02] [Server thread/INFO]: ###################### - Slimefun - ######################
[15:16:02] [Server thread/INFO]: Successfully loaded 467 Items (227 Researches)
[15:16:02] [Server thread/INFO]: ( 467 Items from Slimefun, 0 Items from Addons )
[15:16:02] [Server thread/INFO]: ##########################################################
[15:16:03] [Server thread/INFO]: [Slimefun] Loading Blocks for World "SlimySkies"
[15:16:03] [Server thread/INFO]: [Slimefun] This may take a long time...
[15:16:03] [Server thread/INFO]: [Slimefun] Loading Blocks... 100% (FINISHED - 0ms)
[15:16:03] [Server thread/INFO]: [Slimefun] Loaded a total of 0 Blocks for World "SlimySkies"
[15:16:03] [Server thread/INFO]: [Slimefun] Loading Blocks for World "SlimySkies_nether"
[15:16:03] [Server thread/INFO]: [Slimefun] This may take a long time...
[15:16:03] [Server thread/INFO]: [Slimefun] Loading Blocks... 100% (FINISHED - 0ms)
[15:16:03] [Server thread/INFO]: [Slimefun] Loaded a total of 0 Blocks for World "SlimySkies_nether"
[15:16:03] [Server thread/INFO]: [Slimefun] Loading Blocks for World "SlimySkies_the_end"
[15:16:03] [Server thread/INFO]: [Slimefun] This may take a long time...
[15:16:03] [Server thread/INFO]: [Slimefun] Loading Blocks... 100% (FINISHED - 0ms)
[15:16:03] [Server thread/INFO]: [Slimefun] Loaded a total of 0 Blocks for World "SlimySkies_the_end"
[15:16:03] [Server thread/ERROR]: [Slimy Skies] Plugin SlimySkies v1.0.0 has failed to register events for class com.SirBlobman.slimy.skies.listener.ListenBSkyBlock because world/bentobox/challenges/events/ChallengeCompletedEvent does not exist.
```
#### Expected behavior
<!-- Clear and concise description of what you actually expected to happen when you encountered this bug. -->
<!-- Please type below this line. -->
I expected the event to be registered successfully so I can do custom stuff when players complete challenges
### Environment
#### Server
<!-- /!\ Leaving this section blank will result in your ticket being closed without further explanation. -->
<!-- Please replace the underscores with your answer. Do not remove the '*' characters. -->
- OS: **Windows 10 1903**
- Java version: **Java 8**
#### Plugins
<!-- /!\ Leaving this section blank will result in your ticket being closed without further explanation. -->
<!-- Please paste the `/plugins` output inside the code block below (remove the underscores). Do not provide an image. -->
```
[15:42:13 INFO]: Plugins (15): BentoBox, CS-CoreLib, Essentials, EssentialsChat, EssentialsGeoIP, EssentialsSpawn, HolographicDisplays, LuckPerms, PlugMan, ProtocolSupport, Slimefun, SlimySkies, Vault, WorldEdit, WorldGuard
```
#### BentoBox setup
##### BentoBox and Addons
<!-- /!\ Leaving this section blank will result in your ticket being closed without further explanation. -->
<!-- Please paste the output of `/bentobox version` in the code block below (replace the underscores). Do not provide an image. -->
```
> bentobox version
[15:42:33 INFO]: Running PAPER Invalid.
[15:42:33 INFO]: BentoBox version: 1.5.3
[15:42:33 INFO]: Loaded Game Worlds:
[15:42:33 INFO]: SlimySkies (SlimySkies): Overworld, Nether, End
[15:42:33 INFO]: Loaded Addons:
[15:42:33 INFO]: Biomes 1.5.0.0 (ENABLED)
[15:42:33 INFO]: BSkyBlock 1.5.0 (ENABLED)
[15:42:33 INFO]: Challenges 0.7.5 (ENABLED)
[15:42:33 INFO]: Level 1.5.0 (ENABLED)
```
##### Configuration
<!-- /!\ Leaving this section blank will result in your ticket being closed without further explanation. -->
<!-- Please replace the underscores with your answer. Do not remove the '*' characters. -->
- Database: **YAML** <!-- Available options: YAML, JSON, MYSQL, MARIADB, MONGODB -->
### Additional context
<!-- Any additional information you'd like to provide us. -->
<!-- Please type below this line. -->
```
[15:43:19 INFO]: This server is running Paper version git-Paper-154 (MC: 1.14.4) (Implementing API version 1.14.4-R0.1-SNAPSHOT)
```
I apologize in advance if I did something incorrectly.
| 1.0 | [Developer Issue] PluginManager cannot find any events for addons - ### Description
#### Describe the bug
<!-- A clear and concise description of the problem you're encountering. -->
<!-- /!\ Leaving this section blank will result in your ticket being closed without further explanation. -->
<!-- Please type below this line. -->
If you try to register a listener for any events that come from addons, you will not be able to do so due to an error
#### Steps to reproduce the behavior
**Code in ListenBSkyBlock.class**
```java
@EventHandler(priority=EventPriority.MONITOR, ignoreCancelled=true)
public void onCompleteChallenge(ChallengeCompletedEvent e) {
UUID uuid = e.getPlayerUUID();
OfflinePlayer offline = Bukkit.getOfflinePlayer(uuid);
if(!offline.isOnline()) return;
Player player = offline.getPlayer();
player.closeInventory();
}
```
**Code in main plugin class:**
```java
if(Bukkit.getPluginManager().isPluginEnabled("BentoBox")) {
Bukkit.getPluginManager().registerEvents(new ListenBSkyBlock(), this);
}
```
#### Related Log Section
```
[15:16:02] [Craft Scheduler Thread - 15/INFO]: [Vault] Checking for Updates ...
[15:16:02] [Server thread/INFO]: [BentoBox] Loaded Blueprint Bundle 'default' for BSkyBlock
[15:16:02] [Server thread/INFO]: [BentoBox] Loaded Blueprint Bundle 'double_island' for BSkyBlock
[15:16:02] [Server thread/INFO]: [BentoBox] Loaded Blueprint Bundle 'harder_island' for BSkyBlock
[15:16:02] [Server thread/INFO]: [BentoBox] Loaded blueprint 'double' for BSkyBlock
[15:16:02] [Server thread/INFO]: [BentoBox] Loaded blueprint 'end-island' for BSkyBlock
[15:16:02] [Server thread/INFO]: [BentoBox] Loaded blueprint 'harder' for BSkyBlock
[15:16:02] [Server thread/INFO]: [BentoBox] Loaded blueprint 'island' for BSkyBlock
[15:16:02] [Server thread/INFO]: [BentoBox] Loaded blueprint 'nether-island' for BSkyBlock
[15:16:02] [Server thread/INFO]: [BentoBox] Enabling BSkyBlock...
[15:16:02] [Server thread/INFO]: [BentoBox] Loading biomes...
[15:16:02] [Server thread/INFO]: [BentoBox] Enabling Biomes...
[15:16:02] [Server thread/INFO]: [BentoBox] Loading challenges...
[15:16:02] [Server thread/INFO]: [BentoBox] Enabling Challenges...
[15:16:02] [Server thread/WARN]: [BentoBox] Unknown material (SIGN) in config.yml blocks section. Skipping...
[15:16:02] [Server thread/WARN]: [BentoBox] Unknown material (WALL_SIGN) in config.yml blocks section. Skipping...
[15:16:02] [Server thread/INFO]: [BentoBox] [Level] Level hooking into BSkyBlock
[15:16:02] [Server thread/INFO]: [BentoBox] Enabling Level...
[15:16:02] [Server thread/INFO]: [BentoBox] Addons successfully enabled.
[15:16:02] [Server thread/INFO]:
____ _ ____
| _ \ | | | _ \ by tastybento and Poslovitch
| |_) | ___ _ __ | |_ ___ | |_) | _____ __ 2017 - 2019
| _ < / _ \ '_ \| __/ _ \| _ < / _ \ \/ /
| |_) | __/ | | | || (_) | |_) | (_) > < v1.5.3
|____/ \___|_| |_|\__\___/|____/ \___/_/\_\ Loaded in 1107ms.
[15:16:02] [Server thread/INFO]: [CS-CoreLib - Protection] Loaded Protection Module "WorldGuard"
[15:16:02] [Server thread/INFO]: [CS-CoreLib - Protection] Loaded Protection Module "BentoBox"
[15:16:02] [Server thread/INFO]: ###################### - Slimefun - ######################
[15:16:02] [Server thread/INFO]: Successfully loaded 467 Items (227 Researches)
[15:16:02] [Server thread/INFO]: ( 467 Items from Slimefun, 0 Items from Addons )
[15:16:02] [Server thread/INFO]: ##########################################################
[15:16:03] [Server thread/INFO]: [Slimefun] Loading Blocks for World "SlimySkies"
[15:16:03] [Server thread/INFO]: [Slimefun] This may take a long time...
[15:16:03] [Server thread/INFO]: [Slimefun] Loading Blocks... 100% (FINISHED - 0ms)
[15:16:03] [Server thread/INFO]: [Slimefun] Loaded a total of 0 Blocks for World "SlimySkies"
[15:16:03] [Server thread/INFO]: [Slimefun] Loading Blocks for World "SlimySkies_nether"
[15:16:03] [Server thread/INFO]: [Slimefun] This may take a long time...
[15:16:03] [Server thread/INFO]: [Slimefun] Loading Blocks... 100% (FINISHED - 0ms)
[15:16:03] [Server thread/INFO]: [Slimefun] Loaded a total of 0 Blocks for World "SlimySkies_nether"
[15:16:03] [Server thread/INFO]: [Slimefun] Loading Blocks for World "SlimySkies_the_end"
[15:16:03] [Server thread/INFO]: [Slimefun] This may take a long time...
[15:16:03] [Server thread/INFO]: [Slimefun] Loading Blocks... 100% (FINISHED - 0ms)
[15:16:03] [Server thread/INFO]: [Slimefun] Loaded a total of 0 Blocks for World "SlimySkies_the_end"
[15:16:03] [Server thread/ERROR]: [Slimy Skies] Plugin SlimySkies v1.0.0 has failed to register events for class com.SirBlobman.slimy.skies.listener.ListenBSkyBlock because world/bentobox/challenges/events/ChallengeCompletedEvent does not exist.
```
#### Expected behavior
<!-- Clear and concise description of what you actually expected to happen when you encountered this bug. -->
<!-- Please type below this line. -->
I expected the event to be registered successfully so I can do custom stuff when players complete challenges
### Environment
#### Server
<!-- /!\ Leaving this section blank will result in your ticket being closed without further explanation. -->
<!-- Please replace the underscores with your answer. Do not remove the '*' characters. -->
- OS: **Windows 10 1903**
- Java version: **Java 8**
#### Plugins
<!-- /!\ Leaving this section blank will result in your ticket being closed without further explanation. -->
<!-- Please paste the `/plugins` output inside the code block below (remove the underscores). Do not provide an image. -->
```
[15:42:13 INFO]: Plugins (15): BentoBox, CS-CoreLib, Essentials, EssentialsChat, EssentialsGeoIP, EssentialsSpawn, HolographicDisplays, LuckPerms, PlugMan, ProtocolSupport, Slimefun, SlimySkies, Vault, WorldEdit, WorldGuard
```
#### BentoBox setup
##### BentoBox and Addons
<!-- /!\ Leaving this section blank will result in your ticket being closed without further explanation. -->
<!-- Please paste the output of `/bentobox version` in the code block below (replace the underscores). Do not provide an image. -->
```
> bentobox version
[15:42:33 INFO]: Running PAPER Invalid.
[15:42:33 INFO]: BentoBox version: 1.5.3
[15:42:33 INFO]: Loaded Game Worlds:
[15:42:33 INFO]: SlimySkies (SlimySkies): Overworld, Nether, End
[15:42:33 INFO]: Loaded Addons:
[15:42:33 INFO]: Biomes 1.5.0.0 (ENABLED)
[15:42:33 INFO]: BSkyBlock 1.5.0 (ENABLED)
[15:42:33 INFO]: Challenges 0.7.5 (ENABLED)
[15:42:33 INFO]: Level 1.5.0 (ENABLED)
```
##### Configuration
<!-- /!\ Leaving this section blank will result in your ticket being closed without further explanation. -->
<!-- Please replace the underscores with your answer. Do not remove the '*' characters. -->
- Database: **YAML** <!-- Available options: YAML, JSON, MYSQL, MARIADB, MONGODB -->
### Additional context
<!-- Any additional information you'd like to provide us. -->
<!-- Please type below this line. -->
```
[15:43:19 INFO]: This server is running Paper version git-Paper-154 (MC: 1.14.4) (Implementing API version 1.14.4-R0.1-SNAPSHOT)
```
I apologize in advance if I did something incorrectly.
| non_test | pluginmanager cannot find any events for addons description describe the bug if you try to register a listener for any events that come from addons you will not be able to do so due to an error steps to reproduce the behavior code in listenbskyblock class java eventhandler priority eventpriority monitor ignorecancelled true public void oncompletechallenge challengecompletedevent e uuid uuid e getplayeruuid offlineplayer offline bukkit getofflineplayer uuid if offline isonline return player player offline getplayer player closeinventory code in main plugin class java if bukkit getpluginmanager ispluginenabled bentobox bukkit getpluginmanager registerevents new listenbskyblock this related log section checking for updates loaded blueprint bundle default for bskyblock loaded blueprint bundle double island for bskyblock loaded blueprint bundle harder island for bskyblock loaded blueprint double for bskyblock loaded blueprint end island for bskyblock loaded blueprint harder for bskyblock loaded blueprint island for bskyblock loaded blueprint nether island for bskyblock enabling bskyblock loading biomes enabling biomes loading challenges enabling challenges unknown material sign in config yml blocks section skipping unknown material wall sign in config yml blocks section skipping level hooking into bskyblock enabling level addons successfully enabled by tastybento and poslovitch loaded in loaded protection module worldguard loaded protection module bentobox slimefun successfully loaded items researches items from slimefun items from addons loading blocks for world slimyskies this may take a long time loading blocks finished loaded a total of blocks for world slimyskies loading blocks for world slimyskies nether this may take a long time loading blocks finished loaded a total of blocks for world slimyskies nether loading blocks for world slimyskies the end this may take a long time loading blocks finished loaded a total of blocks for world slimyskies the end 
plugin slimyskies has failed to register events for class com sirblobman slimy skies listener listenbskyblock because world bentobox challenges events challengecompletedevent does not exist expected behavior i expected the event to be registered successfully so i can do custom stuff when players complete challenges environment server os windows java version java plugins plugins bentobox cs corelib essentials essentialschat essentialsgeoip essentialsspawn holographicdisplays luckperms plugman protocolsupport slimefun slimyskies vault worldedit worldguard bentobox setup bentobox and addons bentobox version running paper invalid bentobox version loaded game worlds slimyskies slimyskies overworld nether end loaded addons biomes enabled bskyblock enabled challenges enabled level enabled configuration database yaml additional context this server is running paper version git paper mc implementing api version snapshot i apologize in advance if i did something incorrectly | 0 |
11,537 | 17,374,407,908 | IssuesEvent | 2021-07-30 18:33:27 | desertrat-io/tortle | https://api.github.com/repos/desertrat-io/tortle | closed | Bring refactor up to legacy level | requirement | The legacy version didn't get terribly far until SBT dependencies went to hell, so just poke through and get back to where we were:
Color scheme
iconography
layouts
pretty simple stuff | 1.0 | Bring refactor up to legacy level - The legacy version didn't get terribly far until SBT dependencies went to hell, so just poke through and get back to where we were:
Color scheme
iconography
layouts
pretty simple stuff | non_test | bring refactor up to legacy level the legacy version didn t get terribly far until sbt dependencies went to hell so just poke through and get back to where we were color scheme iconography layouts pretty simple stuff | 0 |
197,857 | 14,945,674,542 | IssuesEvent | 2021-01-26 04:47:04 | NUWCDIVNPT/stig-manager | https://api.github.com/repos/NUWCDIVNPT/stig-manager | opened | Ensure tests check for proper return code for Create actions | bug maintenance tests | Tests in some cases are checking for implemented response codes rather than those in the API Spec.
Tests should change to match spec, and then implementation may need to be updated.
Deviations so far:
POST Collections - implemented/tests: 200, spec: 201
POST Users - - implemented/tests: 200, spec: 201
matches spec, but perhaps spec should change to match convention of returning "201 Created" for this action
POST Reviews implemented/tests/spec: 200
POST STIGs implemented/tests/spec: 200
matches spec:
POST Assets - implemented/tests/spec: 201
| 1.0 | Ensure tests check for proper return code for Create actions - Tests in some cases are checking for implemented response codes rather than those in the API Spec.
Tests should change to match spec, and then implementation may need to be updated.
Deviations so far:
POST Collections - implemented/tests: 200, spec: 201
POST Users - - implemented/tests: 200, spec: 201
matches spec, but perhaps spec should change to match convention of returning "201 Created" for this action
POST Reviews implemented/tests/spec: 200
POST STIGs implemented/tests/spec: 200
matches spec:
POST Assets - implemented/tests/spec: 201
| test | ensure tests check for proper return code for create actions tests in some cases are checking for implemented response codes rather than those in the api spec tests should change to match spec and then implementation may need to be updated deviations so far post collections implemented tests spec post users implemented tests spec matches spec but perhaps spec should change to match convention of returning created for this action post reviews implemented tests spec post stigs implemented tests spec matches spec post assets implemented tests spec | 1 |
178,066 | 13,760,271,152 | IssuesEvent | 2020-10-07 05:26:11 | brave/brave-browser | https://api.github.com/repos/brave/brave-browser | closed | [Desktop] Top site behavior seems wrong / unpredictable | OS/Desktop QA/Test-Plan-Specified QA/Yes feature/new-tab priority/P3 release-notes/include | ## Test plan
See https://github.com/brave/brave-core/pull/6584
## Description
Top site pinning is still broken, doesn't do anything and get's regularly replaced by random sites, this has been an issue for well over 3 to 4 years now, on both windows 10 and linux, is it ever going to get fixed? Or has it simply been given up on? | 1.0 | [Desktop] Top site behavior seems wrong / unpredictable - ## Test plan
See https://github.com/brave/brave-core/pull/6584
## Description
Top site pinning is still broken, doesn't do anything and get's regularly replaced by random sites, this has been an issue for well over 3 to 4 years now, on both windows 10 and linux, is it ever going to get fixed? Or has it simply been given up on? | test | top site behavior seems wrong unpredictable test plan see description top site pinning is still broken doesn t do anything and get s regularly replaced by random sites this has been an issue for well over to years now on both windows and linux is it ever going to get fixed or has it simply been given up on | 1 |
14,546 | 3,410,263,530 | IssuesEvent | 2015-12-04 19:21:49 | tgstation/-tg-station | https://api.github.com/repos/tgstation/-tg-station | closed | Button/door ID not linking during game | Bug Needs Reproducing/Testing | I can't seem to link buttons and shutters/blastdoors via VV
Process:
Spawn shutter
spawn door button
VV button and shutter ID to a number i.e. 151
Press button. Nothing. | 1.0 | Button/door ID not linking during game - I can't seem to link buttons and shutters/blastdoors via VV
Process:
Spawn shutter
spawn door button
VV button and shutter ID to a number i.e. 151
Press button. Nothing. | test | button door id not linking during game i can t seem to link buttons and shutters blastdoors via vv process spawn shutter spawn door button vv button and shutter id to a number i e press button nothing | 1 |
285,179 | 24,648,213,002 | IssuesEvent | 2022-10-17 16:22:29 | Slimefun/Slimefun4 | https://api.github.com/repos/Slimefun/Slimefun4 | opened | Changing SF generators to vanilla blocks | 🐞 Bug Report 🎯 Needs testing | ### ❗ Checklist
- [X] I am using the official english version of Slimefun and did not modify the jar.
- [X] I am using an up to date "DEV" (not "RC") version of Slimefun.
- [X] I am aware that issues related to Slimefun addons need to be reported on their bug trackers and not here.
- [X] I searched for similar open issues and could not find an existing bug report on this.
### 📍 Description
Hello, there's a bug on our server that probably affects Slimefun in general, that's why I'm posting it here and not in some special room. The bug is that SF electricity generators are changing to vanilla items, but all items can change this way based on the code from the error I'm posting below.
Please fix this as soon as possible, this is a very serious bug as players may start to abuse it and ask us to replace a vanilla block with an op generator, but the block was never a generator.
### 📑 Reproduction Steps
We don't know, it happens randomly.
### 💡 Expected Behavior
Bug fix.
### 📷 Screenshots / Videos
_No response_
### 📜 Server Log
https://pastebin.com/wbGr8thN
### 📂 `/error-reports/` folder
_No response_
### 💻 Server Software
Purpur
### 🎮 Minecraft Version
1.19.x
### ⭐ Slimefun version
Slimefun vDEV - 1034 (git 7b9d769b)
### 🧭 Other plugins
_No response_ | 1.0 | Changing SF generators to vanilla blocks | test | 1 |
181,816 | 14,075,205,578 | IssuesEvent | 2020-11-04 08:41:20 | mautic/mautic | https://api.github.com/repos/mautic/mautic | closed | Database columns referenced using camel case instead of snake case | T1 bug hacktoberfest mautic-3 ready-to-test | | Q | A
| --- | ---
| Mautic version | 2.15.3
| PHP version | 7.1
### Steps to reproduce
1. Go to `/s/companies` and search for `is:mine`
2. This results in a 500 error because it is using the field name `createdAt` instead of `created_at`. The same is true for the other filters as well, i.e. `is:published`
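The fix for filters like these is to translate the ORM attribute name into its snake_case column before it reaches the query builder. A minimal sketch of that conversion (illustrative only; `camel_to_snake` is a hypothetical helper, not Mautic's actual code):

```python
import re

def camel_to_snake(name: str) -> str:
    """Map an ORM attribute name like 'createdAt' to its DB column 'created_at'."""
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

print(camel_to_snake("createdAt"))    # created_at
print(camel_to_snake("isPublished"))  # is_published
```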
### Log errors
`mautic.CRITICAL: Uncaught PHP Exception Doctrine\DBAL\Exception\InvalidFieldNameException: "An exception occurred while executing 'SELECT COUNT(comp.id) as count FROM companies comp WHERE comp.createdBy = ?' with params [2]: SQLSTATE[42S22]: Column not found: 1054 Unknown column 'comp.createdBy' in 'where clause'"` | 1.0 | test | 1 |
281,983 | 24,442,480,472 | IssuesEvent | 2022-10-06 15:26:28 | apache/beam | https://api.github.com/repos/apache/beam | closed | [Bug]: Py37 PostCommit Failure test apache_beam.io.gcp.bigquery_file_loads_test.BigQueryFileLoadsIT.test_bqfl_streaming | io gcp P1 bug test-failures | ### What happened?
Has been failing since Sep 14, from https://ci-beam.apache.org/job/beam_PostCommit_Python37/5719/. The test was skipped in Py38 PostCommit and only run on Py37 PostCommit.
Error message:
```
Error Message
google.api_core.exceptions.NotFound: 404 Not found: Table apache-beam-testing:python_bq_file_loads_1663183712232.output_table_ints was not found in location US
Location: US
Job ID: baf2a9a6-ecd8-49b7-a6ef-9965c7fd0cd9
```
Caused by #23012
### Issue Priority
Priority: 1
### Issue Component
Component: test-failures | 1.0 | test | 1 |
212,737 | 16,478,073,758 | IssuesEvent | 2021-05-24 08:21:54 | opencv/opencv | https://api.github.com/repos/opencv/opencv | closed | [Enchancement] Expose methods in phasecorr.cpp | category: imgproc feature pr: needs test | I've recently discovered that there is a mulSpectrums() method, but no divSpectrums() method publicly available. I did a little research and found that the method is actually present in OpenCV, but only internally - https://github.com/opencv/opencv/blob/94f00cf09694c38407cce23ed9fea288ab622b0f/modules/imgproc/src/phasecorr.cpp#L161
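Until `divSpectrums()` is exposed, a NumPy workaround can stand in for it. The sketch below assumes complex-valued spectra (e.g. from `np.fft.fft2`); `div_spectrums` is a hypothetical helper, not part of the OpenCV API:

```python
import numpy as np

def div_spectrums(a: np.ndarray, b: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Element-wise division of two complex DFT spectra, a / b.

    Computed as a * conj(b) / (|b|**2 + eps) so zero-magnitude bins
    do not blow up -- one common way to implement spectrum division.
    """
    denom = b.real ** 2 + b.imag ** 2 + eps
    return (a * np.conj(b)) / denom

rng = np.random.default_rng(0)
spec = np.fft.fft2(rng.normal(size=(8, 8)))
# Dividing a spectrum by itself gives ~1 wherever the magnitude is non-zero.
ratio = div_spectrums(spec, spec)
```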
Also, the file includes other very helpful methods, so I think it would be cool to expose these methods. | 1.0 | test | 1 |
341,190 | 30,571,743,170 | IssuesEvent | 2023-07-20 23:11:17 | unifyai/ivy | https://api.github.com/repos/unifyai/ivy | closed | Fix decompositions.test_numpy_svd | NumPy Frontend Sub Task Failing Test | | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5616626394"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5616626394"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5616626394"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5616626394"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5616626394"><img src=https://img.shields.io/badge/-success-success></a>
| 1.0 | test | 1 |
317,722 | 27,260,957,766 | IssuesEvent | 2023-02-22 14:52:10 | dask/distributed | https://api.github.com/repos/dask/distributed | opened | Flaky test `distributed.dashboard.tests.test_scheduler_bokeh.py::test_shuffling` | flaky test shuffle | `distributed.dashboard.tests.test_scheduler_bokeh.py::test_shuffling` seems to be failing occasionally.

This appears due to an internal shuffle extension error
```
File "/Users/runner/work/distributed/distributed/distributed/shuffle/_worker_extension.py", line 439, in _barrier
shuffle = await self._get_shuffle_run(shuffle_id, run_id)
File "/Users/runner/work/distributed/distributed/distributed/shuffle/_worker_extension.py", line 466, in _get_shuffle_run
shuffle = await self._refresh_shuffle(
File "/Users/runner/work/distributed/distributed/distributed/shuffle/_worker_extension.py", line 542, in _refresh_shuffle
result = await self.worker.scheduler.shuffle_get(
File "/Users/runner/work/distributed/distributed/distributed/core.py", line 1221, in send_recv_from_rpc
return await send_recv(comm=comm, op=key, **kwargs)
File "/Users/runner/work/distributed/distributed/distributed/core.py", line 1011, in send_recv
raise exc.with_traceback(tb)
File "/Users/runner/work/distributed/distributed/distributed/core.py", line 818, in _handle_comm
result = handler(**msg)
File "/Users/runner/work/distributed/distributed/distributed/shuffle/_scheduler_extension.py", line 89, in get
assert schema is not None
```
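As an aside, the failing line is a bare `assert`, which crosses the RPC boundary as an `AssertionError` with no message — part of why failures like this are hard to diagnose. A generic illustration (plain Python, not the `distributed` codebase):

```python
def shuffle_get(schema=None):
    # A bare assert gives the remote caller nothing to go on.
    assert schema is not None

bare_message = None
try:
    shuffle_get()
except AssertionError as exc:
    bare_message = str(exc)  # '' -- empty message

def shuffle_get_verbose(schema=None):
    # An explicit exception says what actually went wrong.
    if schema is None:
        raise RuntimeError("no schema registered for this shuffle yet")

verbose_message = None
try:
    shuffle_get_verbose()
except RuntimeError as exc:
    verbose_message = str(exc)
```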
The test failure looks like a deadlock but the traceback would indicate an ordinary compute failure. @hendrikmakait any idea what might be going on here? | 1.0 | test | 1 |
68,710 | 7,108,007,216 | IssuesEvent | 2018-01-16 22:06:01 | saltstack/salt | https://api.github.com/repos/saltstack/salt | opened | Add Auto Test for New salt-ssh / master/ minion Feature | Needs Testcase | Add auto test for https://github.com/saltstack/salt/issues/40997
Dependencies: Kitchen-salt test runner needs to be finished | 1.0 | test | 1 |
755,221 | 26,421,593,547 | IssuesEvent | 2023-01-13 21:07:09 | dcs-retribution/dcs-retribution | https://api.github.com/repos/dcs-retribution/dcs-retribution | closed | Attempted to load save, rec'd error regarding package without flight | Bug Priority Medium | ### Affected versions
Development build
### Test/Development build
https://github.com/dcs-retribution/dcs-retribution/actions/runs/3814230005
### Description
At load of this save, you get the message:

The save will not load.
Is there a possibility of doing some sort of validation before the save write? Most likely there is an opportunity to overhaul the pilot/squadron/flight/package dynamic.
### Save game and other files
[Gladiators Campaign Turn 1.zip](https://github.com/dcs-retribution/dcs-retribution/files/10333212/Gladiators.Campaign.Turn.1.zip)
| 1.0 | non_test | 0 |
177,549 | 13,728,562,532 | IssuesEvent | 2020-10-04 12:15:47 | mikelsanabria11/iwvg-devops-mikel-sanabria2 | https://api.github.com/repos/mikelsanabria11/iwvg-devops-mikel-sanabria2 | closed | Method: findFractionAdditionByUserId | points:0.25 points:0.5 type:enhancement type:test | Implement Search method 1: findFractionAdditionByUserId & Test | 1.0 | test | 1 |
1,337 | 9,935,040,130 | IssuesEvent | 2019-07-02 15:39:21 | exercism/exercism | https://api.github.com/repos/exercism/exercism | closed | Troubleshoot maintainer-sync bot | area/automation type/operations | We deployed the maintainer-sync bot, and it is not taking the expected actions.
Test repo: https://github.com/exercism/test-maintainer-sync
To test the logic (sorry, this has to be done via PRs for now. The bot needs to be updated to respond to 'push' instead of 'pull-request.create'):
- create a branch
- edit the maintainer config file https://github.com/exercism/test-maintainer-sync/blob/master/config/maintainers.json
- submit a pull request
- merge the pull request
You can change the log-level by editing the `.env` config file (`debug` or `trace`; I think `DEBUG` should be enough).
We should double-check the ENV vars in `.env`.
@iHiD and I should both have access to the "recent deliveries" page, which sometimes helps debug: https://github.com/organizations/exercism/settings/apps/maintainer-sync/advanced
| 1.0 | non_test | 0 |
198,859 | 6,978,455,490 | IssuesEvent | 2017-12-12 17:34:23 | Microsoft/PTVS | https://api.github.com/repos/Microsoft/PTVS | closed | Python project reference does not add search path | area:Project System bug priority:P1 | When you add a reference to a Python project, it doesn't do anything.
We should infer a search path to the project root. | 1.0 | Python project reference does not add search path - When you add a reference to a Python project, it doesn't do anything.
We should infer a search path to the project root. | non_test | python project reference does not add search path when you add a reference to a python project it doesn t do anything we should infer a search path to the project root | 0 |
171,769 | 13,247,820,931 | IssuesEvent | 2020-08-19 17:54:00 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | roachtest: django failed | C-test-failure O-roachtest O-robot branch-provisional_202008151325_v19.2.10 | [(roachtest).django failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2185654&tab=buildLog) on [provisional_202008151325_v19.2.10@13350e2bebf99997730262bce26b843ed3ecb1be](https://github.com/cockroachdb/cockroach/commits/13350e2bebf99997730262bce26b843ed3ecb1be):
```
The test failed on branch=provisional_202008151325_v19.2.10, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/django/run_1
orm_helpers.go:218,orm_helpers.go:144,django.go:204,django.go:216,test_runner.go:754:
Tests run on Cockroach v19.2.9-53-g13350e2
Tests run against django cockroach-3.0.x
7836 Total Tests Run
7835 tests passed
1 test failed
685 tests skipped
20 tests ignored
0 tests passed unexpectedly
1 test failed unexpectedly
0 tests expected failed but skipped
0 tests expected failed but not run
---
--- FAIL: inspectdb.tests.InspectDBTestCase.test_attribute_name_not_python_keyword (unexpected)
--- SKIP: backends.mysql.test_creation.DatabaseCreationTests.test_create_test_db_unexpected_error due to MySQL tests (expected)
For a full summary look at the django artifacts
An updated blocklist (djangoBlocklist19_2) is available in the artifacts' django log
```
<details><summary>More</summary><p>
Artifacts: [/django](https://teamcity.cockroachdb.com/viewLog.html?buildId=2185654&tab=artifacts#/django)
Related:
- #52762 roachtest: django failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #52761 roachtest: django failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-20.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-20.1) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Adjango.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
| 2.0 | test | 1 |
169,617 | 14,225,717,772 | IssuesEvent | 2020-11-17 21:42:52 | fjank/github-workflow-kata | https://api.github.com/repos/fjank/github-workflow-kata | closed | Add code style checking to make sure the code is nice'n'dandy | documentation | Suggestions:
Checkstyle, PMD and Spotbugs. Perhaps investigate Google's and Facebook's linters?
As a first step, only add the checks, but do not fail the build, nor fix any warnings.
Also make sure to add badges to README if available.
Most important, make the resulting reports available online. Either as a package (link to reports), or even better, publish to github pages. | 1.0 | non_test | 0 |
14,568 | 3,410,686,862 | IssuesEvent | 2015-12-04 21:23:18 | TedStudley/mc-mini | https://api.github.com/repos/TedStudley/mc-mini | opened | Add Unit Tests for ParamParser | testing | Add explicit unit tests for the logic in `parser.cpp` and `parser.h` | 1.0 | test | 1 |
79,444 | 22,768,444,404 | IssuesEvent | 2022-07-08 07:40:19 | parca-dev/parca-agent | https://api.github.com/repos/parca-dev/parca-agent | closed | chore: Use distroless base images | area/build-pipeline chore | https://github.com/GoogleContainerTools/distroless/blob/main/base/README.md
We can use distroless images as base and we can simplify our build docker build stages. What do you think?
For the ones who don't have context, in practice, we can remove the following and their dependencies from our docker builds:
https://github.com/parca-dev/parca-agent/blob/ba3b7410f79a7820c58b9b66c200a5bbe82474b5/Dockerfile#L65-L67 | 1.0 | non_test | 0 |
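For context, a distroless build usually collapses to a two-stage Dockerfile along these lines. This is an illustrative sketch only — the image tags, paths, and `CGO_ENABLED=0` assumption are mine, not the actual parca-agent build:

```dockerfile
# Stage 1: build a static binary with the full Go toolchain.
FROM golang:1.18 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/parca-agent ./

# Stage 2: ship only the binary on a distroless base -- no shell,
# no package manager, so the runtime-dependency stages go away.
FROM gcr.io/distroless/static-debian11
COPY --from=build /out/parca-agent /parca-agent
ENTRYPOINT ["/parca-agent"]
```

If the binary needs libc (e.g. because of cgo), `gcr.io/distroless/base-debian11` would be the base instead of `static`.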
317,537 | 27,243,933,670 | IssuesEvent | 2023-02-21 23:22:34 | Exiled-Team/EXILED | https://api.github.com/repos/Exiled-Team/EXILED | closed | [BUG] Error thrown in the console when a player leaves the server | requires-testing | **Describe the bug**
When a player leaves the server, an error is thrown in the console. This does not result in any other issues and has no gameplay impact, but it's an error in the console nonetheless.
**To Reproduce**
Steps to reproduce the behavior:
1. Join the server
2. Leave the server
3. Check the console
**Expected behavior**
No error in log. lol
**Server logs**
```
[2023-01-13 13:09:06.722 -06:00] Player tpd1864blake (76561198081841934@steam) (2) connected with the IP: 127.0.0.1
[2023-01-13 13:09:15.919 -06:00] New round has been started.
[2023-01-13 13:13:37.636 -06:00] Refreshed public key of central server - key hash not changed.
[2023-01-13 15:04:13.602 -06:00] Player tpd1864blake disconnected
[2023-01-13 15:04:13.633 -06:00] Server has entered the idle mode.
[2023-01-13 15:04:28.607 -06:00] [ERROR] [Exiled.API] Scale error: System.NullReferenceException
at (wrapper managed-to-native) UnityEngine.Component.get_transform(UnityEngine.Component)
at Exiled.API.Features.Player.set_Scale (UnityEngine.Vector3 value) [0x00006] in <643047ceaa2d4678ad61a6d9003e4e13>:0
```
**EXILED Version ("latest" is not a version):**
6.0.0-beta.21
**Results of `show plugins` command in console:**
```
[2023-01-13 15:13:55.210 -06:00] Total number of plugins: 7
Enabled plugins: 7
Disabled plugins: 0
Exiled.Events:
- Author: Exiled Team
- Version: 6.0.0.0
- Required Exiled Version: 6.0.0.0
- Prefix: exiled_events
- Priority: Highest
Skull Island:
- Author: tpd1864blake
- Version: 1.0.0.0
- Required Exiled Version: 6.0.0.0
- Prefix: skull_island
- Priority: Medium
Exiled.CustomItems:
- Author: Exiled Team
- Version: 6.0.0.0
- Required Exiled Version: 6.0.0.0
- Prefix: exiled_custom_items
- Priority: Medium
Exiled.CustomRoles:
- Author: Exiled Team
- Version: 6.0.0.0
- Required Exiled Version: 6.0.0.0
- Prefix: exiled_custom_roles
- Priority: Medium
Exiled.CreditTags:
- Author: Babyboucher20 & iRebbok & Exiled Team
- Version: 6.0.0.0
- Required Exiled Version: 6.0.0.0
- Prefix: exiled_credits
- Priority: Medium
Exiled.Permissions:
- Author: Exiled Team
- Version: 6.0.0.0
- Required Exiled Version: 6.0.0.0
- Prefix: exiled_permissions
- Priority: Medium
Exiled.Updater:
- Author: Exiled.Updater
- Version: 3.1.1.0
- Required Exiled Version: 6.0.0.0
- Prefix: exiled_updater
- Priority: Medium
```
**Additional context**
I have a plugin that changes the scale of players through different methods. To revert these changes on dying, respawning, etc. I have it automatically set your scale to 1,1,1 when the OnSpawned event is called. I believe that the game changes your role to Spectator, then changes it again to None when a player leaves the server, based on some Log.Info() outputs I've observed. Keep in mind that my plugin tries to detect when the player has left through `Player == null`, but somehow it still slips by. I believe that when I attempt to modify the scale of the player as they are leaving the server, Exiled fails to realize that the player has left and attempts to go through with it, and something with some delay or whatever results in Exiled attempting to change the scale of the player after the player object has been removed.
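The hypothesis above describes a classic check-then-use race: the `Player == null` guard passes, the backing object is torn down, and the deferred scale write then touches a destroyed component. The same shape of bug reduced to generic Python (illustrative only, not the plugin's code):

```python
class Handle:
    """Toy stand-in for a player whose backing engine object can be destroyed."""
    def __init__(self):
        self.alive = True

    def set_scale(self, value):
        if not self.alive:
            raise RuntimeError("backing object already destroyed")
        self.scale = value

player = Handle()

# The null/liveness guard passes at check time...
guard_passed = player is not None and player.alive

# ...but the object is destroyed before the deferred write runs
# (in the report: the player disconnects between check and use).
player.alive = False

error = None
try:
    player.set_scale((1, 1, 1))
except RuntimeError as exc:
    error = str(exc)
```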
| 1.0 | [BUG] Error thrown in the console when a player leaves the server - **Describe the bug**
When a player leaves the server, an error is thrown in the console. This does not result in any other issues and has no gameplay impact, but it's an error in the console nonetheless.
**To Reproduce**
Steps to reproduce the behavior:
1. Join the server
2. Leave the server
3. Check the console
**Expected behavior**
No error in log. lol
**Server logs**
```
[2023-01-13 13:09:06.722 -06:00] Player tpd1864blake (76561198081841934@steam) (2) connected with the IP: 127.0.0.1
[2023-01-13 13:09:15.919 -06:00] New round has been started.
[2023-01-13 13:13:37.636 -06:00] Refreshed public key of central server - key hash not changed.
[2023-01-13 15:04:13.602 -06:00] Player tpd1864blake disconnected
[2023-01-13 15:04:13.633 -06:00] Server has entered the idle mode.
[2023-01-13 15:04:28.607 -06:00] [ERROR] [Exiled.API] Scale error: System.NullReferenceException
at (wrapper managed-to-native) UnityEngine.Component.get_transform(UnityEngine.Component)
at Exiled.API.Features.Player.set_Scale (UnityEngine.Vector3 value) [0x00006] in <643047ceaa2d4678ad61a6d9003e4e13>:0
```
**EXILED Version ("latest" is not a version):**
6.0.0-beta.21
**Results of `show plugins` command in console:**
```
[2023-01-13 15:13:55.210 -06:00] Total number of plugins: 7
Enabled plugins: 7
Disabled plugins: 0
Exiled.Events:
- Author: Exiled Team
- Version: 6.0.0.0
- Required Exiled Version: 6.0.0.0
- Prefix: exiled_events
- Priority: Highest
Skull Island:
- Author: tpd1864blake
- Version: 1.0.0.0
- Required Exiled Version: 6.0.0.0
- Prefix: skull_island
- Priority: Medium
Exiled.CustomItems:
- Author: Exiled Team
- Version: 6.0.0.0
- Required Exiled Version: 6.0.0.0
- Prefix: exiled_custom_items
- Priority: Medium
Exiled.CustomRoles:
- Author: Exiled Team
- Version: 6.0.0.0
- Required Exiled Version: 6.0.0.0
- Prefix: exiled_custom_roles
- Priority: Medium
Exiled.CreditTags:
- Author: Babyboucher20 & iRebbok & Exiled Team
- Version: 6.0.0.0
- Required Exiled Version: 6.0.0.0
- Prefix: exiled_credits
- Priority: Medium
Exiled.Permissions:
- Author: Exiled Team
- Version: 6.0.0.0
- Required Exiled Version: 6.0.0.0
- Prefix: exiled_permissions
- Priority: Medium
Exiled.Updater:
- Author: Exiled.Updater
- Version: 3.1.1.0
- Required Exiled Version: 6.0.0.0
- Prefix: exiled_updater
- Priority: Medium
```
**Additional context**
I have a plugin that changes the scale of players through different methods. To revert these changes on dying, respawning, etc. I have it automatically set your scale to 1,1,1 when the OnSpawned event is called. I believe that the game changes your role to Spectator, then changes it again to None when a player leaves the server, based on some Log.Info() outputs I've observed. Keep in mind that my plugin tries to detect when the player has left through `Player == null`, but somehow it still slips by. I believe that when I attempt to modify the scale of the player as they are leaving the server, Exiled fails to realize that the player has left and attempts to go through with it, and something with some delay or whatever results in Exiled attempting to change the scale of the player after the player object has been removed.
| test | error thrown in the console when a player leaves the server describe the bug when a player leaves the server an error is thrown in the console this does not result in any other issues and has no gameplay impact but it s an error in the console nonetheless to reproduce steps to reproduce the behavior join the server leave the server check the console expected behavior no error in log lol server logs player steam connected with the ip new round has been started refreshed public key of central server key hash not changed player disconnected server has entered the idle mode scale error system nullreferenceexception at wrapper managed to native unityengine component get transform unityengine component at exiled api features player set scale unityengine value in exiled version latest is not a version beta results of show plugins command in console total number of plugins enabled plugins disabled plugins exiled events author exiled team version required exiled version prefix exiled events priority highest skull island author version required exiled version prefix skull island priority medium exiled customitems author exiled team version required exiled version prefix exiled custom items priority medium exiled customroles author exiled team version required exiled version prefix exiled custom roles priority medium exiled credittags author irebbok exiled team version required exiled version prefix exiled credits priority medium exiled permissions author exiled team version required exiled version prefix exiled permissions priority medium exiled updater author exiled updater version required exiled version prefix exiled updater priority medium additional context i have a plugin that changes the scale of players through different methods to revert these changes on dying respawning etc i have it automatically set your scale to when the onspawned event is called i believe that the game changes your role to spectator then changes it again to none when a player leaves 
the server based on some log info outputs i ve observed keep in mind that my plugin tries to detect when the player has left through player null but somehow it still slips by i believe that when i attempt to modify the scale of the player as they are leaving the server exiled fails to realize that the player has left and attempts to go through with it and something with some delay or whatever results in exiled attempting to change the scale of the player after the player object has been removed | 1 |
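The race described in the row above — a scale reset scheduled while the player is leaving, firing after the underlying object is destroyed — can be sketched as follows. This is an illustrative Python model, not the Exiled C# API; every name here is made up.

```python
# Hypothetical model of a deferred reset that re-validates its target
# when it actually runs, instead of only when it was scheduled.

class Player:
    def __init__(self, name):
        self.name = name
        self.connected = True
        self.scale = (1.0, 1.0, 1.0)

    def set_scale(self, value):
        if not self.connected:
            # Mirrors the NullReferenceException: the underlying
            # game object no longer exists.
            raise RuntimeError("player object has been destroyed")
        self.scale = value

pending = []  # deferred actions, run at the end of the frame/tick

def schedule_scale_reset(player):
    # Checking `player is None` at scheduling time is not enough:
    # the player can disconnect between scheduling and execution.
    def action():
        if player is None or not player.connected:
            return  # re-check at execution time and bail out safely
        player.set_scale((1.0, 1.0, 1.0))
    pending.append(action)

def run_pending():
    while pending:
        pending.pop(0)()

p = Player("leaver")
schedule_scale_reset(p)
p.connected = False  # the player leaves before the action runs
run_pending()        # no exception: the action bails out safely
```

Re-checking validity at execution time, rather than only at scheduling time, is what closes the window the reporter describes.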
5,224 | 3,184,490,522 | IssuesEvent | 2015-09-27 12:28:13 | drozdik/Elephant | https://api.github.com/repos/drozdik/Elephant | opened | Make requirements for iteration issues | Code Doc | Make requirements for iteration issues:
#127
#126
#70
#69 | 1.0 | Make requirements for iteration issues - Make requirements for iteration issues:
#127
#126
#70
#69 | non_test | make requirements for iteration issues make requirements for iteration issues | 0 |
151,299 | 12,030,830,766 | IssuesEvent | 2020-04-13 08:15:59 | paritytech/substrate | https://api.github.com/repos/paritytech/substrate | closed | Test for process interruption generates untracked files | F4-tests 🎯 F8-enhancement 🎁 | After running the tests purge_chain_works.rs and running_the_node_and_interrupt.rs a few directories are created: interrupt_test and purge_chain_test.
There is nothing wrong with that but they should be ignored in the Git repository and not marked as untracked files. If we don't do that, someone WILL commit them by mistake one day.
Maybe a good solution would be to prefix all those data generated and then add the prefix to the .gitignore. Or maybe put all the data generated in a directory that we put in the .gitignore. Or generate the data in /tmp instead of inside the repository. | 1.0 | Test for process interruption generates untracked files - After running the tests purge_chain_works.rs and running_the_node_and_interrupt.rs a few directories are created: interrupt_test and purge_chain_test.
There is nothing wrong with that but they should be ignored in the Git repository and not marked as untracked files. If we don't do that, someone WILL commit them by mistake one day.
Maybe a good solution would be to prefix all those data generated and then add the prefix to the .gitignore. Or maybe put all the data generated in a directory that we put in the .gitignore. Or generate the data in /tmp instead of inside the repository. | test | test for process interruption generates untracked files after running the tests purge chain works rs and running the node and interrupt rs a few directories are created interrupt test and purge chain test there is nothing wrong with that but they should be ignored in the git repository and not marked as untracked files if we don t do that someone will commit them by mistake one day maybe a good solution would be to prefix all those data generated and then add the prefix to the gitignore or maybe put all the data generated in a directory that we put in the gitignore or generate the data in tmp instead of inside the repository | 1 |
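The third option in the row above — generating test data under the system temp directory instead of inside the repository — looks like this with Python's standard library. Substrate's tests are Rust, where the `tempfile` crate plays the same role; this sketch only illustrates the pattern, and the directory prefix is made up.

```python
import os
import tempfile

def run_with_scratch_dir(test_body):
    # Data is created under the system temp location and removed when
    # the context exits, so nothing is ever left inside the repository.
    with tempfile.TemporaryDirectory(prefix="interrupt_test_") as scratch:
        test_body(scratch)
        path = scratch
    return path  # returned only so we can show the directory is gone

path = run_with_scratch_dir(lambda d: open(os.path.join(d, "db"), "w").close())
print(os.path.exists(path))  # → False: cleaned up automatically
```

A shared prefix also makes the alternative fix easy: a single `interrupt_test_*` line in `.gitignore` covers every directory the tests create.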
126,209 | 12,288,366,358 | IssuesEvent | 2020-05-09 16:24:53 | reach4help/reach4help | https://api.github.com/repos/reach4help/reach4help | closed | Build web-client documentation | documentation frontend | - [ ] Modules
- [x] Project Structure
- [x] Component & Containers
- [x] Pages & Routes
- [x] Redux
- [x] Structure
- [x] Async calls
- [x] Build a quick guide to contribution
<!--zenhub info: do not edit anything after this line, it will be automatically changed-->
--------
### [ZenHub Information](https://app.zenhub.com/workspaces/reach4help-5e8dcbfb14ac087f410cbabb/issues/reach4help/reach4help/351)
*This information is updated automatically. To modify it, please use ZenHub.*
<!--zenhub info end-->
| 1.0 | Build web-client documentation - - [ ] Modules
- [x] Project Structure
- [x] Component & Containers
- [x] Pages & Routes
- [x] Redux
- [x] Structure
- [x] Async calls
- [x] Build a quick guide to contribution
<!--zenhub info: do not edit anything after this line, it will be automatically changed-->
--------
### [ZenHub Information](https://app.zenhub.com/workspaces/reach4help-5e8dcbfb14ac087f410cbabb/issues/reach4help/reach4help/351)
*This information is updated automatically. To modify it, please use ZenHub.*
<!--zenhub info end-->
| non_test | build web client documentation modules project structure component containers pages routes redux structure async calls build a quick guide to contribution this information is updated automatically to modify it please use zenhub | 0 |
26,825 | 4,243,377,909 | IssuesEvent | 2016-07-06 22:47:12 | Dan12/starter-ruby-bot | https://api.github.com/repos/Dan12/starter-ruby-bot | opened | TEST: EDIT 3 3 3 | testing | Environment: `development`, Database dump: `b111`
--
## Steps to reproduce:
1. sdf asf sdaf
2. sadf asdf sadf
### Expected
asdfasdfa
### Actual
hhhhssss
## Priority: normal | 1.0 | TEST: EDIT 3 3 3 - Environment: `development`, Database dump: `b111`
--
## Steps to reproduce:
1. sdf asf sdaf
2. sadf asdf sadf
### Expected
asdfasdfa
### Actual
hhhhssss
## Priority: normal | test | test edit environment development database dump steps to reproduce sdf asf sdaf sadf asdf sadf expected asdfasdfa actual hhhhssss priority normal | 1 |
108,836 | 9,333,750,649 | IssuesEvent | 2019-03-28 15:00:06 | phetsims/QA | https://api.github.com/repos/phetsims/QA | opened | Dev Test: Ohms Law 1.4.0-dev.29 | QA:a11y QA:dev-test | As a follow up to #289, it would be good to verify that the remaining issues found have been fixed before the next release. Sorry it has been a while since the last test. Please verify that these issues have been fixed, and test the simulation to make sure that no new issues have crept in.
- [ ] [Performance on iPad Air 2](https://github.com/phetsims/ohms-law/issues/132) (Can you please compare against performance in this version? [dev.26](https://phet-dev.colorado.edu/html/ohms-law/1.4.0-dev.26/phet/ohms-law_en_phet.html))
- [ ] [Mobile VO allows gesture input](https://github.com/phetsims/ohms-law/issues/131) | 1.0 | Dev Test: Ohms Law 1.4.0-dev.29 - As a follow up to #289, it would be good to verify that the remaining issues found have been fixed before the next release. Sorry it has been a while since the last test. Please verify that these issues have been fixed, and test the simulation to make sure that no new issues have crept in.
- [ ] [Performance on iPad Air 2](https://github.com/phetsims/ohms-law/issues/132) (Can you please compare against performance in this version? [dev.26](https://phet-dev.colorado.edu/html/ohms-law/1.4.0-dev.26/phet/ohms-law_en_phet.html))
- [ ] [Mobile VO allows gesture input](https://github.com/phetsims/ohms-law/issues/131) | test | dev test ohms law dev as a follow up to it would be good to verify that the remaining issues found have been fixed before the next release sorry it has been a while since the last test please verify that these issues have been fixed and test the simulation to make sure that no new issues have crept in can you please compare against performance in this version | 1 |
273,961 | 23,798,908,449 | IssuesEvent | 2022-09-03 01:34:56 | rubyforgood/casa | https://api.github.com/repos/rubyforgood/casa | closed | Fix Learning hour flaking test | Type: Bug Help Wanted testing | ### Current Behavior
In `spec/services/learning_hours_export_csv_service_spec.rb`
when the name of the learning hour is set to `Such, Such Were the Joys` the test will fail because it's not expecting quotes. Most likely due to the comma.
Also please fix "learning" misspelled as "larning"
### How to Replicate
in your local casa, run `bundle exec rspec spec/services/learning_hours_export_csv_service_spec.rb` from the repo root
### Questions? Join Slack!
We highly recommend that you join us in slack https://rubyforgood.herokuapp.com/ #casa channel to ask questions quickly and hear about office hours (currently Tuesday 6-8pm Pacific), stakeholder news, and upcoming new issues.
| 1.0 | Fix Learning hour flaking test - ### Current Behavior
In `spec/services/learning_hours_export_csv_service_spec.rb`
when the name of the learning hour is set to `Such, Such Were the Joys` the test will fail because it's not expecting quotes. Most likely due to the comma.
Also please fix "learning" misspelled as "larning"
### How to Replicate
in your local casa, run `bundle exec rspec spec/services/learning_hours_export_csv_service_spec.rb` from the repo root
### Questions? Join Slack!
We highly recommend that you join us in slack https://rubyforgood.herokuapp.com/ #casa channel to ask questions quickly and hear about office hours (currently Tuesday 6-8pm Pacific), stakeholder news, and upcoming new issues.
| test | fix learning hour flaking test current behavior in spec services learning hours export csv service spec rb when the name of the learning hour is set to such such were the joys the test will fail because it s not expecting quotes most likely due to the comma also please fix learning misspelled as larning how to replicate in your local casa run bundle exec rspec spec services learning hours export csv service spec rb from the repo root questions join slack we highly recommend that you join us in slack casa channel to ask questions quickly and hear about office hours currently tuesday pacific stakeholder news and upcoming new issues | 1 |
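The flake described above comes from standard CSV quoting: any field containing the delimiter is wrapped in quotes, so a learning-hour name with a comma changes the expected output. The casa suite is Ruby/RSpec, and Ruby's CSV library quotes the same way; a quick Python illustration:

```python
import csv
import io

def to_csv_row(fields):
    # Default QUOTE_MINIMAL behaviour: quote only fields that contain
    # the delimiter, a quote character, or a newline.
    buf = io.StringIO()
    csv.writer(buf).writerow(fields)
    return buf.getvalue().strip("\r\n")

print(to_csv_row(["Such, Such Were the Joys", "1.5"]))
# → "Such, Such Were the Joys",1.5
```

A robust spec should therefore build its expected string with the same CSV library rather than by joining fields with commas by hand.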
219,262 | 24,466,822,931 | IssuesEvent | 2022-10-07 15:43:22 | NixOS/nixpkgs | https://api.github.com/repos/NixOS/nixpkgs | closed | Vulnerability roundup 109: uclibc-ng-1.0.38: 1 advisory [9.6] | 1.severity: security | [search](https://search.nix.gsc.io/?q=uclibc-ng&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=uclibc-ng+in%3Apath&type=Code)
* [ ] [CVE-2021-43523](https://nvd.nist.gov/vuln/detail/CVE-2021-43523) CVSSv3=9.6 (nixos-21.05)
## CVE details
### CVE-2021-43523
In uClibc and uClibc-ng before 1.0.39, incorrect handling of special characters in domain names returned by DNS servers via gethostbyname, getaddrinfo, gethostbyaddr, and getnameinfo can lead to output of wrong hostnames (leading to domain hijacking) or injection into applications (leading to remote code execution, XSS, applications crashes, etc.). In other words, a validation step, which is expected in any stub resolver, does not occur.
-----
Scanned versions: nixos-21.05: 3b422991781.
Cc @rasendubi
| True | Vulnerability roundup 109: uclibc-ng-1.0.38: 1 advisory [9.6] - [search](https://search.nix.gsc.io/?q=uclibc-ng&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=uclibc-ng+in%3Apath&type=Code)
* [ ] [CVE-2021-43523](https://nvd.nist.gov/vuln/detail/CVE-2021-43523) CVSSv3=9.6 (nixos-21.05)
## CVE details
### CVE-2021-43523
In uClibc and uClibc-ng before 1.0.39, incorrect handling of special characters in domain names returned by DNS servers via gethostbyname, getaddrinfo, gethostbyaddr, and getnameinfo can lead to output of wrong hostnames (leading to domain hijacking) or injection into applications (leading to remote code execution, XSS, applications crashes, etc.). In other words, a validation step, which is expected in any stub resolver, does not occur.
-----
Scanned versions: nixos-21.05: 3b422991781.
Cc @rasendubi
| non_test | vulnerability roundup uclibc ng advisory nixos cve details cve in uclibc and uclibc ng before incorrect handling of special characters in domain names returned by dns servers via gethostbyname getaddrinfo gethostbyaddr and getnameinfo can lead to output of wrong hostnames leading to domain hijacking or injection into applications leading to remote code execution xss applications crashes etc in other words a validation step which is expected in any stub resolver does not occur scanned versions nixos cc rasendubi | 0 |
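The advisory above boils down to a stub resolver skipping hostname validation. A minimal sketch of the missing check — illustrative only, not uClibc-ng's actual patch — rejects names containing characters outside the RFC 952/1123 label alphabet before they are handed to applications:

```python
import re

# One DNS label: alphanumeric, optional hyphens inside, max 63 chars.
LABEL = re.compile(r"^[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?$")

def is_valid_hostname(name):
    if not name or len(name) > 253:
        return False
    # Every dot-separated label must match; anything else (spaces,
    # angle brackets, control characters) is rejected.
    return all(LABEL.fullmatch(label) for label in name.rstrip(".").split("."))

print(is_valid_hostname("example.com"))       # → True
print(is_valid_hostname("evil.com<script>"))  # → False
```

Without such a filter, a malicious DNS response can smuggle markup or shell metacharacters into whatever consumes the resolved name.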
818,210 | 30,678,495,651 | IssuesEvent | 2023-07-26 07:32:18 | dumpus-app/dumpus-app | https://api.github.com/repos/dumpus-app/dumpus-app | closed | Most used bot list is empty | bug enhancement help wanted good first issue low priority | As in the title the most used bot list is shown empty probably because there were no bots on the server or none has been used.
Probably should hide this if we meet those two cases | 1.0 | Most used bot list is empty - As in the title the most used bot list is shown empty probably because there were no bots on the server or none has been used.
Probably should hide this if we meet those two cases | non_test | most used bot list is empty as in the title the most used bot list is shown empty probably because there were no bots on the server or none has been used probably should hide this if we meet those two cases | 0 |
194,600 | 14,681,911,761 | IssuesEvent | 2020-12-31 14:44:08 | mapasculturais/mapasculturais | https://api.github.com/repos/mapasculturais/mapasculturais | closed | PROJECT ENTITY :: publish-results button causes confusion when the form is incomplete | BACKEND complexidade:BAIXA prioridade:MEDIA status:test-ready tipo:AJUSTE tipo:MELHORIA | After filling in an incomplete call (edital) form, clicking the publish-results button makes the system appear unresponsive, because no message is shown and the browser does not scroll to the fields flagged with red alert signs (exclamation marks). | 1.0 | PROJECT ENTITY :: publish-results button causes confusion when the form is incomplete - After filling in an incomplete call (edital) form, clicking the publish-results button makes the system appear unresponsive, because no message is shown and the browser does not scroll to the fields flagged with red alert signs (exclamation marks).
| test | project entity publish results button causes confusion when the form is incomplete after filling in an incomplete call form clicking the publish results button makes the system appear unresponsive because no message is shown and the browser does not scroll to the fields flagged with red alert signs exclamation marks | 1 |
> areckis » środa, 12 kwietnia 2017, 07:39
> Mapa skrzynek v3. "Ukryj skrzynki". W tej chwili włączenie opcji "znalezione" oraz "jeszcze nie znalezione" ukrywa wszystkie skrzynki wraz z własnymi, które i tak mają własny przycisk ukrycia. Może by odizolować ukrywanie znalezionych/nieznalezionych od własnych (bez względu na to czy były znalezione, czy nie)? | 1.0 | Mapv3 filter enhancement - from forum.oc.pl:
> areckis » środa, 12 kwietnia 2017, 07:39
> Mapa skrzynek v3. "Ukryj skrzynki". W tej chwili włączenie opcji "znalezione" oraz "jeszcze nie znalezione" ukrywa wszystkie skrzynki wraz z własnymi, które i tak mają własny przycisk ukrycia. Może by odizolować ukrywanie znalezionych/nieznalezionych od własnych (bez względu na to czy były znalezione, czy nie)? | non_test | filter enhancement from forum oc pl areckis » środa kwietnia mapa skrzynek ukryj skrzynki w tej chwili włączenie opcji znalezione oraz jeszcze nie znalezione ukrywa wszystkie skrzynki wraz z własnymi które i tak mają własny przycisk ukrycia może by odizolować ukrywanie znalezionych nieznalezionych od własnych bez względu na to czy były znalezione czy nie | 0 |
190,868 | 14,581,991,368 | IssuesEvent | 2020-12-18 11:37:01 | mozilla-mobile/firefox-ios | https://api.github.com/repos/mozilla-mobile/firefox-ios | closed | FXIOS-1330 ⁃ [XCUITest] ActivityStreamTest - testTopSitesRemoveAllExceptPinnedClearPrivateData | eng:intermittent-test | ### Bitrise Test Run:
Provide a Bitrise test run report link here showcasing the problem
https://addons-testing.bitrise.io/builds/637c33d4c09d1249/testreport/a1f9561e-ee9f-4b35-bffb-bc04f0012847/testsuite/0/testcases?status=failed
### Stacktrace:
### Build:
latest main
The test started by going to BrowserTab and we need to be sure the Homescreen is loaded before that.
Just by adding :
waitForExistence(app.cells["TopSitesCell"].cells.element(boundBy: 0), timeout: 3) in L#138 will solve the issue
┆Issue is synchronized with this [Jira Task](https://jira.mozilla.com/browse/FXIOS-1330)
| 1.0 | FXIOS-1330 ⁃ [XCUITest] ActivityStreamTest - testTopSitesRemoveAllExceptPinnedClearPrivateData - ### Bitrise Test Run:
Provide a Bitrise test run report link here showcasing the problem
https://addons-testing.bitrise.io/builds/637c33d4c09d1249/testreport/a1f9561e-ee9f-4b35-bffb-bc04f0012847/testsuite/0/testcases?status=failed
### Stacktrace:
### Build:
latest main
The test started by going to BrowserTab and we need to be sure the Homescreen is loaded before that.
Just by adding :
waitForExistence(app.cells["TopSitesCell"].cells.element(boundBy: 0), timeout: 3) in L#138 will solve the issue
┆Issue is synchronized with this [Jira Task](https://jira.mozilla.com/browse/FXIOS-1330)
| test | fxios ⁃ activitystreamtest testtopsitesremoveallexceptpinnedclearprivatedata bitrise test run provide a bitrise test run report link here showcasing the problem stacktrace build latest main the test started by going to browsertab and we need to be sure the homescreen is loaded before that just by adding waitforexistence app cells cells element boundby timeout in l will solve the issue ┆issue is synchronized with this | 1 |
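The one-line fix quoted in the row above — waiting for the `TopSitesCell` before touching the browser tab — is an instance of a general pattern: poll for a condition with a timeout instead of assuming the UI is ready. XCUITest itself is Swift; this Python helper and its names are illustrative only.

```python
import time

def wait_for(condition, timeout=3.0, interval=0.05):
    """Poll `condition` until it returns True or `timeout` elapses,
    the same idea as XCUITest's waitForExistence(timeout:)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return condition()  # one final check at the deadline

state = {"home_screen_loaded": False}
state["home_screen_loaded"] = True  # in a real test this flips asynchronously
print(wait_for(lambda: state["home_screen_loaded"], timeout=0.2))  # → True
```

Gating each step on such a wait, rather than a fixed sleep, is what removes the intermittency without slowing down the passing case.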
145,107 | 11,648,769,263 | IssuesEvent | 2020-03-01 22:39:46 | urapadmin/kiosk | https://api.github.com/repos/urapadmin/kiosk | closed | units are not always called units. | A: next! filemaker needs testing | According to type they can be called unit or area or trench or something else. So the exact name for the unit should be either type-dependent or even unit dependent? No that sounds extreme. | 1.0 | units are not always called units. - According to type they can be called unit or area or trench or something else. So the exact name for the unit should be either type-dependent or even unit dependent? No that sounds extreme. | test | units are not always called units according to type they can be called unit or area or trench or something else so the exact name for the unit should be either type dependent or even unit dependent no that sounds extreme | 1 |
205,030 | 23,296,034,396 | IssuesEvent | 2022-08-06 15:32:59 | turkdevops/grafana | https://api.github.com/repos/turkdevops/grafana | closed | CVE-2020-7754 (High) detected in npm-user-validate-1.0.0.tgz - autoclosed | security vulnerability | ## CVE-2020-7754 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>npm-user-validate-1.0.0.tgz</b></p></summary>
<p>User validations for npm</p>
<p>Library home page: <a href="https://registry.npmjs.org/npm-user-validate/-/npm-user-validate-1.0.0.tgz">https://registry.npmjs.org/npm-user-validate/-/npm-user-validate-1.0.0.tgz</a></p>
<p>
Dependency Hierarchy:
- npm-6.14.6.tgz (Root Library)
- :x: **npm-user-validate-1.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/grafana/commit/a1c271764655c7e3ff81126d5929b8dda6170bf4">a1c271764655c7e3ff81126d5929b8dda6170bf4</a></p>
<p>Found in base branch: <b>datasource-meta</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package npm-user-validate before 1.0.1. The regex that validates user emails took exponentially longer to process long input strings beginning with @ characters.
<p>Publish Date: 2020-10-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7754>CVE-2020-7754</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7754">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7754</a></p>
<p>Release Date: 2020-10-27</p>
<p>Fix Resolution (npm-user-validate): 1.0.1</p>
<p>Direct dependency fix Resolution (npm): 6.14.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-7754 (High) detected in npm-user-validate-1.0.0.tgz - autoclosed - ## CVE-2020-7754 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>npm-user-validate-1.0.0.tgz</b></p></summary>
<p>User validations for npm</p>
<p>Library home page: <a href="https://registry.npmjs.org/npm-user-validate/-/npm-user-validate-1.0.0.tgz">https://registry.npmjs.org/npm-user-validate/-/npm-user-validate-1.0.0.tgz</a></p>
<p>
Dependency Hierarchy:
- npm-6.14.6.tgz (Root Library)
- :x: **npm-user-validate-1.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/grafana/commit/a1c271764655c7e3ff81126d5929b8dda6170bf4">a1c271764655c7e3ff81126d5929b8dda6170bf4</a></p>
<p>Found in base branch: <b>datasource-meta</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package npm-user-validate before 1.0.1. The regex that validates user emails took exponentially longer to process long input strings beginning with @ characters.
<p>Publish Date: 2020-10-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7754>CVE-2020-7754</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7754">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7754</a></p>
<p>Release Date: 2020-10-27</p>
<p>Fix Resolution (npm-user-validate): 1.0.1</p>
<p>Direct dependency fix Resolution (npm): 6.14.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in npm user validate tgz autoclosed cve high severity vulnerability vulnerable library npm user validate tgz user validations for npm library home page a href dependency hierarchy npm tgz root library x npm user validate tgz vulnerable library found in head commit a href found in base branch datasource meta vulnerability details this affects the package npm user validate before the regex that validates user emails took exponentially longer to process long input strings beginning with characters publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution npm user validate direct dependency fix resolution npm step up your open source security game with mend | 0 |
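CVE-2020-7754 above is a ReDoS: the e-mail regex backtracks catastrophically on long strings beginning with `@`. A generic mitigation sketch — not the exact patch shipped in npm-user-validate 1.0.1 — bounds the input length and uses a backtracking-safe pattern:

```python
import re

# No nested quantifiers, so matching stays effectively linear.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
MAX_LEN = 254  # practical upper bound for an address per RFC 5321

def validate_email(addr):
    if len(addr) > MAX_LEN:
        return False  # reject oversized input before the regex ever runs
    return EMAIL_RE.fullmatch(addr) is not None

print(validate_email("user@example.com"))  # → True
print(validate_email("@" * 10000))         # → False, and returns instantly
```

The length cap alone defuses this class of bug even if the regex is later changed, which is why many validators apply both defenses.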
726,287 | 24,993,800,560 | IssuesEvent | 2022-11-02 21:23:10 | enjoythecode/scrum-wizards-cs321 | https://api.github.com/repos/enjoythecode/scrum-wizards-cs321 | opened | FRONT-END: Make login form text visible | medium priority weight-light | Currently, input fields are gray when inputting, which is difficult to see. The color should be changed to improve contrast. | 1.0 | FRONT-END: Make login form text visible - Currently, input fields are gray when inputting, which is difficult to see. The color should be changed to improve contrast. | non_test | front end make login form text visible currently input fields are gray when inputting which is difficult to see the color should be changed to improve contrast | 0 |
744,518 | 25,946,629,315 | IssuesEvent | 2022-12-17 02:50:36 | restarone/violet_rails | https://api.github.com/repos/restarone/violet_rails | closed | email threading is broken | bug high priority | **Describe the bug**
When a user receives email on their subdomain, emails that shouldn't be part of 1 thread are pushed into the same thread
<img width="1728" alt="Screen Shot 2022-11-16 at 8 00 57 AM" src="https://user-images.githubusercontent.com/35935196/202187292-59a74473-f7be-430b-a5b1-6c7ef2d2b090.png">
**To Reproduce**
Steps to reproduce the behavior:
1. Send yourself 2 emails from discord or notion
2. both emails are in 1 thread when they should be in 2 separate threads
**Expected behavior**
2 separate threads should be created
| 1.0 | email threading is broken - **Describe the bug**
When a user receives email on their subdomain, emails that shouldn't be part of 1 thread are pushed into the same thread
<img width="1728" alt="Screen Shot 2022-11-16 at 8 00 57 AM" src="https://user-images.githubusercontent.com/35935196/202187292-59a74473-f7be-430b-a5b1-6c7ef2d2b090.png">
**To Reproduce**
Steps to reproduce the behavior:
1. Send yourself 2 emails from discord or notion
2. both emails are in 1 thread when they should be in 2 separate threads
**Expected behavior**
2 separate threads should be created
| non_test | email threading is broken describe the bug when a user receives email on their subdomain emails that shouldn t be part of thread are pushed into the same thread img width alt screen shot at am src to reproduce steps to reproduce the behavior send yourself emails from discord or notion both emails are in thread when they should be in separate threads expected behavior separate threads should be created | 0 |
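On the email-threading bug above: a frequent cause of unrelated messages collapsing into one thread is grouping on a weak key, such as the subject line or a blank header treated as equal, instead of on the RFC 5322 `Message-ID`/`In-Reply-To`/`References` headers. A minimal sketch of header-based thread keying (a hypothetical helper, not Violet Rails' actual code):

```python
from email.message import EmailMessage

def thread_key(msg: EmailMessage) -> str:
    """Group a message with the thread of the message it replies to.

    Falls back to the message's own Message-ID, so two unrelated
    messages (e.g. one from Discord, one from Notion) start two
    separate threads instead of being merged into one.
    """
    refs = msg.get("References", "").split()
    if refs:                      # oldest ancestor starts the thread
        return refs[0]
    in_reply_to = msg.get("In-Reply-To")
    if in_reply_to:
        return in_reply_to.strip()
    return msg["Message-ID"]
```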
315,705 | 27,097,945,474 | IssuesEvent | 2023-02-15 05:40:28 | streamr-turing/streamr-fe | https://api.github.com/repos/streamr-turing/streamr-fe | closed | Testing: Share Modal (wait) | front-end Testing | - [ ] Start on Share Modal
- [ ] Should see all users but not yourself
- [ ] Should be able to select friend
- [ ] Should be able to share to friend
- [ ] Should be able to see "sent" message if successfully sent
- [ ] Should be able to see "failed" message if it was unsuccessful
- [ ] Should be able to close out modal | 1.0 | Testing: Share Modal (wait) - - [ ] Start on Share Modal
- [ ] Should see all users but not yourself
- [ ] Should be able to select friend
- [ ] Should be able to share to friend
- [ ] Should be able to see "sent" message if successfully sent
- [ ] Should be able to see "failed" message if it was unsuccessful
- [ ] Should be able to close out modal | test | testing share modal wait start on share modal should see all users but not yourself should be able to select friend should be able to share to friend should be able to see sent message if successfully sent should be able to see failed message if it was unsuccessful should be able to close out modal | 1 |
126,997 | 10,441,792,208 | IssuesEvent | 2019-09-18 11:40:56 | MangopearUK/European-Boating-Association--Theme | https://api.github.com/repos/MangopearUK/European-Boating-Association--Theme | closed | Test & audit: Test page for permissions | Testing | Page URL: https://eba.eu.com/test-page-for-permissions/
## Table of contents
- [x] **Task 1:** Perform automated audits _(10 tasks)_
- [x] **Task 2:** Manual standards & accessibility tests _(61 tasks)_
- [x] **Task 3:** Breakpoint testing _(15 tasks)_
- [x] **Task 4:** Re-run automated audits _(10 tasks)_
## 1: Perform automated audits _(10 tasks)_
### Lighthouse:
- [x] Run "Accessibility" audit in lighthouse _(using incognito tab)_
- [x] Run "Performance" audit in lighthouse _(using incognito tab)_
- [x] Run "Best practices" audit in lighthouse _(using incognito tab)_
- [x] Run "SEO" audit in lighthouse _(using incognito tab)_
- [x] Run "PWA" audit in lighthouse _(using incognito tab)_
### Pingdom
- [x] Run full audit of the page's performance in Pingdom
### Browser's console
- [x] Check Chrome's console for errors
### Log results of audits
- [x] Screenshot snapshot of the lighthouse audits
- [x] Upload PDF of detailed lighthouse reports
- [x] Provide a screenshot of any console errors
## 2: Manual standards & accessibility tests _(61 tasks)_
### Forms
- [x] Give all form elements permanently visible labels
- [x] Place labels above form elements
- [x] Mark invalid fields clearly and provide associated error messages
- [x] Make forms as short as possible; offer shortcuts like autocompleting the address using the postcode
- [x] Ensure all form fields have the correct required state
- [x] Provide status and error messages as WAI-ARIA live regions
### Readability of content
- [x] Ensure page has good grammar
- [x] Ensure page content has been spell-checked
- [x] Make sure headings are in logical order
- [x] Ensure the same content is available across different devices and platforms
- [x] Begin long, multi-section documents with a table of contents
### Presentation
- [x] Make sure all content is formatted correctly
- [x] Avoid all-caps text
- [x] Make sure data tables wider than their container can be scrolled horizontally
- [x] Use the same design patterns to solve the same problems
- [x] Do not mark up subheadings/straplines with separate heading elements
### Links & buttons
#### Links
- [x] Check all links to ensure they work
- [x] Check all links to third party websites use `rel="noopener"`
- [x] Make sure the purpose of a link is clearly described: "read more" vs. "read more about accessibility"
- [x] Provide a skip link if necessary
- [x] Underline links — at least in body copy
- [x] Warn users of links that have unusual behaviors, like linking off-site, or loading a new tab (i.e. aria-label)
#### Buttons
- [x] Ensure primary calls to action are easy to recognize and reach
- [x] Provide clear, unambiguous focus styles
- [x] Ensure states (pressed, expanded, invalid, etc) are communicated to assistive software
- [x] Ensure disabled controls are not focusable
- [x] Make sure controls within hidden content are not focusable
- [x] Provide large touch "targets" for interactive elements
- [x] Make controls look like controls; give them strong perceived affordance
- [x] Use well-established, therefore recognizable, icons and symbols
### Assistive technology
- [x] Ensure content is not obscured through zooming
- [x] Support Windows high contrast mode (use images, not background images)
- [x] Provide alternative text for salient images
- [x] Make scrollable elements focusable for keyboard users
- [x] Ensure keyboard focus order is logical regarding visual layout
- [x] Match semantics to behavior for assistive technology users
- [x] Provide a default language and use lang="[ISO code]" for subsections in different languages
- [x] Inform the user when there are important changes to the application state
- [x] Do not hijack standard scrolling behavior
- [x] Do not instate "infinite scroll" by default; provide buttons to load more items
### General accessibility
- [x] Make sure text and background colors contrast sufficiently
- [x] Do not rely on color for differentiation of visual elements
- [x] Avoid images of text — text that cannot be translated, selected, or understood by assistive tech
- [x] Provide a print stylesheet
- [x] Honour requests to remove animation via the prefers-reduced-motion media query
### SEO
- [x] Ensure all pages have appropriate title
- [x] Ensure all pages have meta descriptions
- [x] Make content easier to find and improve search results with structured data [Read more](https://developers.google.com/search/docs/guides/prototype)
- [x] Check whether page should be appearing in sitemap
- [x] Make sure page has Facebook and Twitter large image previews set correctly
- [x] Check canonical links for page
- [x] Mark as cornerstone content?
### Performance
- [x] Ensure all CSS assets are minified and concatenated
- [x] Ensure all JS assets are minified and concatenated
- [x] Ensure all images are compressed
- [x] Where possible, remove redundant code
- [x] Ensure all SVG assets have been optimised
- [x] Make sure styles and scripts are not render blocking
- [x] Ensure large image assets are lazy loaded
### Other
- [x] Make sure all content belongs to a landmark element
- [x] Provide a manifest.json file for identifiable homescreen entries
## 3: Breakpoint testing _(15 tasks)_
### Desktop
- [x] Provide a full screenshot of **1920px** wide page
- [x] Provide a full screenshot of **1500px** wide page
- [x] Provide a full screenshot of **1280px** wide page
- [x] Provide a full screenshot of **1024px** wide page
### Tablet
- [x] Provide a full screenshot of **960px** wide page
- [x] Provide a full screenshot of **800px** wide page
- [x] Provide a full screenshot of **760px** wide page
- [x] Provide a full screenshot of **650px** wide page
### Mobile
- [x] Provide a full screenshot of **600px** wide page
- [x] Provide a full screenshot of **500px** wide page
- [x] Provide a full screenshot of **450px** wide page
- [x] Provide a full screenshot of **380px** wide page
- [x] Provide a full screenshot of **320px** wide page
- [x] Provide a full screenshot of **280px** wide page
- [x] Provide a full screenshot of **250px** wide page
## 4: Re-run automated audits _(10 tasks)_
### Lighthouse:
- [x] Run "Accessibility" audit in lighthouse _(using incognito tab)_
- [x] Run "Performance" audit in lighthouse _(using incognito tab)_
- [x] Run "Best practices" audit in lighthouse _(using incognito tab)_
- [x] Run "SEO" audit in lighthouse _(using incognito tab)_
- [x] Run "PWA" audit in lighthouse _(using incognito tab)_
### Pingdom
- [x] Run full audit of the page's performance in Pingdom
### Browser's console
- [x] Check Chrome's console for errors
### Log results of audits
- [x] Screenshot snapshot of the lighthouse audits
- [x] Upload PDF of detailed lighthouse reports
- [x] Provide a screenshot of any console errors | 1.0 | Test & audit: Test page for permissions - Page URL: https://eba.eu.com/test-page-for-permissions/
## Table of contents
- [x] **Task 1:** Perform automated audits _(10 tasks)_
- [x] **Task 2:** Manual standards & accessibility tests _(61 tasks)_
- [x] **Task 3:** Breakpoint testing _(15 tasks)_
- [x] **Task 4:** Re-run automated audits _(10 tasks)_
## 1: Perform automated audits _(10 tasks)_
### Lighthouse:
- [x] Run "Accessibility" audit in lighthouse _(using incognito tab)_
- [x] Run "Performance" audit in lighthouse _(using incognito tab)_
- [x] Run "Best practices" audit in lighthouse _(using incognito tab)_
- [x] Run "SEO" audit in lighthouse _(using incognito tab)_
- [x] Run "PWA" audit in lighthouse _(using incognito tab)_
### Pingdom
- [x] Run full audit of the page's performance in Pingdom
### Browser's console
- [x] Check Chrome's console for errors
### Log results of audits
- [x] Screenshot snapshot of the lighthouse audits
- [x] Upload PDF of detailed lighthouse reports
- [x] Provide a screenshot of any console errors
## 2: Manual standards & accessibility tests _(61 tasks)_
### Forms
- [x] Give all form elements permanently visible labels
- [x] Place labels above form elements
- [x] Mark invalid fields clearly and provide associated error messages
- [x] Make forms as short as possible; offer shortcuts like autocompleting the address using the postcode
- [x] Ensure all form fields have the correct required state
- [x] Provide status and error messages as WAI-ARIA live regions
### Readability of content
- [x] Ensure page has good grammar
- [x] Ensure page content has been spell-checked
- [x] Make sure headings are in logical order
- [x] Ensure the same content is available across different devices and platforms
- [x] Begin long, multi-section documents with a table of contents
### Presentation
- [x] Make sure all content is formatted correctly
- [x] Avoid all-caps text
- [x] Make sure data tables wider than their container can be scrolled horizontally
- [x] Use the same design patterns to solve the same problems
- [x] Do not mark up subheadings/straplines with separate heading elements
### Links & buttons
#### Links
- [x] Check all links to ensure they work
- [x] Check all links to third party websites use `rel="noopener"`
- [x] Make sure the purpose of a link is clearly described: "read more" vs. "read more about accessibility"
- [x] Provide a skip link if necessary
- [x] Underline links — at least in body copy
- [x] Warn users of links that have unusual behaviors, like linking off-site, or loading a new tab (i.e. aria-label)
#### Buttons
- [x] Ensure primary calls to action are easy to recognize and reach
- [x] Provide clear, unambiguous focus styles
- [x] Ensure states (pressed, expanded, invalid, etc) are communicated to assistive software
- [x] Ensure disabled controls are not focusable
- [x] Make sure controls within hidden content are not focusable
- [x] Provide large touch "targets" for interactive elements
- [x] Make controls look like controls; give them strong perceived affordance
- [x] Use well-established, therefore recognizable, icons and symbols
### Assistive technology
- [x] Ensure content is not obscured through zooming
- [x] Support Windows high contrast mode (use images, not background images)
- [x] Provide alternative text for salient images
- [x] Make scrollable elements focusable for keyboard users
- [x] Ensure keyboard focus order is logical regarding visual layout
- [x] Match semantics to behavior for assistive technology users
- [x] Provide a default language and use lang="[ISO code]" for subsections in different languages
- [x] Inform the user when there are important changes to the application state
- [x] Do not hijack standard scrolling behavior
- [x] Do not instate "infinite scroll" by default; provide buttons to load more items
### General accessibility
- [x] Make sure text and background colors contrast sufficiently
- [x] Do not rely on color for differentiation of visual elements
- [x] Avoid images of text — text that cannot be translated, selected, or understood by assistive tech
- [x] Provide a print stylesheet
- [x] Honour requests to remove animation via the prefers-reduced-motion media query
### SEO
- [x] Ensure all pages have appropriate title
- [x] Ensure all pages have meta descriptions
- [x] Make content easier to find and improve search results with structured data [Read more](https://developers.google.com/search/docs/guides/prototype)
- [x] Check whether page should be appearing in sitemap
- [x] Make sure page has Facebook and Twitter large image previews set correctly
- [x] Check canonical links for page
- [x] Mark as cornerstone content?
### Performance
- [x] Ensure all CSS assets are minified and concatenated
- [x] Ensure all JS assets are minified and concatenated
- [x] Ensure all images are compressed
- [x] Where possible, remove redundant code
- [x] Ensure all SVG assets have been optimised
- [x] Make sure styles and scripts are not render blocking
- [x] Ensure large image assets are lazy loaded
### Other
- [x] Make sure all content belongs to a landmark element
- [x] Provide a manifest.json file for identifiable homescreen entries
## 3: Breakpoint testing _(15 tasks)_
### Desktop
- [x] Provide a full screenshot of **1920px** wide page
- [x] Provide a full screenshot of **1500px** wide page
- [x] Provide a full screenshot of **1280px** wide page
- [x] Provide a full screenshot of **1024px** wide page
### Tablet
- [x] Provide a full screenshot of **960px** wide page
- [x] Provide a full screenshot of **800px** wide page
- [x] Provide a full screenshot of **760px** wide page
- [x] Provide a full screenshot of **650px** wide page
### Mobile
- [x] Provide a full screenshot of **600px** wide page
- [x] Provide a full screenshot of **500px** wide page
- [x] Provide a full screenshot of **450px** wide page
- [x] Provide a full screenshot of **380px** wide page
- [x] Provide a full screenshot of **320px** wide page
- [x] Provide a full screenshot of **280px** wide page
- [x] Provide a full screenshot of **250px** wide page
## 4: Re-run automated audits _(10 tasks)_
### Lighthouse:
- [x] Run "Accessibility" audit in lighthouse _(using incognito tab)_
- [x] Run "Performance" audit in lighthouse _(using incognito tab)_
- [x] Run "Best practices" audit in lighthouse _(using incognito tab)_
- [x] Run "SEO" audit in lighthouse _(using incognito tab)_
- [x] Run "PWA" audit in lighthouse _(using incognito tab)_
### Pingdom
- [x] Run full audit of the page's performance in Pingdom
### Browser's console
- [x] Check Chrome's console for errors
### Log results of audits
- [x] Screenshot snapshot of the lighthouse audits
- [x] Upload PDF of detailed lighthouse reports
- [x] Provide a screenshot of any console errors | test | test audit test page for permissions page url table of contents task perform automated audits tasks task manual standards accessibility tests tasks task breakpoint testing tasks task re run automated audits tasks perform automated audits tasks lighthouse run accessibility audit in lighthouse using incognito tab run performance audit in lighthouse using incognito tab run best practices audit in lighthouse using incognito tab run seo audit in lighthouse using incognito tab run pwa audit in lighthouse using incognito tab pingdom run full audit of the the page s performance in pingdom browser s console check chrome s console for errors log results of audits screenshot snapshot of the lighthouse audits upload pdf of detailed lighthouse reports provide a screenshot of any console errors manual standards accessibility tests tasks forms give all form elements permanently visible labels place labels above form elements mark invalid fields clearly and provide associated error messages make forms as short as possible offer shortcuts like autocompleting the address using the postcode ensure all form fields have the correct requried state provide status and error messages as wai aria live regions readability of content ensure page has good grammar ensure page content has been spell checked make sure headings are in logical order ensure the same content is available across different devices and platforms begin long multi section documents with a table of contents presentation make sure all content is formatted correctly avoid all caps text make sure data tables wider than their container can be scrolled horizontally use the same design patterns to solve the same problems do not mark up subheadings straplines with separate heading elements links buttons links check all links to ensure they work check all links to third party websites use rel noopener make sure the purpose of a link is clearly described read more vs read 
more about accessibility provide a skip link if necessary underline links — at least in body copy warn users of links that have unusual behaviors like linking off site or loading a new tab i e aria label buttons ensure primary calls to action are easy to recognize and reach provide clear unambiguous focus styles ensure states pressed expanded invalid etc are communicated to assistive software ensure disabled controls are not focusable make sure controls within hidden content are not focusable provide large touch targets for interactive elements make controls look like controls give them strong perceived affordance use well established therefore recognizable icons and symbols assistive technology ensure content is not obscured through zooming support windows high contrast mode use images not background images provide alternative text for salient images make scrollable elements focusable for keyboard users ensure keyboard focus order is logical regarding visual layout match semantics to behavior for assistive technology users provide a default language and use lang for subsections in different languages inform the user when there are important changes to the application state do not hijack standard scrolling behavior do not instate infinite scroll by default provide buttons to load more items general accessibility make sure text and background colors contrast sufficiently do not rely on color for differentiation of visual elements avoid images of text — text that cannot be translated selected or understood by assistive tech provide a print stylesheet honour requests to remove animation via the prefers reduced motion media query seo ensure all pages have appropriate title ensure all pages have meta descriptions make content easier to find and improve search results with structured data check whether page should be appearing in sitemap make sure page has facebook and twitter large image previews set correctly check canonical links for page mark as cornerstone content 
performance ensure all css assets are minified and concatenated ensure all js assets are minified and concatenated ensure all images are compressed where possible remove redundant code ensure all svg assets have been optimised make sure styles and scripts are not render blocking ensure large image assets are lazy loaded other make sure all content belongs to a landmark element provide a manifest json file for identifiable homescreen entries breakpoint testing tasks desktop provide a full screenshot of wide page provide a full screenshot of wide page provide a full screenshot of wide page provide a full screenshot of wide page tablet provide a full screenshot of wide page provide a full screenshot of wide page provide a full screenshot of wide page provide a full screenshot of wide page mobile provide a full screenshot of wide page provide a full screenshot of wide page provide a full screenshot of wide page provide a full screenshot of wide page provide a full screenshot of wide page provide a full screenshot of wide page provide a full screenshot of wide page re run automated audits tasks lighthouse run accessibility audit in lighthouse using incognito tab run performance audit in lighthouse using incognito tab run best practices audit in lighthouse using incognito tab run seo audit in lighthouse using incognito tab run pwa audit in lighthouse using incognito tab pingdom run full audit of the the page s performance in pingdom browser s console check chrome s console for errors log results of audits screenshot snapshot of the lighthouse audits upload pdf of detailed lighthouse reports provide a screenshot of any console errors | 1 |
347,589 | 31,233,243,274 | IssuesEvent | 2023-08-20 00:34:33 | cca-ffodregamdi/running-hi-back | https://api.github.com/repos/cca-ffodregamdi/running-hi-back | closed | [Feature] 3주차 - [BOOKMARK] 즐겨찾기 삭제 | ✨ Feature 🎯 Test | ✏️Description
-
Delete the corresponding post from the bookmark list
✅TODO
-
- [x] Delete from the bookmark list
- [x] Write test code verifying the deletion
🐾ETC
-
| 1.0 | [Feature] 3주차 - [BOOKMARK] 즐겨찾기 삭제 - ✏️Description
-
Delete the corresponding post from the bookmark list
✅TODO
-
- [x] Delete from the bookmark list
- [x] Write test code verifying the deletion
🐾ETC
-
| test | 즐겨찾기 삭제 ✏️description 즐겨찾기 목록에서 해당 게시글 삭제 ✅todo 즐겨찾기 목록에서 삭제 삭제 여부 테스트 코드 작성 🐾etc | 1 |
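The bookmark-deletion record above asks for two things: remove the post from the favourites list, and write a test that verifies the removal. A generic sketch of both, using a hypothetical `BookmarkList` rather than the running-hi-back service code:

```python
class BookmarkList:
    """Minimal in-memory bookmark list keyed by post id (illustrative only)."""

    def __init__(self):
        self._post_ids = set()

    def add(self, post_id: int) -> None:
        self._post_ids.add(post_id)

    def delete(self, post_id: int) -> bool:
        """Remove the post from the list; return True if it was present."""
        if post_id in self._post_ids:
            self._post_ids.remove(post_id)
            return True
        return False

    def contains(self, post_id: int) -> bool:
        return post_id in self._post_ids
```

A deletion test then asserts `delete()` succeeded and the post is gone afterwards.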
308,450 | 26,608,365,368 | IssuesEvent | 2023-01-23 21:24:21 | NOAA-EMC/NCEPLIBS-bufr | https://api.github.com/repos/NOAA-EMC/NCEPLIBS-bufr | closed | Add local test file directory option and use it to cache test files in CI runs | test | I just built bufr and I see:
```
ed@koko:~/NCEPLIBS-bufr/b$ cmake -DCMAKE_BUILD_TYPE=Debug ..
-- The C compiler identification is GNU 9.4.0
-- The Fortran compiler identification is GNU 9.4.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting Fortran compiler ABI info
-- Detecting Fortran compiler ABI info - done
-- Check for working Fortran compiler: /usr/bin/gfortran - skipped
-- Downloading bufr test files...
-- [download 0% complete]
-- [download 1% complete]
-- [download 2% complete]
-- [download 3% complete]
-- [download 4% complete]
```
On other projects, like the GRIB2 repos, we also have test files we download from FTP. But we don't download them unless the user adds an option:
`option(FTP_TEST_FILES "Fetch and test with files on FTP site." OFF)`
It also has an option to first check a local data directory for the test files, and get them from there if possible, saving all the FTPing. This allows me to cache the test data files in GitHub actions, so GitHub does not have to keep downloading the same data files over and over.
`SET(TEST_FILE_DIR "." CACHE STRING "Check this directory for test files before using FTP.")` | 1.0 | Add local test file directory option and use it to cache test files in CI runs - I just built bufr and I see:
```
ed@koko:~/NCEPLIBS-bufr/b$ cmake -DCMAKE_BUILD_TYPE=Debug ..
-- The C compiler identification is GNU 9.4.0
-- The Fortran compiler identification is GNU 9.4.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting Fortran compiler ABI info
-- Detecting Fortran compiler ABI info - done
-- Check for working Fortran compiler: /usr/bin/gfortran - skipped
-- Downloading bufr test files...
-- [download 0% complete]
-- [download 1% complete]
-- [download 2% complete]
-- [download 3% complete]
-- [download 4% complete]
```
On other projects, like the GRIB2 repos, we also have test files we download from FTP. But we don't download them unless the user adds an option:
`option(FTP_TEST_FILES "Fetch and test with files on FTP site." OFF)`
It also has an option to first check a local data directory for the test files, and get them from there if possible, saving all the FTPing. This allows me to cache the test data files in GitHub actions, so GitHub does not have to keep downloading the same data files over and over.
`SET(TEST_FILE_DIR "." CACHE STRING "Check this directory for test files before using FTP.")
` | test | add local test file directory option and use it to cache test files in ci runs i just built bufr and i see ed koko nceplibs bufr b cmake dcmake build type debug the c compiler identification is gnu the fortran compiler identification is gnu detecting c compiler abi info detecting c compiler abi info done check for working c compiler usr bin cc skipped detecting c compile features detecting c compile features done detecting fortran compiler abi info detecting fortran compiler abi info done check for working fortran compiler usr bin gfortran skipped downloading bufr test files on other projects like the repos we also have test files we download from ftp but we don t download them unless the user adds an option option ftp test files fetch and test with files on ftp site off it also has an option to first check a local data directory for the test files and get them from there if possible saving all the ftping this allows me to cache the test data files in github actions so github does not have to keep downloading the same data files over and over set test file dir cache string check this directory for test files before using ftp | 1 |
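The `TEST_FILE_DIR` mechanism described in the bufr record (check a local directory first, download only on a cache miss) is the standard way to keep CI from re-fetching the same data. A sketch of the same pattern in Python; `get_test_file` is a hypothetical helper, since the NCEPLIBS projects implement this in CMake:

```python
import shutil
from pathlib import Path
from urllib.request import urlretrieve

def get_test_file(name: str, cache_dir: Path, dest_dir: Path,
                  base_url: str = "https://example.invalid/data/") -> Path:
    """Copy a test file from a local cache if present; download otherwise."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / name
    cached = cache_dir / name
    if cached.is_file():          # cache hit: no network access needed
        shutil.copyfile(cached, dest)
    else:                         # cache miss: fall back to FTP/HTTP download
        urlretrieve(base_url + name, dest)
    return dest
```

In CI, `cache_dir` would be the directory restored by the cache action between runs.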
327,017 | 9,963,641,751 | IssuesEvent | 2019-07-08 01:23:23 | momentum-mod/game | https://api.github.com/repos/momentum-mod/game | closed | Lobby members panel duplicates names | Priority: High Size: Medium Type: Bug | 
It seems to happen when users join or change map... I think it has to do with the order that it updates the panel (sort -> delete -> duplicates?)
Needs consistent steps to reproduce it.
Reported by Asteral | 1.0 | Lobby members panel duplicates names - 
It seems to happen when users join or change map... I think it has to do with the order that it updates the panel (sort -> delete -> duplicates?)
Needs consistent steps to reproduce it.
Reported by Asteral | non_test | lobby members panel duplicates names it seems to happen when users join or change map i think it has to do with the order that it updates the panel sort delete duplicates needs consistent steps to reproduce it reported by asteral | 0 |
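For the duplicated-names bug above: when a list panel is patched incrementally (sort, delete, re-add), a missed delete leaves stale rows. Rebuilding the rows from the authoritative member set, keyed by a unique id, makes the update idempotent. A hedged sketch with a hypothetical data model, not the actual panel code:

```python
def rebuild_member_rows(members: list) -> dict:
    """Rebuild panel rows from scratch, keyed by each member's unique id.

    Because the result depends only on `members`, running the update
    twice (e.g. on join and again on map change) cannot duplicate names.
    """
    ordered = sorted(members, key=lambda m: m["name"].lower())
    return {m["id"]: m["name"] for m in ordered}
```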
64,504 | 26,759,289,371 | IssuesEvent | 2023-01-31 04:44:57 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | Fix typos and clear information up | media-services/svc triaged assigned-to-author doc-enhancement Pri2 rest-v3/subsvc | There are quite a few typos and explanations that could be improved. For example, `package**s**.json` is often used instead of `package.json` (plural rather than singular), and under [Sample .env file](https://docs.microsoft.com/en-us/azure/media-services/latest/configure-connect-nodejs-howto#sample-env-file), there could be a little bit more explanation on how to get the values from the Azure portal.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 1a9113d9-f462-3ecc-0600-4cf615ab1633
* Version Independent ID: 6695d5ce-1070-c48b-02c5-1f7e51ca4e46
* Content: [Connect to Azure Media Services v3 API - Node.js - Azure Media Services v3](https://docs.microsoft.com/en-us/azure/media-services/latest/configure-connect-nodejs-howto)
* Content Source: [articles/media-services/latest/configure-connect-nodejs-howto.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/media-services/latest/configure-connect-nodejs-howto.md)
* Service: **media-services**
* Sub-service: **rest-v3**
* GitHub Login: @IngridAtMicrosoft
* Microsoft Alias: **inhenkel** | 1.0 | Fix typos and clear information up - There are quite a few typos and explanations that could be improved. For example, `package**s**.json` is often used instead of `package.json` (plural rather than singular), and under [Sample .env file](https://docs.microsoft.com/en-us/azure/media-services/latest/configure-connect-nodejs-howto#sample-env-file), there could be a little bit more explanation on how to get the values from the Azure portal.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 1a9113d9-f462-3ecc-0600-4cf615ab1633
* Version Independent ID: 6695d5ce-1070-c48b-02c5-1f7e51ca4e46
* Content: [Connect to Azure Media Services v3 API - Node.js - Azure Media Services v3](https://docs.microsoft.com/en-us/azure/media-services/latest/configure-connect-nodejs-howto)
* Content Source: [articles/media-services/latest/configure-connect-nodejs-howto.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/media-services/latest/configure-connect-nodejs-howto.md)
* Service: **media-services**
* Sub-service: **rest-v3**
* GitHub Login: @IngridAtMicrosoft
* Microsoft Alias: **inhenkel** | non_test | fix typos and clear information up there are quite a few typos and explanations that could be improved for example package s json is often used instead of package json plural rather then singular and under there could be a little bit more explanation on how to get the values from the azure portal document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service media services sub service rest github login ingridatmicrosoft microsoft alias inhenkel | 0 |
125,310 | 12,256,862,060 | IssuesEvent | 2020-05-06 12:49:48 | MDSplus/mdsplus | https://api.github.com/repos/MDSplus/mdsplus | closed | LIB->ROUTINE:T needs a LIB->ROUTINE:C that frees the returned pointer after copying into mds descriptor | documentation needed enhancement | We could reduce memory leaks if we allow another TdiCall Type specifier that will trigger a CLASS_D return value. API methods like TreeGetMinimumPath, TreeGetPath, TreeGetTags suffer from this issue. | 1.0 | LIB->ROUTINE:T needs a LIB->ROUTINE:C that frees the returned pointer after copying into mds descriptor - We could reduce memory leaks if we allow another TdiCall Type specifier that will trigger a CLASS_D return value. API methods like TreeGetMinimumPath, TreeGetPath, TreeGetTags suffer from this issue. | non_test | lib routine t needs a lib routine c that frees the returned pointer after copying into mds descriptor we could reduce memory leaks if we allow another tdicall type specifier that will trigger a class d return value api methods like treegetminimumpath treegetpath treegettags suffer from this issue | 0 |
97,920 | 20,574,435,215 | IssuesEvent | 2022-03-04 01:57:11 | ArctosDB/arctos | https://api.github.com/repos/ArctosDB/arctos | closed | Fix Documentation for Bulk Edit Container | Function-ContainerOrBarcode NeedsDocumentation Tool - Bulk Edit Container | If we cannot create freezer box positions in bulk, which I disagree with, then we should update the documentation to say so or delete the extra columns from the bulk edit container template.

| 1.0 | Fix Documentation for Bulk Edit Container - If we cannot create freezer box positions in bulk, which I disagree with, then we should update the documentation to say so or delete the extra columns from the bulk edit container template.

| non_test | fix documentation for bulk edit container if we cannot create freezer box positions in bulk which i disagree with then we should update the documentation to say so or delete the extra columns from the bulk edit container template | 0 |
220,280 | 17,185,556,360 | IssuesEvent | 2021-07-16 00:56:06 | elastic/kibana | https://api.github.com/repos/elastic/kibana | opened | [test-failed]: Chrome X-Pack UI Functional Tests1.x-pack/test/functional/apps/monitoring/elasticsearch/nodes_mb·js - Monitoring app Elasticsearch nodes listing mb with offline node "before all" hook for "should have an Elasticsearch Cluster Summary Status with correct info" | failed-test test-cloud | **Version: 7.14.0**
**Class: Chrome X-Pack UI Functional Tests1.x-pack/test/functional/apps/monitoring/elasticsearch/nodes_mb·js**
**Stack Trace:**
```
Error: retry.try timeout: TimeoutError: Waiting for element to be located By(css selector, [data-test-subj="alerts-modal-button"])
Wait timed out after 10028ms
at /var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/node_modules/selenium-webdriver/lib/webdriver.js:842:17
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at onFailure (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/common/services/retry/retry_for_success.ts:17:9)
at retryForSuccess (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/common/services/retry/retry_for_success.ts:57:13)
at RetryService.try (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/common/services/retry/retry.ts:31:12)
at Proxy.clickByCssSelector (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/functional/services/common/find.ts:360:5)
at TestSubjects.click (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/functional/services/common/test_subjects.ts:105:5)
at Context.<anonymous> (test/functional/apps/monitoring/elasticsearch/nodes_mb.js:32:9)
at Object.apply (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16)
```
**Other test failures:**
- Monitoring app Elasticsearch nodes listing mb with only online nodes "before all" hook for "should have an Elasticsearch Cluster Summary Status with correct info"
_Test Report: https://internal-ci.elastic.co/view/Stack%20Tests/job/elastic+estf-cloud-kibana-tests/2041/testReport/_ | 2.0 | [test-failed]: Chrome X-Pack UI Functional Tests1.x-pack/test/functional/apps/monitoring/elasticsearch/nodes_mb·js - Monitoring app Elasticsearch nodes listing mb with offline node "before all" hook for "should have an Elasticsearch Cluster Summary Status with correct info" - **Version: 7.14.0**
**Class: Chrome X-Pack UI Functional Tests1.x-pack/test/functional/apps/monitoring/elasticsearch/nodes_mb·js**
**Stack Trace:**
```
Error: retry.try timeout: TimeoutError: Waiting for element to be located By(css selector, [data-test-subj="alerts-modal-button"])
Wait timed out after 10028ms
at /var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/node_modules/selenium-webdriver/lib/webdriver.js:842:17
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at onFailure (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/common/services/retry/retry_for_success.ts:17:9)
at retryForSuccess (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/common/services/retry/retry_for_success.ts:57:13)
at RetryService.try (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/common/services/retry/retry.ts:31:12)
at Proxy.clickByCssSelector (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/functional/services/common/find.ts:360:5)
at TestSubjects.click (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/functional/services/common/test_subjects.ts:105:5)
at Context.<anonymous> (test/functional/apps/monitoring/elasticsearch/nodes_mb.js:32:9)
at Object.apply (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16)
```
**Other test failures:**
- Monitoring app Elasticsearch nodes listing mb with only online nodes "before all" hook for "should have an Elasticsearch Cluster Summary Status with correct info"
_Test Report: https://internal-ci.elastic.co/view/Stack%20Tests/job/elastic+estf-cloud-kibana-tests/2041/testReport/_ | test | chrome x pack ui functional x pack test functional apps monitoring elasticsearch nodes mb·js monitoring app elasticsearch nodes listing mb with offline node before all hook for should have an elasticsearch cluster summary status with correct info version class chrome x pack ui functional x pack test functional apps monitoring elasticsearch nodes mb·js stack trace error retry try timeout timeouterror waiting for element to be located by css selector wait timed out after at var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node ess testing ci cloud common build kibana node modules selenium webdriver lib webdriver js at runmicrotasks at processticksandrejections internal process task queues js at onfailure var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node ess testing ci cloud common build kibana test common services retry retry for success ts at retryforsuccess var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node ess testing ci cloud common build kibana test common services retry retry for success ts at retryservice try var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node ess testing ci cloud common build kibana test common services retry retry ts at proxy clickbycssselector var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node ess testing ci cloud common build kibana test functional services common find ts at testsubjects click var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node ess testing ci cloud common build kibana test functional services common test subjects ts at context test functional apps monitoring elasticsearch nodes mb js at object apply var lib jenkins workspace elastic estf cloud 
kibana tests job task saas run kibana tests node ess testing ci cloud common build kibana node modules kbn test target node functional test runner lib mocha wrap function js other test failures monitoring app elasticsearch nodes listing mb with only online nodes before all hook for should have an elasticsearch cluster summary status with correct info test report | 1 |
200,931 | 15,165,513,058 | IssuesEvent | 2021-02-12 15:09:44 | dariok/wdbplus | https://api.github.com/repos/dariok/wdbplus | closed | Navigation: sometimes, projects in the hierarchy disappear | bug needs testing | e.g. wrd: 170x, select 1706, then others disappear | 1.0 | Navigation: sometimes, projects in the hierarchy disappear - e.g. wrd: 170x, select 1706, then others disappear | test | navigation sometimes projects in the hierarchy disappear e g wrd select then others disappear | 1 |
301,832 | 9,231,604,355 | IssuesEvent | 2019-03-13 03:09:31 | Averynder/cssd | https://api.github.com/repos/Averynder/cssd | closed | Scraping Grades from Concordia | Back End Issue High Priority | This will require the dev to gather all of the courses on the concordia page via their user login | 1.0 | Scraping Grades from Concordia - This will require the dev to gather all of the courses on the concordia page via their user login | non_test | scraping grades from concordia this will require the dev to gather all of the courses on the concordia page via their user login | 0 |
1,653 | 2,564,357,094 | IssuesEvent | 2015-02-06 19:17:58 | d3athrow/vgstation13 | https://api.github.com/repos/d3athrow/vgstation13 | closed | AI missing door open hotlink on text | Bug Feature Loss Needs Moar Testing | when someone says something, the door open hotlink is missing now | 1.0 | AI missing door open hotlink on text - when someone says something, the door open hotlink is missing now | test | ai missing door open hotlink on text when someone says something the door open hotlink is missing now | 1 |
185,490 | 21,791,044,556 | IssuesEvent | 2022-05-14 22:46:41 | CliffCrerar/brendons-cat | https://api.github.com/repos/CliffCrerar/brendons-cat | closed | CVE-2020-11023 (Medium) detected in jquery-2.1.4.min.js - autoclosed | security vulnerability | ## CVE-2020-11023 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-2.1.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js</a></p>
<p>Path to dependency file: /node_modules/js-base64/.attic/test-moment/index.html</p>
<p>Path to vulnerable library: /node_modules/js-base64/.attic/test-moment/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.1.4.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/CliffCrerar/brendons-cat/commit/665b78166390b5e3881e43ae42cf1bc6cf1f6486">665b78166390b5e3881e43ae42cf1bc6cf1f6486</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440">https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jquery - 3.5.0;jquery-rails - 4.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-11023 (Medium) detected in jquery-2.1.4.min.js - autoclosed - ## CVE-2020-11023 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-2.1.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js</a></p>
<p>Path to dependency file: /node_modules/js-base64/.attic/test-moment/index.html</p>
<p>Path to vulnerable library: /node_modules/js-base64/.attic/test-moment/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.1.4.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/CliffCrerar/brendons-cat/commit/665b78166390b5e3881e43ae42cf1bc6cf1f6486">665b78166390b5e3881e43ae42cf1bc6cf1f6486</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440">https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jquery - 3.5.0;jquery-rails - 4.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve medium detected in jquery min js autoclosed cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file node modules js attic test moment index html path to vulnerable library node modules js attic test moment index html dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch master vulnerability details in jquery versions greater than or equal to and before passing html containing elements from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery jquery rails step up your open source security game with whitesource | 0 |
690,431 | 23,659,420,247 | IssuesEvent | 2022-08-26 14:15:20 | HabitRPG/habitica-ios | https://api.github.com/repos/HabitRPG/habitica-ios | closed | Implement Mystery Item modal | Type: Enhancement Priority: minor | Android has this. Instead of just showing a toast with the item name after opening an Mystery Item, we show the item itself in a modal with the option to equip it | 1.0 | Implement Mystery Item modal - Android has this. Instead of just showing a toast with the item name after opening an Mystery Item, we show the item itself in a modal with the option to equip it | non_test | implement mystery item modal android has this instead of just showing a toast with the item name after opening an mystery item we show the item itself in a modal with the option to equip it | 0 |
78,631 | 10,076,339,738 | IssuesEvent | 2019-07-24 16:01:23 | vuetifyjs/vuetify | https://api.github.com/repos/vuetifyjs/vuetify | closed | [Documentation] V1.5 search return 404 page | T: documentation | ### Environment
**Browsers:** Chrome 75.0.3770.142
**OS:** Windows 10
### Steps to reproduce
In v1.5 documentation search for something in the docs (ex. "Text Wrapping") and then click on the result
### Expected Behavior
It must be show the page selected from the search results
### Actual Behavior
Show the 404 page for everything you search
### Reproduction Link
<a href="https://v15.vuetifyjs.com/en/styles/typography#text-wrapping" target="_blank">https://v15.vuetifyjs.com/en/styles/typography#text-wrapping</a>

### Other comments
If you navigate to pages from the navbar it works perfectly
<!-- generated by vuetify-issue-helper. DO NOT REMOVE --> | 1.0 | [Documentation] V1.5 search return 404 page - ### Environment
**Browsers:** Chrome 75.0.3770.142
**OS:** Windows 10
### Steps to reproduce
In v1.5 documentation search for something in the docs (ex. "Text Wrapping") and then click on the result
### Expected Behavior
It must be show the page selected from the search results
### Actual Behavior
Show the 404 page for everything you search
### Reproduction Link
<a href="https://v15.vuetifyjs.com/en/styles/typography#text-wrapping" target="_blank">https://v15.vuetifyjs.com/en/styles/typography#text-wrapping</a>

### Other comments
If you navigate to pages from the navbar it works perfectly
<!-- generated by vuetify-issue-helper. DO NOT REMOVE --> | non_test | search return page environment browsers chrome os windows steps to reproduce in documentation search for something in the docs ex text wrapping and then click on the result expected behavior it must be show the page selected from the search results actual behavior show the page for everything you search reproduction link other comments if you navigate to pages from the navbar it works perfectly | 0 |
178,505 | 21,509,420,669 | IssuesEvent | 2022-04-28 01:39:23 | bsbtd/Teste | https://api.github.com/repos/bsbtd/Teste | closed | CVE-2020-14422 (Medium) detected in python-3.8.1-h357f687_2.tar.bz2 - autoclosed | security vulnerability | ## CVE-2020-14422 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>python-3.8.1-h357f687_2.tar.bz2</b></p></summary>
<p>General purpose programming language</p>
<p>Library home page: <a href="https://api.anaconda.org/download/conda-forge/python/3.8.1/linux-64/python-3.8.1-h357f687_2.tar.bz2">https://api.anaconda.org/download/conda-forge/python/3.8.1/linux-64/python-3.8.1-h357f687_2.tar.bz2</a></p>
<p>Path to dependency file: /proteomicslfq/environment.yml</p>
<p>Path to vulnerable library: /home/wss-scanner/anaconda3/pkgs/python-3.8.1-h357f687_2.tar.bz2,/anaconda3/pkgs/python-3.8.1-h357f687_2.tar.bz2</p>
<p>
Dependency Hierarchy:
- :x: **python-3.8.1-h357f687_2.tar.bz2** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/bsbtd/Teste/commit/64dde89c50c07496423c4d4a865f2e16b92399ad">64dde89c50c07496423c4d4a865f2e16b92399ad</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lib/ipaddress.py in Python through 3.8.3 improperly computes hash values in the IPv4Interface and IPv6Interface classes, which might allow a remote attacker to cause a denial of service if an application is affected by the performance of a dictionary containing IPv4Interface or IPv6Interface objects, and this attacker can cause many dictionary entries to be created. This is fixed in: v3.5.10, v3.5.10rc1; v3.6.12; v3.7.9; v3.8.4, v3.8.4rc1, v3.8.5, v3.8.6, v3.8.6rc1; v3.9.0, v3.9.0b4, v3.9.0b5, v3.9.0rc1, v3.9.0rc2.
<p>Publish Date: 2020-06-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14422>CVE-2020-14422</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14422">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14422</a></p>
<p>Release Date: 2020-06-18</p>
<p>Fix Resolution: v3.5.10,v3.6.12,v3.7.9,v3.8.4v3.9.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-14422 (Medium) detected in python-3.8.1-h357f687_2.tar.bz2 - autoclosed - ## CVE-2020-14422 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>python-3.8.1-h357f687_2.tar.bz2</b></p></summary>
<p>General purpose programming language</p>
<p>Library home page: <a href="https://api.anaconda.org/download/conda-forge/python/3.8.1/linux-64/python-3.8.1-h357f687_2.tar.bz2">https://api.anaconda.org/download/conda-forge/python/3.8.1/linux-64/python-3.8.1-h357f687_2.tar.bz2</a></p>
<p>Path to dependency file: /proteomicslfq/environment.yml</p>
<p>Path to vulnerable library: /home/wss-scanner/anaconda3/pkgs/python-3.8.1-h357f687_2.tar.bz2,/anaconda3/pkgs/python-3.8.1-h357f687_2.tar.bz2</p>
<p>
Dependency Hierarchy:
- :x: **python-3.8.1-h357f687_2.tar.bz2** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/bsbtd/Teste/commit/64dde89c50c07496423c4d4a865f2e16b92399ad">64dde89c50c07496423c4d4a865f2e16b92399ad</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lib/ipaddress.py in Python through 3.8.3 improperly computes hash values in the IPv4Interface and IPv6Interface classes, which might allow a remote attacker to cause a denial of service if an application is affected by the performance of a dictionary containing IPv4Interface or IPv6Interface objects, and this attacker can cause many dictionary entries to be created. This is fixed in: v3.5.10, v3.5.10rc1; v3.6.12; v3.7.9; v3.8.4, v3.8.4rc1, v3.8.5, v3.8.6, v3.8.6rc1; v3.9.0, v3.9.0b4, v3.9.0b5, v3.9.0rc1, v3.9.0rc2.
<p>Publish Date: 2020-06-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14422>CVE-2020-14422</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14422">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14422</a></p>
<p>Release Date: 2020-06-18</p>
<p>Fix Resolution: v3.5.10,v3.6.12,v3.7.9,v3.8.4v3.9.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve medium detected in python tar autoclosed cve medium severity vulnerability vulnerable library python tar general purpose programming language library home page a href path to dependency file proteomicslfq environment yml path to vulnerable library home wss scanner pkgs python tar pkgs python tar dependency hierarchy x python tar vulnerable library found in head commit a href vulnerability details lib ipaddress py in python through improperly computes hash values in the and classes which might allow a remote attacker to cause a denial of service if an application is affected by the performance of a dictionary containing or objects and this attacker can cause many dictionary entries to be created this is fixed in publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |