Column       | Dtype   | Range / classes
Unnamed: 0   | int64   | 0 to 832k
id           | float64 | 2.49B to 32.1B
type         | string  | 1 class
created_at   | string  | length 19
repo         | string  | length 4 to 112
repo_url     | string  | length 33 to 141
action       | string  | 3 classes
title        | string  | length 1 to 1.02k
labels       | string  | length 4 to 1.54k
body         | string  | length 1 to 262k
index        | string  | 17 classes
text_combine | string  | length 95 to 262k
label        | string  | 2 classes
text         | string  | length 96 to 252k
binary_label | int64   | 0 to 1
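The schema above maps naturally onto a pandas DataFrame. A minimal sketch, assuming pandas is available (the sample row below is hypothetical and mirrors the first record; the long text fields are elided as "..."):

```python
import pandas as pd

# Hypothetical one-row sample mirroring the schema summarized above.
row = {
    "Unnamed: 0": 277943,
    "id": 8_634_681_157.0,          # float64 per the schema
    "type": "IssuesEvent",
    "created_at": "2018-11-22 17:51:09",
    "repo": "apollographql/apollo-client",
    "repo_url": "https://api.github.com/repos/apollographql/apollo-client",
    "action": "closed",
    "title": "...",
    "labels": "bug confirmed high-priority",
    "body": "...",
    "index": "1.0",                 # stored as a string class, not a number
    "text_combine": "...",
    "label": "non_test",
    "text": "...",
    "binary_label": 0,              # int64, 0 or 1
}
df = pd.DataFrame([row])
print(df.dtypes)
```

The inferred dtypes match the summary: the numeric columns come out as `int64`/`float64` and every text column as pandas' generic `object` dtype.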
Unnamed: 0 = 277,943
id = 8,634,681,157
type = IssuesEvent
created_at = 2018-11-22 17:51:09
repo = apollographql/apollo-client
repo_url = https://api.github.com/repos/apollographql/apollo-client
action = closed
title = has been blocked by CORS policy: Request header field apollographql-client-version is not allowed by Access-Control-Allow-Headers in preflight response.
labels = bug confirmed high-priority
body = **Actual outcome:** the request headers preflight is broke with the follow answer *has been blocked by CORS policy: Request header field apollographql-client-version is not allowed by Access-Control-Allow-Headers in preflight response*. **How to reproduce the issue:** Installing dependencies via yarn install **Versions** react-apollo: "^2.2.2
index = 1.0
text_combine = has been blocked by CORS policy: Request header field apollographql-client-version is not allowed by Access-Control-Allow-Headers in preflight response. - **Actual outcome:** the request headers preflight is broke with the follow answer *has been blocked by CORS policy: Request header field apollographql-client-version is not allowed by Access-Control-Allow-Headers in preflight response*. **How to reproduce the issue:** Installing dependencies via yarn install **Versions** react-apollo: "^2.2.2
label = non_test
text = has been blocked by cors policy request header field apollographql client version is not allowed by access control allow headers in preflight response actual outcome the request headers preflight is broke with the follow answer has been blocked by cors policy request header field apollographql client version is not allowed by access control allow headers in preflight response how to reproduce the issue installing dependencies via yarn install versions react apollo
binary_label = 0
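In every record, `text_combine` appears to be the title and body joined with " - ", and `text` appears to be an aggressively normalized version of that string (lowercased, with URLs, digits, and punctuation stripped and whitespace collapsed). The exact cleaning pipeline is not given anywhere in this dump; the following is a rough sketch that reproduces its general shape (function names are mine):

```python
import re

def combine(title: str, body: str) -> str:
    # text_combine looks like the title and body joined with " - ".
    return f"{title} - {body}"

def normalize(text: str) -> str:
    # Rough approximation of the cleaning seen in the `text` column:
    # drop URLs, lowercase, strip digits/punctuation, collapse whitespace.
    # (The real pipeline also keeps some non-ASCII characters this drops.)
    text = re.sub(r"https?://\S+", " ", text)
    text = text.lower()
    text = re.sub(r"[^a-z\s]", " ", text)
    return " ".join(text.split())

print(normalize('react-apollo: "^2.2.2'))  # -> react apollo
```

Applied to the tail of the record above, this yields "react apollo", matching how `react-apollo: "^2.2.2` ends up in the `text` field.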
Unnamed: 0 = 105,819
id = 23,120,985,060
type = IssuesEvent
created_at = 2022-07-27 21:27:55
repo = redhat-developer/quarkus-ls
repo_url = https://api.github.com/repos/redhat-developer/quarkus-ls
action = closed
title = CodeActions to create Java field / getter method / template extension
labels = code action qute
body = Given this Qute template: ``` {@org.acme.Item item} {item.XXX} ``` `XXX` property appears as an error. It should be nice to provide 3 code actions to fix the problem: * Create Java field `public String XXX` in `org.acme.Item` class. * Create Java getter method `public String getXXX() {return this.XXX}` in `org.acme.Item` class. * Create the template extension `@TemplateExtension public String static getXXX(Item item)` in a new class or in an existing class which defines already some template extension.
index = 1.0
text_combine = CodeActions to create Java field / getter method / template extension - Given this Qute template: ``` {@org.acme.Item item} {item.XXX} ``` `XXX` property appears as an error. It should be nice to provide 3 code actions to fix the problem: * Create Java field `public String XXX` in `org.acme.Item` class. * Create Java getter method `public String getXXX() {return this.XXX}` in `org.acme.Item` class. * Create the template extension `@TemplateExtension public String static getXXX(Item item)` in a new class or in an existing class which defines already some template extension.
label = non_test
text = codeactions to create java field getter method template extension given this qute template org acme item item item xxx xxx property appears as an error it should be nice to provide code actions to fix the problem create java field public string xxx in org acme item class create java getter method public string getxxx return this xxx in org acme item class create the template extension templateextension public string static getxxx item item in a new class or in an existing class which defines already some template extension
binary_label = 0
Unnamed: 0 = 167,805
id = 13,043,244,871
type = IssuesEvent
created_at = 2020-07-29 00:56:53
repo = microsoft/azuredatastudio
repo_url = https://api.github.com/repos/microsoft/azuredatastudio
action = closed
title = A11y_AzureDataStudio_Dashboard_Home-Toolbar_ScreenReader : Unnecessary state as 'Selected' is announced and the expand/collapse state is not announced for "Toggle More" control for screen reader user.
labels = A11ys_July27_2020_TestPass Area - Dashboard Bug Triage: Done
body = **"[Check out Accessibility Insights!](https://nam06.safelinks.protection.outlook.com/?url=https://accessibilityinsights.io/&data=02%7c01%7cv-manai%40microsoft.com%7cb67b2c4b646d4f9561a208d6f4b5c39b%7c72f988bf86f141af91ab2d7cd011db47%7c1%7c0%7c636965458847260936&sdata=T26HQfSGOlnuRQdX%2ByXk%2B2bxqgwFvCIVfuboZUWidYY%3D&reserved=0) - Identify accessibility bugs before check-in and make bug fixing faster and easier.”** GitHubTags:#A11y_AzureDataStudio_July2020;#A11yMAS;#A11yTCS;#SQL Azure Data Studio;#Benchmark;#MAC;#Screenreader;#VoiceOver;#A11ySev2;#Benchmark;#MAS1.3.1;#MAS4.2.1; ### Environment Details: Application Name: Azure Data Studio Application Version: 1.21.0-insider Commit: eccf3cf Date: 2020-07-24T09:28:31.172Z VS Code: 1.48.0 Electron: 9.1.0 Chrome: 83.0.4103.122 Node.js: 12.14.1 V8: 8.3.110.13-electron.0 OS: Darwin x64 19.6.0 Operating system: macOS Catalina (Version 10.15.6 (19G73) Screen Reader: VoiceOver MAS References: MAS1.3.1, MAS4.2.1 ### Repro Steps: 1. Launch Azure Data Studio Insiders application. 2. Connect to server. 3. Double click on connected server or right click on it & select manage option to open the Dashboard. 4. Navigate to Home under Dashboard & hit enter. 5. Start screen reader, the navigate to "Toggle More" control and listen if the proper state is announced for not. ### Actual: When screen reader users navigate to the "Toggle More" control, it's state is announced as 'selected' which is incorrect state that is announced. ### Expected: The "Toggle More" control should be provided with expand/collapse state for screen reader users so that users are able to identify it's state on interacting with it. ### User Impact: If proper state of the control is not announced to the scree reader users the they will not understand how to interact with that control. ### Attachment link for Reference: [11552_A11y_AzureDataStudio_Dashboard_Home-Toolbar_ScreenReader Unnecessary state as 'Selected' is announced and the expand:collapse state is not announced for "Toggle More" control for screen reader user.zip](https://github.com/microsoft/azuredatastudio/files/4986909/11552_A11y_AzureDataStudio_Dashboard_Home-Toolbar_ScreenReader.Unnecessary.state.as.Selected.is.announced.and.the.expand.collapse.state.is.not.announced.for.Toggle.More.control.for.screen.reader.user.zip)
index = 1.0
text_combine = A11y_AzureDataStudio_Dashboard_Home-Toolbar_ScreenReader : Unnecessary state as 'Selected' is announced and the expand/collapse state is not announced for "Toggle More" control for screen reader user. - **"[Check out Accessibility Insights!](https://nam06.safelinks.protection.outlook.com/?url=https://accessibilityinsights.io/&data=02%7c01%7cv-manai%40microsoft.com%7cb67b2c4b646d4f9561a208d6f4b5c39b%7c72f988bf86f141af91ab2d7cd011db47%7c1%7c0%7c636965458847260936&sdata=T26HQfSGOlnuRQdX%2ByXk%2B2bxqgwFvCIVfuboZUWidYY%3D&reserved=0) - Identify accessibility bugs before check-in and make bug fixing faster and easier.”** GitHubTags:#A11y_AzureDataStudio_July2020;#A11yMAS;#A11yTCS;#SQL Azure Data Studio;#Benchmark;#MAC;#Screenreader;#VoiceOver;#A11ySev2;#Benchmark;#MAS1.3.1;#MAS4.2.1; ### Environment Details: Application Name: Azure Data Studio Application Version: 1.21.0-insider Commit: eccf3cf Date: 2020-07-24T09:28:31.172Z VS Code: 1.48.0 Electron: 9.1.0 Chrome: 83.0.4103.122 Node.js: 12.14.1 V8: 8.3.110.13-electron.0 OS: Darwin x64 19.6.0 Operating system: macOS Catalina (Version 10.15.6 (19G73) Screen Reader: VoiceOver MAS References: MAS1.3.1, MAS4.2.1 ### Repro Steps: 1. Launch Azure Data Studio Insiders application. 2. Connect to server. 3. Double click on connected server or right click on it & select manage option to open the Dashboard. 4. Navigate to Home under Dashboard & hit enter. 5. Start screen reader, the navigate to "Toggle More" control and listen if the proper state is announced for not. ### Actual: When screen reader users navigate to the "Toggle More" control, it's state is announced as 'selected' which is incorrect state that is announced. ### Expected: The "Toggle More" control should be provided with expand/collapse state for screen reader users so that users are able to identify it's state on interacting with it. ### User Impact: If proper state of the control is not announced to the scree reader users the they will not understand how to interact with that control. ### Attachment link for Reference: [11552_A11y_AzureDataStudio_Dashboard_Home-Toolbar_ScreenReader Unnecessary state as 'Selected' is announced and the expand:collapse state is not announced for "Toggle More" control for screen reader user.zip](https://github.com/microsoft/azuredatastudio/files/4986909/11552_A11y_AzureDataStudio_Dashboard_Home-Toolbar_ScreenReader.Unnecessary.state.as.Selected.is.announced.and.the.expand.collapse.state.is.not.announced.for.Toggle.More.control.for.screen.reader.user.zip)
label = test
text = azuredatastudio dashboard home toolbar screenreader unnecessary state as selected is announced and the expand collapse state is not announced for toggle more control for screen reader user identify accessibility bugs before check in and make bug fixing faster and easier ” githubtags azuredatastudio sql azure data studio benchmark mac screenreader voiceover benchmark environment details application name azure data studio application version insider commit date vs code electron chrome node js electron os darwin operating system macos catalina version screen reader voiceover mas references repro steps launch azure data studio insiders application connect to server double click on connected server or right click on it select manage option to open the dashboard navigate to home under dashboard hit enter start screen reader the navigate to toggle more control and listen if the proper state is announced for not actual when screen reader users navigate to the toggle more control it s state is announced as selected which is incorrect state that is announced expected the toggle more control should be provided with expand collapse state for screen reader users so that users are able to identify it s state on interacting with it user impact if proper state of the control is not announced to the scree reader users the they will not understand how to interact with that control attachment link for reference
binary_label = 1
Unnamed: 0 = 136,997
id = 11,094,851,232
type = IssuesEvent
created_at = 2019-12-16 07:37:19
repo = Mirco469/ProgettoSushi
repo_url = https://api.github.com/repos/Mirco469/ProgettoSushi
action = closed
title = Test generale parte front-end
labels = Lato Utente Test
body = Controllare sia modalita' desktop che mobile come va il sito. Fare finta di doverlo usare. **Aspettare #379. Aspettare che venga messo lo slideshow.** Ricordare di testare anche la pagina di errore e di successo. Usare https://mirco469.github.io/ProgettoSushi/Sito/home_utente.html per il cellulare. Ad eccezione della home che ha l'immagine troppo grande e si sfancula tutto il resto sono errori. Cercare di testare tutti i link.
index = 1.0
text_combine = Test generale parte front-end - Controllare sia modalita' desktop che mobile come va il sito. Fare finta di doverlo usare. **Aspettare #379. Aspettare che venga messo lo slideshow.** Ricordare di testare anche la pagina di errore e di successo. Usare https://mirco469.github.io/ProgettoSushi/Sito/home_utente.html per il cellulare. Ad eccezione della home che ha l'immagine troppo grande e si sfancula tutto il resto sono errori. Cercare di testare tutti i link.
label = test
text = test generale parte front end controllare sia modalita desktop che mobile come va il sito fare finta di doverlo usare aspettare aspettare che venga messo lo slideshow ricordare di testare anche la pagina di errore e di successo usare per il cellulare ad eccezione della home che ha l immagine troppo grande e si sfancula tutto il resto sono errori cercare di testare tutti i link
binary_label = 1
Unnamed: 0 = 10,354
id = 3,103,700,879
type = IssuesEvent
created_at = 2015-08-31 11:53:09
repo = Jaspersoft/jrs-rest-java-client
repo_url = https://api.github.com/repos/Jaspersoft/jrs-rest-java-client
action = closed
title = Add additional integration tests for Settings Service
labels = enhancement unit/integration test
body = Regarding the request #86, it would be nice if we add additional integration tests for Setting Service. We can also put them into the docs to show user how to use Settings Service easily.
index = 1.0
text_combine = Add additional integration tests for Settings Service - Regarding the request #86, it would be nice if we add additional integration tests for Setting Service. We can also put them into the docs to show user how to use Settings Service easily.
label = test
text = add additional integration tests for settings service regarding the request it would be nice if we add additional integration tests for setting service we can also put them into the docs to show user how to use settings service easily
binary_label = 1
Unnamed: 0 = 241,167
id = 20,104,416,674
type = IssuesEvent
created_at = 2022-02-07 09:06:12
repo = ANL-Braid/DB
repo_url = https://api.github.com/repos/ANL-Braid/DB
action = closed
title = Updating various shell scripts to run python scripts directly
labels = tests
body = Rather than having to setup environment etc in the shell scripts, make the python utility files (including workflows) directly runnable.
index = 1.0
text_combine = Updating various shell scripts to run python scripts directly - Rather than having to setup environment etc in the shell scripts, make the python utility files (including workflows) directly runnable.
label = test
text = updating various shell scripts to run python scripts directly rather than having to setup environment etc in the shell scripts make the python utility files including workflows directly runnable
binary_label = 1
Unnamed: 0 = 173,208
id = 13,391,209,722
type = IssuesEvent
created_at = 2020-09-02 22:02:42
repo = dapr/dapr
repo_url = https://api.github.com/repos/dapr/dapr
action = closed
title = [E2E Scenario Tests] for Actor State APIs
labels = P1 area/test/e2e size/XS
body = <!-- If you need to report a security issue with Dapr, send an email to [email protected]. --> ## In what area(s)? <!-- Remove the '> ' to select --> > /area runtime > /area operator > /area placement > /area docs /area test-and-release ## Describe the feature Add E2E tests for Actor State APIs
index = 1.0
text_combine = [E2E Scenario Tests] for Actor State APIs - <!-- If you need to report a security issue with Dapr, send an email to [email protected]. --> ## In what area(s)? <!-- Remove the '> ' to select --> > /area runtime > /area operator > /area placement > /area docs /area test-and-release ## Describe the feature Add E2E tests for Actor State APIs
label = test
text = for actor state apis in what area s to select area runtime area operator area placement area docs area test and release describe the feature add tests for actor state apis
binary_label = 1
Unnamed: 0 = 467
id = 2,502,210,105
type = IssuesEvent
created_at = 2015-01-09 05:18:33
repo = ajency/Foodstree
repo_url = https://api.github.com/repos/ajency/Foodstree
action = closed
title = Separate orders for each seller is displayed in customer order screen
labels = Bug Pushed to test site
body = Steps: 1. Added items in the cart of two different sellers( 1 item for each seller) 2. Placed a order. 3, Clicked on 'my account' to view the order details Current behaviour: Three different order are shown to the customer Expected: The customer should see it as one single order ![diffrent orders](https://cloud.githubusercontent.com/assets/9797052/5578482/16ac25bc-904e-11e4-88b9-53eb798639d6.png)
index = 1.0
text_combine = Separate orders for each seller is displayed in customer order screen - Steps: 1. Added items in the cart of two different sellers( 1 item for each seller) 2. Placed a order. 3, Clicked on 'my account' to view the order details Current behaviour: Three different order are shown to the customer Expected: The customer should see it as one single order ![diffrent orders](https://cloud.githubusercontent.com/assets/9797052/5578482/16ac25bc-904e-11e4-88b9-53eb798639d6.png)
label = test
text = separate orders for each seller is displayed in customer order screen steps added items in the cart of two different sellers item for each seller placed a order clicked on my account to view the order details current behaviour three different order are shown to the customer expected the customer should see it as one single order
binary_label = 1
Unnamed: 0 = 95,390
id = 3,947,030,928
type = IssuesEvent
created_at = 2016-04-28 08:12:40
repo = raml-org/raml-js-parser-2
repo_url = https://api.github.com/repos/raml-org/raml-js-parser-2
action = closed
title = Parsing inexisting file could throw or return a "readable" error
labels = bug priority:high
body = Sorry if this is not the place to "ask a question / make a suggestion" like that. I have tried the following *really* useless code: ``` 'use strict'; var raml = require( 'raml-1-0-parser' ); try { var foo = raml.loadApi( './file-that-does-not-exists.file' ); console.log( foo ); } catch( e ) { console.log( e ); } ``` Although I realize the code is not realistic, the error (mistyping a file path) is. I wanted to know what happens in this situation while trying to determine the error behavior of raml-js-parser-2 to use it in another project. The result I have had is that `null` is returned in `foo` and printed to the console. No error is thrown and nothing to indicate the source of the error. If this is normal behavior, I would like to suggest that it should not be. Wouldn't it be possible to throw an error saying that the file does not exist.
index = 1.0
text_combine = Parsing inexisting file could throw or return a "readable" error - Sorry if this is not the place to "ask a question / make a suggestion" like that. I have tried the following *really* useless code: ``` 'use strict'; var raml = require( 'raml-1-0-parser' ); try { var foo = raml.loadApi( './file-that-does-not-exists.file' ); console.log( foo ); } catch( e ) { console.log( e ); } ``` Although I realize the code is not realistic, the error (mistyping a file path) is. I wanted to know what happens in this situation while trying to determine the error behavior of raml-js-parser-2 to use it in another project. The result I have had is that `null` is returned in `foo` and printed to the console. No error is thrown and nothing to indicate the source of the error. If this is normal behavior, I would like to suggest that it should not be. Wouldn't it be possible to throw an error saying that the file does not exist.
label = non_test
text = parsing inexisting file could throw or return a readable error sorry if this is not the place to ask a question make a suggestion like that i have tried the following really useless code use strict var raml require raml parser try var foo raml loadapi file that does not exists file console log foo catch e console log e although i realize the code is not realistic the error mistyping a file path is i wanted to know what happens in this situation while trying to determine the error behavior of raml js parser to use it in another project the result i have had is that null is returned in foo and printed to the console no error is thrown and nothing to indicate the source of the error if this is normal behavior i would like to suggest that it should not be wouldn t it be possible to throw an error saying that the file does not exist
binary_label = 0
Unnamed: 0 = 11,668
id = 3,511,685,314
type = IssuesEvent
created_at = 2016-01-10 13:38:55
repo = nodemcu/nodemcu-firmware
repo_url = https://api.github.com/repos/nodemcu/nodemcu-firmware
action = closed
title = mqtt.client:connect() secure options
labels = documentation
body = **mqtt.client sub-module issues** ***mqtt:connect( host, port, secure, function(client) )*** There is no information about the security standard used here. Is it SSL 3.0 or TLS 1.1/1.2 ? If it supports one of those standards please provide some examples where the secure flag is 1 and more details about how to use TLS or at least SSL.
index = 1.0
text_combine = mqtt.client:connect() secure options - **mqtt.client sub-module issues** ***mqtt:connect( host, port, secure, function(client) )*** There is no information about the security standard used here. Is it SSL 3.0 or TLS 1.1/1.2 ? If it supports one of those standards please provide some examples where the secure flag is 1 and more details about how to use TLS or at least SSL.
label = non_test
text = mqtt client connect secure options mqtt client sub module issues mqtt connect host port secure function client there is no information about the security standard used here is it ssl or tls if it supports one of those standards please provide some examples where the secure flag is and more details about how to use tls or at least ssl
binary_label = 0
Unnamed: 0 = 95,888
id = 8,580,374,388
type = IssuesEvent
created_at = 2018-11-13 11:47:32
repo = Kademi/kademi-dev
repo_url = https://api.github.com/repos/Kademi/kademi-dev
action = closed
title = Update manage group emails page layout
labels = Ready to Test - Dev bug enhancement
body = - split over 2 lines to give enough room for content to fit - make date sensitive - show recipient groups (included + excluded)
index = 1.0
text_combine = Update manage group emails page layout - - split over 2 lines to give enough room for content to fit - make date sensitive - show recipient groups (included + excluded)
label = test
text = update manage group emails page layout split over lines to give enough room for content to fit make date sensitive show recipient groups included excluded
binary_label = 1
Unnamed: 0 = 258,241
id = 22,295,043,677
type = IssuesEvent
created_at = 2022-06-12 23:07:19
repo = ParadoxAlarmInterface/pai
repo_url = https://api.github.com/repos/ParadoxAlarmInterface/pai
action = closed
title = Wrong information on README? (It *is* working with EVO 7.50+)
labels = question testing required to be confirmed stale
body = ## Alarm system **EVO 192 256K (DIGIPLEX)** Firmware 7.52.001 Bootloader 1.00.015 **IP150** Firmware 1.34.000 Bootloader 2.12.001 **BabyWare 5.4.26** _Information from In-Field Paradox Upgrade Software._ ## Describe the bug The README of project warns: **Do not upgrade EVO firmware versions to 7.50.000+. Process is irreversible! Paradox introduces serial communication encryption which most probably will break our PAI ability to talk to the panel.** However, I got myself a new EVO 192 panel which came with 7.52.001. **And PAI is working fine with it.** (I'm using via home assistant). Has anybody more information about this? It seems clearly misleading or incomplete. What I discovered, so far: The [firmware changelog](https://www.paradox.com/DotNetApp/FirmwareUpdate/FirmwareUpdate.aspx?SUBCATID=68&group_id=5) available on paradox site indeed says: ``` V7.50.011(released on April 21, 2021) Firmware Download V7.50.011 What's new - Added serial port encryption - the serial output will be operational only with Paradox devices (IP150/+, PCS250/260/265/265LTE, USB307) - Added serial port unlock license key for 3rd party communication devices ( license can be purchased from Insite Gold Installer Menu) - The unlock license key needs to be entered in keypad programming only, section 3000 – (Only TM50/TM70 keypads can be used - A-Z characters needed) - Panel programming is compatible with BabyWare 5.4.26 version and up - This firmware version cannot be downgraded ``` But, as I said, it is working. Maybe it is working because my IP150 is running a VERY old firmware version? (pre-2016) Sadly, the [IP150 firmware changelog](https://www.paradox.com/DotNetApp/FirmwareUpdate/FirmwareUpdate.aspx?SUBCATID=38&group_id=64) doesn't mention anything about cryptographic communications. So, I don't know what would be a "safe" version to upgrade it. I'm using it on a test panel (not in production) with almost all default values (did a hard reset) except for users/zones/areas.
index = 1.0
text_combine = Wrong information on README? (It *is* working with EVO 7.50+) - ## Alarm system **EVO 192 256K (DIGIPLEX)** Firmware 7.52.001 Bootloader 1.00.015 **IP150** Firmware 1.34.000 Bootloader 2.12.001 **BabyWare 5.4.26** _Information from In-Field Paradox Upgrade Software._ ## Describe the bug The README of project warns: **Do not upgrade EVO firmware versions to 7.50.000+. Process is irreversible! Paradox introduces serial communication encryption which most probably will break our PAI ability to talk to the panel.** However, I got myself a new EVO 192 panel which came with 7.52.001. **And PAI is working fine with it.** (I'm using via home assistant). Has anybody more information about this? It seems clearly misleading or incomplete. What I discovered, so far: The [firmware changelog](https://www.paradox.com/DotNetApp/FirmwareUpdate/FirmwareUpdate.aspx?SUBCATID=68&group_id=5) available on paradox site indeed says: ``` V7.50.011(released on April 21, 2021) Firmware Download V7.50.011 What's new - Added serial port encryption - the serial output will be operational only with Paradox devices (IP150/+, PCS250/260/265/265LTE, USB307) - Added serial port unlock license key for 3rd party communication devices ( license can be purchased from Insite Gold Installer Menu) - The unlock license key needs to be entered in keypad programming only, section 3000 – (Only TM50/TM70 keypads can be used - A-Z characters needed) - Panel programming is compatible with BabyWare 5.4.26 version and up - This firmware version cannot be downgraded ``` But, as I said, it is working. Maybe it is working because my IP150 is running a VERY old firmware version? (pre-2016) Sadly, the [IP150 firmware changelog](https://www.paradox.com/DotNetApp/FirmwareUpdate/FirmwareUpdate.aspx?SUBCATID=38&group_id=64) doesn't mention anything about cryptographic communications. So, I don't know what would be a "safe" version to upgrade it. I'm using it on a test panel (not in production) with almost all default values (did a hard reset) except for users/zones/areas.
label = test
text = wrong information on readme it is working with evo alarm system evo digiplex firmware bootloader firmware bootloader babyware information from in field paradox upgrade software describe the bug the readme of project warns do not upgrade evo firmware versions to process is irreversible paradox introduces serial communication encryption which most probably will break our pai ability to talk to the panel however i got myself a new evo panel which came with and pai is working fine with it i m using via home assistant has anybody more information about this it seems clearly misleading or incomplete what i discovered so far the available on paradox site indeed says released on april firmware download what s new added serial port encryption the serial output will be operational only with paradox devices added serial port unlock license key for party communication devices license can be purchased from insite gold installer menu the unlock license key needs to be entered in keypad programming only section – only keypads can be used a z characters needed panel programming is compatible with babyware version and up this firmware version cannot be downgraded but as i said it is working maybe it is working because my is running a very old firmware version pre sadly the doesn t mention anything about cryptographic communications so i don t know what would be a safe version to upgrade it i m using it on a test panel not in production with almost all default values did a hard reset except for users zones areas
binary_label = 1
Unnamed: 0 = 18,994
id = 2,616,016,907
type = IssuesEvent
created_at = 2015-03-02 00:59:01
repo = jasonhall/bwapi
repo_url = https://api.github.com/repos/jasonhall/bwapi
action = closed
title = Update Color palette for tileset
labels = auto-migrated NewFeature Priority-None Type-Enhancement
body = ``` Update the color palette depending on the tileset. ``` Original issue reported on code.google.com by `AHeinerm` on 16 Apr 2011 at 11:18
index = 1.0
text_combine = Update Color palette for tileset - ``` Update the color palette depending on the tileset. ``` Original issue reported on code.google.com by `AHeinerm` on 16 Apr 2011 at 11:18
label = non_test
text = update color palette for tileset update the color palette depending on the tileset original issue reported on code google com by aheinerm on apr at
binary_label = 0
Unnamed: 0 = 346,083
id = 30,865,664,542
type = IssuesEvent
created_at = 2023-08-03 07:52:22
repo = PerfectFit-project/virtual-coach-issues
repo_url = https://api.github.com/repos/PerfectFit-project/virtual-coach-issues
action = closed
title = Initially incomplete self-initiated HRS dialog keeps being available via 'verder' after completion
labels = bug testing ticket
body = I did not complete the HRS dialog. I restarted it via 'verder' and completed it. But if after completing I type 'verder', the HRS dialog starts again. So it seems that the dialog is not marked as completed. See here: https://github.com/PerfectFit-project/testing-tickets/issues/26.
index = 1.0
text_combine = Initially incomplete self-initiated HRS dialog keeps being available via 'verder' after completion - I did not complete the HRS dialog. I restarted it via 'verder' and completed it. But if after completing I type 'verder', the HRS dialog starts again. So it seems that the dialog is not marked as completed. See here: https://github.com/PerfectFit-project/testing-tickets/issues/26.
label = test
text = initially incomplete self initiated hrs dialog keeps being available via verder after completion i did not complete the hrs dialog i restarted it via verder and completed it but if after completing i type verder the hrs dialog starts again so it seems that the dialog is not marked as completed see here
binary_label = 1
Unnamed: 0 = 196,604
id = 14,881,250,283
type = IssuesEvent
created_at = 2021-01-20 10:15:04
repo = wix/wix-style-react
repo_url = https://api.github.com/repos/wix/wix-style-react
action = closed
title = Testkit Smoketest fail for nested comopnents
labels = Bug Priority: Low Testkit testing issues
body = # 🐛 Bug Report ### 🏗 Relevant Components [testkit-smoke.test.js ](https://github.com/wix/wix-style-react/blob/4b015f398cc60948234848227fec532a6ee9270f/testkit/testkit-smoke.test.js) ### 😯 Current Behavior Nested comopnents, components which their main dir is different from their name, for example: [EditableRow](https://github.com/wix/wix-style-react/blob/b5fd3bc23078087d5caa017e18044cb6d73a7fd8/src/EditableSelector/EditableRow/EditableRow.driver.js) fail in sanity test. They are currently marked with "skipSanityTest: true" inside "testkit-definitions.js" to prevent the test from failing. ### 🤔 Expected Behavior Tests should not be marked with skipSanityTest:true, but should run and pass. ### 👣 Steps to Reproduce go to [testkit-defenitions.js](https://github.com/wix/wix-style-react/blob/e0575683e84964dfaeca048279b5ede303ea3026/testkit/testkit-definitions.js) and remove "skipSanityTest" from a a nested comopnent. ### 👀 Severity Low/Major
index = 2.0
text_combine = Testkit Smoketest fail for nested comopnents - # 🐛 Bug Report ### 🏗 Relevant Components [testkit-smoke.test.js ](https://github.com/wix/wix-style-react/blob/4b015f398cc60948234848227fec532a6ee9270f/testkit/testkit-smoke.test.js) ### 😯 Current Behavior Nested comopnents, components which their main dir is different from their name, for example: [EditableRow](https://github.com/wix/wix-style-react/blob/b5fd3bc23078087d5caa017e18044cb6d73a7fd8/src/EditableSelector/EditableRow/EditableRow.driver.js) fail in sanity test. They are currently marked with "skipSanityTest: true" inside "testkit-definitions.js" to prevent the test from failing. ### 🤔 Expected Behavior Tests should not be marked with skipSanityTest:true, but should run and pass. ### 👣 Steps to Reproduce go to [testkit-defenitions.js](https://github.com/wix/wix-style-react/blob/e0575683e84964dfaeca048279b5ede303ea3026/testkit/testkit-definitions.js) and remove "skipSanityTest" from a a nested comopnent. ### 👀 Severity Low/Major
label = test
text = testkit smoketest fail for nested comopnents 🐛 bug report 🏗 relevant components testkit smoke test js 😯 current behavior nested comopnents components which their main dir is different from their name for example fail in sanity test they are currently marked with skipsanitytest true inside testkit definitions js to prevent the test from failing 🤔 expected behavior tests should not be marked with skipsanitytest true but should run and pass 👣 steps to reproduce go to and remove skipsanitytest from a a nested comopnent 👀 severity low major
binary_label = 1
Unnamed: 0 = 73,517
id = 7,342,490,554
type = IssuesEvent
created_at = 2018-03-07 08:05:32
repo = alibaba/pouch
repo_url = https://api.github.com/repos/alibaba/pouch
action = closed
title = [bug] make unit-test failed
labels = areas/test kind/bug
body = ### Ⅰ. Issue Description make unit-test has the following error: ``` make unit-test unit-test volume_build.go:8:8: cannot find package "github.com/alibaba/pouch/volume/store/boltdb" in any of: /tmp/pouchbuild/src/github.com/alibaba/pouch/vendor/github.com/alibaba/pouch/volume/store/boltdb (vendor tree) /usr/local/go/src/github.com/alibaba/pouch/volume/store/boltdb (from $GOROOT) /tmp/pouchbuild/src/github.com/alibaba/pouch/volume/store/boltdb (from $GOPATH) /tmp/pouchbuild/src/github.com/docker/libnetwork/Godeps/_workspace/src/github.com/alibaba/pouch/volume/store/boltdb make: *** [unit-test] Error 1 ``` This is because unit-test target should depend on modules. ### Ⅱ. Describe what happened ### Ⅲ. Describe what you expected to happen ### Ⅳ. How to reproduce it (as minimally and precisely as possible) 1. 2. 3. ### Ⅴ. Anything else we need to know? ### Ⅵ. Environment: - pouch version (use `pouch version`): - OS (e.g. from /etc/os-release): - Kernel (e.g. `uname -a`): - Install tools: - Others:
index = 1.0
text_combine = [bug] make unit-test failed - ### Ⅰ. Issue Description make unit-test has the following error: ``` make unit-test unit-test volume_build.go:8:8: cannot find package "github.com/alibaba/pouch/volume/store/boltdb" in any of: /tmp/pouchbuild/src/github.com/alibaba/pouch/vendor/github.com/alibaba/pouch/volume/store/boltdb (vendor tree) /usr/local/go/src/github.com/alibaba/pouch/volume/store/boltdb (from $GOROOT) /tmp/pouchbuild/src/github.com/alibaba/pouch/volume/store/boltdb (from $GOPATH) /tmp/pouchbuild/src/github.com/docker/libnetwork/Godeps/_workspace/src/github.com/alibaba/pouch/volume/store/boltdb make: *** [unit-test] Error 1 ``` This is because unit-test target should depend on modules. ### Ⅱ. Describe what happened ### Ⅲ. Describe what you expected to happen ### Ⅳ. How to reproduce it (as minimally and precisely as possible) 1. 2. 3. ### Ⅴ. Anything else we need to know? ### Ⅵ. Environment: - pouch version (use `pouch version`): - OS (e.g. from /etc/os-release): - Kernel (e.g. `uname -a`): - Install tools: - Others:
label = test
text = make unit test failed ⅰ issue description make unit test has the following error make unit test unit test volume build go cannot find package github com alibaba pouch volume store boltdb in any of tmp pouchbuild src github com alibaba pouch vendor github com alibaba pouch volume store boltdb vendor tree usr local go src github com alibaba pouch volume store boltdb from goroot tmp pouchbuild src github com alibaba pouch volume store boltdb from gopath tmp pouchbuild src github com docker libnetwork godeps workspace src github com alibaba pouch volume store boltdb make error this is because unit test target should depend on modules ⅱ describe what happened ⅲ describe what you expected to happen ⅳ how to reproduce it as minimally and precisely as possible ⅴ anything else we need to know ⅵ environment pouch version use pouch version os e g from etc os release kernel e g uname a install tools others
binary_label = 1
167,421
13,024,815,524
IssuesEvent
2020-07-27 12:32:41
googleapis/google-cloud-cpp
https://api.github.com/repos/googleapis/google-cloud-cpp
closed
implement testing `Status` matchers
priority: p3 testing type: feature request
Implement matchers that can check a `google::cloud::Status` in a single `EXPECT` statement. We shouldn't spend too much effort on this until we resolve whether to switch to `absl::Status` (#4375), but we can implement our matchers in ways that are similar to (more specifically, subsets of) absl matchers to help the transition if we do go that direction.
1.0
implement testing `Status` matchers - Implement matchers that can check a `google::cloud::Status` in a single `EXPECT` statement. We shouldn't spend too much effort on this until we resolve whether to switch to `absl::Status` (#4375), but we can implement our matchers in ways that are similar to (more specifically, subsets of) absl matchers to help the transition if we do go that direction.
test
implement testing status matchers implement matchers that can check a google cloud status in a single expect statement we shouldn t spend too much effort on this until we resolve whether to switch to absl status but we can implement our matchers in ways that are similar to more specifically subsets of absl matchers to help the transition if we do go that direction
1
62,736
3,192,998,579
IssuesEvent
2015-09-30 00:48:12
fusioninventory/fusioninventory-for-glpi
https://api.github.com/repos/fusioninventory/fusioninventory-for-glpi
closed
Entities rules for computer not right in wizard
Component: For junior contributor Component: Found in version Priority: Normal Status: Closed Tracker: Bug
--- Author Name: **David Durieux** (@ddurieux) Original Redmine Issue: 1077, http://forge.fusioninventory.org/issues/1077 Original Date: 2011-08-20 Original Assignee: David Durieux --- We have import rules items and not entities rules.
1.0
Entities rules for computer not right in wizard - --- Author Name: **David Durieux** (@ddurieux) Original Redmine Issue: 1077, http://forge.fusioninventory.org/issues/1077 Original Date: 2011-08-20 Original Assignee: David Durieux --- We have import rules items and not entities rules.
non_test
entities rules for computer not right in wizard author name david durieux ddurieux original redmine issue original date original assignee david durieux we have import rules items and not entities rules
0
133,396
5,202,492,124
IssuesEvent
2017-01-24 09:44:16
openvstorage/volumedriver
https://api.github.com/repos/openvstorage/volumedriver
opened
Edge communication on RDMA
priority_urgent SRP type_enhancement
Get the Edge to work reliably over RDMA. Please create the necessary tickets on the relevant repos.
1.0
Edge communication on RDMA - Get the Edge to work reliably over RDMA. Please create the necessary tickets on the relevant repos.
non_test
edge communication on rdma get the edge to work reliably over rdma please create the necessary tickets on the relevant repos
0
50,355
6,084,920,769
IssuesEvent
2017-06-17 09:23:59
haskell-tools/haskell-tools
https://api.github.com/repos/haskell-tools/haskell-tools
closed
NoImplicitPrelude causes transformation problem with a comment
category:bug origin:stackage-testing package:backend-ghc
Minimal example: ``` {-# LANGUAGE NoImplicitPrelude #-} -- Imports import System.Exit ```
1.0
NoImplicitPrelude causes transformation problem with a comment - Minimal example: ``` {-# LANGUAGE NoImplicitPrelude #-} -- Imports import System.Exit ```
test
noimplicitprelude causes transformation problem with a comment minimal example language noimplicitprelude imports import system exit
1
143,120
11,516,807,523
IssuesEvent
2020-02-14 06:29:56
apache/incubator-shardingsphere
https://api.github.com/repos/apache/incubator-shardingsphere
closed
Unify SQL parsing results of `VisitorParameterizedParsingTest` and `SQLParserParameterizedTest`
enhancement test
Use `SQLParserTestCasesRegistry` instead of `VisitorSQLParserTestCasesRegistryFactory` at `VisitorParameterizedParsingTest`
1.0
Unify SQL parsing results of `VisitorParameterizedParsingTest` and `SQLParserParameterizedTest` - Use `SQLParserTestCasesRegistry` instead of `VisitorSQLParserTestCasesRegistryFactory` at `VisitorParameterizedParsingTest`
test
unify sql parsing results of visitorparameterizedparsingtest and sqlparserparameterizedtest use sqlparsertestcasesregistry instead of visitorsqlparsertestcasesregistryfactory at visitorparameterizedparsingtest
1
54,816
3,071,314,747
IssuesEvent
2015-08-19 11:14:45
pavel-pimenov/flylinkdc-r5xx
https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx
closed
Fix the time display in the tray
bug imported Priority-Medium
_From [[email protected]](https://code.google.com/u/118374335061098442652/) on September 03, 2010 04:38:25_ Имеем: Флай версии r500beta16 _Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=150_
1.0
Fix the time display in the tray - _From [[email protected]](https://code.google.com/u/118374335061098442652/) on September 03, 2010 04:38:25_ We have: Fly version r500beta16 _Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=150_
non_test
fix the time display in the tray from on september we have fly version original issue
0
260,822
22,667,272,775
IssuesEvent
2022-07-03 04:19:34
Uuvana-Studios/longvinter-windows-client
https://api.github.com/repos/Uuvana-Studios/longvinter-windows-client
closed
GPU usage can't over 60% in low fps
Bug Not Tested
**Describe the bug** gpu usage can't over 60% in FHD but 60~70fps **Desktop:** - OS: Windows 11 - Game Version: 1.0.7b - Steam Version: newest version - CPU: i7-10700F - GPU: RTX 3060 12GB - RAM: samsung 16GB ram 2666mhz (overclocked to 3600mhz) x 1 - motherboard: gigabyte B560M DS3H
1.0
GPU usage can't over 60% in low fps - **Describe the bug** gpu usage can't over 60% in FHD but 60~70fps **Desktop:** - OS: Windows 11 - Game Version: 1.0.7b - Steam Version: newest version - CPU: i7-10700F - GPU: RTX 3060 12GB - RAM: samsung 16GB ram 2666mhz (overclocked to 3600mhz) x 1 - motherboard: gigabyte B560M DS3H
test
gpu usage can t over in low fps describe the bug gpu usage can t over in fhd but desktop os windows game version steam version newest version cpu gpu rtx ram samsung ram overclocked to x motherboard gigabyte
1
655,391
21,688,635,839
IssuesEvent
2022-05-09 13:35:57
DIT113-V22/group-14
https://api.github.com/repos/DIT113-V22/group-14
closed
application login screen
sprint #3 Medium Priority User Story
As a user I want there to be a login feature in the app so that my plant data is protected from other people - [x] The app shall have a login screen - [x] The screen shall follow the color and design choices made for other screens - [x] There should be a popup if you dont enter an email and/or password - [x] There shall be a forgot password button/link - [x] There shall be a join button/link - [x] There shall be an exit button/link to close the program - [x] The password field shall have a show/hide password toggle
1.0
application login screen - As a user I want there to be a login feature in the app so that my plant data is protected from other people - [x] The app shall have a login screen - [x] The screen shall follow the color and design choices made for other screens - [x] There should be a popup if you dont enter an email and/or password - [x] There shall be a forgot password button/link - [x] There shall be a join button/link - [x] There shall be an exit button/link to close the program - [x] The password field shall have a show/hide password toggle
non_test
application login screen as a user i want there to be a login feature in the app so that my plant data is protected from other people the app shall have a login screen the screen shall follow the color and design choices made for other screens there should be a popup if you dont enter an email and or password there shall be a forgot password button link there shall be a join button link there shall be an exit button link to close the program the password field shall have a show hide password toggle
0
263,561
23,067,110,249
IssuesEvent
2022-07-25 14:45:37
IRPTeam/IRP
https://api.github.com/repos/IRPTeam/IRP
closed
Feature update
need test update
- [ ] check the closing of Basis key in the TM1010B_RowIDMovements register: if it is a Receipt in the register, then in the register Basis key = Key and RowID (from the document); if it is an Expense in the register: if the Basis column is filled in, then in the register Basis key = Basis key (from the document); if the Basis column is empty, there will be no expense; this way would be more correct - [x] Two SO in one SI. When posting, the SOC for the second order complains about duplicate records in T2014S_AdvancesInfo: A record with these key fields already exists! : T2014S_AdvancesInfo: 21.05.2021 13:23:59, , ООО "ААА'', СП - ТТТТ, RUB, ПартнерАА, КонтрагентПП, , No, Yes (Information register: T2014 Advances info; Row number: 2) - [ ] The user's default settings specify Company and Branch, but the CTO specifies a different Company and Branch. When creating BP/BR/CP/CR based on the CTO, the values are substituted from the user settings rather than from the CTO - [x] At the first stage we implement entering CP/CR/BP/BR based on SO/PO; IncomingPaymentOrder and OutgoingPaymentOrder are left for later. We add an Order dimension to the advance registers and handle closing through it (orders must be specified in invoices and in payment documents). If in SO/PO, in the PaymentTerms tabular section, the CanBePaid attribute = True: the record period is PaymentTerms.Date, and we write to the registers only PaymentTerms.CalculationType = CalculationTypes.Prepaid. R3024B_SalesOrdersToBePaid - Receipt (+): Company Branch Currency Partner LegalName Order Amount. R3025B_PurchaseOrdersToBePaid - Receipt (+). BP/CP/BR/CR, if an order is selected: R3024B_SalesOrdersToBePaid - Expense (-), R3025B_PurchaseOrdersToBePaid - Expense (-). In SO the CanBePaid checkbox in the PaymentTerms tabular section is set by default from the user settings. When posting SOC/POC, the R3024B/R3025B registers are closed as well. - [ ] when the same interface group is specified both for additional attributes and for extension attributes, it cannot create that interface group again
1.0
Feature update - - [ ] check the closing of Basis key in the TM1010B_RowIDMovements register: if it is a Receipt in the register, then in the register Basis key = Key and RowID (from the document); if it is an Expense in the register: if the Basis column is filled in, then in the register Basis key = Basis key (from the document); if the Basis column is empty, there will be no expense; this way would be more correct - [x] Two SO in one SI. When posting, the SOC for the second order complains about duplicate records in T2014S_AdvancesInfo: A record with these key fields already exists! : T2014S_AdvancesInfo: 21.05.2021 13:23:59, , ООО "ААА'', СП - ТТТТ, RUB, ПартнерАА, КонтрагентПП, , No, Yes (Information register: T2014 Advances info; Row number: 2) - [ ] The user's default settings specify Company and Branch, but the CTO specifies a different Company and Branch. When creating BP/BR/CP/CR based on the CTO, the values are substituted from the user settings rather than from the CTO - [x] At the first stage we implement entering CP/CR/BP/BR based on SO/PO; IncomingPaymentOrder and OutgoingPaymentOrder are left for later. We add an Order dimension to the advance registers and handle closing through it (orders must be specified in invoices and in payment documents). If in SO/PO, in the PaymentTerms tabular section, the CanBePaid attribute = True: the record period is PaymentTerms.Date, and we write to the registers only PaymentTerms.CalculationType = CalculationTypes.Prepaid. R3024B_SalesOrdersToBePaid - Receipt (+): Company Branch Currency Partner LegalName Order Amount. R3025B_PurchaseOrdersToBePaid - Receipt (+). BP/CP/BR/CR, if an order is selected: R3024B_SalesOrdersToBePaid - Expense (-), R3025B_PurchaseOrdersToBePaid - Expense (-). In SO the CanBePaid checkbox in the PaymentTerms tabular section is set by default from the user settings. When posting SOC/POC, the R3024B/R3025B registers are closed as well. - [ ] when the same interface group is specified both for additional attributes and for extension attributes, it cannot create that interface group again
test
feature update check the closing of basis key in the rowidmovements register if it is a receipt in the register then in the register basis key key and rowid from the document if it is an expense in the register if the basis column is filled in then in the register basis key basis key from the document if the basis column is empty there will be no expense this way would be more correct two so in one si when posting the soc for the second order complains about duplicate records in advancesinfo a record with these key fields already exists advancesinfo ооо ааа сп тттт rub партнераа контрагентпп no yes information register advances info row number the user s default settings specify company and branch but the cto specifies a different company and branch when creating bp br cp cr based on the cto the values are substituted from the user settings rather than from the cto at the first stage we implement entering cp cr bp br based on so po incomingpaymentorder and outgoingpaymentorder are left for later we add an order dimension to the advance registers and handle closing through it orders must be specified in invoices and in payment documents if in so po in the paymentterms tabular section the canbepaid attribute true the record period is paymentterms date and we write to the registers only paymentterms calculationtype calculationtypes prepaid salesorderstobepaid receipt company branch currency partner legalname order amount purchaseorderstobepaid receipt bp cp br cr if an order is selected salesorderstobepaid expense purchaseorderstobepaid expense in so the canbepaid checkbox in the paymentterms tabular section is set by default from the user settings when posting soc poc the registers are closed as well when the same interface group is specified both for additional attributes and for extension attributes it cannot create that interface group again
1
13,018
3,298,622,676
IssuesEvent
2015-11-02 15:20:40
kumulsoft/Fixed-Assets
https://api.github.com/repos/kumulsoft/Fixed-Assets
closed
System Logs out 2nd time again after first login
bug Fixed HIGH Ready for testing
* When I login for the first time and tries to access a grid page, the system logouts me and I have to login again to access that grid. Unfortunately, it did not record history so did not know what page I was trying to access before it logs me out, so I had to start over again. * After login in, I am not using the system for sometimes and when trying to access a page it logs me out, so I have to login again, when login in, and trying to access any page, it logs me out again for the 2nd time.
1.0
System Logs out 2nd time again after first login - * When I login for the first time and tries to access a grid page, the system logouts me and I have to login again to access that grid. Unfortunately, it did not record history so did not know what page I was trying to access before it logs me out, so I had to start over again. * After login in, I am not using the system for sometimes and when trying to access a page it logs me out, so I have to login again, when login in, and trying to access any page, it logs me out again for the 2nd time.
test
system logs out time again after first login when i login for the first time and tries to access a grid page the system logouts me and i have to login again to access that grid unfortunately it did not record history so did not know what page i was trying to access before it logs me out so i had to start over again after login in i am not using the system for sometimes and when trying to access a page it logs me out so i have to login again when login in and trying to access any page it logs me out again for the time
1
229,184
18,286,650,552
IssuesEvent
2021-10-05 11:03:36
DILCISBoard/eark-ip-test-corpus
https://api.github.com/repos/DILCISBoard/eark-ip-test-corpus
closed
CSIP46 Test Case Description
test case
**Specification:** - **Name:** E-ARK CSIP - **Version:** 2.0-DRAFT - **URL:** http://earkcsip.dilcis.eu/ **Requirement:** - **Id:** CSIP46 - **Link:** http://earkcsip.dilcis.eu/#CSIP46 **Error Level:** ERROR **Description:** CSIP46 | Rights metadata identifier amdSec/rightsMD/@ID | An identifier for the rights metadata section (rightsMD) used for referencing inside the package. It must be unique within the package. The ID must follow the rules for xml:id described in the chapter of the textual description of CSIP named "General requirements for the use of metadata" | 1..1 MUST -- | -- | -- | --
1.0
CSIP46 Test Case Description - **Specification:** - **Name:** E-ARK CSIP - **Version:** 2.0-DRAFT - **URL:** http://earkcsip.dilcis.eu/ **Requirement:** - **Id:** CSIP46 - **Link:** http://earkcsip.dilcis.eu/#CSIP46 **Error Level:** ERROR **Description:** CSIP46 | Rights metadata identifier amdSec/rightsMD/@ID | An identifier for the rights metadata section (rightsMD) used for referencing inside the package. It must be unique within the package. The ID must follow the rules for xml:id described in the chapter of the textual description of CSIP named "General requirements for the use of metadata" | 1..1 MUST -- | -- | -- | --
test
test case description specification name e ark csip version draft url requirement id link error level error description rights metadata identifier amdsec rightsmd id an identifier for the rights metadata section rightsmd used for referencing inside the package it must be unique within the package the id must follow the rules for xml id described in the chapter of the textual description of csip named general requirements for the use of metadata must
1
73,905
7,369,082,178
IssuesEvent
2018-03-13 00:35:35
kubernetes/community
https://api.github.com/repos/kubernetes/community
closed
Soliciting Kubernetes CI requirements for Inclusive Integration with CNCF Projects
lifecycle/rotten sig/testing
> "CNCF is helping develop a cloud native software stack that enables cross-cloud deployments. Cross-project CI that ensures ongoing interoperability is especially valuable." - Dan Kohn Executive Director CNCF [[cncf-ci-public] CNCF CI Goals](https://lists.cncf.io/pipermail/cncf-ci-public/2017-February/000001.html) [[cncf-ci-public] Soliciting CI requirements via Project GitHub Issues](https://lists.cncf.io/pipermail/cncf-ci-public/2017-March/000024.html) This github issue is to provide a highly visible invite to be part of creating a cross-cloud cross-project CI within the diverse software communities of the Cloud Native Compute Foundation. To fully understand our needs and expectations, some help documenting the current state of the Kubernetes CI and ongoing requirements of the Kubernetes community would be useful. https://github.com/cncf/wg-ci/blob/master/projects/kubernetes.mkd As we collect Kubernetes and other project CI requirements, we'll use the @cncf/cncf-ci-working-group issue at https://github.com/cncf/wg-ci/issues/12 and encourage you to join the discussion on the [cncf-ci Mailing List](https://lists.cncf.io/mailman/listinfo/cncf-ci-public)
1.0
Soliciting Kubernetes CI requirements for Inclusive Integration with CNCF Projects - > "CNCF is helping develop a cloud native software stack that enables cross-cloud deployments. Cross-project CI that ensures ongoing interoperability is especially valuable." - Dan Kohn Executive Director CNCF [[cncf-ci-public] CNCF CI Goals](https://lists.cncf.io/pipermail/cncf-ci-public/2017-February/000001.html) [[cncf-ci-public] Soliciting CI requirements via Project GitHub Issues](https://lists.cncf.io/pipermail/cncf-ci-public/2017-March/000024.html) This github issue is to provide a highly visible invite to be part of creating a cross-cloud cross-project CI within the diverse software communities of the Cloud Native Compute Foundation. To fully understand our needs and expectations, some help documenting the current state of the Kubernetes CI and ongoing requirements of the Kubernetes community would be useful. https://github.com/cncf/wg-ci/blob/master/projects/kubernetes.mkd As we collect Kubernetes and other project CI requirements, we'll use the @cncf/cncf-ci-working-group issue at https://github.com/cncf/wg-ci/issues/12 and encourage you to join the discussion on the [cncf-ci Mailing List](https://lists.cncf.io/mailman/listinfo/cncf-ci-public)
test
soliciting kubernetes ci requirements for inclusive integration with cncf projects cncf is helping develop a cloud native software stack that enables cross cloud deployments cross project ci that ensures ongoing interoperability is especially valuable dan kohn executive director cncf cncf ci goals soliciting ci requirements via project github issues this github issue is to provide a highly visible invite to be part of creating a cross cloud cross project ci within the diverse software communities of the cloud native compute foundation to fully understand our needs and expectations some help documenting the current state of the kubernetes ci and ongoing requirements of the kubernetes community would be useful as we collect kubernetes and other project ci requirements we ll use the cncf cncf ci working group issue at and encourage you to join the discussion on the
1
631,416
20,151,602,984
IssuesEvent
2022-02-09 12:57:33
ita-social-projects/horondi_client_fe
https://api.github.com/repos/ita-social-projects/horondi_client_fe
closed
[Your Cart-Checkout page] translation in English 'Nova post' delivery is not correct
bug UI priority: medium severity: minor
Environment: Windows 10 Home edition, 20H2 browser Google Chrome Version 91.0.4472.77 (Official Build) (64-bit). Reproducible: always. Build found: last commit 5e32bd7" Preconditions: 1. Go to https://horondi-front-staging.azurewebsites.net/ 2. Choose a product from main menu catalog (e.g.- Bag shopper) 3. Add the chosen product to the shopping cart 4. Click the button “Buy now” Steps to reproduce: 1. Click on the cart label in the right top corner of the page 2. Click ‘Go to checkout’ button on the Your Cart-Checkout page 3. Choose Nova post delivery type 4. Click the Checkout button 5. Pay attention to the name translation information about delivery on the Your Cart-Checkout page and on the Payment and delivery page Actual result: 1. ![image](https://user-images.githubusercontent.com/84592689/124613250-b3928600-de7b-11eb-8da9-6a1167b531a5.png) ![image](https://user-images.githubusercontent.com/84592689/124613282-bc835780-de7b-11eb-9967-a8a0be4ae7e6.png) Expected result: 1. The name of the delivery type should be Nova Poshta, namely the same, on the Cart-Checkout page and on the Payment and delivery page User story and test case links E.g.: "User story #148 Labels to be added "Bug", Priority ("Medium"), Severity ("Minor"), Type ("UI,).
1.0
[Your Cart-Checkout page] translation in English 'Nova post' delivery is not correct - Environment: Windows 10 Home edition, 20H2 browser Google Chrome Version 91.0.4472.77 (Official Build) (64-bit). Reproducible: always. Build found: last commit 5e32bd7" Preconditions: 1. Go to https://horondi-front-staging.azurewebsites.net/ 2. Choose a product from main menu catalog (e.g.- Bag shopper) 3. Add the chosen product to the shopping cart 4. Click the button “Buy now” Steps to reproduce: 1. Click on the cart label in the right top corner of the page 2. Click ‘Go to checkout’ button on the Your Cart-Checkout page 3. Choose Nova post delivery type 4. Click the Checkout button 5. Pay attention to the name translation information about delivery on the Your Cart-Checkout page and on the Payment and delivery page Actual result: 1. ![image](https://user-images.githubusercontent.com/84592689/124613250-b3928600-de7b-11eb-8da9-6a1167b531a5.png) ![image](https://user-images.githubusercontent.com/84592689/124613282-bc835780-de7b-11eb-9967-a8a0be4ae7e6.png) Expected result: 1. The name of the delivery type should be Nova Poshta, namely the same, on the Cart-Checkout page and on the Payment and delivery page User story and test case links E.g.: "User story #148 Labels to be added "Bug", Priority ("Medium"), Severity ("Minor"), Type ("UI,).
non_test
translation in english nova post delivery is not correct environment windows home edition browser google chrome version official build bit reproducible always build found last commit preconditions go to choose a product from main menu catalog e g bag shopper add the chosen product to the shopping cart click the button “buy now” steps to reproduce click on the cart label in the right top corner of the page click ‘go to checkout’ button on the your cart checkout page choose nova post delivery type click the checkout button pay attention to the name translation information about delivery on the your cart checkout page and on the payment and delivery page actual result expected result the name of the delivery type should be nova poshta namely the same on the cart checkout page and on the payment and delivery page user story and test case links e g user story labels to be added bug priority medium severity minor type ui
0
42,658
5,514,901,241
IssuesEvent
2017-03-17 16:08:35
specify/specify7
https://api.github.com/repos/specify/specify7
opened
Editing records in flexible grid format.
type:design type:enhance
To give the ability that one would get by exporting records in to the WB and editing and reuploading but without the export/reupload downsides. Maybe editing in the query results grid.
1.0
Editing records in flexible grid format. - To give the ability that one would get by exporting records in to the WB and editing and reuploading but without the export/reupload downsides. Maybe editing in the query results grid.
non_test
editing records in flexible grid format to give the ability that one would get by exporting records in to the wb and editing and reuploading but without the export reupload downsides maybe editing in the query results grid
0
56,228
14,078,404,798
IssuesEvent
2020-11-04 13:31:36
themagicalmammal/android_kernel_samsung_a5xelte
https://api.github.com/repos/themagicalmammal/android_kernel_samsung_a5xelte
opened
CVE-2017-14051 (Medium) detected in linuxv3.10
security vulnerability
## CVE-2017-14051 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv3.10</b></p></summary> <p> <p>Linux kernel source tree</p> <p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/themagicalmammal/android_kernel_samsung_a5xelte/commit/738375813823cb33918102af385bdd5d82225e17">738375813823cb33918102af385bdd5d82225e17</a></p> <p>Found in base branch: <b>cosmic-1.6-experimental</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>android_kernel_samsung_a5xelte/drivers/scsi/qla2xxx/qla_attr.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An integer overflow in the qla2x00_sysfs_write_optrom_ctl function in drivers/scsi/qla2xxx/qla_attr.c in the Linux kernel through 4.12.10 allows local users to cause a denial of service (memory corruption and system crash) by leveraging root access. 
<p>Publish Date: 2017-08-31 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-14051>CVE-2017-14051</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-14051">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-14051</a></p> <p>Release Date: 2017-08-31</p> <p>Fix Resolution: v4.14-rc1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2017-14051 (Medium) detected in linuxv3.10 - ## CVE-2017-14051 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv3.10</b></p></summary> <p> <p>Linux kernel source tree</p> <p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/themagicalmammal/android_kernel_samsung_a5xelte/commit/738375813823cb33918102af385bdd5d82225e17">738375813823cb33918102af385bdd5d82225e17</a></p> <p>Found in base branch: <b>cosmic-1.6-experimental</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>android_kernel_samsung_a5xelte/drivers/scsi/qla2xxx/qla_attr.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An integer overflow in the qla2x00_sysfs_write_optrom_ctl function in drivers/scsi/qla2xxx/qla_attr.c in the Linux kernel through 4.12.10 allows local users to cause a denial of service (memory corruption and system crash) by leveraging root access. 
<p>Publish Date: 2017-08-31 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-14051>CVE-2017-14051</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-14051">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-14051</a></p> <p>Release Date: 2017-08-31</p> <p>Fix Resolution: v4.14-rc1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve medium detected in cve medium severity vulnerability vulnerable library linux kernel source tree library home page a href found in head commit a href found in base branch cosmic experimental vulnerable source files android kernel samsung drivers scsi qla attr c vulnerability details an integer overflow in the sysfs write optrom ctl function in drivers scsi qla attr c in the linux kernel through allows local users to cause a denial of service memory corruption and system crash by leveraging root access publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
41,046
5,331,112,081
IssuesEvent
2017-02-15 18:41:25
fabric8io/fabric8-planner
https://api.github.com/repos/fabric8io/fabric8-planner
closed
Repair functional UI tests - all are failing on Centos CI and when run on Docker
bug test
What I found is that the tests are failing when run through the cico script - I ran the tests manually (docker exec almighty-ui-builder ./run_functional_tests.sh) and saw the same failures. Many ![6a3c424fda3585654e8985cda73c6e0e](https://cloud.githubusercontent.com/assets/642621/22481384/075ca2b6-e7c3-11e6-8746-488258f44b4c.png) ![b30c4aafc60593dd0039b7c9be2e2d28](https://cloud.githubusercontent.com/assets/642621/22481385/075cfd1a-e7c3-11e6-83f9-be67f86b2554.png) hours later, and I still do not have an answer. The last thing I tried was to create a new (very simple) test to ensure that nothing in the existing tests was causing this failure. (See attached .js file.) This new/simple test fails too. When I ran the test with a screen-shot generator, I see that while the tests are able to find and click on the UI elements that initiate the creation of a new workitem, the resulting pages are never displayed. It seems like PhantomJS gets stuck or lost and it not able to open the open the fields/page in which a new workitem's title, description, etc. fields are displayed. I have no idea why this is happening, but that is how/where all the tests are suddenly failing. The really odd things is that in the past, these tests were running cleanly. The tests, of course, are all running cleanly for me when they are run locally. They only fail when they are run in docker. I also just tried upgrading to the latest released (2.1.14) version of PhantomJS - no luck. 
The test is: ``` var WorkItemListPage = require('./page-objects/work-item-list.page'), testSupport = require('./testSupport'); var until = protractor.ExpectedConditions; var waitTime = 30000; describe('Work item list', function () { var page, items, browserMode; beforeEach(function () { testSupport.setBrowserMode('phone'); page = new WorkItemListPage(true); }); it('Creating a new quick add work item and delete - phone.', function () { testSupport.setBrowserMode('phone'); /* Click the add button */ page.clickWorkItemQuickAdd(); /* Enter the workitem title */ page.typeQuickAddWorkItemTitle("titleText"); /* The description field is not displayed on phones - If the UI element is present, enter the description */ page.workItemQuickAddDesc.isDisplayed().then(function(visible) { if (visible) { page.typeQuickAddWorkItemDesc("titleText"); } }); /* Click the save button */ page.clickQuickAddSave(); /* Return the newly created workitem */ browser.wait(until.presenceOf(page.workItemByTitle("titleText")), waitTime, 'Failed to find workItem'); }); it('Create WorkItem and creatorname and image is relecting', function () { testSupport.setBrowserMode('desktop'); //console.log (theText); /* Click the add button */ page.clickDetailedDialogButton(); /* Select the workitem type */ var detailPage = page.clickDetailedIcon("userstory"); /* Enter the workitem description */ browser.wait(until.visibilityOf(detailPage.workItemDetailTitle), waitTime, 'Failed to find workItemList'); detailPage.setWorkItemDetailTitle ("titleText", false); /* Select the workitem type */ browser.wait(until.visibilityOf(detailPage.workItemTitleSaveIcon), waitTime, 'Failed to find workItemList'); browser.wait(until.elementToBeClickable(detailPage.workItemTitleSaveIcon), waitTime, 'Failed to find workItemList'); detailPage.clickWorkItemTitleSaveIcon(); /* Enter the workitem description */ detailPage.clickWorkItemDetailDescription(); browser.wait(until.visibilityOf(detailPage.workItemDetailDescription), waitTime, 
'Failed to find workItemList'); detailPage.setWorkItemDetailDescription ("titleText", false); detailPage.clickWorkItemDescriptionSaveIcon(); /* Close the workitem add dialog */ detailPage.clickWorkItemDetailCloseButton(); browser.wait(until.visibilityOf(page.workItemByTitle("titleText")), waitTime, 'Failed to find workItemList'); }); }); ```
1.0
Repair functional UI tests - all are failing on Centos CI and when run on Docker - What I found is that the tests are failing when run through the cico script - I ran the tests manually (docker exec almighty-ui-builder ./run_functional_tests.sh) and saw the same failures. Many ![6a3c424fda3585654e8985cda73c6e0e](https://cloud.githubusercontent.com/assets/642621/22481384/075ca2b6-e7c3-11e6-8746-488258f44b4c.png) ![b30c4aafc60593dd0039b7c9be2e2d28](https://cloud.githubusercontent.com/assets/642621/22481385/075cfd1a-e7c3-11e6-83f9-be67f86b2554.png) hours later, and I still do not have an answer. The last thing I tried was to create a new (very simple) test to ensure that nothing in the existing tests was causing this failure. (See attached .js file.) This new/simple test fails too. When I ran the test with a screen-shot generator, I see that while the tests are able to find and click on the UI elements that initiate the creation of a new workitem, the resulting pages are never displayed. It seems like PhantomJS gets stuck or lost and it not able to open the open the fields/page in which a new workitem's title, description, etc. fields are displayed. I have no idea why this is happening, but that is how/where all the tests are suddenly failing. The really odd things is that in the past, these tests were running cleanly. The tests, of course, are all running cleanly for me when they are run locally. They only fail when they are run in docker. I also just tried upgrading to the latest released (2.1.14) version of PhantomJS - no luck. 
The test is: ``` var WorkItemListPage = require('./page-objects/work-item-list.page'), testSupport = require('./testSupport'); var until = protractor.ExpectedConditions; var waitTime = 30000; describe('Work item list', function () { var page, items, browserMode; beforeEach(function () { testSupport.setBrowserMode('phone'); page = new WorkItemListPage(true); }); it('Creating a new quick add work item and delete - phone.', function () { testSupport.setBrowserMode('phone'); /* Click the add button */ page.clickWorkItemQuickAdd(); /* Enter the workitem title */ page.typeQuickAddWorkItemTitle("titleText"); /* The description field is not displayed on phones - If the UI element is present, enter the description */ page.workItemQuickAddDesc.isDisplayed().then(function(visible) { if (visible) { page.typeQuickAddWorkItemDesc("titleText"); } }); /* Click the save button */ page.clickQuickAddSave(); /* Return the newly created workitem */ browser.wait(until.presenceOf(page.workItemByTitle("titleText")), waitTime, 'Failed to find workItem'); }); it('Create WorkItem and creatorname and image is relecting', function () { testSupport.setBrowserMode('desktop'); //console.log (theText); /* Click the add button */ page.clickDetailedDialogButton(); /* Select the workitem type */ var detailPage = page.clickDetailedIcon("userstory"); /* Enter the workitem description */ browser.wait(until.visibilityOf(detailPage.workItemDetailTitle), waitTime, 'Failed to find workItemList'); detailPage.setWorkItemDetailTitle ("titleText", false); /* Select the workitem type */ browser.wait(until.visibilityOf(detailPage.workItemTitleSaveIcon), waitTime, 'Failed to find workItemList'); browser.wait(until.elementToBeClickable(detailPage.workItemTitleSaveIcon), waitTime, 'Failed to find workItemList'); detailPage.clickWorkItemTitleSaveIcon(); /* Enter the workitem description */ detailPage.clickWorkItemDetailDescription(); browser.wait(until.visibilityOf(detailPage.workItemDetailDescription), waitTime, 
'Failed to find workItemList'); detailPage.setWorkItemDetailDescription ("titleText", false); detailPage.clickWorkItemDescriptionSaveIcon(); /* Close the workitem add dialog */ detailPage.clickWorkItemDetailCloseButton(); browser.wait(until.visibilityOf(page.workItemByTitle("titleText")), waitTime, 'Failed to find workItemList'); }); }); ```
test
repair functional ui tests all are failing on centos ci and when run on docker what i found is that the tests are failing when run through the cico script i ran the tests manually docker exec almighty ui builder run functional tests sh and saw the same failures many hours later and i still do not have an answer the last thing i tried was to create a new very simple test to ensure that nothing in the existing tests was causing this failure see attached js file this new simple test fails too when i ran the test with a screen shot generator i see that while the tests are able to find and click on the ui elements that initiate the creation of a new workitem the resulting pages are never displayed it seems like phantomjs gets stuck or lost and it not able to open the open the fields page in which a new workitem s title description etc fields are displayed i have no idea why this is happening but that is how where all the tests are suddenly failing the really odd things is that in the past these tests were running cleanly the tests of course are all running cleanly for me when they are run locally they only fail when they are run in docker i also just tried upgrading to the latest released version of phantomjs no luck the test is var workitemlistpage require page objects work item list page testsupport require testsupport var until protractor expectedconditions var waittime describe work item list function var page items browsermode beforeeach function testsupport setbrowsermode phone page new workitemlistpage true it creating a new quick add work item and delete phone function testsupport setbrowsermode phone click the add button page clickworkitemquickadd enter the workitem title page typequickaddworkitemtitle titletext the description field is not displayed on phones if the ui element is present enter the description page workitemquickadddesc isdisplayed then function visible if visible page typequickaddworkitemdesc titletext click the save button page 
clickquickaddsave return the newly created workitem browser wait until presenceof page workitembytitle titletext waittime failed to find workitem it create workitem and creatorname and image is relecting function testsupport setbrowsermode desktop console log thetext click the add button page clickdetaileddialogbutton select the workitem type var detailpage page clickdetailedicon userstory enter the workitem description browser wait until visibilityof detailpage workitemdetailtitle waittime failed to find workitemlist detailpage setworkitemdetailtitle titletext false select the workitem type browser wait until visibilityof detailpage workitemtitlesaveicon waittime failed to find workitemlist browser wait until elementtobeclickable detailpage workitemtitlesaveicon waittime failed to find workitemlist detailpage clickworkitemtitlesaveicon enter the workitem description detailpage clickworkitemdetaildescription browser wait until visibilityof detailpage workitemdetaildescription waittime failed to find workitemlist detailpage setworkitemdetaildescription titletext false detailpage clickworkitemdescriptionsaveicon close the workitem add dialog detailpage clickworkitemdetailclosebutton browser wait until visibilityof page workitembytitle titletext waittime failed to find workitemlist
1
287,168
24,813,914,925
IssuesEvent
2022-10-25 11:41:08
101Jay/The-Habit
https://api.github.com/repos/101Jay/The-Habit
closed
[코드 이슈] 이슈 테스트 중입니다.
💯Test
## 이슈명 - 이슈가 무엇인지 명확하게 기재해주세요. 이슈 템플릿 테스트 중이라 큰 이슈는 없습니다. ## 이슈 과정 - 어떤 과정에서 이슈를 겪게 되었는지 기재해주세요. ## 원하는 해결 방식 - 어떻게 이 문제가 해결되었으면 좋겠는지 기재해주세요. ## 캡처 - 관련 캡처 이미지가 존재한다면 올려주세요. ## 기타 - 추가로 논의가 필요한 부분에 대해서 기재해주세요.
1.0
[코드 이슈] 이슈 테스트 중입니다. - ## 이슈명 - 이슈가 무엇인지 명확하게 기재해주세요. 이슈 템플릿 테스트 중이라 큰 이슈는 없습니다. ## 이슈 과정 - 어떤 과정에서 이슈를 겪게 되었는지 기재해주세요. ## 원하는 해결 방식 - 어떻게 이 문제가 해결되었으면 좋겠는지 기재해주세요. ## 캡처 - 관련 캡처 이미지가 존재한다면 올려주세요. ## 기타 - 추가로 논의가 필요한 부분에 대해서 기재해주세요.
test
이슈 테스트 중입니다 이슈명 이슈가 무엇인지 명확하게 기재해주세요 이슈 템플릿 테스트 중이라 큰 이슈는 없습니다 이슈 과정 어떤 과정에서 이슈를 겪게 되었는지 기재해주세요 원하는 해결 방식 어떻게 이 문제가 해결되었으면 좋겠는지 기재해주세요 캡처 관련 캡처 이미지가 존재한다면 올려주세요 기타 추가로 논의가 필요한 부분에 대해서 기재해주세요
1
468,022
13,460,088,546
IssuesEvent
2020-09-09 13:10:51
jekyll/jekyll
https://api.github.com/repos/jekyll/jekyll
closed
include configuration option doesn't work for directories
bug priority 4 (maybe) stale
jekyll 4.0.0, macOS 10.15.2 Catalina The "include" [configuration option](https://jekyllrb.com/docs/configuration/options/) says: > Force inclusion of directories and/or files in the conversion However only files are successfully included; directories are ignored. To reproduce: > git clone https://github.com/ridiculousfish/jekyll-include-test.git > cd jekyll-include-test > echo stuff/* stuff/_testdir stuff/_testfile > jekyll build > echo _site/stuff/* _site/stuff/_testfile Note the `_config.yml` contains: ``` include: - stuff/_testfile - stuff/_testdir ``` however as shown above, only `_testfile` is copied; `_testdir` is not.
1.0
include configuration option doesn't work for directories - jekyll 4.0.0, macOS 10.15.2 Catalina The "include" [configuration option](https://jekyllrb.com/docs/configuration/options/) says: > Force inclusion of directories and/or files in the conversion However only files are successfully included; directories are ignored. To reproduce: > git clone https://github.com/ridiculousfish/jekyll-include-test.git > cd jekyll-include-test > echo stuff/* stuff/_testdir stuff/_testfile > jekyll build > echo _site/stuff/* _site/stuff/_testfile Note the `_config.yml` contains: ``` include: - stuff/_testfile - stuff/_testdir ``` however as shown above, only `_testfile` is copied; `_testdir` is not.
non_test
include configuration option doesn t work for directories jekyll macos catalina the include says force inclusion of directories and or files in the conversion however only files are successfully included directories are ignored to reproduce git clone cd jekyll include test echo stuff stuff testdir stuff testfile jekyll build echo site stuff site stuff testfile note the config yml contains include stuff testfile stuff testdir however as shown above only testfile is copied testdir is not
0
76,276
21,320,999,488
IssuesEvent
2022-04-17 04:14:57
goharbor/harbor
https://api.github.com/repos/goharbor/harbor
closed
Investigate distroless for building Harbor's images.
area/build kind/spike staled
Investigate the eligibility to build Harbor's images on top of distroless: https://github.com/GoogleContainerTools/distroless To see if we can achieve the goals: 1) Mitigate CVEs 2) Reduce the size We should also measure the impact to debugability as it doesn't contain shell, we may figure out best practice for debugging if we adopt it.
1.0
Investigate distroless for building Harbor's images. - Investigate the eligibility to build Harbor's images on top of distroless: https://github.com/GoogleContainerTools/distroless To see if we can achieve the goals: 1) Mitigate CVEs 2) Reduce the size We should also measure the impact to debugability as it doesn't contain shell, we may figure out best practice for debugging if we adopt it.
non_test
investigate distroless for building harbor s images investigate the eligibility to build harbor s images on top of distroless to see if we can achieve the goals mitigate cves reduce the size we should also measure the impact to debugability as it doesn t contain shell we may figure out best practice for debugging if we adopt it
0
9,544
6,924,355,548
IssuesEvent
2017-11-30 12:27:00
cortoproject/corto
https://api.github.com/repos/cortoproject/corto
opened
Add low-level store API that allows for more flexibility & better performance
Corto:ObjectManagement Corto:Performance
In some cases the regular store API (`corto_declareChild`, `corto_define` etc) is not flexible enough, or performs actions that in some contexts are redundant. To enable applications to do more powerful things with the store, and optimize certain scenarios, a lower-level API call is needed. Examples of redundancy are: - standard API calls by default attempt to resume objects, whereas sometimes the application context guarantees that this is not required (for example: when defining types in a model) - the `corto_declareChild` API always checks if the identifier is recursive (`foo/bar`) which adds performance overhead. In a lot of cases, this check is not needed. - the `fromcontent` functions all accept a string identifier for the content type which has to be looked up. It would be nice if a function could use a cached contentType handle directly. Additionally, the sequence of default store operations can become quite complex in multi-threaded scenarios where multiple threads can be instantiating the same object at the same time, values have to be assigned, and objects have to be either looked up or created. The current standard API is already a thin layer upon a smaller, more powerful (but also more complex) API. A new function should be added that wraps around the internal APIs, and allows for executing multiple low-level operations in the correct sequence. 
The API could look like this: ```c typedef enum corto_kind { CORTO_DO_DECLARE = 0x1, CORTO_DO_RECURSIVE_DECLARE = 0x3, CORTO_DO_DEFINE = 0x4, CORTO_DO_UPDATE = 0x8, CORTO_DO_ORPHAN = 0x10, CORTO_DO_RESUME = 0x20, CORTO_DO_FORCE_TYPE = 0x40, CORTO_DO_LOOKUP_TYPE = 0x80 } corto_kind; corto_object corto( corto_object parent, const char *id, corto_type type, corto_object ref, corto_contentType contentType, void *value, corto_attr attrs, corto_kind kind); ``` For example, an application that needs to either find or create an object in a multithreaded context, and define an object when its created, and update an object when its found, could be implemented with a single function call: ```c corto(parent, id, type, NULL, contentTypeHandle, valuePtr, CORTO_ATTR_DEFAULT, CORTO_DO_DECLARE | CORTO_DO_UPDATE | CORTO_DO_DEFINE); ```
True
Add low-level store API that allows for more flexibility & better performance - In some cases the regular store API (`corto_declareChild`, `corto_define` etc) is not flexible enough, or performs actions that in some contexts are redundant. To enable applications to do more powerful things with the store, and optimize certain scenarios, a lower-level API call is needed. Examples of redundancy are: - standard API calls by default attempt to resume objects, whereas sometimes the application context guarantees that this is not required (for example: when defining types in a model) - the `corto_declareChild` API always checks if the identifier is recursive (`foo/bar`) which adds performance overhead. In a lot of cases, this check is not needed. - the `fromcontent` functions all accept a string identifier for the content type which has to be looked up. It would be nice if a function could use a cached contentType handle directly. Additionally, the sequence of default store operations can become quite complex in multi-threaded scenarios where multiple threads can be instantiating the same object at the same time, values have to be assigned, and objects have to be either looked up or created. The current standard API is already a thin layer upon a smaller, more powerful (but also more complex) API. A new function should be added that wraps around the internal APIs, and allows for executing multiple low-level operations in the correct sequence. 
The API could look like this: ```c typedef enum corto_kind { CORTO_DO_DECLARE = 0x1, CORTO_DO_RECURSIVE_DECLARE = 0x3, CORTO_DO_DEFINE = 0x4, CORTO_DO_UPDATE = 0x8, CORTO_DO_ORPHAN = 0x10, CORTO_DO_RESUME = 0x20, CORTO_DO_FORCE_TYPE = 0x40, CORTO_DO_LOOKUP_TYPE = 0x80 } corto_kind; corto_object corto( corto_object parent, const char *id, corto_type type, corto_object ref, corto_contentType contentType, void *value, corto_attr attrs, corto_kind kind); ``` For example, an application that needs to either find or create an object in a multithreaded context, and define an object when its created, and update an object when its found, could be implemented with a single function call: ```c corto(parent, id, type, NULL, contentTypeHandle, valuePtr, CORTO_ATTR_DEFAULT, CORTO_DO_DECLARE | CORTO_DO_UPDATE | CORTO_DO_DEFINE); ```
non_test
add low level store api that allows for more flexibility better performance in some cases the regular store api corto declarechild corto define etc is not flexible enough or performs actions that in some contexts are redundant to enable applications to do more powerful things with the store and optimize certain scenarios a lower level api call is needed examples of redundancy are standard api calls by default attempt to resume objects whereas sometimes the application context guarantees that this is not required for example when defining types in a model the corto declarechild api always checks if the identifier is recursive foo bar which adds performance overhead in a lot of cases this check is not needed the fromcontent functions all accept a string identifier for the content type which has to be looked up it would be nice if a function could use a cached contenttype handle directly additionally the sequence of default store operations can become quite complex in multi threaded scenarios where multiple threads can be instantiating the same object at the same time values have to be assigned and objects have to be either looked up or created the current standard api is already a thin layer upon a smaller more powerful but also more complex api a new function should be added that wraps around the internal apis and allows for executing multiple low level operations in the correct sequence the api could look like this c typedef enum corto kind corto do declare corto do recursive declare corto do define corto do update corto do orphan corto do resume corto do force type corto do lookup type corto kind corto object corto corto object parent const char id corto type type corto object ref corto contenttype contenttype void value corto attr attrs corto kind kind for example an application that needs to either find or create an object in a multithreaded context and define an object when its created and update an object when its found could be implemented with a single 
function call c corto parent id type null contenttypehandle valueptr corto attr default corto do declare corto do update corto do define
0
181,480
14,877,630,949
IssuesEvent
2021-01-20 03:39:20
k8ssandra/k8ssandra
https://api.github.com/repos/k8ssandra/k8ssandra
opened
Update docs to reflect the fact that the provided Traefik examples are not suggested for production deployments
complexity: low documentation needs-triage
## Bug Report <!-- Thanks for filing an issue! Before hitting the button, please answer these questions. Fill in as much of the template below as you can. --> **Describe the bug** This was mentioned as part of #200 - our documentation should note that some of our examples are not suggested for production deployments.
1.0
Update docs to reflect the fact that the provided Traefik examples are not suggested for production deployments - ## Bug Report <!-- Thanks for filing an issue! Before hitting the button, please answer these questions. Fill in as much of the template below as you can. --> **Describe the bug** This was mentioned as part of #200 - our documentation should note that some of our examples are not suggested for production deployments.
non_test
update docs to reflect the fact that the provided traefik examples are not suggested for production deployments bug report thanks for filing an issue before hitting the button please answer these questions fill in as much of the template below as you can describe the bug this was mentioned as part of our documentation should note that some of our examples are not suggested for production deployments
0
103,587
8,922,765,977
IssuesEvent
2019-01-21 13:56:00
khartec/waltz
https://api.github.com/repos/khartec/waltz
closed
Add <waltz-entity-enum> to app overview section
fixed (test & close)
Currently being displayed on the Change Initiative overview section, Will be good to add it to the app overview, so that prominent fields can be displayed in the overview section (instead of entity notes) ``` <!-- Entity Enum --> <waltz-entity-enum parent-entity-ref="ctrl.entityRef"> </waltz-entity-enum> ```
1.0
Add <waltz-entity-enum> to app overview section - Currently being displayed on the Change Initiative overview section, Will be good to add it to the app overview, so that prominent fields can be displayed in the overview section (instead of entity notes) ``` <!-- Entity Enum --> <waltz-entity-enum parent-entity-ref="ctrl.entityRef"> </waltz-entity-enum> ```
test
add to app overview section currently being displayed on the change initiative overview section will be good to add it to the app overview so that prominent fields can be displayed in the overview section instead of entity notes
1
334,400
24,417,488,973
IssuesEvent
2022-10-05 17:12:07
Pradumnasaraf/open-source-with-pradumna
https://api.github.com/repos/Pradumnasaraf/open-source-with-pradumna
closed
[DOCS]Add process/guide - "How to create issue templates"
documentation good first issue EddieHub:good-first-issue how-to OSWP hacktoberfest
### Description #### Hey! Contributor, #### We are adding a guide/process for every activity (like Creating a PR or raising an issue) and making learning easy for new contributors. ### Changes/Action required - Add a complete process of **"How to create issue templates"**, steps will contain a text, screenshot, screen recording, and GIF (if needed) - Path of the file in which **steps** need to be added - [`open-source-with-pradumna/pages/How-to/guide/adding-issue-template.md`](https://github.com/Pradumnasaraf/open-source-with-pradumna/blob/main/pages/How-to/guide/adding-issue-template.md) --- If you have any suggestions feel free to Open an [Issue](https://github.com/Pradumnasaraf/open-source-with-pradumna/issues) ### Screenshots #### This addition of the documentation will get added and hyperlinked in [`How-to/README.md`](https://github.com/Pradumnasaraf/open-source-with-pradumna/tree/main/pages/How-to) and also get hosted on the website https://Opensource.pradumnasaraf.co ![Screenshot from 2022-05-15 14-11-21](https://user-images.githubusercontent.com/51878265/168464628-905b0a25-627b-4560-b8e4-bd09940d8c7b.png)
1.0
[DOCS]Add process/guide - "How to create issue templates" - ### Description #### Hey! Contributor, #### We are adding a guide/process for every activity (like Creating a PR or raising an issue) and making learning easy for new contributors. ### Changes/Action required - Add a complete process of **"How to create issue templates"**, steps will contain a text, screenshot, screen recording, and GIF (if needed) - Path of the file in which **steps** need to be added - [`open-source-with-pradumna/pages/How-to/guide/adding-issue-template.md`](https://github.com/Pradumnasaraf/open-source-with-pradumna/blob/main/pages/How-to/guide/adding-issue-template.md) --- If you have any suggestions feel free to Open an [Issue](https://github.com/Pradumnasaraf/open-source-with-pradumna/issues) ### Screenshots #### This addition of the documentation will get added and hyperlinked in [`How-to/README.md`](https://github.com/Pradumnasaraf/open-source-with-pradumna/tree/main/pages/How-to) and also get hosted on the website https://Opensource.pradumnasaraf.co ![Screenshot from 2022-05-15 14-11-21](https://user-images.githubusercontent.com/51878265/168464628-905b0a25-627b-4560-b8e4-bd09940d8c7b.png)
non_test
add process guide how to create issue templates description hey contributor we are adding a guide process for every activity like creating a pr or raising an issue and making learning easy for new contributors changes action required add a complete process of how to create issue templates steps will contain a text screenshot screen recording and gif if needed path of the file in which steps need to be added if you have any suggestions feel free to open an screenshots this addition of the documentation will get added and hyperlinked in and also get hosted on the website
0
277,568
24,085,624,701
IssuesEvent
2022-09-19 10:39:20
xsuite/xsuite
https://api.github.com/repos/xsuite/xsuite
closed
Faddeeva function should be tested covering all branches in the algorithm
testing
- All contexts should be covered Faddeeva function: https://github.com/xsuite/xfields/blob/main/xfields/fieldmaps/bigaussian_src/complex_error_function.h Tests are here: https://github.com/xsuite/xfields/blob/main/tests/test_cerrf.py
1.0
Faddeeva function should be tested covering all branches in the algorithm - - All contexts should be covered Faddeeva function: https://github.com/xsuite/xfields/blob/main/xfields/fieldmaps/bigaussian_src/complex_error_function.h Tests are here: https://github.com/xsuite/xfields/blob/main/tests/test_cerrf.py
test
faddeeva function should be tested covering all branches in the algorithm all contexts should be covered faddeeva function tests are here
1
392,264
26,932,387,625
IssuesEvent
2023-02-07 17:45:25
sophsuan/senior_project
https://api.github.com/repos/sophsuan/senior_project
closed
Design Pages on Figma
documentation
# Use Figma to develop a design for our browser extension. ## Should include pages for: - New tab opened - Viewing past notes - Mental health resources
1.0
Design Pages on Figma - # Use Figma to develop a design for our browser extension. ## Should include pages for: - New tab opened - Viewing past notes - Mental health resources
non_test
design pages on figma use figma to develop a design for our browser extension should include pages for new tab opened viewing past notes mental health resources
0
3,679
14,284,046,425
IssuesEvent
2020-11-23 11:55:47
rpa-tomorrow/substorm-nlp
https://api.github.com/repos/rpa-tomorrow/substorm-nlp
closed
Cannot send an email directly to an email address
automation bug
# Expected behavior `Send an email to [email protected]` should send an email to the given email address # Actual behavior Returning the error: `generator raised StopIteration` The model xx_ent_wiki_sm does not release the lock it has taken # Steps to reproduce 1. Start the CLI 2. Enter `Send an email to [email protected]` # Additional context
1.0
Cannot send an email directly to an email address - # Expected behavior `Send an email to [email protected]` should send an email to the given email address # Actual behavior Returning the error: `generator raised StopIteration` The model xx_ent_wiki_sm does not release the lock it has taken # Steps to reproduce 1. Start the CLI 2. Enter `Send an email to [email protected]` # Additional context
non_test
cannot send an email directly to an email address expected behavior send an email to john doe email com should send an email to the given email address actual behavior returning the error generator raised stopiteration the model xx ent wiki sm does not release the lock it has taken steps to reproduce start the cli enter send an email to john doe email com additional context
0
350,780
31,932,312,450
IssuesEvent
2023-09-19 08:16:15
unifyai/ivy
https://api.github.com/repos/unifyai/ivy
reopened
Fix jax_lax_operators.test_jax_full
JAX Frontend Sub Task Failing Test
| | | |---|---| |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/6228243590/job/16904567476"><img src=https://img.shields.io/badge/-failure-red></a> |jax|<a href="https://github.com/unifyai/ivy/actions/runs/6228243590/job/16904567476"><img src=https://img.shields.io/badge/-failure-red></a> |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/6228243590/job/16904567476"><img src=https://img.shields.io/badge/-failure-red></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/6228243590/job/16904567476"><img src=https://img.shields.io/badge/-failure-red></a> |paddle|<a href="https://github.com/unifyai/ivy/actions/runs/6228243590/job/16904567476"><img src=https://img.shields.io/badge/-failure-red></a>
1.0
Fix jax_lax_operators.test_jax_full - | | | |---|---| |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/6228243590/job/16904567476"><img src=https://img.shields.io/badge/-failure-red></a> |jax|<a href="https://github.com/unifyai/ivy/actions/runs/6228243590/job/16904567476"><img src=https://img.shields.io/badge/-failure-red></a> |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/6228243590/job/16904567476"><img src=https://img.shields.io/badge/-failure-red></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/6228243590/job/16904567476"><img src=https://img.shields.io/badge/-failure-red></a> |paddle|<a href="https://github.com/unifyai/ivy/actions/runs/6228243590/job/16904567476"><img src=https://img.shields.io/badge/-failure-red></a>
test
fix jax lax operators test jax full numpy a href src jax a href src tensorflow a href src torch a href src paddle a href src
1
56,136
6,502,262,464
IssuesEvent
2017-08-23 13:03:59
healthlocker/oxleas-adhd
https://api.github.com/repos/healthlocker/oxleas-adhd
closed
remove Welcome to healthlocker section on landing page home page
content change please-test priority-2 T25m
+ [x] As a user on the landing page of Oxleas site, it is edited. + [x] there is no sign up button
1.0
remove Welcome to healthlocker section on landing page home page - + [x] As a user on the landing page of Oxleas site, it is edited. + [x] there is no sign up button
test
remove welcome to healthlocker section on landing page home page as a user on the landing page of oxleas site it is edited there is no sign up button
1
208,448
7,154,521,199
IssuesEvent
2018-01-26 08:54:37
UniversityOfHelsinkiCS/front-grappa2
https://api.github.com/repos/UniversityOfHelsinkiCS/front-grappa2
opened
Thesis abstract page chooser
low priority
Currently user has no way of choosing the abstract page (we just pick the first 4 pages of a thesis) When uploading thesis user should choose which page is the abstract and we'd show the page.
1.0
Thesis abstract page chooser - Currently user has no way of choosing the abstract page (we just pick the first 4 pages of a thesis) When uploading thesis user should choose which page is the abstract and we'd show the page.
non_test
thesis abstract page chooser currently user has no way of choosing the abstract page we just pick the first pages of a thesis when uploading thesis user should choose which page is the abstract and we d show the page
0
191,172
14,593,420,487
IssuesEvent
2020-12-19 22:43:31
github-vet/rangeloop-pointer-findings
https://api.github.com/repos/github-vet/rangeloop-pointer-findings
closed
dataiku/dku-kubernetes: pkg/kubectl/util/i18n/i18n_test.go; 7 LoC
fresh test tiny
Found a possible issue in [dataiku/dku-kubernetes](https://www.github.com/dataiku/dku-kubernetes) at [pkg/kubectl/util/i18n/i18n_test.go](https://github.com/dataiku/dku-kubernetes/blob/6d29459b3b7485ebcff3a33b0a07f47759fe7da3/pkg/kubectl/util/i18n/i18n_test.go#L136-L142) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > range-loop variable envVar used in defer or goroutine at line 140 [Click here to see the code in its original context.](https://github.com/dataiku/dku-kubernetes/blob/6d29459b3b7485ebcff3a33b0a07f47759fe7da3/pkg/kubectl/util/i18n/i18n_test.go#L136-L142) <details> <summary>Click here to show the 7 line(s) of Go which triggered the analyzer.</summary> ```go for _, envVar := range envVarsToBackup { if envVarValue := os.Getenv(envVar); envVarValue != "" { os.Unsetenv(envVar) // Restore env var at the end defer func() { os.Setenv(envVar, envVarValue) }() } } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 6d29459b3b7485ebcff3a33b0a07f47759fe7da3
1.0
dataiku/dku-kubernetes: pkg/kubectl/util/i18n/i18n_test.go; 7 LoC - Found a possible issue in [dataiku/dku-kubernetes](https://www.github.com/dataiku/dku-kubernetes) at [pkg/kubectl/util/i18n/i18n_test.go](https://github.com/dataiku/dku-kubernetes/blob/6d29459b3b7485ebcff3a33b0a07f47759fe7da3/pkg/kubectl/util/i18n/i18n_test.go#L136-L142) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > range-loop variable envVar used in defer or goroutine at line 140 [Click here to see the code in its original context.](https://github.com/dataiku/dku-kubernetes/blob/6d29459b3b7485ebcff3a33b0a07f47759fe7da3/pkg/kubectl/util/i18n/i18n_test.go#L136-L142) <details> <summary>Click here to show the 7 line(s) of Go which triggered the analyzer.</summary> ```go for _, envVar := range envVarsToBackup { if envVarValue := os.Getenv(envVar); envVarValue != "" { os.Unsetenv(envVar) // Restore env var at the end defer func() { os.Setenv(envVar, envVarValue) }() } } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 6d29459b3b7485ebcff3a33b0a07f47759fe7da3
test
dataiku dku kubernetes pkg kubectl util test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message range loop variable envvar used in defer or goroutine at line click here to show the line s of go which triggered the analyzer go for envvar range envvarstobackup if envvarvalue os getenv envvar envvarvalue os unsetenv envvar restore env var at the end defer func os setenv envvar envvarvalue leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
1
101,909
8,807,292,492
IssuesEvent
2018-12-27 09:14:25
nicolargo/glances
https://api.github.com/repos/nicolargo/glances
closed
Sorting by process time works not as expected
bug needs test
#### Description Sorting by process time not sorting in expected order ![2018-09-18_112021](https://user-images.githubusercontent.com/5748547/45677607-dde0f080-bb34-11e8-9532-a87de494a144.png) #### Versions * Glances v3.1.0_beta with psutil v5.4.7 * FreeBSD 11.2-RELEASE-p2 #### Logs Debug log offers no forther informations
1.0
Sorting by process time works not as expected - #### Description Sorting by process time not sorting in expected order ![2018-09-18_112021](https://user-images.githubusercontent.com/5748547/45677607-dde0f080-bb34-11e8-9532-a87de494a144.png) #### Versions * Glances v3.1.0_beta with psutil v5.4.7 * FreeBSD 11.2-RELEASE-p2 #### Logs Debug log offers no forther informations
test
sorting by process time works not as expected description sorting by process time not sorting in expected order versions glances beta with psutil freebsd release logs debug log offers no forther informations
1
544,500
15,894,081,413
IssuesEvent
2021-04-11 08:53:54
googleapis/github-repo-automation
https://api.github.com/repos/googleapis/github-repo-automation
closed
Synthesis failed for github-repo-automation
autosynth failure priority: p1 type: bug
Hello! Autosynth couldn't regenerate github-repo-automation. :broken_heart: Please investigate and fix this issue within 5 business days. While it remains broken, this library cannot be updated with changes to the github-repo-automation API, and the library grows stale. See https://github.com/googleapis/synthtool/blob/master/autosynth/TroubleShooting.md for trouble shooting tips. Here's the output from running `synth.py`: ``` po-automation/synth.py. On branch autosynth-68 nothing to commit, working tree clean 2021-04-08 01:52:24,471 synthtool [DEBUG] > Using precloned repo /home/kbuilder/.cache/synthtool/synthtool DEBUG:synthtool:Using precloned repo /home/kbuilder/.cache/synthtool/synthtool .eslintignore .eslintrc.json .gitattributes .github/CODEOWNERS .github/ISSUE_TEMPLATE/bug_report.md .github/ISSUE_TEMPLATE/feature_request.md .github/ISSUE_TEMPLATE/support_request.md .github/PULL_REQUEST_TEMPLATE.md .github/release-please.yml .github/workflows/ci.yaml .kokoro/.gitattributes .kokoro/common.cfg .kokoro/continuous/node10/common.cfg .kokoro/continuous/node10/docs.cfg .kokoro/continuous/node10/test.cfg .kokoro/continuous/node12/common.cfg .kokoro/continuous/node12/lint.cfg .kokoro/continuous/node12/samples-test.cfg .kokoro/continuous/node12/system-test.cfg .kokoro/continuous/node12/test.cfg .kokoro/docs.sh .kokoro/lint.sh .kokoro/populate-secrets.sh .kokoro/presubmit/node10/common.cfg .kokoro/presubmit/node12/common.cfg .kokoro/presubmit/node12/samples-test.cfg .kokoro/presubmit/node12/system-test.cfg .kokoro/presubmit/node12/test.cfg .kokoro/publish.sh .kokoro/release/docs-devsite.cfg .kokoro/release/docs-devsite.sh .kokoro/release/docs.cfg .kokoro/release/docs.sh .kokoro/release/publish.cfg .kokoro/samples-test.sh .kokoro/system-test.sh .kokoro/test.bat .kokoro/test.sh .kokoro/trampoline.sh .kokoro/trampoline_v2.sh .mocharc.js .nycrc .prettierignore .prettierrc.js .trampolinerc CODE_OF_CONDUCT.md CONTRIBUTING.md LICENSE README.md 2021-04-08 01:52:24,571 synthtool
[WARNING] > ensure you pass a string 'quality' to release_quality_badge WARNING:synthtool:ensure you pass a string 'quality' to release_quality_badge api-extractor.json renovate.json samples/README.md 2021-04-08 01:52:24,606 synthtool [DEBUG] > Installing dependencies... DEBUG:synthtool:Installing dependencies... npm WARN deprecated [email protected]: Breaking change found in this patch version npm WARN deprecated [email protected]: NOTICE: ts-simple-ast has been renamed to ts-morph and version reset to 1.0.0. Switch at your leisure... npm WARN deprecated [email protected]: Use cheerio-select instead npm WARN deprecated [email protected]: Chokidar 2 will break on node v14+. Upgrade to chokidar 3 with 15x less dependencies. npm WARN deprecated [email protected]: The package has been renamed to `open` npm WARN deprecated [email protected]: fsevents 1 will break on node v14+ and could be using insecure binaries. Upgrade to fsevents 2. npm WARN deprecated [email protected]: core-js@<3 is no longer maintained and not recommended for usage due to the number of issues. Please, upgrade your dependencies to the actual version of core-js@3. npm WARN deprecated [email protected]: https://github.com/lydell/resolve-url#deprecated npm WARN deprecated [email protected]: Please see https://github.com/lydell/urix#deprecated > [email protected] postinstall /home/kbuilder/.cache/synthtool/github-repo-automation/node_modules/core-js > node -e "try{require('./postinstall')}catch(e){}" > @compodoc/[email protected] postinstall /home/kbuilder/.cache/synthtool/github-repo-automation/node_modules/@compodoc/compodoc > opencollective-postinstall || exit 0 Thank you for using @compodoc/compodoc!
If you rely on this package, please consider supporting our open collective: > https://opencollective.com/compodoc/donate > @google/[email protected] prepare /home/kbuilder/.cache/synthtool/github-repo-automation > npm run compile > @google/[email protected] precompile /home/kbuilder/.cache/synthtool/github-repo-automation > gts clean version: 14 Removing build ... > @google/[email protected] compile /home/kbuilder/.cache/synthtool/github-repo-automation > tsc -p . node_modules/@types/sinon/index.d.ts:778:36 - error TS2694: Namespace '"/home/kbuilder/.cache/synthtool/github-repo-automation/node_modules/@sinonjs/fake-timers/types/fake-timers-src"' has no exported member 'TimerId'. 778 type SinonTimerId = FakeTimers.TimerId; ~~~~~~~ node_modules/@types/sinon/index.d.ts:780:39 - error TS2694: Namespace '"/home/kbuilder/.cache/synthtool/github-repo-automation/node_modules/@sinonjs/fake-timers/types/fake-timers-src"' has no exported member 'InstalledMethods'. 780 type SinonFakeTimers = FakeTimers.InstalledMethods & ~~~~~~~~~~~~~~~~ node_modules/@types/sinon/index.d.ts:781:20 - error TS2694: Namespace '"/home/kbuilder/.cache/synthtool/github-repo-automation/node_modules/@sinonjs/fake-timers/types/fake-timers-src"' has no exported member 'NodeClock'. 781 FakeTimers.NodeClock & ~~~~~~~~~ node_modules/@types/sinon/index.d.ts:782:20 - error TS2694: Namespace '"/home/kbuilder/.cache/synthtool/github-repo-automation/node_modules/@sinonjs/fake-timers/types/fake-timers-src"' has no exported member 'BrowserClock'. 782 FakeTimers.BrowserClock & { ~~~~~~~~~~~~ Found 4 errors. npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! @google/[email protected] compile: `tsc -p .` npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the @google/[email protected] compile script. npm ERR! This is probably not a problem with npm. There is likely additional logging output above. npm ERR! A complete log of this run can be found in: npm ERR!
/home/kbuilder/.npm/_logs/2021-04-08T08_52_48_030Z-debug.log npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! @google/[email protected] prepare: `npm run compile` npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the @google/[email protected] prepare script. npm ERR! This is probably not a problem with npm. There is likely additional logging output above. npm ERR! A complete log of this run can be found in: npm ERR! /home/kbuilder/.npm/_logs/2021-04-08T08_52_48_090Z-debug.log 2021-04-08 01:52:48,118 synthtool [ERROR] > Failed executing npm install: None ERROR:synthtool:Failed executing npm install: None Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module> main() File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__ return self.main(*args, **kwargs) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 782, in main rv = self.invoke(ctx) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke return callback(*args, **kwargs) File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main spec.loader.exec_module(synth_module) # type: ignore File "<frozen importlib._bootstrap_external>", line 678, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/kbuilder/.cache/synthtool/github-repo-automation/synth.py", line 13, in <module> node.install() File "/tmpfs/src/github/synthtool/synthtool/languages/node.py", line 171, in install 
shell.run(["npm", "install"], hide_output=hide_output) File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 39, in run raise exc File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 33, in run encoding="utf-8", File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 438, in run output=stdout, stderr=stderr) subprocess.CalledProcessError: Command '['npm', 'install']' returned non-zero exit status 1. 2021-04-08 01:52:48,162 autosynth [ERROR] > Synthesis failed 2021-04-08 01:52:48,162 autosynth [DEBUG] > Running: git reset --hard HEAD HEAD is now at 7357932 chore: release 4.4.0 (#495) 2021-04-08 01:52:48,169 autosynth [DEBUG] > Running: git checkout autosynth Switched to branch 'autosynth' 2021-04-08 01:52:48,175 autosynth [DEBUG] > Running: git clean -fdx Removing __pycache__/ Removing node_modules/ Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 356, in <module> main() File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 191, in main return _inner_main(temp_dir) File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 336, in _inner_main commit_count = synthesize_loop(x, multiple_prs, change_pusher, synthesizer) File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 68, in synthesize_loop has_changes = toolbox.synthesize_version_in_new_branch(synthesizer, youngest) File "/tmpfs/src/github/synthtool/autosynth/synth_toolbox.py", line 259, in synthesize_version_in_new_branch synthesizer.synthesize(synth_log_path, self.environ) File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 120, in synthesize synth_proc.check_returncode() # Raise an exception. 
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode self.stderr) subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'synth.metadata', 'synth.py', '--']' returned non-zero exit status 1. ``` Google internal developers can see the full log [here](http://sponge2/results/invocations/f432741c-eb5e-4898-bd6c-3301edbf7588/targets/github%2Fsynthtool;config=default/tests;query=github-repo-automation;failed=false).
1.0
Synthesis failed for github-repo-automation - Hello! Autosynth couldn't regenerate github-repo-automation. :broken_heart: Please investigate and fix this issue within 5 business days. While it remains broken, this library cannot be updated with changes to the github-repo-automation API, and the library grows stale. See https://github.com/googleapis/synthtool/blob/master/autosynth/TroubleShooting.md for trouble shooting tips. Here's the output from running `synth.py`: ``` po-automation/synth.py. On branch autosynth-68 nothing to commit, working tree clean 2021-04-08 01:52:24,471 synthtool [DEBUG] > Using precloned repo /home/kbuilder/.cache/synthtool/synthtool DEBUG:synthtool:Using precloned repo /home/kbuilder/.cache/synthtool/synthtool .eslintignore .eslintrc.json .gitattributes .github/CODEOWNERS .github/ISSUE_TEMPLATE/bug_report.md .github/ISSUE_TEMPLATE/feature_request.md .github/ISSUE_TEMPLATE/support_request.md .github/PULL_REQUEST_TEMPLATE.md .github/release-please.yml .github/workflows/ci.yaml .kokoro/.gitattributes .kokoro/common.cfg .kokoro/continuous/node10/common.cfg .kokoro/continuous/node10/docs.cfg .kokoro/continuous/node10/test.cfg .kokoro/continuous/node12/common.cfg .kokoro/continuous/node12/lint.cfg .kokoro/continuous/node12/samples-test.cfg .kokoro/continuous/node12/system-test.cfg .kokoro/continuous/node12/test.cfg .kokoro/docs.sh .kokoro/lint.sh .kokoro/populate-secrets.sh .kokoro/presubmit/node10/common.cfg .kokoro/presubmit/node12/common.cfg .kokoro/presubmit/node12/samples-test.cfg .kokoro/presubmit/node12/system-test.cfg .kokoro/presubmit/node12/test.cfg .kokoro/publish.sh .kokoro/release/docs-devsite.cfg .kokoro/release/docs-devsite.sh .kokoro/release/docs.cfg .kokoro/release/docs.sh .kokoro/release/publish.cfg .kokoro/samples-test.sh .kokoro/system-test.sh .kokoro/test.bat .kokoro/test.sh .kokoro/trampoline.sh .kokoro/trampoline_v2.sh .mocharc.js .nycrc .prettierignore .prettierrc.js .trampolinerc CODE_OF_CONDUCT.md CONTRIBUTING.md 
LICENSE README.md 2021-04-08 01:52:24,571 synthtool [WARNING] > ensure you pass a string 'quality' to release_quality_badge WARNING:synthtool:ensure you pass a string 'quality' to release_quality_badge api-extractor.json renovate.json samples/README.md 2021-04-08 01:52:24,606 synthtool [DEBUG] > Installing dependencies... DEBUG:synthtool:Installing dependencies... npm WARN deprecated [email protected]: Breaking change found in this patch version npm WARN deprecated [email protected]: NOTICE: ts-simple-ast has been renamed to ts-morph and version reset to 1.0.0. Switch at your leisure... npm WARN deprecated [email protected]: Use cheerio-select instead npm WARN deprecated [email protected]: Chokidar 2 will break on node v14+. Upgrade to chokidar 3 with 15x less dependencies. npm WARN deprecated [email protected]: The package has been renamed to `open` npm WARN deprecated [email protected]: fsevents 1 will break on node v14+ and could be using insecure binaries. Upgrade to fsevents 2. npm WARN deprecated [email protected]: core-js@<3 is no longer maintained and not recommended for usage due to the number of issues. Please, upgrade your dependencies to the actual version of core-js@3. npm WARN deprecated [email protected]: https://github.com/lydell/resolve-url#deprecated npm WARN deprecated [email protected]: Please see https://github.com/lydell/urix#deprecated > [email protected] postinstall /home/kbuilder/.cache/synthtool/github-repo-automation/node_modules/core-js > node -e "try{require('./postinstall')}catch(e){}" > @compodoc/[email protected] postinstall /home/kbuilder/.cache/synthtool/github-repo-automation/node_modules/@compodoc/compodoc > opencollective-postinstall || exit 0 Thank you for using @compodoc/compodoc!
If you rely on this package, please consider supporting our open collective: > https://opencollective.com/compodoc/donate > @google/[email protected] prepare /home/kbuilder/.cache/synthtool/github-repo-automation > npm run compile > @google/[email protected] precompile /home/kbuilder/.cache/synthtool/github-repo-automation > gts clean version: 14 Removing build ... > @google/[email protected] compile /home/kbuilder/.cache/synthtool/github-repo-automation > tsc -p . node_modules/@types/sinon/index.d.ts:778:36 - error TS2694: Namespace '"/home/kbuilder/.cache/synthtool/github-repo-automation/node_modules/@sinonjs/fake-timers/types/fake-timers-src"' has no exported member 'TimerId'. 778 type SinonTimerId = FakeTimers.TimerId; ~~~~~~~ node_modules/@types/sinon/index.d.ts:780:39 - error TS2694: Namespace '"/home/kbuilder/.cache/synthtool/github-repo-automation/node_modules/@sinonjs/fake-timers/types/fake-timers-src"' has no exported member 'InstalledMethods'. 780 type SinonFakeTimers = FakeTimers.InstalledMethods & ~~~~~~~~~~~~~~~~ node_modules/@types/sinon/index.d.ts:781:20 - error TS2694: Namespace '"/home/kbuilder/.cache/synthtool/github-repo-automation/node_modules/@sinonjs/fake-timers/types/fake-timers-src"' has no exported member 'NodeClock'. 781 FakeTimers.NodeClock & ~~~~~~~~~ node_modules/@types/sinon/index.d.ts:782:20 - error TS2694: Namespace '"/home/kbuilder/.cache/synthtool/github-repo-automation/node_modules/@sinonjs/fake-timers/types/fake-timers-src"' has no exported member 'BrowserClock'. 782 FakeTimers.BrowserClock & { ~~~~~~~~~~~~ Found 4 errors. npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! @google/[email protected] compile: `tsc -p .` npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the @google/[email protected] compile script. npm ERR! This is probably not a problem with npm. There is likely additional logging output above. npm ERR! A complete log of this run can be found in: npm ERR!
/home/kbuilder/.npm/_logs/2021-04-08T08_52_48_030Z-debug.log npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! @google/[email protected] prepare: `npm run compile` npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the @google/[email protected] prepare script. npm ERR! This is probably not a problem with npm. There is likely additional logging output above. npm ERR! A complete log of this run can be found in: npm ERR! /home/kbuilder/.npm/_logs/2021-04-08T08_52_48_090Z-debug.log 2021-04-08 01:52:48,118 synthtool [ERROR] > Failed executing npm install: None ERROR:synthtool:Failed executing npm install: None Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module> main() File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__ return self.main(*args, **kwargs) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 782, in main rv = self.invoke(ctx) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke return callback(*args, **kwargs) File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main spec.loader.exec_module(synth_module) # type: ignore File "<frozen importlib._bootstrap_external>", line 678, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/kbuilder/.cache/synthtool/github-repo-automation/synth.py", line 13, in <module> node.install() File "/tmpfs/src/github/synthtool/synthtool/languages/node.py", line 171, in install 
shell.run(["npm", "install"], hide_output=hide_output) File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 39, in run raise exc File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 33, in run encoding="utf-8", File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 438, in run output=stdout, stderr=stderr) subprocess.CalledProcessError: Command '['npm', 'install']' returned non-zero exit status 1. 2021-04-08 01:52:48,162 autosynth [ERROR] > Synthesis failed 2021-04-08 01:52:48,162 autosynth [DEBUG] > Running: git reset --hard HEAD HEAD is now at 7357932 chore: release 4.4.0 (#495) 2021-04-08 01:52:48,169 autosynth [DEBUG] > Running: git checkout autosynth Switched to branch 'autosynth' 2021-04-08 01:52:48,175 autosynth [DEBUG] > Running: git clean -fdx Removing __pycache__/ Removing node_modules/ Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 356, in <module> main() File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 191, in main return _inner_main(temp_dir) File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 336, in _inner_main commit_count = synthesize_loop(x, multiple_prs, change_pusher, synthesizer) File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 68, in synthesize_loop has_changes = toolbox.synthesize_version_in_new_branch(synthesizer, youngest) File "/tmpfs/src/github/synthtool/autosynth/synth_toolbox.py", line 259, in synthesize_version_in_new_branch synthesizer.synthesize(synth_log_path, self.environ) File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 120, in synthesize synth_proc.check_returncode() # Raise an exception. 
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode self.stderr) subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'synth.metadata', 'synth.py', '--']' returned non-zero exit status 1. ``` Google internal developers can see the full log [here](http://sponge2/results/invocations/f432741c-eb5e-4898-bd6c-3301edbf7588/targets/github%2Fsynthtool;config=default/tests;query=github-repo-automation;failed=false).
non_test
synthesis failed for github repo automation hello autosynth couldn t regenerate github repo automation broken heart please investigate and fix this issue within business days while it remains broken this library cannot be updated with changes to the github repo automation api and the library grows stale see for trouble shooting tips here s the output from running synth py po automation synth py on branch autosynth nothing to commit working tree clean synthtool using precloned repo home kbuilder cache synthtool synthtool debug synthtool using precloned repo home kbuilder cache synthtool synthtool eslintignore eslintrc json gitattributes github codeowners github issue template bug report md github issue template feature request md github issue template support request md github pull request template md github release please yml github workflows ci yaml kokoro gitattributes kokoro common cfg kokoro continuous common cfg kokoro continuous docs cfg kokoro continuous test cfg kokoro continuous common cfg kokoro continuous lint cfg kokoro continuous samples test cfg kokoro continuous system test cfg kokoro continuous test cfg kokoro docs sh kokoro lint sh kokoro populate secrets sh kokoro presubmit common cfg kokoro presubmit common cfg kokoro presubmit samples test cfg kokoro presubmit system test cfg kokoro presubmit test cfg kokoro publish sh kokoro release docs devsite cfg kokoro release docs devsite sh kokoro release docs cfg kokoro release docs sh kokoro release publish cfg kokoro samples test sh kokoro system test sh kokoro test bat kokoro test sh kokoro trampoline sh kokoro trampoline sh mocharc js nycrc prettierignore prettierrc js trampolinerc code of conduct md contributing md license readme md synthtool ensure you pass a string quality to release quality badge warning synthtool ensure you pass a string quality to release quality badge api extractor json renovate json samples readme md synthtool installing dependencies debug synthtool installing dependencies 
npm warn deprecated sinon breaking change found in this patch version npm warn deprecated ts simple ast notice ts simple ast has been renamed to ts morph and version reset to switch at your leisure npm warn deprecated cheerio select tmp use cheerio select instead npm warn deprecated chokidar chokidar will break on node upgrade to chokidar with less dependencies npm warn deprecated opn the package has been renamed to open npm warn deprecated fsevents fsevents will break on node and could be using insecure binaries upgrade to fsevents npm warn deprecated core js core js is no longer maintained and not recommended for usage due to the number of issues please upgrade your dependencies to the actual version of core js npm warn deprecated resolve url npm warn deprecated urix please see core js postinstall home kbuilder cache synthtool github repo automation node modules core js node e try require postinstall catch e compodoc compodoc postinstall home kbuilder cache synthtool github repo automation node modules compodoc compodoc opencollective postinstall exit   you for using compodoc compodoc     you rely on this package please consider supporting our open collective    google repo prepare home kbuilder cache synthtool github repo automation npm run compile google repo precompile home kbuilder cache synthtool github repo automation gts clean version removing build google repo compile home kbuilder cache synthtool github repo automation tsc p  modules types sinon index d ts         home kbuilder cache synthtool github repo automation node modules sinonjs fake timers types fake timers src has no exported member timerid   type sinontimerid faketimers timerid      modules types sinon index d ts         home kbuilder cache synthtool github repo automation node modules sinonjs fake timers types fake timers src has no exported member installedmethods   type sinonfaketimers faketimers installedmethods      modules types sinon index d ts         home kbuilder cache synthtool 
github repo automation node modules sinonjs fake timers types fake timers src has no exported member nodeclock   faketimers nodeclock      modules types sinon index d ts         home kbuilder cache synthtool github repo automation node modules sinonjs fake timers types fake timers src has no exported member browserclock   faketimers browserclock     found errors npm err code elifecycle npm err errno npm err google repo compile tsc p npm err exit status npm err npm err failed at the google repo compile script npm err this is probably not a problem with npm there is likely additional logging output above npm err a complete log of this run can be found in npm err home kbuilder npm logs debug log npm err code elifecycle npm err errno npm err google repo prepare npm run compile npm err exit status npm err npm err failed at the google repo prepare script npm err this is probably not a problem with npm there is likely additional logging output above npm err a complete log of this run can be found in npm err home kbuilder npm logs debug log synthtool failed executing npm install none error synthtool failed executing npm install none traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool synthtool main py line in main file tmpfs src github synthtool env lib site packages click core py line in call return self main args kwargs file tmpfs src github synthtool env lib site packages click core py line in main rv self invoke ctx file tmpfs src github synthtool env lib site packages click core py line in invoke return ctx invoke self callback ctx params file tmpfs src github synthtool env lib site packages click core py line in invoke return callback args kwargs file tmpfs src github synthtool synthtool main py line in main spec loader exec module synth module type ignore file line in exec module 
file line in call with frames removed file home kbuilder cache synthtool github repo automation synth py line in node install file tmpfs src github synthtool synthtool languages node py line in install shell run hide output hide output file tmpfs src github synthtool synthtool shell py line in run raise exc file tmpfs src github synthtool synthtool shell py line in run encoding utf file home kbuilder pyenv versions lib subprocess py line in run output stdout stderr stderr subprocess calledprocesserror command returned non zero exit status autosynth synthesis failed autosynth running git reset hard head head is now at chore release autosynth running git checkout autosynth switched to branch autosynth autosynth running git clean fdx removing pycache removing node modules traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool autosynth synth py line in main file tmpfs src github synthtool autosynth synth py line in main return inner main temp dir file tmpfs src github synthtool autosynth synth py line in inner main commit count synthesize loop x multiple prs change pusher synthesizer file tmpfs src github synthtool autosynth synth py line in synthesize loop has changes toolbox synthesize version in new branch synthesizer youngest file tmpfs src github synthtool autosynth synth toolbox py line in synthesize version in new branch synthesizer synthesize synth log path self environ file tmpfs src github synthtool autosynth synthesizer py line in synthesize synth proc check returncode raise an exception file home kbuilder pyenv versions lib subprocess py line in check returncode self stderr subprocess calledprocesserror command returned non zero exit status google internal developers can see the full log
0
35,982
5,026,018,753
IssuesEvent
2016-12-15 11:06:46
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
github.com/cockroachdb/cockroach/pkg/kv: TestMultiRangeEmptyAfterTruncate failed under stress
Robot test-failure
SHA: https://github.com/cockroachdb/cockroach/commits/ee0292a306126d9fa9d81d4cd3df45a6e38ad578 Parameters: ``` COCKROACH_PROPOSER_EVALUATED_KV=false TAGS= GOFLAGS= ``` Stress build found a failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=91227&tab=buildLog ``` I161215 08:17:38.298771 16536 gossip/gossip.go:248 [n?] initial resolvers: [] W161215 08:17:38.298791 16536 gossip/gossip.go:1124 [n?] no resolvers found; use --join to specify a connected node W161215 08:17:38.299389 16536 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006" I161215 08:17:38.299570 16536 storage/engine/rocksdb.go:340 opening in memory rocksdb instance I161215 08:17:38.300203 16536 server/config.go:447 1 storage engine initialized I161215 08:17:38.300488 16536 server/node.go:426 [n?] store [n0,s0] not bootstrapped I161215 08:17:38.301687 16574 storage/replica_proposal.go:390 [n?,s1,r1/1:/M{in-ax},@c420528d80] new range lease repl={1 1 1} start=0.000000000,0 exp=1481789867.301286503,0 pro=1481789858.301293203,0 following repl={0 0 0} start=0.000000000,0 exp=0.000000000,0 [physicalTime=2016-12-15 08:17:38.301652107 +0000 UTC] I161215 08:17:38.302324 16536 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks I161215 08:17:38.302408 16536 server/node.go:355 [n?] **** cluster c96a78d1-5eac-4011-a3d9-c188b4d2ee52 has been created I161215 08:17:38.302424 16536 server/node.go:356 [n?] 
**** add additional nodes by specifying --join=127.0.0.1:45102 I161215 08:17:38.302689 16536 base/node_id.go:62 [n1] NodeID set to 1 I161215 08:17:38.303014 16536 storage/store.go:1240 [n1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available I161215 08:17:38.303041 16536 server/node.go:439 [n1] initialized store [n1,s1]: {Capacity:536870912 Available:536870912 RangeCount:1 LeaseCount:0} I161215 08:17:38.303057 16536 server/node.go:324 [n1] node ID 1 initialized I161215 08:17:38.303090 16536 gossip/gossip.go:290 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:45102" > attrs:<> locality:<> I161215 08:17:38.304078 16536 storage/stores.go:296 [n1] read 0 node addresses from persistent storage I161215 08:17:38.304154 16536 server/node.go:569 [n1] connecting to gossip network to verify cluster ID... I161215 08:17:38.304175 16536 server/node.go:589 [n1] node connected via gossip and verified as part of cluster "c96a78d1-5eac-4011-a3d9-c188b4d2ee52" I161215 08:17:38.304201 16536 server/node.go:374 [n1] node=1: started with [[]=] engine(s) and attributes [] I161215 08:17:38.304226 16536 sql/executor.go:313 [n1] creating distSQLPlanner with address {tcp 127.0.0.1:45102} I161215 08:17:38.314267 16536 server/server.go:623 [n1] starting https server at 127.0.0.1:39962 I161215 08:17:38.314289 16536 server/server.go:624 [n1] starting grpc/postgres server at 127.0.0.1:45102 I161215 08:17:38.314301 16536 server/server.go:625 [n1] advertising CockroachDB node at 127.0.0.1:45102 I161215 08:17:38.314566 16644 sql/event_log.go:95 [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:45102} Attrs: Locality:} ClusterID:c96a78d1-5eac-4011-a3d9-c188b4d2ee52 StartedAt:1481789858304182039} I161215 08:17:38.325786 16536 sql/event_log.go:95 [n1] Event: "alter_table", target: 12, info: {TableName:eventlog Statement:ALTER TABLE system.eventlog ALTER COLUMN 
uniqueID SET DEFAULT uuid_v4() User:node MutationID:0 CascadeDroppedViews:[]} I161215 08:17:38.333837 16536 server/server.go:678 [n1] done ensuring all necessary migrations have run I161215 08:17:38.333858 16536 server/server.go:680 [n1] serving sql connections I161215 08:17:48.418717 16536 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks I161215 08:17:48.418833 16617 vendor/google.golang.org/grpc/transport/http2_client.go:1053 transport: http2Client.notifyError got notified that the client transport was broken EOF. I161215 08:17:48.418853 16632 vendor/google.golang.org/grpc/transport/http2_server.go:276 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:45102->127.0.0.1:38901: use of closed network connection I161215 08:17:48.418934 16593 vendor/google.golang.org/grpc/clientconn.go:667 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:45102: operation was canceled"; Reconnecting to {"127.0.0.1:45102" <nil>} I161215 08:17:48.418943 16593 vendor/google.golang.org/grpc/clientconn.go:767 grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing test_server_shim.go:130: had 1 ranges at startup, expected 5 ```
1.0
github.com/cockroachdb/cockroach/pkg/kv: TestMultiRangeEmptyAfterTruncate failed under stress - SHA: https://github.com/cockroachdb/cockroach/commits/ee0292a306126d9fa9d81d4cd3df45a6e38ad578 Parameters: ``` COCKROACH_PROPOSER_EVALUATED_KV=false TAGS= GOFLAGS= ``` Stress build found a failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=91227&tab=buildLog ``` I161215 08:17:38.298771 16536 gossip/gossip.go:248 [n?] initial resolvers: [] W161215 08:17:38.298791 16536 gossip/gossip.go:1124 [n?] no resolvers found; use --join to specify a connected node W161215 08:17:38.299389 16536 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006" I161215 08:17:38.299570 16536 storage/engine/rocksdb.go:340 opening in memory rocksdb instance I161215 08:17:38.300203 16536 server/config.go:447 1 storage engine initialized I161215 08:17:38.300488 16536 server/node.go:426 [n?] store [n0,s0] not bootstrapped I161215 08:17:38.301687 16574 storage/replica_proposal.go:390 [n?,s1,r1/1:/M{in-ax},@c420528d80] new range lease repl={1 1 1} start=0.000000000,0 exp=1481789867.301286503,0 pro=1481789858.301293203,0 following repl={0 0 0} start=0.000000000,0 exp=0.000000000,0 [physicalTime=2016-12-15 08:17:38.301652107 +0000 UTC] I161215 08:17:38.302324 16536 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks I161215 08:17:38.302408 16536 server/node.go:355 [n?] **** cluster c96a78d1-5eac-4011-a3d9-c188b4d2ee52 has been created I161215 08:17:38.302424 16536 server/node.go:356 [n?] 
**** add additional nodes by specifying --join=127.0.0.1:45102 I161215 08:17:38.302689 16536 base/node_id.go:62 [n1] NodeID set to 1 I161215 08:17:38.303014 16536 storage/store.go:1240 [n1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available I161215 08:17:38.303041 16536 server/node.go:439 [n1] initialized store [n1,s1]: {Capacity:536870912 Available:536870912 RangeCount:1 LeaseCount:0} I161215 08:17:38.303057 16536 server/node.go:324 [n1] node ID 1 initialized I161215 08:17:38.303090 16536 gossip/gossip.go:290 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:45102" > attrs:<> locality:<> I161215 08:17:38.304078 16536 storage/stores.go:296 [n1] read 0 node addresses from persistent storage I161215 08:17:38.304154 16536 server/node.go:569 [n1] connecting to gossip network to verify cluster ID... I161215 08:17:38.304175 16536 server/node.go:589 [n1] node connected via gossip and verified as part of cluster "c96a78d1-5eac-4011-a3d9-c188b4d2ee52" I161215 08:17:38.304201 16536 server/node.go:374 [n1] node=1: started with [[]=] engine(s) and attributes [] I161215 08:17:38.304226 16536 sql/executor.go:313 [n1] creating distSQLPlanner with address {tcp 127.0.0.1:45102} I161215 08:17:38.314267 16536 server/server.go:623 [n1] starting https server at 127.0.0.1:39962 I161215 08:17:38.314289 16536 server/server.go:624 [n1] starting grpc/postgres server at 127.0.0.1:45102 I161215 08:17:38.314301 16536 server/server.go:625 [n1] advertising CockroachDB node at 127.0.0.1:45102 I161215 08:17:38.314566 16644 sql/event_log.go:95 [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:45102} Attrs: Locality:} ClusterID:c96a78d1-5eac-4011-a3d9-c188b4d2ee52 StartedAt:1481789858304182039} I161215 08:17:38.325786 16536 sql/event_log.go:95 [n1] Event: "alter_table", target: 12, info: {TableName:eventlog Statement:ALTER TABLE system.eventlog ALTER COLUMN 
uniqueID SET DEFAULT uuid_v4() User:node MutationID:0 CascadeDroppedViews:[]} I161215 08:17:38.333837 16536 server/server.go:678 [n1] done ensuring all necessary migrations have run I161215 08:17:38.333858 16536 server/server.go:680 [n1] serving sql connections I161215 08:17:48.418717 16536 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks I161215 08:17:48.418833 16617 vendor/google.golang.org/grpc/transport/http2_client.go:1053 transport: http2Client.notifyError got notified that the client transport was broken EOF. I161215 08:17:48.418853 16632 vendor/google.golang.org/grpc/transport/http2_server.go:276 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:45102->127.0.0.1:38901: use of closed network connection I161215 08:17:48.418934 16593 vendor/google.golang.org/grpc/clientconn.go:667 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:45102: operation was canceled"; Reconnecting to {"127.0.0.1:45102" <nil>} I161215 08:17:48.418943 16593 vendor/google.golang.org/grpc/clientconn.go:767 grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing test_server_shim.go:130: had 1 ranges at startup, expected 5 ```
test
github com cockroachdb cockroach pkg kv testmultirangeemptyaftertruncate failed under stress sha parameters cockroach proposer evaluated kv false tags goflags stress build found a failed test gossip gossip go initial resolvers gossip gossip go no resolvers found use join to specify a connected node server status runtime go could not parse build timestamp parsing time as cannot parse as storage engine rocksdb go opening in memory rocksdb instance server config go storage engine initialized server node go store not bootstrapped storage replica proposal go new range lease repl start exp pro following repl start exp util stop stopper go stop has been called stopping or quiescing all running tasks server node go cluster has been created server node go add additional nodes by specifying join base node id go nodeid set to storage store go failed initial metrics computation system config not yet available server node go initialized store capacity available rangecount leasecount server node go node id initialized gossip gossip go nodedescriptor set to node id address attrs locality storage stores go read node addresses from persistent storage server node go connecting to gossip network to verify cluster id server node go node connected via gossip and verified as part of cluster server node go node started with engine s and attributes sql executor go creating distsqlplanner with address tcp server server go starting https server at server server go starting grpc postgres server at server server go advertising cockroachdb node at sql event log go event node join target info descriptor nodeid address networkfield tcp addressfield attrs locality clusterid startedat sql event log go event alter table target info tablename eventlog statement alter table system eventlog alter column uniqueid set default uuid user node mutationid cascadedroppedviews server server go done ensuring all necessary migrations have run server server go serving sql connections util stop stopper go stop 
has been called stopping or quiescing all running tasks vendor google golang org grpc transport client go transport notifyerror got notified that the client transport was broken eof vendor google golang org grpc transport server go transport handlestreams failed to read frame read tcp use of closed network connection vendor google golang org grpc clientconn go grpc addrconn resettransport failed to create client transport connection error desc transport dial tcp operation was canceled reconnecting to vendor google golang org grpc clientconn go grpc addrconn transportmonitor exits due to grpc the connection is closing test server shim go had ranges at startup expected
1
41,923
22,095,361,646
IssuesEvent
2022-06-01 09:30:37
nextcloud/spreed
https://api.github.com/repos/nextcloud/spreed
closed
Combine session updates of multiple rooms into 1 request from the HPB
1. to develop enhancement feature: api 🛠️ feature: signaling 📶 performance 🚀
<!--- Please keep this note for other contributors --> ### How to use GitHub * Please use the 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to show that you are interested into the same feature. * Please don't comment if you have no relevant information to add. It's just extra noise for everyone subscribed to this issue. * Subscribe to receive notifications on status change and new comments. --- Background numbers of a company call: ``` $ cat nextcloud.log | grep "nextcloud-spreed-signaling" | wc -l 7510 $ cat nextcloud.log | grep "nextcloud-spreed-signaling" | grep " sessions in room " | wc -l 6279 $ cat nextcloud.log | grep "nextcloud-spreed-signaling" | grep -v " sessions in room " | wc -l 1231 ``` about 6.2k ping requests happen. Those could mostly be combined to ~6 per minute => 360 per hour (if the server can handle ~200 sessions in one request). The idea is to add a capability based on a config which signals the HPB how many sessions can be sent per request and then drop the room check from the controller as we don't need to know it afterwards to update the sessions: https://github.com/nextcloud/spreed/blob/c07251f0c67d3c02debe1f6dbe0e12ee3fe9f99a/lib/Controller/SignalingController.php#L732-L746 Once implemented a ticket in https://github.com/strukturag/nextcloud-spreed-signaling/issues should be created
True
Combine session updates of multiple rooms into 1 request from the HPB - <!--- Please keep this note for other contributors --> ### How to use GitHub * Please use the 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to show that you are interested into the same feature. * Please don't comment if you have no relevant information to add. It's just extra noise for everyone subscribed to this issue. * Subscribe to receive notifications on status change and new comments. --- Background numbers of a company call: ``` $ cat nextcloud.log | grep "nextcloud-spreed-signaling" | wc -l 7510 $ cat nextcloud.log | grep "nextcloud-spreed-signaling" | grep " sessions in room " | wc -l 6279 $ cat nextcloud.log | grep "nextcloud-spreed-signaling" | grep -v " sessions in room " | wc -l 1231 ``` about 6.2k ping requests happen. Those could mostly be combined to ~6 per minute => 360 per hour (if the server can handle ~200 sessions in one request). The idea is to add a capability based on a config which signals the HPB how many sessions can be sent per request and then drop the room check from the controller as we don't need to know it afterwards to update the sessions: https://github.com/nextcloud/spreed/blob/c07251f0c67d3c02debe1f6dbe0e12ee3fe9f99a/lib/Controller/SignalingController.php#L732-L746 Once implemented a ticket in https://github.com/strukturag/nextcloud-spreed-signaling/issues should be created
non_test
combine session updates of multiple rooms into request from the hpb how to use github please use the 👍 to show that you are interested into the same feature please don t comment if you have no relevant information to add it s just extra noise for everyone subscribed to this issue subscribe to receive notifications on status change and new comments background numbers of a company call cat nextcloud log grep nextcloud spreed signaling wc l cat nextcloud log grep nextcloud spreed signaling grep sessions in room wc l cat nextcloud log grep nextcloud spreed signaling grep v sessions in room wc l about ping requests happen those could mostly be combined to per minute per hour if the server can handle sessions in one request the idea is to add a capability based on a config which signals the hpb how many sessions can be sent per request and then drop the room check from the controller as we don t need to know it afterwards to update the sessions once implemented a ticket in should be created
0
8,342
2,611,493,540
IssuesEvent
2015-02-27 05:33:49
chrsmith/hedgewars
https://api.github.com/repos/chrsmith/hedgewars
opened
Health/name hedgehogs tags missing after startup
auto-migrated Priority-Medium Type-Defect
``` No health/name tags above some hogs staying on a bridge on the left part of the map. To reproduce just load the save attached (tested in WinXP 32 bit, Intel 965 chipset videocard, revision 3e8fbc917f32). ``` Original issue reported on code.google.com by `unC0Rr` on 14 Dec 2011 at 11:14 Attachments: * [tagsBug.42.hws](https://storage.googleapis.com/google-code-attachments/hedgewars/issue-334/comment-0/tagsBug.42.hws)
1.0
Health/name hedgehogs tags missing after startup - ``` No health/name tags above some hogs staying on a bridge on the left part of the map. To reproduce just load the save attached (tested in WinXP 32 bit, Intel 965 chipset videocard, revision 3e8fbc917f32). ``` Original issue reported on code.google.com by `unC0Rr` on 14 Dec 2011 at 11:14 Attachments: * [tagsBug.42.hws](https://storage.googleapis.com/google-code-attachments/hedgewars/issue-334/comment-0/tagsBug.42.hws)
non_test
health name hedgehogs tags missing after startup no health name tags above some hogs staying on a bridge on the left part of the map to reproduce just load the save attached tested in winxp bit intel chipset videocard revision original issue reported on code google com by on dec at attachments
0
211,397
16,237,838,702
IssuesEvent
2021-05-07 04:40:34
rancher/rancher
https://api.github.com/repos/rancher/rancher
closed
Accessing clusters and v2 charts page is slow
[zube]: To Test kind/bug-qa
**What kind of request is this (question/bug/enhancement/feature request):** bug **Steps to reproduce (least amount of steps as possible):** - Deploy a DO cluster - all roles 3 nodes on a rancher master-head setup - When the cluster is up and active, Try accessing the Default/System project of the deployed cluster. - The page takes too long to load and rancher logs show: - Rancher logs show: ``` I0407 04:02:28.336283 24 request.go:645] Throttling request took 1.045507152s, request: GET:https://127.0.0.1:6444/apis/management.cattle.io/v3?timeout=32s I0407 04:02:45.444299 24 request.go:645] Throttling request took 1.045014219s, request: GET:https://127.0.0.1:6444/apis/management.cattle.io/v3?timeout=32s I0407 04:02:59.388484 24 request.go:645] Throttling request took 1.045439151s, request: GET:https://127.0.0.1:6444/apis/management.cattle.io/v3?timeout=32s ``` **Note:** Loading charts/apps also takes sometime, and these appear in rancher logs **Environment information** - Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI): master-head commit id: `215677` - Installation option (single install/HA): single
1.0
Accessing clusters and v2 charts page is slow - **What kind of request is this (question/bug/enhancement/feature request):** bug **Steps to reproduce (least amount of steps as possible):** - Deploy a DO cluster - all roles 3 nodes on a rancher master-head setup - When the cluster is up and active, Try accessing the Default/System project of the deployed cluster. - The page takes too long to load and rancher logs show: - Rancher logs show: ``` I0407 04:02:28.336283 24 request.go:645] Throttling request took 1.045507152s, request: GET:https://127.0.0.1:6444/apis/management.cattle.io/v3?timeout=32s I0407 04:02:45.444299 24 request.go:645] Throttling request took 1.045014219s, request: GET:https://127.0.0.1:6444/apis/management.cattle.io/v3?timeout=32s I0407 04:02:59.388484 24 request.go:645] Throttling request took 1.045439151s, request: GET:https://127.0.0.1:6444/apis/management.cattle.io/v3?timeout=32s ``` **Note:** Loading charts/apps also takes sometime, and these appear in rancher logs **Environment information** - Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI): master-head commit id: `215677` - Installation option (single install/HA): single
test
accessing clusters and charts page is slow what kind of request is this question bug enhancement feature request bug steps to reproduce least amount of steps as possible deploy a do cluster all roles nodes on a rancher master head setup when the cluster is up and active try accessing the default system project of the deployed cluster the page takes too long to load and rancher logs show rancher logs show request go throttling request took request get request go throttling request took request get request go throttling request took request get note loading charts apps also takes sometime and these appear in rancher logs environment information rancher version rancher rancher rancher server image tag or shown bottom left in the ui master head commit id installation option single install ha single
1
39,700
5,242,731,745
IssuesEvent
2017-01-31 18:50:32
kubernetes/kubernetes
https://api.github.com/repos/kubernetes/kubernetes
closed
Port forwarding tests won't work with rkt-kvm (propably with all hypervisor based runtimes)
area/kubelet area/test sig/node
Inside portforwardtester container in portforwardtester.go line 52 ``` go listener, err := net.Listen("tcp", fmt.Sprintf("localhost:%s", bindPort)) ``` It causes app to only listen on the local interface, its not the problem with namespace based runtimes cause we can use nsenter. With hypervisor based runtimes we need app to be available on external address, so line mentioned above should look like: ``` go listener, err := net.Listen("tcp", fmt.Sprintf("0.0.0.0:%s", bindPort)) ``` @ncdc PTAL cause i don't know who is owner of this container @feiskyer could you check if this is a problem for hyper too ?
1.0
Port forwarding tests won't work with rkt-kvm (propably with all hypervisor based runtimes) - Inside portforwardtester container in portforwardtester.go line 52 ``` go listener, err := net.Listen("tcp", fmt.Sprintf("localhost:%s", bindPort)) ``` It causes app to only listen on the local interface, its not the problem with namespace based runtimes cause we can use nsenter. With hypervisor based runtimes we need app to be available on external address, so line mentioned above should look like: ``` go listener, err := net.Listen("tcp", fmt.Sprintf("0.0.0.0:%s", bindPort)) ``` @ncdc PTAL cause i don't know who is owner of this container @feiskyer could you check if this is a problem for hyper too ?
test
port forwarding tests won t work with rkt kvm propably with all hypervisor based runtimes inside portforwardtester container in portforwardtester go line go listener err net listen tcp fmt sprintf localhost s bindport it causes app to only listen on the local interface its not the problem with namespace based runtimes cause we can use nsenter with hypervisor based runtimes we need app to be available on external address so line mentioned above should look like go listener err net listen tcp fmt sprintf s bindport ncdc ptal cause i don t know who is owner of this container feiskyer could you check if this is a problem for hyper too
1
124,146
10,295,251,284
IssuesEvent
2019-08-27 20:44:41
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
teamcity: failed test: TestImportMysql
C-test-failure O-robot
The following tests appear to have failed on master (testrace): TestImportMysql/all_from_multi_gzip, TestImportMysql/second_from_multi, TestImportMysql/all_from_multi, TestImportMysql/simple_from_multi, TestImportMysql, TestImportMysql/single_table_dump, TestImportMysql/all_from_multi_bzip, TestImportMysql/read_data_only, TestImportMysql/second_table_dump You may want to check [for open issues](https://github.com/cockroachdb/cockroach/issues?q=is%3Aissue+is%3Aopen+TestImportMysql). [#1452062](https://teamcity.cockroachdb.com/viewLog.html?buildId=1452062): ``` TestImportMysql/all_from_multi_gzip ...Table/65/3 - /Max that contains live data I190824 01:39:08.533188 136434 storage/compactor/compactor.go:325 [n3,s3,compactor] purging suggested compaction for range /Table/61/1 - /Table/61/2 that contains live data I190824 01:39:08.533467 136434 storage/compactor/compactor.go:325 [n3,s3,compactor] purging suggested compaction for range /Table/61/2 - /Table/62 that contains live data I190824 01:39:08.533592 136434 storage/compactor/compactor.go:325 [n3,s3,compactor] purging suggested compaction for range /Table/61/3 - /Table/62 that contains live data I190824 01:39:08.533708 136434 storage/compactor/compactor.go:325 [n3,s3,compactor] purging suggested compaction for range /Table/62/1 - /Table/63 that contains live data I190824 01:39:08.533808 136434 storage/compactor/compactor.go:325 [n3,s3,compactor] purging suggested compaction for range /Table/63/1 - /Table/64 that contains live data I190824 01:39:08.533950 136434 storage/compactor/compactor.go:325 [n3,s3,compactor] purging suggested compaction for range /Table/65/1 - /Max that contains live data I190824 01:39:08.534062 136434 storage/compactor/compactor.go:325 [n3,s3,compactor] purging suggested compaction for range /Table/65/2 - /Max that contains live data I190824 01:39:08.534169 136434 storage/compactor/compactor.go:325 [n3,s3,compactor] purging suggested compaction for range /Table/65/3 - /Max that contains live 
data W190824 01:39:08.543470 135203 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:39:08.545405 135203 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:39:08.550549 136032 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:39:08.551036 136032 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:39:08.553883 136377 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:39:08.562007 136377 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
I190824 01:39:09.174714 150260 storage/replica_command.go:284 [n1,split,s1,r59/1:/Table/{69/1-70/1}] initiating a split of this range at key /Table/70 [r66] (zone config) I190824 01:39:09.275899 150561 storage/replica_command.go:284 [n1,split,s1,r58/1:/Table/6{7/3-8/1}] initiating a split of this range at key /Table/68 [r67] (zone config) I190824 01:39:09.280326 150560 storage/replica_command.go:284 [n1,split,s1,r64/1:/Table/7{0/1-1/1}] initiating a split of this range at key /Table/71 [r68] (zone config) I190824 01:39:09.307457 150562 storage/replica_command.go:284 [n1,split,s1,r60/1:/Table/6{8/1-9/1}] initiating a split of this range at key /Table/69 [r69] (zone config) I190824 01:39:09.909036 150710 storage/replica_command.go:284 [n1,split,s1,r65/1:/Table/6{6/1-7/1}] initiating a split of this range at key /Table/67 [r70] (zone config) I190824 01:39:10.390914 150744 storage/replica_command.go:284 [n1,split,s1,r50/1:/Table/6{5-6/1}] initiating a split of this range at key /Table/66 [r71] (zone config) I190824 01:39:10.732640 135269 server/status/runtime.go:498 [n1] runtime stats: 2.1 GiB RSS, 754 goroutines, 218 MiB/71 MiB/337 MiB GO alloc/idle/total, 121 MiB/166 MiB CGO alloc/total, 1597.7 CGO/sec, 122.6/9.1 %(u/s)time, 0.3 %gc (2x), 3.3 MiB/3.3 MiB (r/w)net W190824 01:39:11.511031 150745 storage/replica_raft.go:105 [n1,s1,r50/1:/Table/6{5-6}] context canceled before proposing: 1 HeartbeatTxn TestImportMysql/second_table_dump ...1:38:16.547518 135204 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.548022 135204 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.562783 136376 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. 
max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.563404 136376 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.587413 136048 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.587937 136048 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.625666 140768 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.671070 136066 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.671597 136066 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.675353 135178 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.675862 135178 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.678860 136397 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.679333 136397 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
W190824 01:38:16.753479 140825 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.808433 135227 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.819064 135227 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.875552 136069 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.876018 136069 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.907062 136409 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.907678 136409 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. I190824 01:38:17.387836 140862 storage/replica_command.go:284 [n1,split,s1,r26/1:/Table/5{5-6/1}] initiating a split of this range at key /Table/56 [r30] (zone config) TestImportMysql/read_data_only --- FAIL: testrace/TestImportMysql/read_data_only (0.000s) Test ended in panic. 
------- Stdout: ------- I190824 01:38:03.938592 138480 storage/replica_command.go:284 [n1,s1,r20/1:/{Table/24-Max}] initiating a split of this range at key /Table/53/1 [r21] (manual) I190824 01:38:04.180612 136110 server/status/runtime.go:498 [n2] runtime stats: 2.0 GiB RSS, 699 goroutines, 188 MiB/101 MiB/335 MiB GO alloc/idle/total, 86 MiB/131 MiB CGO alloc/total, 2430.3 CGO/sec, 134.7/13.8 %(u/s)time, 0.3 %gc (2x), 2.6 MiB/2.6 MiB (r/w)net W190824 01:38:04.356984 138480 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:04.401556 135222 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:04.403641 136053 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:04.404119 136053 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:04.408604 136406 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:04.409240 136406 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:04.410374 1650 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:04.472075 138480 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
W190824 01:38:04.507311 135221 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:04.507848 135221 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:04.509145 136361 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:04.509682 136361 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:04.517936 136051 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:04.518603 136051 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:04.519872 1650 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. I190824 01:38:04.927539 138689 storage/replica_command.go:284 [n1,split,s1,r20/1:/Table/{24-53/1}] initiating a split of this range at key /Table/53 [r22] (zone config) TestImportMysql ...90824 01:38:01.636589 1650 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
I190824 01:38:01.638439 138104 storage/replica_raftstorage.go:829 [n2,s2,r17/2:/Table/2{1-2}] applied LEARNER snapshot [total=13ms ingestion=4@11ms id=27f1644e index=19] W190824 01:38:01.640474 1650 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. I190824 01:38:01.657226 135033 storage/replica_command.go:1264 [n1,replicate,s1,r17/1:/Table/2{1-2}] change replicas (add [(n2,s2):2] remove []): existing descriptor r17:/Table/2{1-2} [(n1,s1):1, (n2,s2):2LEARNER, next=3, gen=1] W190824 01:38:01.662319 135264 server/node.go:817 [n1,summaries] health alerts detected: {Alerts:[{StoreID:1 Category:METRICS Description:ranges.underreplicated Value:2}]} I190824 01:38:01.766610 135205 storage/queue.go:518 [n1,s1,r9/1:/Table/1{3-4}] rate limited in MaybeAdd (raftlog): context canceled I190824 01:38:01.813636 135033 storage/replica_raft.go:291 [n1,s1,r17/1:/Table/2{1-2}] proposing ADD_REPLICA[(n2,s2):2]: after=[(n1,s1):1 (n2,s2):2] next=3 I190824 01:38:01.826479 135033 storage/replica_command.go:1264 [n1,replicate,s1,r17/1:/Table/2{1-2}] change replicas (add [(n3,s3):3LEARNER] remove []): existing descriptor r17:/Table/2{1-2} [(n1,s1):1, (n2,s2):2, next=3, gen=2] I190824 01:38:01.942358 135033 storage/replica_raft.go:291 [n1,s1,r17/1:/Table/2{1-2}] proposing ADD_REPLICA[(n3,s3):3LEARNER]: after=[(n1,s1):1 (n2,s2):2 (n3,s3):3LEARNER] next=4 I190824 01:38:02.000633 135033 storage/store_snapshot.go:995 [n1,replicate,s1,r17/1:/Table/2{1-2}] sending LEARNER snapshot 4fd488de at applied index 23 I190824 01:38:02.001662 135033 storage/store_snapshot.go:1038 [n1,replicate,s1,r17/1:/Table/2{1-2}] streamed snapshot to (n3,s3):3: kv pairs: 11, log entries: 0, rate-limit: 8.0 MiB/sec, 0.04s I190824 01:38:02.038247 138049 storage/replica_raftstorage.go:808 [n3,s3,r17/3:{-}] applying LEARNER snapshot [id=4fd488de index=23] W190824 01:38:02.040966 138049 storage/engine/rocksdb.go:116 
[rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:02.041947 138049 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:02.042306 1650 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:02.042781 1650 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. I190824 01:38:02.044827 138049 storage/replica_raftstorage.go:829 [n3,s3,r17/3:/Table/2{1-2}] applied LEARNER snapshot [total=6ms ingestion=4@2ms id=4fd488de index=23] I190824 01:38:02.050228 135033 storage/replica_command.go:1264 [n1,replicate,s1,r17/1:/Table/2{1-2}] change replicas (add [(n3,s3):3] remove []): existing descriptor r17:/Table/2{1-2} [(n1,s1):1, (n2,s2):2, (n3,s3):3LEARNER, next=4, gen=3] I190824 01:38:02.120058 135033 storage/replica_raft.go:291 [n1,s1,r17/1:/Table/2{1-2}] proposing ADD_REPLICA[(n3,s3):3]: after=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4 I190824 01:38:02.292623 135033 testutils/testcluster/testcluster.go:656 WaitForFullReplication took: 16.431277583s I190824 01:38:02.601801 138093 sql/event_log.go:130 [n1,client=127.0.0.1:43292,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:kv.bulk_ingest.batch_size Value:10KB User:root} I190824 01:38:02.916703 138093 sql/event_log.go:130 [n1,client=127.0.0.1:43292,user=root] Event: "create_database", target: 52, info: {DatabaseName:foo Statement:CREATE DATABASE foo User:root} TestImportMysql/simple_from_multi ...compactor.go:325 [n2,s2,compactor] purging suggested compaction for range /Table/53/1 - /Max that contains live data I190824 01:38:21.975925 136105 storage/compactor/compactor.go:325 
[n2,s2,compactor] purging suggested compaction for range /Table/54/1 - /Table/55 that contains live data I190824 01:38:21.976059 136105 storage/compactor/compactor.go:325 [n2,s2,compactor] purging suggested compaction for range /Table/55/1 - /Max that contains live data I190824 01:38:21.976220 136105 storage/compactor/compactor.go:325 [n2,s2,compactor] purging suggested compaction for range /Table/56/1 - /Table/56/2 that contains live data I190824 01:38:21.976330 136105 storage/compactor/compactor.go:325 [n2,s2,compactor] purging suggested compaction for range /Table/56/2 - /Max that contains live data I190824 01:38:21.976465 136105 storage/compactor/compactor.go:325 [n2,s2,compactor] purging suggested compaction for range /Table/56/3 - /Max that contains live data W190824 01:38:22.362937 141707 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:22.406100 135230 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:22.406754 135230 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:22.410253 136388 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:22.412577 136388 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:22.413366 136023 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
W190824 01:38:22.414053 136023 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:22.461158 141707 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:22.474626 135177 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:22.476313 135177 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:22.476978 136021 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:22.477518 136021 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:22.478089 136386 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:22.478974 136386 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. I190824 01:38:22.821751 142022 storage/replica_command.go:284 [n1,split,s1,r30/1:/Table/5{6-7/1}] initiating a split of this range at key /Table/57 [r34] (zone config) I190824 01:38:22.830339 142023 storage/replica_command.go:284 [n1,split,s1,r32/1:/Table/5{7/1-8/1}] initiating a split of this range at key /Table/58 [r35] (zone config) TestImportMysql/all_from_multi ...eed. 
W190824 01:38:42.955057 136398 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:42.987416 144709 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:43.017498 135139 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:43.017995 135139 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:43.020172 136047 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:43.020332 136409 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:43.020647 136047 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:43.020951 136409 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
I190824 01:38:43.399108 145140 storage/replica_command.go:284 [n1,split,s1,r46/1:/Table/6{4/1-5/1}] initiating a split of this range at key /Table/65 [r50] (zone config) I190824 01:38:43.714260 145476 storage/replica_command.go:284 [n1,split,s1,r45/1:/Table/6{3/1-4/1}] initiating a split of this range at key /Table/64 [r51] (zone config) I190824 01:38:43.751646 145475 storage/replica_command.go:284 [n1,split,s1,r40/1:/Table/6{0/1-1/1}] initiating a split of this range at key /Table/61 [r52] (zone config) I190824 01:38:43.801634 145456 storage/replica_command.go:284 [n1,split,s1,r44/1:/Table/6{2/1-3/1}] initiating a split of this range at key /Table/63 [r53] (zone config) I190824 01:38:43.858271 145457 storage/replica_command.go:284 [n1,split,s1,r39/1:/Table/{59-60/1}] initiating a split of this range at key /Table/60 [r54] (zone config) I190824 01:38:44.150077 136107 gossip/gossip.go:566 [n2] gossip status (ok, 3 nodes) gossip client (1/3 cur/max conns) 1: 127.0.0.1:42279 (1m0s: infos 139/340 sent/received, bytes 35041B/322667B sent/received) gossip server (0/3 cur/max conns, infos 139/340 sent/received, bytes 35041B/322667B sent/received) gossip connectivity n2 [sentinel]; n2 -> n1; n3 -> n1; I190824 01:38:44.205248 136110 server/status/runtime.go:498 [n2] runtime stats: 2.1 GiB RSS, 759 goroutines, 216 MiB/72 MiB/336 MiB GO alloc/idle/total, 111 MiB/156 MiB CGO alloc/total, 1518.0 CGO/sec, 126.0/10.3 %(u/s)time, 0.1 %gc (2x), 3.7 MiB/3.7 MiB (r/w)net I190824 01:38:45.127981 145735 storage/replica_command.go:284 [n1,split,s1,r43/1:/Table/6{1/3-2/1}] initiating a split of this range at key /Table/62 [r55] (zone config) W190824 01:38:45.289510 145547 storage/replica_raft.go:105 [n1,s1,r39/1:/Table/{59-60}] context canceled before proposing: 1 HeartbeatTxn I190824 01:38:45.517105 136436 gossip/gossip.go:566 [n3] gossip status (ok, 3 nodes) gossip client (1/3 cur/max conns) 1: 127.0.0.1:42279 (1m1s: infos 121/369 sent/received, bytes 33618B/326778B sent/received) 
gossip server (0/3 cur/max conns, infos 121/369 sent/received, bytes 33618B/326778B sent/received) gossip connectivity n2 [sentinel]; n2 -> n1; n3 -> n1; I190824 01:38:45.560335 136439 server/status/runtime.go:498 [n3] runtime stats: 2.1 GiB RSS, 757 goroutines, 246 MiB/48 MiB/337 MiB GO alloc/idle/total, 110 MiB/155 MiB CGO alloc/total, 1661.1 CGO/sec, 128.4/11.1 %(u/s)time, 0.3 %gc (3x), 3.7 MiB/3.7 MiB (r/w)net TestImportMysql/second_from_multi ...1:38:31.136848 135213 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.137374 135213 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.144668 136024 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.145162 136024 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.149474 136386 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.150048 136386 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.193187 143333 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.210770 136407 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
W190824 01:38:31.210887 135222 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.211340 136407 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.211466 135222 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.215396 136040 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.215896 136040 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.251905 143333 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.270247 135201 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.270898 135201 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.272609 136067 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.273086 136067 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
W190824 01:38:31.280426 136395 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.289184 136395 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. I190824 01:38:31.609846 143517 storage/replica_command.go:284 [n1,split,s1,r35/1:/Table/5{8-9/1}] initiating a split of this range at key /Table/59 [r39] (zone config) TestImportMysql/single_table_dump ... removing replica r21/1 I190824 01:38:06.928405 135233 storage/queue.go:518 [n1,s1,r3/1:/System/{NodeLive…-tsd}] rate limited in MaybeAdd (raftlog): context canceled I190824 01:38:06.934908 136393 storage/store.go:2593 [n3,s3,r22/3:/Table/53{-/1}] removing replica r21/3 I190824 01:38:06.937354 136056 storage/store.go:2593 [n2,s2,r22/2:/Table/53{-/1}] removing replica r21/2 I190824 01:38:08.504314 139230 storage/replica_command.go:284 [n1,s1,r22/1:/{Table/53-Max}] initiating a split of this range at key /Table/55/1 [r23] (manual) I190824 01:38:08.876990 139307 storage/replica_command.go:284 [n1,s1,r22/1:/Table/5{3-5/1}] initiating a split of this range at key /Table/54/1 [r24] (manual) W190824 01:38:09.394191 139339 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:09.457128 135233 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:09.458889 136075 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:09.459118 136419 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. 
max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:09.460717 136075 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:09.460861 136419 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:09.459477 135233 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:09.528934 139326 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:09.556723 135227 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:09.557254 135227 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:09.562592 136020 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:09.563148 136020 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:09.565944 136421 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:09.566525 136421 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
I190824 01:38:09.824107 139294 storage/replica_command.go:284 [n1,split,s1,r22/1:/Table/5{3-4/1}] initiating a split of this range at key /Table/54 [r25] (zone config) I190824 01:38:10.070313 139495 storage/replica_command.go:284 [n1,split,s1,r24/1:/Table/5{4/1-5/1}] initiating a split of this range at key /Table/55 [r26] (zone config) I190824 01:38:10.726351 135269 server/status/runtime.go:498 [n1] runtime stats: 2.0 GiB RSS, 705 goroutines, 156 MiB/128 MiB/335 MiB GO alloc/idle/total, 91 MiB/136 MiB CGO alloc/total, 1297.6 CGO/sec, 127.3/12.5 %(u/s)time, 0.3 %gc (3x), 3.2 MiB/3.2 MiB (r/w)net TestImportMysql/all_from_multi_bzip ...op.(*Stopper).RunWorker.func1(0xc008a60ef0, 0xc0050d92c0, 0xc008a60eb0) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x160 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:189 +0xc4 goroutine 135215 [sync.Cond.Wait]: runtime.goparkunlock(...) 
teamcity: failed test: TestImportMysql

The following tests appear to have failed on master (testrace): TestImportMysql/all_from_multi_gzip, TestImportMysql/second_from_multi, TestImportMysql/all_from_multi, TestImportMysql/simple_from_multi, TestImportMysql, TestImportMysql/single_table_dump, TestImportMysql/all_from_multi_bzip, TestImportMysql/read_data_only, TestImportMysql/second_table_dump

You may want to check [for open issues](https://github.com/cockroachdb/cockroach/issues?q=is%3Aissue+is%3Aopen+TestImportMysql).

[#1452062](https://teamcity.cockroachdb.com/viewLog.html?buildId=1452062):

```
TestImportMysql/all_from_multi_gzip ...Table/65/3 - /Max that contains live data I190824 01:39:08.533188 136434 storage/compactor/compactor.go:325 [n3,s3,compactor] purging suggested compaction for range /Table/61/1 - /Table/61/2 that contains live data I190824 01:39:08.533467 136434 storage/compactor/compactor.go:325 [n3,s3,compactor] purging suggested compaction for range /Table/61/2 - /Table/62 that contains live data I190824 01:39:08.533592 136434 storage/compactor/compactor.go:325 [n3,s3,compactor] purging suggested compaction for range /Table/61/3 - /Table/62 that contains live data I190824 01:39:08.533708 136434 storage/compactor/compactor.go:325 [n3,s3,compactor] purging suggested compaction for range /Table/62/1 - /Table/63 that contains live data I190824 01:39:08.533808 136434 storage/compactor/compactor.go:325 [n3,s3,compactor] purging suggested compaction for range /Table/63/1 - /Table/64 that contains live data I190824 01:39:08.533950 136434 storage/compactor/compactor.go:325 [n3,s3,compactor] purging suggested compaction for range /Table/65/1 - /Max that contains live data I190824 01:39:08.534062 136434 storage/compactor/compactor.go:325 [n3,s3,compactor] purging suggested compaction for range /Table/65/2 - /Max that contains live data I190824 01:39:08.534169 136434 storage/compactor/compactor.go:325 [n3,s3,compactor] purging suggested compaction for range 
/Table/65/3 - /Max that contains live data W190824 01:39:08.543470 135203 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:39:08.545405 135203 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:39:08.550549 136032 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:39:08.551036 136032 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:39:08.553883 136377 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:39:08.562007 136377 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
I190824 01:39:09.174714 150260 storage/replica_command.go:284 [n1,split,s1,r59/1:/Table/{69/1-70/1}] initiating a split of this range at key /Table/70 [r66] (zone config) I190824 01:39:09.275899 150561 storage/replica_command.go:284 [n1,split,s1,r58/1:/Table/6{7/3-8/1}] initiating a split of this range at key /Table/68 [r67] (zone config) I190824 01:39:09.280326 150560 storage/replica_command.go:284 [n1,split,s1,r64/1:/Table/7{0/1-1/1}] initiating a split of this range at key /Table/71 [r68] (zone config) I190824 01:39:09.307457 150562 storage/replica_command.go:284 [n1,split,s1,r60/1:/Table/6{8/1-9/1}] initiating a split of this range at key /Table/69 [r69] (zone config) I190824 01:39:09.909036 150710 storage/replica_command.go:284 [n1,split,s1,r65/1:/Table/6{6/1-7/1}] initiating a split of this range at key /Table/67 [r70] (zone config) I190824 01:39:10.390914 150744 storage/replica_command.go:284 [n1,split,s1,r50/1:/Table/6{5-6/1}] initiating a split of this range at key /Table/66 [r71] (zone config) I190824 01:39:10.732640 135269 server/status/runtime.go:498 [n1] runtime stats: 2.1 GiB RSS, 754 goroutines, 218 MiB/71 MiB/337 MiB GO alloc/idle/total, 121 MiB/166 MiB CGO alloc/total, 1597.7 CGO/sec, 122.6/9.1 %(u/s)time, 0.3 %gc (2x), 3.3 MiB/3.3 MiB (r/w)net W190824 01:39:11.511031 150745 storage/replica_raft.go:105 [n1,s1,r50/1:/Table/6{5-6}] context canceled before proposing: 1 HeartbeatTxn TestImportMysql/second_table_dump ...1:38:16.547518 135204 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.548022 135204 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.562783 136376 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. 
max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.563404 136376 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.587413 136048 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.587937 136048 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.625666 140768 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.671070 136066 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.671597 136066 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.675353 135178 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.675862 135178 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.678860 136397 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.679333 136397 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
W190824 01:38:16.753479 140825 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.808433 135227 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.819064 135227 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.875552 136069 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.876018 136069 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.907062 136409 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:16.907678 136409 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. I190824 01:38:17.387836 140862 storage/replica_command.go:284 [n1,split,s1,r26/1:/Table/5{5-6/1}] initiating a split of this range at key /Table/56 [r30] (zone config) TestImportMysql/read_data_only --- FAIL: testrace/TestImportMysql/read_data_only (0.000s) Test ended in panic. 
------- Stdout: ------- I190824 01:38:03.938592 138480 storage/replica_command.go:284 [n1,s1,r20/1:/{Table/24-Max}] initiating a split of this range at key /Table/53/1 [r21] (manual) I190824 01:38:04.180612 136110 server/status/runtime.go:498 [n2] runtime stats: 2.0 GiB RSS, 699 goroutines, 188 MiB/101 MiB/335 MiB GO alloc/idle/total, 86 MiB/131 MiB CGO alloc/total, 2430.3 CGO/sec, 134.7/13.8 %(u/s)time, 0.3 %gc (2x), 2.6 MiB/2.6 MiB (r/w)net W190824 01:38:04.356984 138480 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:04.401556 135222 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:04.403641 136053 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:04.404119 136053 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:04.408604 136406 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:04.409240 136406 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:04.410374 1650 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:04.472075 138480 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
W190824 01:38:04.507311 135221 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:04.507848 135221 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:04.509145 136361 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:04.509682 136361 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:04.517936 136051 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:04.518603 136051 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:04.519872 1650 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. I190824 01:38:04.927539 138689 storage/replica_command.go:284 [n1,split,s1,r20/1:/Table/{24-53/1}] initiating a split of this range at key /Table/53 [r22] (zone config) TestImportMysql ...90824 01:38:01.636589 1650 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
I190824 01:38:01.638439 138104 storage/replica_raftstorage.go:829 [n2,s2,r17/2:/Table/2{1-2}] applied LEARNER snapshot [total=13ms ingestion=4@11ms id=27f1644e index=19] W190824 01:38:01.640474 1650 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. I190824 01:38:01.657226 135033 storage/replica_command.go:1264 [n1,replicate,s1,r17/1:/Table/2{1-2}] change replicas (add [(n2,s2):2] remove []): existing descriptor r17:/Table/2{1-2} [(n1,s1):1, (n2,s2):2LEARNER, next=3, gen=1] W190824 01:38:01.662319 135264 server/node.go:817 [n1,summaries] health alerts detected: {Alerts:[{StoreID:1 Category:METRICS Description:ranges.underreplicated Value:2}]} I190824 01:38:01.766610 135205 storage/queue.go:518 [n1,s1,r9/1:/Table/1{3-4}] rate limited in MaybeAdd (raftlog): context canceled I190824 01:38:01.813636 135033 storage/replica_raft.go:291 [n1,s1,r17/1:/Table/2{1-2}] proposing ADD_REPLICA[(n2,s2):2]: after=[(n1,s1):1 (n2,s2):2] next=3 I190824 01:38:01.826479 135033 storage/replica_command.go:1264 [n1,replicate,s1,r17/1:/Table/2{1-2}] change replicas (add [(n3,s3):3LEARNER] remove []): existing descriptor r17:/Table/2{1-2} [(n1,s1):1, (n2,s2):2, next=3, gen=2] I190824 01:38:01.942358 135033 storage/replica_raft.go:291 [n1,s1,r17/1:/Table/2{1-2}] proposing ADD_REPLICA[(n3,s3):3LEARNER]: after=[(n1,s1):1 (n2,s2):2 (n3,s3):3LEARNER] next=4 I190824 01:38:02.000633 135033 storage/store_snapshot.go:995 [n1,replicate,s1,r17/1:/Table/2{1-2}] sending LEARNER snapshot 4fd488de at applied index 23 I190824 01:38:02.001662 135033 storage/store_snapshot.go:1038 [n1,replicate,s1,r17/1:/Table/2{1-2}] streamed snapshot to (n3,s3):3: kv pairs: 11, log entries: 0, rate-limit: 8.0 MiB/sec, 0.04s I190824 01:38:02.038247 138049 storage/replica_raftstorage.go:808 [n3,s3,r17/3:{-}] applying LEARNER snapshot [id=4fd488de index=23] W190824 01:38:02.040966 138049 storage/engine/rocksdb.go:116 
[rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:02.041947 138049 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:02.042306 1650 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:02.042781 1650 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. I190824 01:38:02.044827 138049 storage/replica_raftstorage.go:829 [n3,s3,r17/3:/Table/2{1-2}] applied LEARNER snapshot [total=6ms ingestion=4@2ms id=4fd488de index=23] I190824 01:38:02.050228 135033 storage/replica_command.go:1264 [n1,replicate,s1,r17/1:/Table/2{1-2}] change replicas (add [(n3,s3):3] remove []): existing descriptor r17:/Table/2{1-2} [(n1,s1):1, (n2,s2):2, (n3,s3):3LEARNER, next=4, gen=3] I190824 01:38:02.120058 135033 storage/replica_raft.go:291 [n1,s1,r17/1:/Table/2{1-2}] proposing ADD_REPLICA[(n3,s3):3]: after=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4 I190824 01:38:02.292623 135033 testutils/testcluster/testcluster.go:656 WaitForFullReplication took: 16.431277583s I190824 01:38:02.601801 138093 sql/event_log.go:130 [n1,client=127.0.0.1:43292,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:kv.bulk_ingest.batch_size Value:10KB User:root} I190824 01:38:02.916703 138093 sql/event_log.go:130 [n1,client=127.0.0.1:43292,user=root] Event: "create_database", target: 52, info: {DatabaseName:foo Statement:CREATE DATABASE foo User:root} TestImportMysql/simple_from_multi ...compactor.go:325 [n2,s2,compactor] purging suggested compaction for range /Table/53/1 - /Max that contains live data I190824 01:38:21.975925 136105 storage/compactor/compactor.go:325 
[n2,s2,compactor] purging suggested compaction for range /Table/54/1 - /Table/55 that contains live data I190824 01:38:21.976059 136105 storage/compactor/compactor.go:325 [n2,s2,compactor] purging suggested compaction for range /Table/55/1 - /Max that contains live data I190824 01:38:21.976220 136105 storage/compactor/compactor.go:325 [n2,s2,compactor] purging suggested compaction for range /Table/56/1 - /Table/56/2 that contains live data I190824 01:38:21.976330 136105 storage/compactor/compactor.go:325 [n2,s2,compactor] purging suggested compaction for range /Table/56/2 - /Max that contains live data I190824 01:38:21.976465 136105 storage/compactor/compactor.go:325 [n2,s2,compactor] purging suggested compaction for range /Table/56/3 - /Max that contains live data W190824 01:38:22.362937 141707 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:22.406100 135230 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:22.406754 135230 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:22.410253 136388 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:22.412577 136388 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:22.413366 136023 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
W190824 01:38:22.414053 136023 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:22.461158 141707 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:22.474626 135177 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:22.476313 135177 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:22.476978 136021 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:22.477518 136021 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:22.478089 136386 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:22.478974 136386 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. I190824 01:38:22.821751 142022 storage/replica_command.go:284 [n1,split,s1,r30/1:/Table/5{6-7/1}] initiating a split of this range at key /Table/57 [r34] (zone config) I190824 01:38:22.830339 142023 storage/replica_command.go:284 [n1,split,s1,r32/1:/Table/5{7/1-8/1}] initiating a split of this range at key /Table/58 [r35] (zone config) TestImportMysql/all_from_multi ...eed. 
W190824 01:38:42.955057 136398 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:42.987416 144709 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:43.017498 135139 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:43.017995 135139 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:43.020172 136047 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:43.020332 136409 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:43.020647 136047 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:43.020951 136409 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
I190824 01:38:43.399108 145140 storage/replica_command.go:284 [n1,split,s1,r46/1:/Table/6{4/1-5/1}] initiating a split of this range at key /Table/65 [r50] (zone config) I190824 01:38:43.714260 145476 storage/replica_command.go:284 [n1,split,s1,r45/1:/Table/6{3/1-4/1}] initiating a split of this range at key /Table/64 [r51] (zone config) I190824 01:38:43.751646 145475 storage/replica_command.go:284 [n1,split,s1,r40/1:/Table/6{0/1-1/1}] initiating a split of this range at key /Table/61 [r52] (zone config) I190824 01:38:43.801634 145456 storage/replica_command.go:284 [n1,split,s1,r44/1:/Table/6{2/1-3/1}] initiating a split of this range at key /Table/63 [r53] (zone config) I190824 01:38:43.858271 145457 storage/replica_command.go:284 [n1,split,s1,r39/1:/Table/{59-60/1}] initiating a split of this range at key /Table/60 [r54] (zone config) I190824 01:38:44.150077 136107 gossip/gossip.go:566 [n2] gossip status (ok, 3 nodes) gossip client (1/3 cur/max conns) 1: 127.0.0.1:42279 (1m0s: infos 139/340 sent/received, bytes 35041B/322667B sent/received) gossip server (0/3 cur/max conns, infos 139/340 sent/received, bytes 35041B/322667B sent/received) gossip connectivity n2 [sentinel]; n2 -> n1; n3 -> n1; I190824 01:38:44.205248 136110 server/status/runtime.go:498 [n2] runtime stats: 2.1 GiB RSS, 759 goroutines, 216 MiB/72 MiB/336 MiB GO alloc/idle/total, 111 MiB/156 MiB CGO alloc/total, 1518.0 CGO/sec, 126.0/10.3 %(u/s)time, 0.1 %gc (2x), 3.7 MiB/3.7 MiB (r/w)net I190824 01:38:45.127981 145735 storage/replica_command.go:284 [n1,split,s1,r43/1:/Table/6{1/3-2/1}] initiating a split of this range at key /Table/62 [r55] (zone config) W190824 01:38:45.289510 145547 storage/replica_raft.go:105 [n1,s1,r39/1:/Table/{59-60}] context canceled before proposing: 1 HeartbeatTxn I190824 01:38:45.517105 136436 gossip/gossip.go:566 [n3] gossip status (ok, 3 nodes) gossip client (1/3 cur/max conns) 1: 127.0.0.1:42279 (1m1s: infos 121/369 sent/received, bytes 33618B/326778B sent/received) 
gossip server (0/3 cur/max conns, infos 121/369 sent/received, bytes 33618B/326778B sent/received) gossip connectivity n2 [sentinel]; n2 -> n1; n3 -> n1; I190824 01:38:45.560335 136439 server/status/runtime.go:498 [n3] runtime stats: 2.1 GiB RSS, 757 goroutines, 246 MiB/48 MiB/337 MiB GO alloc/idle/total, 110 MiB/155 MiB CGO alloc/total, 1661.1 CGO/sec, 128.4/11.1 %(u/s)time, 0.3 %gc (3x), 3.7 MiB/3.7 MiB (r/w)net TestImportMysql/second_from_multi ...1:38:31.136848 135213 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.137374 135213 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.144668 136024 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.145162 136024 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.149474 136386 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.150048 136386 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.193187 143333 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.210770 136407 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
W190824 01:38:31.210887 135222 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.211340 136407 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.211466 135222 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.215396 136040 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.215896 136040 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.251905 143333 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.270247 135201 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.270898 135201 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.272609 136067 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.273086 136067 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
W190824 01:38:31.280426 136395 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:31.289184 136395 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. I190824 01:38:31.609846 143517 storage/replica_command.go:284 [n1,split,s1,r35/1:/Table/5{8-9/1}] initiating a split of this range at key /Table/59 [r39] (zone config) TestImportMysql/single_table_dump ... removing replica r21/1 I190824 01:38:06.928405 135233 storage/queue.go:518 [n1,s1,r3/1:/System/{NodeLive…-tsd}] rate limited in MaybeAdd (raftlog): context canceled I190824 01:38:06.934908 136393 storage/store.go:2593 [n3,s3,r22/3:/Table/53{-/1}] removing replica r21/3 I190824 01:38:06.937354 136056 storage/store.go:2593 [n2,s2,r22/2:/Table/53{-/1}] removing replica r21/2 I190824 01:38:08.504314 139230 storage/replica_command.go:284 [n1,s1,r22/1:/{Table/53-Max}] initiating a split of this range at key /Table/55/1 [r23] (manual) I190824 01:38:08.876990 139307 storage/replica_command.go:284 [n1,s1,r22/1:/Table/5{3-5/1}] initiating a split of this range at key /Table/54/1 [r24] (manual) W190824 01:38:09.394191 139339 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:09.457128 135233 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:09.458889 136075 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190824 01:38:09.459118 136419 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. 
max_bytes_for_level_multiplier may not be guaranteed.
W190824 01:38:09.460717 136075 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
W190824 01:38:09.460861 136419 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
W190824 01:38:09.459477 135233 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
W190824 01:38:09.528934 139326 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
W190824 01:38:09.556723 135227 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
W190824 01:38:09.557254 135227 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
W190824 01:38:09.562592 136020 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
W190824 01:38:09.563148 136020 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
W190824 01:38:09.565944 136421 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
W190824 01:38:09.566525 136421 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
I190824 01:38:09.824107 139294 storage/replica_command.go:284 [n1,split,s1,r22/1:/Table/5{3-4/1}] initiating a split of this range at key /Table/54 [r25] (zone config)
I190824 01:38:10.070313 139495 storage/replica_command.go:284 [n1,split,s1,r24/1:/Table/5{4/1-5/1}] initiating a split of this range at key /Table/55 [r26] (zone config)
I190824 01:38:10.726351 135269 server/status/runtime.go:498 [n1] runtime stats: 2.0 GiB RSS, 705 goroutines, 156 MiB/128 MiB/335 MiB GO alloc/idle/total, 91 MiB/136 MiB CGO alloc/total, 1297.6 CGO/sec, 127.3/12.5 %(u/s)time, 0.3 %gc (3x), 3.2 MiB/3.2 MiB (r/w)net
TestImportMysql/all_from_multi_bzip
...op.(*Stopper).RunWorker.func1(0xc008a60ef0, 0xc0050d92c0, 0xc008a60eb0)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x160
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
	/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:189 +0xc4

goroutine 135215 [sync.Cond.Wait]:
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:307
sync.runtime_notifyListWait(0xc006266ad0, 0xc00001e381)
	/usr/local/go/src/runtime/sema.go:510 +0xf9
sync.(*Cond).Wait(0xc006266ac0)
	/usr/local/go/src/sync/cond.go:56 +0x8e
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc00a3241b0, 0x61afac0, 0xc0037c1110)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:192 +0x9c
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x61afac0, 0xc0037c1110)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:161 +0x56
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc0094f5c20, 0xc00475c1e0, 0xc0094f5c10)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x160
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
	/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:189 +0xc4

goroutine 139961 [select, 36 minutes]:
github.com/cockroachdb/cockroach/pkg/storage.(*RaftTransport).RaftMessageBatch(0xc0011a5e60, 0x61f8220, 0xc00a479c90, 0x616b880, 0xc0011a5e60)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/raft_transport.go:384 +0x26c
github.com/cockroachdb/cockroach/pkg/storage._MultiRaft_RaftMessageBatch_Handler(0x51010c0, 0xc0011a5e60, 0x61e7300, 0xc00577b500, 0x0, 0x0)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/raft.pb.go:691 +0xce
github.com/cockroachdb/cockroach/pkg/rpc.NewServerWithInterceptor.func3(0x51010c0, 0xc0011a5e60, 0x61e7300, 0xc00577b500, 0xc00a4bc140, 0x5312908, 0xc008837ab8, 0xc008837ab8)
	/go/src/github.com/cockroachdb/cockroach/pkg/rpc/context.go:238 +0x170
github.com/cockroachdb/cockroach/pkg/rpc.NewServerWithInterceptor.func5(0x51010c0, 0xc0011a5e60, 0x61e7300, 0xc00577b500, 0xc00a4bc140, 0x5312908, 0x61afac0, 0xc006aad560)
	/go/src/github.com/cockroachdb/cockroach/pkg/rpc/context.go:263 +0xe4
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).processStreamingRPC(0xc002b89200, 0x61fd440, 0xc00612b680, 0xc007518900, 0xc0043289f0, 0x8a5bb40, 0x0, 0x0, 0x0)
	/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:1210 +0xa4d
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).handleStream(0xc002b89200, 0x61fd440, 0xc00612b680, 0xc007518900, 0x0)
	/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:1283 +0x12e6
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc00724d290, 0xc002b89200, 0x61fd440, 0xc00612b680, 0xc007518900)
	/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:717 +0xad
created by github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).serveStreams.func1
	/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:715 +0xb9

goroutine 136663 [select]:
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/internal/transport.(*recvBufferReader).read(0xc000e710e0, 0xc00040ef90, 0x5, 0x5, 0x0, 0xc00332cac0, 0xc001379800)
	/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/internal/transport/transport.go:146 +0x179
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/internal/transport.(*recvBufferReader).Read(0xc000e710e0, 0xc00040ef90, 0x5, 0x5, 0xc0013798c0, 0xfee640, 0xc00332cac0)
	/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/internal/transport/transport.go:140 +0x31f
***** Text was not loaded fully because its' size exceeds 2 MB, see full log for the whole text *****
```
Please assign, take a look and update the issue accordingly.
test
teamcity failed test testimportmysql the following tests appear to have failed on master testrace testimportmysql all from multi gzip testimportmysql second from multi testimportmysql all from multi testimportmysql simple from multi testimportmysql testimportmysql single table dump testimportmysql all from multi bzip testimportmysql read data only testimportmysql second table dump you may want to check testimportmysql all from multi gzip table max that contains live data storage compactor compactor go purging suggested compaction for range table table that contains live data storage compactor compactor go purging suggested compaction for range table table that contains live data storage compactor compactor go purging suggested compaction for range table table that contains live data storage compactor compactor go purging suggested compaction for range table table that contains live data storage compactor compactor go purging suggested compaction for range table table that contains live data storage compactor compactor go purging suggested compaction for range table max that contains live data storage compactor compactor go purging suggested compaction for range table max that contains live data storage compactor compactor go purging suggested compaction for range table max that contains live data storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level 
multiplier may not be guaranteed storage replica command go initiating a split of this range at key table zone config storage replica command go initiating a split of this range at key table zone config storage replica command go initiating a split of this range at key table zone config storage replica command go initiating a split of this range at key table zone config storage replica command go initiating a split of this range at key table zone config storage replica command go initiating a split of this range at key table zone config server status runtime go runtime stats gib rss goroutines mib mib mib go alloc idle total mib mib cgo alloc total cgo sec u s time gc mib mib r w net storage replica raft go context canceled before proposing heartbeattxn testimportmysql second table dump storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine 
rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage replica command go initiating a split of this range at key table zone config testimportmysql read data only fail testrace testimportmysql read data only test ended in panic stdout storage replica command go initiating a split of this range at key table manual server status runtime go runtime stats gib rss goroutines mib mib mib go alloc idle total mib mib cgo alloc total cgo sec u s time gc mib mib r w net storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than 
needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage replica command go initiating a split of this range at key table zone config testimportmysql storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage replica raftstorage go applied learner snapshot storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage replica command go change replicas add remove existing descriptor table server node go health alerts detected alerts storage queue go rate limited in maybeadd raftlog context canceled storage replica raft go proposing add replica after next 
storage replica command go change replicas add remove existing descriptor table storage replica raft go proposing add replica after next storage store snapshot go sending learner snapshot at applied index storage store snapshot go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying learner snapshot storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage replica raftstorage go applied learner snapshot storage replica command go change replicas add remove existing descriptor table storage replica raft go proposing add replica after next testutils testcluster testcluster go waitforfullreplication took sql event log go event set cluster setting target info settingname kv bulk ingest batch size value user root sql event log go event create database target info databasename foo statement create database foo user root testimportmysql simple from multi compactor go purging suggested compaction for range table max that contains live data storage compactor compactor go purging suggested compaction for range table table that contains live data storage compactor compactor go purging suggested compaction for range table max that contains live data storage compactor compactor go purging suggested compaction for range table table that contains live data storage compactor compactor go purging suggested compaction for range table max that contains live data storage compactor compactor go purging suggested compaction for range table max that contains live data storage engine rocksdb go more existing levels in db than 
needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage replica command go initiating a split of this range at key table zone config storage replica command go initiating a split of this range at key table zone config testimportmysql all from multi eed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for 
level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage replica command go initiating a split of this range at key table zone config storage replica command go initiating a split of this range at key table zone config storage replica command go initiating a split of this range at key table zone config storage replica command go initiating a split of this range at key table zone config storage replica command go initiating a split of this range at key table zone config gossip gossip go gossip status ok nodes gossip client cur max conns infos sent received bytes sent received gossip server cur max conns infos sent received bytes sent received gossip connectivity server status runtime go runtime stats gib rss goroutines mib mib mib go alloc idle total mib mib cgo alloc total cgo sec u s time gc mib mib r w net storage replica command go initiating a split of this range at key table zone config storage replica raft go context canceled before proposing heartbeattxn gossip gossip go gossip status ok nodes gossip client cur max conns infos sent received bytes sent received gossip server cur max conns infos sent received bytes sent received gossip connectivity server status runtime go runtime stats gib rss goroutines mib mib mib go alloc idle total mib mib cgo alloc total cgo sec u s 
time gc mib mib r w net testimportmysql second from multi storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db 
than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage replica command go initiating a split of this range at key table zone config testimportmysql single table dump removing replica storage queue go rate limited in maybeadd raftlog context canceled storage store go removing replica storage store go removing replica storage replica command go initiating a split of this range at key table manual storage replica command go initiating a split of this range at key table manual storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level 
multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage replica command go initiating a split of this range at key table zone config storage replica command go initiating a split of this range at key table zone config server status runtime go runtime stats gib rss goroutines mib mib mib go alloc idle total mib mib cgo alloc total cgo sec u s time gc mib mib r w net testimportmysql all from multi bzip op stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine runtime goparkunlock usr local go src runtime proc go sync runtime notifylistwait usr local go src runtime sema go sync cond wait usr local go src sync cond go github com cockroachdb cockroach pkg storage raftscheduler worker go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg storage raftscheduler start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach pkg storage rafttransport raftmessagebatch go src github com cockroachdb cockroach pkg storage raft transport go github com cockroachdb cockroach pkg storage 
multiraft raftmessagebatch handler go src github com cockroachdb cockroach pkg storage raft pb go github com cockroachdb cockroach pkg rpc newserverwithinterceptor go src github com cockroachdb cockroach pkg rpc context go github com cockroachdb cockroach pkg rpc newserverwithinterceptor go src github com cockroachdb cockroach pkg rpc context go github com cockroachdb cockroach vendor google golang org grpc server processstreamingrpc go src github com cockroachdb cockroach vendor google golang org grpc server go github com cockroachdb cockroach vendor google golang org grpc server handlestream go src github com cockroachdb cockroach vendor google golang org grpc server go github com cockroachdb cockroach vendor google golang org grpc server servestreams go src github com cockroachdb cockroach vendor google golang org grpc server go created by github com cockroachdb cockroach vendor google golang org grpc server servestreams go src github com cockroachdb cockroach vendor google golang org grpc server go goroutine github com cockroachdb cockroach vendor google golang org grpc internal transport recvbufferreader read go src github com cockroachdb cockroach vendor google golang org grpc internal transport transport go github com cockroachdb cockroach vendor google golang org grpc internal transport recvbufferreader read go src github com cockroachdb cockroach vendor google golang org grpc internal transport transport go text was not loaded fully because its size exceeds mb see full log for the whole text please assign take a look and update the issue accordingly
1
93,440
26,952,469,283
IssuesEvent
2023-02-08 12:44:36
jOOQ/jOOQ
https://api.github.com/repos/jOOQ/jOOQ
opened
Update third party dependencies
T: Enhancement C: Functionality C: Build P: Medium E: All Editions
For the upcoming release of jOOQ 3.18.0, let's upgrade a few third party dependencies including:

- [ ] `org.apache.logging.log4j:log4j-core` from 2.18.0 to 2.19.0

This issue will be amended
1.0
Update third party dependencies - For the upcoming release of jOOQ 3.18.0, let's upgrade a few third party dependencies including: - [ ] `org.apache.logging.log4j:log4j-core` from 2.18.0 to 2.19.0 This issue will be amended
non_test
update third party dependencies for the upcoming release of jooq let s upgrade a few third party dependencies including org apache logging core from to this issue will be amended
0
140,326
11,310,008,243
IssuesEvent
2020-01-19 16:46:33
kubernetes/kubernetes
https://api.github.com/repos/kubernetes/kubernetes
closed
Failing test - ci-kubernetes-e2e-gci-gce-ingress
area/ingress kind/failing-test priority/critical-urgent sig/network triage/unresolved
**Which jobs are failing**: ci-kubernetes-e2e-gci-gce-ingress

**Which test(s) are failing**: Stages: listResources After listResources Before Overall

**Since when has it been failing**: Overall stage has been failing since 1/17 1:44 PST

**Testgrid link**: https://testgrid.k8s.io/sig-release-master-blocking#gci-gce-ingress

**Reason for failure**: Multiple occurrences of the following error

W0117 22:01:34.080] - Invalid value for field 'zone': 'asia-northeast3-a'. Unknown zone.
W0117 22:01:34.080] - Invalid value for field 'zone': 'asia-northeast3-b'. Unknown zone.
W0117 22:01:34.081] - Invalid value for field 'zone': 'asia-northeast3-c'. Unknown zone.

/cc @kubernetes/ci-signal
/priority critical-urgent
/area ingress
1.0
Failing test - ci-kubernetes-e2e-gci-gce-ingress - **Which jobs are failing**: ci-kubernetes-e2e-gci-gce-ingress **Which test(s) are failing**: Stages: listResources After listResources Before Overall **Since when has it been failing**: Overall stage has been failing since 1/17 1:44 PST **Testgrid link**: https://testgrid.k8s.io/sig-release-master-blocking#gci-gce-ingress **Reason for failure**: Multiple occurrences of the following error W0117 22:01:34.080] - Invalid value for field 'zone': 'asia-northeast3-a'. Unknown zone. W0117 22:01:34.080] - Invalid value for field 'zone': 'asia-northeast3-b'. Unknown zone. W0117 22:01:34.081] - Invalid value for field 'zone': 'asia-northeast3-c'. Unknown zone. /cc @kubernetes/ci-signal /priority critical-urgent /area ingress
test
failing test ci kubernetes gci gce ingress which jobs are failing ci kubernetes gci gce ingress which test s are failing stages listresources after listresources before overall since when has it been failing overall stage has been failing since pst testgrid link reason for failure multiple occurrences of the following error invalid value for field zone asia a unknown zone invalid value for field zone asia b unknown zone invalid value for field zone asia c unknown zone cc kubernetes ci signal priority critical urgent area ingress
1
316,151
27,141,742,268
IssuesEvent
2023-02-16 16:48:59
wazuh/wazuh
https://api.github.com/repos/wazuh/wazuh
closed
Release 4.4.0 - Release Candidate 1 - Installation metrics
team/cicd type/release tracking release test/4.4.0
### Packages tests metrics information

|||
| :-- | :-- |
| **Main release candidate issue** | #16132 |
| **Main packages metrics issue** | #16142 |
| **Version** | 4.4.0 |
| **Release candidate** | RC1 |
| **Tag** | https://github.com/wazuh/wazuh/tree/v4.4.0-rc1 |

---

### Packages used

- Repository: `packages-dev.wazuh.com`
- Package path: `pre-release`
- Package revision: `1`

---

| Arquitecture | Build |
| :-- | :-- |
| AMD64 | https://ci.wazuh.info/view/Tests/job/Test_install_tier/536/ |
| ARM64 | https://ci.wazuh.info/view/Tests/job/Test_install_tier/538/ |
| I386 | https://ci.wazuh.info/view/Tests/job/Test_install_tier/537/ |
| ARM32 | https://ci.wazuh.info/view/Tests/job/Test_install_tier/539/ |

---

| System | AMD64 | ARM64 | I386 | ARM32 |
| :-- | :--: | :--: | :--: | :--: |
| Amazon Linux 1 | :green_circle: | :black_circle: | :black_circle: | :black_circle: |
| Amazon Linux 2 | :green_circle: | :black_circle: | :black_circle: | :black_circle: |
| CentOS 5 | :green_circle: | :black_circle: | :black_circle: | :black_circle: |
| CentOS 6 | :green_circle: | :black_circle: | :green_circle: | :black_circle: |
| CentOS 7 | :green_circle: | :green_circle: | :green_circle: | :black_circle: |
| CentOS 8 | :green_circle: | :black_circle: | :black_circle: | :black_circle: |
| Debian 7 | :green_circle: | :black_circle: | :green_circle: | :black_circle: |
| Debian 8 | :green_circle: | :black_circle: | :red_circle: | :black_circle: |
| Debian 9 | :green_circle: | :green_circle: | :green_circle: | :green_circle: |
| Debian 10 | :green_circle: | :black_circle: | :green_circle: | :black_circle: |
| Debian 11 | :green_circle: | :black_circle: | :green_circle: | :black_circle: |
| Fedora 29 | :green_circle: | :green_circle: | :black_circle: | :green_circle: |
| Fedora 31 | :green_circle: | :black_circle: | :black_circle: | :black_circle: |
| Fedora 32 | :green_circle: | :black_circle: | :black_circle: | :black_circle: |
| Fedora 34 | :green_circle: | :black_circle: | :black_circle: | :black_circle: |
| Fedora 35 | :green_circle: | :black_circle: | :black_circle: | :black_circle: |
| Fedora 36 | :green_circle: | :black_circle: | :black_circle: | :black_circle: |
| openSUSE Tumbleweed | :red_circle: | :black_circle: | :black_circle: | :black_circle: |
| Oracle Linux 6 | :green_circle: | :black_circle: | :black_circle: | :black_circle: |
| Oracle Linux 7 | :green_circle: | :black_circle: | :black_circle: | :black_circle: |
| Oracle Linux 8 | :green_circle: | :black_circle: | :black_circle: | :black_circle: |
| Red Hat 6 | :green_circle: | :black_circle: | :black_circle: | :black_circle: |
| Red Hat 7 | :green_circle: | :black_circle: | :black_circle: | :black_circle: |
| Red Hat 8 | :green_circle: | :black_circle: | :black_circle: | :black_circle: |
| Red Hat 9 | :green_circle: | :black_circle: | :black_circle: | :black_circle: |
| Solaris 11 | :green_circle: | :black_circle: | :black_circle: | :black_circle: |
| Solaris 10 | :green_circle: | :black_circle: | :black_circle: | :black_circle: |
| Ubuntu Bionic | :green_circle: | :black_circle: | :black_circle: | :black_circle: |
| Ubuntu Focal | :green_circle: | :black_circle: | :black_circle: | :black_circle: |
| Ubuntu Precise | :green_circle: | :black_circle: | :black_circle: | :black_circle: |
| Ubuntu Trusty | :green_circle: | :black_circle: | :green_circle: | :black_circle: |
| Ubuntu Xenial | :green_circle: | :black_circle: | :green_circle: | :black_circle: |
| Windows 2016 | :green_circle: | :black_circle: | :black_circle: | :black_circle: |
| macOS 10.15 | :green_circle: | :black_circle: | :black_circle: | :black_circle: |

---

Status legend:
:black_circle: - Pending/In progress
:white_circle: - Skipped
:red_circle: - Rejected
:yellow_circle: - Ready to review
:green_circle: - Approved

---

## Auditor's validation

In order to close and proceed with the release or the next candidate version, the following auditors must give the green light to this RC.

- [ ] @alberpilot
- [ ] @okynos

---
1.0
Release 4.4.0 - Release Candidate 1 - Installation metrics
test
1
188,764
14,474,886,804
IssuesEvent
2020-12-10 00:18:13
kalexmills/github-vet-tests-dec2020
https://api.github.com/repos/kalexmills/github-vet-tests-dec2020
closed
restic/restic: cmd/restic/integration_test.go; 6 LoC
fresh test tiny
Found a possible issue in [restic/restic](https://www.github.com/restic/restic) at [cmd/restic/integration_test.go](https://github.com/restic/restic/blob/7facc8ccc1e3fd5cf7a396bd9e06ed4560bcc051/cmd/restic/integration_test.go#L231-L236)

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

> reference to sn is reassigned at line 234

[Click here to see the code in its original context.](https://github.com/restic/restic/blob/7facc8ccc1e3fd5cf7a396bd9e06ed4560bcc051/cmd/restic/integration_test.go#L231-L236)

<details>
<summary>Click here to show the 6 line(s) of Go which triggered the analyzer.</summary>

```go
for _, sn := range snapshots {
	snapmap[*sn.ID] = sn
	if newest == nil || sn.Time.After(newest.Time) {
		newest = &sn
	}
}
```

</details>

Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.

commit ID: 7facc8ccc1e3fd5cf7a396bd9e06ed4560bcc051
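The hazard the analyzer flags is that, in Go versions before 1.22, a `for ... range` loop reuses a single loop variable, so `newest = &sn` takes the address of that one variable, which later iterations overwrite. A minimal, self-contained sketch of the conventional fix is below; the `snapshot` type and `newestSnapshot` function are illustrative stand-ins, not restic's actual types (for example, `Time` is simplified to an `int` rather than a `time.Time`).

```go
package main

import "fmt"

// snapshot is an illustrative stand-in for restic's snapshot type.
type snapshot struct {
	ID   string
	Time int // simplified stand-in for a time.Time comparison
}

// newestSnapshot copies the loop variable before taking its address, so the
// returned pointer cannot alias a variable that later iterations reassign
// (the hazard flagged by the analyzer in Go versions before 1.22).
func newestSnapshot(snapshots []snapshot) *snapshot {
	var newest *snapshot
	for _, sn := range snapshots {
		sn := sn // per-iteration copy; safe to take its address
		if newest == nil || sn.Time > newest.Time {
			newest = &sn
		}
	}
	return newest
}

func main() {
	n := newestSnapshot([]snapshot{{"a", 3}, {"b", 9}, {"c", 5}})
	fmt.Println(n.ID, n.Time)
}
```

An equivalent fix is to take the address of the slice element instead (`newest = &snapshots[i]` in an indexed loop). Since Go 1.22, the language itself gives each iteration its own loop variable, which removes this class of bug, but the explicit copy remains safe and clear on all versions.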
1.0
test
1
423,456
12,297,133,708
IssuesEvent
2020-05-11 08:18:33
georchestra/mapstore2-georchestra
https://api.github.com/repos/georchestra/mapstore2-georchestra
closed
Review of the Extension upload/install UI
Accepted New Priority: High
The UI of the extensions upload and install tool needs to be reviewed so that the draft UI used to test backend functionalities can be removed. Below are the points to consider for the implementation, along with a mockup:

1 - The upload/install UI must be a **modal window** that appears over the existing plugin configuration UI. The modal must not include the close X button in the top right corner
2 - The drop zone must conform to the other drop zones in MapStore in terms of style
3 - For each uploaded extension there should be a button to remove it from the upload list
4 - For each invalid uploaded extension a warning icon must appear in the corresponding item inside the list, beside the remove button. If the mouse clicks or hovers over the warning icon, a message appears in a pop-up listing the corresponding error, translated correctly.
5 - There must be two buttons in the bottom right corner: Cancel and Install
  - Clicking on **Cancel**, a confirmation dialog appears so the user can discard the installation or not
  - Clicking on **Install**, all the uploaded extensions will be installed

![install_tool_2](https://user-images.githubusercontent.com/1280027/76090235-7e420380-5fbb-11ea-938f-8b991b6fde61.png)
1.0
non_test
0
122,432
10,222,839,693
IssuesEvent
2019-08-16 08:00:02
eclipse/openj9
https://api.github.com/repos/eclipse/openj9
closed
JTReg Failure: java/lang/StackWalker/ReflectionFrames.java
test failure
Failure link
------------
- Link: https://github.com/ibmruntimes/openj9-openjdk-jdk11/blob/965d0c0df359f3da224b865701bbcc044b7104c2/test/jdk/java/lang/StackWalker/ReflectionFrames.java#L1 (2 tests out of 7 are failing)
- Category: openjdk
- Target: _jdk_custom (java/lang/StackWalker/ReflectionFrames.java) NOTE: `StackWalker` appears to be causing a lot of problems for JDK11
- Architecture: Consistent failure for x86_linux and windows for JDK11, JDK8 and JDK12. Hotspot builds appear to be unaffected.
- Java Version:

```
openjdk version "11.0.4" 2019-07-16
11:35:22 OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.4+11-201908091810)
11:35:22 Eclipse OpenJ9 VM AdoptOpenJDK (build master-5dd23af84, JRE 11 Linux amd64-64-Bit Compressed References 20190809_303 (JIT enabled, AOT enabled)
11:35:22 OpenJ9   - 5dd23af84
11:35:22 OMR      - 6e99760b
11:35:22 JCL      - 965d0c0df3 based on jdk-11.0.4+11)
```

Summary
----------
`testConstructor()` and `testNewInstance()` are the culprits of this issue. [#L436](https://github.com/ibmruntimes/openj9-openjdk-jdk11/blob/965d0c0df359f3da224b865701bbcc044b7104c2/test/jdk/java/lang/StackWalker/ReflectionFrames.java#L436) and [#L252](https://github.com/ibmruntimes/openj9-openjdk-jdk11/blob/965d0c0df359f3da224b865701bbcc044b7104c2/test/jdk/java/lang/StackWalker/ReflectionFrames.java#L252) are the `assertEquals` points of failure. It appears that there are more items in the stack than expected at runtime. The only significant difference between the tests at the start is how they implement `StackInspector`. Will investigate further to see if this is the case.
Failure output (captured from console output) --------------------------------------------- ``` 11:37:37 STDOUT: 11:37:37 [TestNG] Running: 11:37:37 java/lang/StackWalker/ReflectionFrames.java 11:37:37 11:37:37 testConstructor: create 11:37:37 java.lang.StackWalker$StackFrameImpl@1507e9b1 11:37:37 java.lang.StackWalker$StackFrameImpl@6c2df9cf 11:37:37 java.lang.StackWalker$StackFrameImpl@94270930 11:37:37 java.lang.StackWalker$StackFrameImpl@955dd26e 11:37:37 java.lang.StackWalker$StackFrameImpl@519f289c 11:37:37 java.lang.StackWalker$StackFrameImpl@c20836b9 11:37:37 java.lang.StackWalker$StackFrameImpl@c94e56af 11:37:37 test ReflectionFrames.testConstructor(): failure 11:37:37 java.lang.AssertionError: lists don't have the same size expected [3] but found [7] 11:37:37 at org.testng.Assert.fail(Assert.java:94) 11:37:37 at org.testng.Assert.failNotEquals(Assert.java:496) 11:37:37 at org.testng.Assert.assertEquals(Assert.java:125) 11:37:37 at org.testng.Assert.assertEquals(Assert.java:372) 11:37:37 at org.testng.Assert.assertEquals(Assert.java:539) 11:37:37 at org.testng.Assert.assertEquals(Assert.java:516) 11:37:37 at ReflectionFrames.testConstructor(ReflectionFrames.java:252) 11:37:37 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:37:37 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 11:37:37 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:37:37 at java.base/java.lang.reflect.Method.invoke(Method.java:566) 11:37:37 at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:85) 11:37:37 at org.testng.internal.Invoker.invokeMethod(Invoker.java:639) 11:37:37 at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:821) 11:37:37 at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1131) 11:37:37 at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:125) 
11:37:37 at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:108) 11:37:37 at org.testng.TestRunner.privateRun(TestRunner.java:773) 11:37:37 at org.testng.TestRunner.run(TestRunner.java:623) 11:37:37 at org.testng.SuiteRunner.runTest(SuiteRunner.java:357) 11:37:37 at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:352) 11:37:37 at org.testng.SuiteRunner.privateRun(SuiteRunner.java:310) 11:37:37 at org.testng.SuiteRunner.run(SuiteRunner.java:259) 11:37:37 at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52) 11:37:37 at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86) 11:37:37 at org.testng.TestNG.runSuitesSequentially(TestNG.java:1185) 11:37:37 at org.testng.TestNG.runSuitesLocally(TestNG.java:1110) 11:37:37 at org.testng.TestNG.run(TestNG.java:1018) 11:37:37 at com.sun.javatest.regtest.agent.TestNGRunner.main(TestNGRunner.java:94) 11:37:37 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:37:37 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 11:37:37 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:37:37 at java.base/java.lang.reflect.Method.invoke(Method.java:566) 11:37:37 at com.sun.javatest.regtest.agent.MainActionHelper$AgentVMRunnable.run(MainActionHelper.java:298) 11:37:37 at java.base/java.lang.Thread.run(Thread.java:831) 11:37:37 test ReflectionFrames.testGetCaller(): success 11:37:37 test ReflectionFrames.testHandleCaller(): success 11:37:37 testNewInstance: create 11:37:37 java.lang.StackWalker$StackFrameImpl@b4c5d93 11:37:37 java.lang.StackWalker$StackFrameImpl@2248b817 11:37:37 java.lang.StackWalker$StackFrameImpl@d8433b72 11:37:37 java.lang.StackWalker$StackFrameImpl@42ea6ed6 11:37:37 java.lang.StackWalker$StackFrameImpl@a96999da 11:37:37 test ReflectionFrames.testNewInstance(): failure 11:37:37 java.lang.AssertionError: lists don't have the same size 
expected [4] but found [5] 11:37:37 at org.testng.Assert.fail(Assert.java:94) 11:37:37 at org.testng.Assert.failNotEquals(Assert.java:496) 11:37:37 at org.testng.Assert.assertEquals(Assert.java:125) 11:37:37 at org.testng.Assert.assertEquals(Assert.java:372) 11:37:37 at org.testng.Assert.assertEquals(Assert.java:539) 11:37:37 at org.testng.Assert.assertEquals(Assert.java:516) 11:37:37 at ReflectionFrames.testNewInstance(ReflectionFrames.java:436) 11:37:37 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:37:37 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 11:37:37 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:37:37 at java.base/java.lang.reflect.Method.invoke(Method.java:566) 11:37:37 at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:85) 11:37:37 at org.testng.internal.Invoker.invokeMethod(Invoker.java:639) 11:37:37 at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:821) 11:37:37 at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1131) 11:37:37 at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:125) 11:37:37 at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:108) 11:37:37 at org.testng.TestRunner.privateRun(TestRunner.java:773) 11:37:37 at org.testng.TestRunner.run(TestRunner.java:623) 11:37:37 at org.testng.SuiteRunner.runTest(SuiteRunner.java:357) 11:37:37 at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:352) 11:37:37 at org.testng.SuiteRunner.privateRun(SuiteRunner.java:310) 11:37:37 at org.testng.SuiteRunner.run(SuiteRunner.java:259) 11:37:37 at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52) 11:37:37 at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86) 11:37:37 at org.testng.TestNG.runSuitesSequentially(TestNG.java:1185) 11:37:37 at 
org.testng.TestNG.runSuitesLocally(TestNG.java:1110) 11:37:37 at org.testng.TestNG.run(TestNG.java:1018) 11:37:37 at com.sun.javatest.regtest.agent.TestNGRunner.main(TestNGRunner.java:94) 11:37:37 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:37:37 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 11:37:37 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:37:37 at java.base/java.lang.reflect.Method.invoke(Method.java:566) 11:37:37 at com.sun.javatest.regtest.agent.MainActionHelper$AgentVMRunnable.run(MainActionHelper.java:298) 11:37:37 at java.base/java.lang.Thread.run(Thread.java:831) 11:37:37 testNewStackInspector: create 11:37:37 java.lang.StackWalker$StackFrameImpl@467b6709 11:37:37 java.lang.StackWalker$StackFrameImpl@374d6d30 11:37:37 java.lang.StackWalker$StackFrameImpl@7057916a 11:37:37 testNewStackInspector: reflect 11:37:37 java.lang.StackWalker$StackFrameImpl@ecc824f3 11:37:37 java.lang.StackWalker$StackFrameImpl@310a3a61 11:37:37 java.lang.StackWalker$StackFrameImpl@e31ba4eb 11:37:37 java.lang.StackWalker$StackFrameImpl@c645e474 11:37:37 testNewStackInspector: handle 11:37:37 java.lang.StackWalker$StackFrameImpl@df288fc4 11:37:37 java.lang.StackWalker$StackFrameImpl@dfea8c8c 11:37:37 java.lang.StackWalker$StackFrameImpl@b5b809a6 11:37:37 java.lang.StackWalker$StackFrameImpl@17617016 11:37:37 testNewStackInspector: create: show reflect 11:37:37 java.lang.StackWalker$StackFrameImpl@8b7b6e62 11:37:37 java.lang.StackWalker$StackFrameImpl@3809200a 11:37:37 java.lang.StackWalker$StackFrameImpl@6c1e8826 11:37:37 java.lang.StackWalker$StackFrameImpl@cc09f13d 11:37:37 testNewStackInspector: reflect: show reflect 11:37:37 java.lang.StackWalker$StackFrameImpl@e4ada85c 11:37:37 java.lang.StackWalker$StackFrameImpl@fb49ff75 11:37:37 java.lang.StackWalker$StackFrameImpl@98cbdd8e 11:37:37 
java.lang.StackWalker$StackFrameImpl@a2fa4da5 11:37:37 java.lang.StackWalker$StackFrameImpl@685e4aad 11:37:37 java.lang.StackWalker$StackFrameImpl@f1c5eb3e 11:37:37 testNewStackInspector: handle: show reflect 11:37:37 java.lang.StackWalker$StackFrameImpl@5f847f72 11:37:37 java.lang.StackWalker$StackFrameImpl@57f41f80 11:37:37 java.lang.StackWalker$StackFrameImpl@43031815 11:37:37 java.lang.StackWalker$StackFrameImpl@bb996dc1 11:37:37 java.lang.StackWalker$StackFrameImpl@d3abea5e 11:37:37 test ReflectionFrames.testNewStackInspector(): success 11:37:37 test ReflectionFrames.testReflectCaller(): success 11:37:37 test ReflectionFrames.testSupplyCaller(): success 11:37:37 11:37:37 =============================================== 11:37:37 java/lang/StackWalker/ReflectionFrames.java 11:37:37 Total tests run: 7, Failures: 2, Skips: 0 11:37:37 =============================================== 11:37:37 11:37:37 STDERR: 11:37:37 java.lang.Exception: failures: 2 11:37:37 at com.sun.javatest.regtest.agent.TestNGRunner.main(TestNGRunner.java:96) 11:37:37 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:37:37 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 11:37:37 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:37:37 at java.base/java.lang.reflect.Method.invoke(Method.java:566) 11:37:37 at com.sun.javatest.regtest.agent.MainActionHelper$AgentVMRunnable.run(MainActionHelper.java:298) 11:37:37 at java.base/java.lang.Thread.run(Thread.java:831) 11:37:37 11:37:37 JavaTest Message: Test threw exception: java.lang.Exception 11:37:37 JavaTest Message: shutting down test 11:37:37 11:37:37 11:37:37 TEST RESULT: Failed. 
Execution failed: `main' threw exception: java.lang.Exception: failures: 2 ``` - CoreDump: https://na.artifactory.swg-devops.com/artifactory/sys-rt-generic-local/hyc-runtimes-jenkins.swg-devops.com/Grinder_Advanced/476/openjdk_test_output.tar.gz
1.0
Execution failed: `main' threw exception: java.lang.Exception: failures: 2 ``` - CoreDump: https://na.artifactory.swg-devops.com/artifactory/sys-rt-generic-local/hyc-runtimes-jenkins.swg-devops.com/Grinder_Advanced/476/openjdk_test_output.tar.gz
test
jtreg failure java lang stackwalker reflectionframes java failure link link tests out of are failing category openjdk target jdk custom jave lang stackwalker reflectionframes java note stackwalker appears to be causing a lot of problems for architecture consistent failure for linux and windows for and hotspot builds appear to be unaffected java version openjdk version openjdk runtime environment adoptopenjdk build eclipse vm adoptopenjdk build master jre linux bit compressed references jit enabled aot enabled omr jcl based on jdk summary testconstructor and testnewinstance are the culprits of this issue and are the assertequals points of failure it appears that there are more items in the stack than expected at runtime the only significant difference between the tests at the start is how they implement stackinspector will investigate further to see if this is the case failure output captured from console output stdout running java lang stackwalker reflectionframes java testconstructor create java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl test reflectionframes testconstructor failure java lang assertionerror lists don t have the same size expected but found at org testng assert fail assert java at org testng assert failnotequals assert java at org testng assert assertequals assert java at org testng assert assertequals assert java at org testng assert assertequals assert java at org testng assert assertequals assert java at reflectionframes testconstructor reflectionframes java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base 
java lang reflect method invoke method java at org testng internal methodinvocationhelper invokemethod methodinvocationhelper java at org testng internal invoker invokemethod invoker java at org testng internal invoker invoketestmethod invoker java at org testng internal invoker invoketestmethods invoker java at org testng internal testmethodworker invoketestmethods testmethodworker java at org testng internal testmethodworker run testmethodworker java at org testng testrunner privaterun testrunner java at org testng testrunner run testrunner java at org testng suiterunner runtest suiterunner java at org testng suiterunner runsequentially suiterunner java at org testng suiterunner privaterun suiterunner java at org testng suiterunner run suiterunner java at org testng suiterunnerworker runsuite suiterunnerworker java at org testng suiterunnerworker run suiterunnerworker java at org testng testng runsuitessequentially testng java at org testng testng runsuiteslocally testng java at org testng testng run testng java at com sun javatest regtest agent testngrunner main testngrunner java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at com sun javatest regtest agent mainactionhelper agentvmrunnable run mainactionhelper java at java base java lang thread run thread java test reflectionframes testgetcaller success test reflectionframes testhandlecaller success testnewinstance create java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl test reflectionframes testnewinstance failure java lang assertionerror lists don t have the same size expected but found at org 
testng assert fail assert java at org testng assert failnotequals assert java at org testng assert assertequals assert java at org testng assert assertequals assert java at org testng assert assertequals assert java at org testng assert assertequals assert java at reflectionframes testnewinstance reflectionframes java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org testng internal methodinvocationhelper invokemethod methodinvocationhelper java at org testng internal invoker invokemethod invoker java at org testng internal invoker invoketestmethod invoker java at org testng internal invoker invoketestmethods invoker java at org testng internal testmethodworker invoketestmethods testmethodworker java at org testng internal testmethodworker run testmethodworker java at org testng testrunner privaterun testrunner java at org testng testrunner run testrunner java at org testng suiterunner runtest suiterunner java at org testng suiterunner runsequentially suiterunner java at org testng suiterunner privaterun suiterunner java at org testng suiterunner run suiterunner java at org testng suiterunnerworker runsuite suiterunnerworker java at org testng suiterunnerworker run suiterunnerworker java at org testng testng runsuitessequentially testng java at org testng testng runsuiteslocally testng java at org testng testng run testng java at com sun javatest regtest agent testngrunner main testngrunner java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java 
lang reflect method invoke method java at com sun javatest regtest agent mainactionhelper agentvmrunnable run mainactionhelper java at java base java lang thread run thread java testnewstackinspector create java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl testnewstackinspector reflect java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl testnewstackinspector handle java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl testnewstackinspector create show reflect java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl testnewstackinspector reflect show reflect java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl testnewstackinspector handle show reflect java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl java lang stackwalker stackframeimpl test reflectionframes testnewstackinspector success test reflectionframes testreflectcaller success test reflectionframes testsupplycaller success java lang stackwalker reflectionframes java total tests run failures skips stderr java lang exception failures at com sun javatest regtest agent testngrunner main testngrunner java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method 
invoke method java at com sun javatest regtest agent mainactionhelper agentvmrunnable run mainactionhelper java at java base java lang thread run thread java javatest message test threw exception java lang exception javatest message shutting down test test result failed execution failed main threw exception java lang exception failures coredump
1
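The jtreg record above pins both failures on a captured-frame count coming out one higher than expected (`expected [6] but found [7]`, `expected [4] but found [5]`). The real test uses Java's `StackWalker`; the following is only a minimal Python analogue of the core step — walking the current call stack and counting frames — to illustrate why one extra internal frame shifts every such assertion:

```python
import inspect

def captured_frame_count() -> int:
    # Walk the current call stack and count frames -- the Python analogue
    # of collecting StackWalker frames into a list and asserting its size,
    # which is what the failing assertEquals calls above are doing.
    return len(inspect.stack())

def one_level_deeper() -> int:
    # Each additional call level adds exactly one frame. The failing
    # OpenJ9 build appears to contribute one extra internal (reflection)
    # frame in the same way, breaking the exact-size assertions.
    return captured_frame_count()
```

Calling `one_level_deeper()` reports exactly one more frame than calling `captured_frame_count()` directly from the same depth — the same off-by-one shape the record's assertion failures show.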
249,282
21,156,631,740
IssuesEvent
2022-04-07 04:30:56
stores-cedcommerce/Internal-Diat-Food-Due-15th-March
https://api.github.com/repos/stores-cedcommerce/Internal-Diat-Food-Due-15th-March
closed
Browse by category section on home page below the banner, the spelling of the chocolate is not correct for the category
Home page content Desktop Issue Ready to test Fixed
Actual result: The spelling of chocolate is not correct. ![image](https://user-images.githubusercontent.com/102131636/161697380-74db7bfa-daa0-4208-a937-b8cb3fe4f8d0.png) **Expected result:** The spelling should be correct.
1.0
Browse by category section on home page below the banner, the spelling of the chocolate is not correct for the category - Actual result: The spelling of chocolate is not correct. ![image](https://user-images.githubusercontent.com/102131636/161697380-74db7bfa-daa0-4208-a937-b8cb3fe4f8d0.png) **Expected result:** The spelling should be correct.
test
browse by category section on home page below the banner the spelling of the chocolate is not correct for the category actual result the spelling of chocolate is not correct expected result the spelling should be correct
1
226,609
18,042,054,708
IssuesEvent
2021-09-18 07:47:55
SAP/ui5-webcomponents
https://api.github.com/repos/SAP/ui5-webcomponents
reopened
Input fields do not cancel user input on Escape
High Prio TOPIC RL 1.0 Release Testing
### **Bug Description** Input, MultiInput, ComboBox & MultiComboBox controls should cancel user input on Escape. Per component: 1. Input/MultiInput/MultiComboBox > When the picker is closed the entered value is not cleared. When the picker is opened, it closes but the value remains in the input field. > > Browser Specific: > Safari: Default browser behaviour is not prevented. 2. ComboBox > When the picker is closed the entered value is not cleared. When the picker is opened, it closes but the value remains in the input field. > > Browser Specific: > > Safari: Default browser behaviour is not prevented. > IE: When the picker is closed the entered value is cleared but then if you open the picker, the entered value is presented again. ### **Priority** - [ ] Low - [ ] Medium - [x] High - [ ] Very High
1.0
Input fields do not cancel user input on Escape - ### **Bug Description** Input, MultiInput, ComboBox & MultiComboBox controls should cancel user input on Escape. Per component: 1. Input/MultiInput/MultiComboBox > When the picker is closed the entered value is not cleared. When the picker is opened, it closes but the value remains in the input field. > > Browser Specific: > Safari: Default browser behaviour is not prevented. 2. ComboBox > When the picker is closed the entered value is not cleared. When the picker is opened, it closes but the value remains in the input field. > > Browser Specific: > > Safari: Default browser behaviour is not prevented. > IE: When the picker is closed the entered value is cleared but then if you open the picker, the entered value is presented again. ### **Priority** - [ ] Low - [ ] Medium - [x] High - [ ] Very High
test
input fields do not cancel user input on escape bug description input multiinput combobox multicombobox controls should cancel user input on escape per component input multiinput multicombobox when the picker is closed the entered value is not cleared when the picker is opened it closes but the value remains in the input field browser specific safari default browser behaviour is not prevented combobox when the picker is closed the entered value is not cleared when the picker is opened it closes but the value remains in the input field browser specific safari default browser behaviour is not prevented ie when the picker is closed the entered value is cleared but then if you open the picker the entered value is presented again priority low medium high very high
1
187,809
6,761,306,480
IssuesEvent
2017-10-25 00:52:21
PrairieLearn/PrairieLearn
https://api.github.com/repos/PrairieLearn/PrairieLearn
closed
Add option to use min element score as question score
enhancement high priority
Chris Schmitz needs this for ECE 110 by Tuesday midnight.
1.0
Add option to use min element score as question score - Chris Schmitz needs this for ECE 110 by Tuesday midnight.
non_test
add option to use min element score as question score chris schmitz needs this for ece by tuesday midnight
0
25,364
4,155,607,470
IssuesEvent
2016-06-16 15:23:26
RevolutionAnalytics/AzureML
https://api.github.com/repos/RevolutionAnalytics/AzureML
opened
Fix unit tests and code for download.datasets() to deal with multiple datasets
bug tests
Unit tests failed, and code incorrectly handled scenarios.
1.0
Fix unit tests and code for download.datasets() to deal with multiple datasets - Unit tests failed, and code incorrectly handled scenarios.
test
fix unit tests and code for download datasets to deal with multiple datasets unit tests failed and code incorrectly handled scenarios
1
249,969
21,219,487,890
IssuesEvent
2022-04-11 10:30:31
LimeChain/hashport-validator
https://api.github.com/repos/LimeChain/hashport-validator
closed
Unit test for NewNft in app/model/transfer/transfer_test.go
unit tests
Implement unit test in **app/model/transfer/transfer_test.go** for **NewNft** function from **app/model/transfer/transfer.go**
1.0
Unit test for NewNft in app/model/transfer/transfer_test.go - Implement unit test in **app/model/transfer/transfer_test.go** for **NewNft** function from **app/model/transfer/transfer.go**
test
unit test for newnft in app model transfer transfer test go implement unit test in app model transfer transfer test go for newnft function from app model transfer transfer go
1
363,222
25,413,992,381
IssuesEvent
2022-11-22 21:46:06
joshuacc/ahkpm
https://api.github.com/repos/joshuacc/ahkpm
closed
Fix social media previews on ahkpm.dev
bug documentation good first issue
This is what it looks like on Twitter right now. ![image](https://user-images.githubusercontent.com/171086/200577775-2a0142e5-6db6-41fd-9d5f-cee9e0a5f79c.png) The website repo is here: https://github.com/joshuacc/ahkpm.dev
1.0
Fix social media previews on ahkpm.dev - This is what it looks like on Twitter right now. ![image](https://user-images.githubusercontent.com/171086/200577775-2a0142e5-6db6-41fd-9d5f-cee9e0a5f79c.png) The website repo is here: https://github.com/joshuacc/ahkpm.dev
non_test
fix social media previews on ahkpm dev this is what it looks like on twitter right now the website repo is here
0
57,461
7,058,247,769
IssuesEvent
2018-01-04 19:35:38
7sharp9/ReQuetzal
https://api.github.com/repos/7sharp9/ReQuetzal
opened
Generic and simple compiler flow
prio:high type:Design
We should define a graph, which described the general approach for the compiler flow. - [ ] Compiler flow - [ ] Modules/functions API for following stages - [ ] Detailed explanation about each step in the flow, with the associated documentation explaining the reasoning (papers, pdfs, link to blogs ...) to extend in the future maybe
1.0
Generic and simple compiler flow - We should define a graph, which described the general approach for the compiler flow. - [ ] Compiler flow - [ ] Modules/functions API for following stages - [ ] Detailed explanation about each step in the flow, with the associated documentation explaining the reasoning (papers, pdfs, link to blogs ...) to extend in the future maybe
non_test
generic and simple compiler flow we should define a graph which described the general approach for the compiler flow compiler flow modules functions api for following stages detailed explanation about each step in the flow with the associated documentation explaining the reasoning papers pdfs link to blogs to extend in the future maybe
0
101,075
8,773,839,518
IssuesEvent
2018-12-18 17:59:45
mozilla-services/syncstorage-rs
https://api.github.com/repos/mozilla-services/syncstorage-rs
opened
Attempt a retry on ConflictErrors
bug e2e tests
We lack the equivalent of the python's sleep_and_retry_on_conflict decorator. It attempts to retry a request once in the face of timestamp ConflictErrors. This can affect the e2e tests: seeing ConflictErrors occasionally reported because the tests can potentially hit the server with successive requests in less than 10 milliseconds
1.0
Attempt a retry on ConflictErrors - We lack the equivalent of the python's sleep_and_retry_on_conflict decorator. It attempts to retry a request once in the face of timestamp ConflictErrors. This can affect the e2e tests: seeing ConflictErrors occasionally reported because the tests can potentially hit the server with successive requests in less than 10 milliseconds
test
attempt a retry on conflicterrors we lack the equivalent of the python s sleep and retry on conflict decorator it attempts to retry a request once in the face of timestamp conflicterrors this can affect the tests seeing conflicterrors occasionally reported because the tests can potentially hit the server with successive requests in less than milliseconds
1
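The syncstorage-rs record above notes the Rust service lacks an equivalent of the Python server's `sleep_and_retry_on_conflict` decorator: retry a request exactly once when it raises a timestamp conflict. A minimal sketch of that behaviour (the decorator name mirrors the one the record cites, but the signature, delay, and `ConflictError` class here are assumptions, not the original implementation):

```python
import functools
import time

class ConflictError(Exception):
    """Stand-in for the timestamp ConflictError described in the record."""

def sleep_and_retry_on_conflict(delay: float = 0.01):
    """Retry the wrapped call once on ConflictError, sleeping briefly first.

    A second ConflictError propagates to the caller -- the retry is
    attempted exactly once, matching the behaviour the record describes.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except ConflictError:
                time.sleep(delay)
                return fn(*args, **kwargs)  # second failure propagates
        return wrapper
    return decorator
```

With this in place, a request that conflicts once (e.g. two e2e-test requests landing within the same 10 ms timestamp window) succeeds on the immediate retry instead of surfacing the ConflictError.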
17,696
10,758,659,955
IssuesEvent
2019-10-31 15:20:16
edgexfoundry/edgex-go
https://api.github.com/repos/edgexfoundry/edgex-go
opened
security-secrets-setup says it succeeded when it also says it fails (and it did fail)
bug security-services
<!--🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅 Hello there! 😄 To expedite issue processing please search open and closed issues before submitting a new one. Existing issues often contain information about workarounds, resolution, or progress updates. 🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅--> # 🐞 Bug Report ### Affected Services <!-- Can you pin-point one or more EdgeX services (Data, Metadata, Command, etc...) as the source of the bug? --> <!-- ✍️edit: --> The issue is located in: security-secrets-setup ### Is this a regression? maybe? ### Description and Minimal Reproduction clone fuji branch of edgex-go ``` $ ./cmd/security-secrets-setup/security-secrets-setup -confdir ./cmd/security-secrets-setup/res generate level=ERROR ts=2019-10-31T15:16:32.496926328Z app=edgex-security-secrets-setup source=logger.go:73 msg="logTarget cannot be blank, using stdout only" level=ERROR ts=2019-10-31T15:16:32.497171655Z app=edgex-security-secrets-setup source=main.go:124 msg="CertConfigDir from config file does not exist in: ./res" level=INFO ts=2019-10-31T15:16:32.507093231Z app=edgex-security-secrets-setup source=main.go:128 duration=10.184285ms msg="security-secrets-setup complete" ``` note that the referenced files from cmd/security-secrets-setup/res/configuration.toml in the git tree don't actually exist when run from a different directory, so this is expected to fail, however we shouldn't have the last message in the log above where it says security-secrets-setup is complete because it didn't complete, it failed ## 🌍 Your Environment **Deployment Environment:** **EdgeX Version:** fuji @ dd2af2191e85b0217a30ea176a345c30e0df5ac9
1.0
security-secrets-setup says it succeeded when it also says it fails (and it did fail) - <!--🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅 Hello there! 😄 To expedite issue processing please search open and closed issues before submitting a new one. Existing issues often contain information about workarounds, resolution, or progress updates. 🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅--> # 🐞 Bug Report ### Affected Services <!-- Can you pin-point one or more EdgeX services (Data, Metadata, Command, etc...) as the source of the bug? --> <!-- ✍️edit: --> The issue is located in: security-secrets-setup ### Is this a regression? maybe? ### Description and Minimal Reproduction clone fuji branch of edgex-go ``` $ ./cmd/security-secrets-setup/security-secrets-setup -confdir ./cmd/security-secrets-setup/res generate level=ERROR ts=2019-10-31T15:16:32.496926328Z app=edgex-security-secrets-setup source=logger.go:73 msg="logTarget cannot be blank, using stdout only" level=ERROR ts=2019-10-31T15:16:32.497171655Z app=edgex-security-secrets-setup source=main.go:124 msg="CertConfigDir from config file does not exist in: ./res" level=INFO ts=2019-10-31T15:16:32.507093231Z app=edgex-security-secrets-setup source=main.go:128 duration=10.184285ms msg="security-secrets-setup complete" ``` note that the referenced files from cmd/security-secrets-setup/res/configuration.toml in the git tree don't actually exist when run from a different directory, so this is expected to fail, however we shouldn't have the last message in the log above where it says security-secrets-setup is complete because it didn't complete, it failed ## 🌍 Your Environment **Deployment Environment:** **EdgeX Version:** fuji @ dd2af2191e85b0217a30ea176a345c30e0df5ac9
non_test
security secrets setup says it succeeded when it also says it fails and it did fail 🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅 hello there 😄 to expedite issue processing please search open and closed issues before submitting a new one existing issues often contain information about workarounds resolution or progress updates 🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅 🐞 bug report affected services the issue is located in security secrets setup is this a regression maybe description and minimal reproduction clone fuji branch of edgex go cmd security secrets setup security secrets setup confdir cmd security secrets setup res generate level error ts app edgex security secrets setup source logger go msg logtarget cannot be blank using stdout only level error ts app edgex security secrets setup source main go msg certconfigdir from config file does not exist in res level info ts app edgex security secrets setup source main go duration msg security secrets setup complete note that the referenced files from cmd security secrets setup res configuration toml in the git tree don t actually exist when run from a different directory so this is expected to fail however we shouldn t have the last message in the log above where it says security secrets setup is complete because it didn t complete it failed 🌍 your environment deployment environment edgex version fuji
0
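The edgex-go record above boils down to a state-tracking bug: an ERROR is logged, yet the run still emits "security-secrets-setup complete". The fix it implies is to gate the completion message (and the exit status) on whether any stage failed. A hedged sketch of that shape — the function name, flag, and messages here are illustrative, not the actual Go code:

```python
import sys

def run_generate(cert_config_dir_exists: bool) -> int:
    # Mirror the record's log lines, but only report completion when no
    # stage failed, and propagate a nonzero exit status otherwise.
    failed = False
    if not cert_config_dir_exists:
        print("ERROR: CertConfigDir from config file does not exist",
              file=sys.stderr)
        failed = True
    if failed:
        return 1  # do NOT log "complete" when a stage has failed
    print("security-secrets-setup complete")
    return 0
```

The buggy behaviour the record describes corresponds to printing the completion line unconditionally; tracking `failed` is the minimal correction.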
331,049
28,503,519,803
IssuesEvent
2023-04-18 19:19:46
apache/beam
https://api.github.com/repos/apache/beam
reopened
Flink testParDoRequiresStableInput flaky
runners flink P1 failing test flake
https://ci-beam.apache.org/job/beam_PreCommit_Java_Commit/20253/ org.apache.beam.runners.flink.FlinkRequiresStableInputTest.testParDoRequiresStableInput java.util.concurrent.ExecutionException: java.util.concurrent.CompletionException: org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint triggering task Source: Impulse -\> CreatePCollectionOfOneValue/Read(CreateSource)/ParDo(OutputSingleSource)/ParMultiDo(OutputSingleSource) -\> CreatePCollectionOfOneValue/Read(CreateSource)/ParDo(BoundedSourceAsSDFWrapper)/ParMultiDo(BoundedSourceAsSDFWrapper)/Pair with initial restriction/ParMultiDo(PairWithRestriction) -\> CreatePCollectionOfOneValue/Read(CreateSource)/ParDo(BoundedSourceAsSDFWrapper)/ParMultiDo(BoundedSourceAsSDFWrapper)/Split restriction/ParMultiDo(SplitRestriction) -\> CreatePCollectionOfOneValue/Read(CreateSource)/ParDo(BoundedSourceAsSDFWrapper)/ParMultiDo(BoundedSourceAsSDFWrapper)/Explode windows/ParMultiDo(ExplodeWindows) -\> CreatePCollectionOfOneValue/Read(CreateSource)/ParDo(BoundedSourceAsSDFWrapper)/ParMultiDo(BoundedSourceAsSDFWrapper)/Assign unique key/AddKeys/Map/ParMultiDo(Anonymous) -\> ToKeyedWorkItem (1/1) of job 7bbb425ba325dbc1dc4d3cdf1c8b88f9 is not being executed at the moment. Aborting checkpoint. Failure reason: Not all required tasks are currently running. Imported from Jira [BEAM-13575](https://issues.apache.org/jira/browse/BEAM-13575). Original Jira may contain additional context. Reported by: ibzib.
1.0
Flink testParDoRequiresStableInput flaky - https://ci-beam.apache.org/job/beam_PreCommit_Java_Commit/20253/ org.apache.beam.runners.flink.FlinkRequiresStableInputTest.testParDoRequiresStableInput java.util.concurrent.ExecutionException: java.util.concurrent.CompletionException: org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint triggering task Source: Impulse -\> CreatePCollectionOfOneValue/Read(CreateSource)/ParDo(OutputSingleSource)/ParMultiDo(OutputSingleSource) -\> CreatePCollectionOfOneValue/Read(CreateSource)/ParDo(BoundedSourceAsSDFWrapper)/ParMultiDo(BoundedSourceAsSDFWrapper)/Pair with initial restriction/ParMultiDo(PairWithRestriction) -\> CreatePCollectionOfOneValue/Read(CreateSource)/ParDo(BoundedSourceAsSDFWrapper)/ParMultiDo(BoundedSourceAsSDFWrapper)/Split restriction/ParMultiDo(SplitRestriction) -\> CreatePCollectionOfOneValue/Read(CreateSource)/ParDo(BoundedSourceAsSDFWrapper)/ParMultiDo(BoundedSourceAsSDFWrapper)/Explode windows/ParMultiDo(ExplodeWindows) -\> CreatePCollectionOfOneValue/Read(CreateSource)/ParDo(BoundedSourceAsSDFWrapper)/ParMultiDo(BoundedSourceAsSDFWrapper)/Assign unique key/AddKeys/Map/ParMultiDo(Anonymous) -\> ToKeyedWorkItem (1/1) of job 7bbb425ba325dbc1dc4d3cdf1c8b88f9 is not being executed at the moment. Aborting checkpoint. Failure reason: Not all required tasks are currently running. Imported from Jira [BEAM-13575](https://issues.apache.org/jira/browse/BEAM-13575). Original Jira may contain additional context. Reported by: ibzib.
test
flink testpardorequiresstableinput flaky org apache beam runners flink flinkrequiresstableinputtest testpardorequiresstableinput java util concurrent executionexception java util concurrent completionexception org apache flink runtime checkpoint checkpointexception checkpoint triggering task source impulse createpcollectionofonevalue read createsource pardo outputsinglesource parmultido outputsinglesource createpcollectionofonevalue read createsource pardo boundedsourceassdfwrapper parmultido boundedsourceassdfwrapper pair with initial restriction parmultido pairwithrestriction createpcollectionofonevalue read createsource pardo boundedsourceassdfwrapper parmultido boundedsourceassdfwrapper split restriction parmultido splitrestriction createpcollectionofonevalue read createsource pardo boundedsourceassdfwrapper parmultido boundedsourceassdfwrapper explode windows parmultido explodewindows createpcollectionofonevalue read createsource pardo boundedsourceassdfwrapper parmultido boundedsourceassdfwrapper assign unique key addkeys map parmultido anonymous tokeyedworkitem of job is not being executed at the moment aborting checkpoint failure reason not all required tasks are currently running imported from jira original jira may contain additional context reported by ibzib
1
131,621
18,248,381,071
IssuesEvent
2021-10-01 22:10:07
ghc-dev/Ryan-Rasmussen
https://api.github.com/repos/ghc-dev/Ryan-Rasmussen
opened
CVE-2018-3721 (Medium) detected in multiple libraries
security vulnerability
## CVE-2018-3721 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash-3.7.0.tgz</b>, <b>lodash-3.10.1.tgz</b>, <b>lodash-0.9.2.tgz</b>, <b>lodash-0.10.0.tgz</b></p></summary> <p> <details><summary><b>lodash-3.7.0.tgz</b></p></summary> <p>The modern build of lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.7.0.tgz">https://registry.npmjs.org/lodash/-/lodash-3.7.0.tgz</a></p> <p>Path to dependency file: Ryan-Rasmussen/package.json</p> <p>Path to vulnerable library: Ryan-Rasmussen/node_modules/htmlhint/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - grunt-htmlhint-0.9.13.tgz (Root Library) - htmlhint-0.9.13.tgz - jshint-2.8.0.tgz - :x: **lodash-3.7.0.tgz** (Vulnerable Library) </details> <details><summary><b>lodash-3.10.1.tgz</b></p></summary> <p>The modern build of lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz">https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz</a></p> <p>Path to dependency file: Ryan-Rasmussen/package.json</p> <p>Path to vulnerable library: Ryan-Rasmussen/node_modules/grunt-usemin/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - grunt-usemin-3.1.1.tgz (Root Library) - :x: **lodash-3.10.1.tgz** (Vulnerable Library) </details> <details><summary><b>lodash-0.9.2.tgz</b></p></summary> <p>A utility library delivering consistency, customization, performance, and extras.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-0.9.2.tgz">https://registry.npmjs.org/lodash/-/lodash-0.9.2.tgz</a></p> <p>Path to dependency file: Ryan-Rasmussen/package.json</p> <p>Path to vulnerable library: Ryan-Rasmussen/node_modules/grunt-connect-proxy-updated/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - 
grunt-connect-proxy-updated-0.2.1.tgz (Root Library) - :x: **lodash-0.9.2.tgz** (Vulnerable Library) </details> <details><summary><b>lodash-0.10.0.tgz</b></p></summary> <p>A utility library delivering consistency, customization, performance, and extras.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-0.10.0.tgz">https://registry.npmjs.org/lodash/-/lodash-0.10.0.tgz</a></p> <p>Path to dependency file: Ryan-Rasmussen/package.json</p> <p>Path to vulnerable library: Ryan-Rasmussen/node_modules/grunt-bower-task/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - grunt-bower-task-0.5.0.tgz (Root Library) - :x: **lodash-0.10.0.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Ryan-Rasmussen/commit/4ab6cb55863cc1731cd89a0da07290be9ef8799e">4ab6cb55863cc1731cd89a0da07290be9ef8799e</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> lodash node module before 4.17.5 suffers from a Modification of Assumed-Immutable Data (MAID) vulnerability via defaultsDeep, merge, and mergeWith functions, which allows a malicious user to modify the prototype of "Object" via __proto__, causing the addition or modification of an existing property that will exist on all objects. 
<p>Publish Date: 2018-06-07 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-3721>CVE-2018-3721</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-3721">https://nvd.nist.gov/vuln/detail/CVE-2018-3721</a></p> <p>Release Date: 2018-06-07</p> <p>Fix Resolution: 4.17.5</p> </p> </details> <p></p> <!-- 
<REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"3.7.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-htmlhint:0.9.13;htmlhint:0.9.13;jshint:2.8.0;lodash:3.7.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"4.17.5"},{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"3.10.1","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-usemin:3.1.1;lodash:3.10.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"4.17.5"},{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"0.9.2","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-connect-proxy-updated:0.2.1;lodash:0.9.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"4.17.5"},{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"0.10.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-bower-task:0.5.0;lodash:0.10.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"4.17.5"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2018-3721","vulnerabilityDetails":"lodash node module before 4.17.5 suffers from a Modification of Assumed-Immutable Data (MAID) vulnerability via defaultsDeep, merge, and mergeWith functions, which allows a malicious user to modify the prototype of \"Object\" via __proto__, causing the addition or modification of an existing property that will exist on all objects.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-3721","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2018-3721 (Medium) detected in multiple libraries - ## CVE-2018-3721 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash-3.7.0.tgz</b>, <b>lodash-3.10.1.tgz</b>, <b>lodash-0.9.2.tgz</b>, <b>lodash-0.10.0.tgz</b></p></summary> <p> <details><summary><b>lodash-3.7.0.tgz</b></p></summary> <p>The modern build of lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.7.0.tgz">https://registry.npmjs.org/lodash/-/lodash-3.7.0.tgz</a></p> <p>Path to dependency file: Ryan-Rasmussen/package.json</p> <p>Path to vulnerable library: Ryan-Rasmussen/node_modules/htmlhint/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - grunt-htmlhint-0.9.13.tgz (Root Library) - htmlhint-0.9.13.tgz - jshint-2.8.0.tgz - :x: **lodash-3.7.0.tgz** (Vulnerable Library) </details> <details><summary><b>lodash-3.10.1.tgz</b></p></summary> <p>The modern build of lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz">https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz</a></p> <p>Path to dependency file: Ryan-Rasmussen/package.json</p> <p>Path to vulnerable library: Ryan-Rasmussen/node_modules/grunt-usemin/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - grunt-usemin-3.1.1.tgz (Root Library) - :x: **lodash-3.10.1.tgz** (Vulnerable Library) </details> <details><summary><b>lodash-0.9.2.tgz</b></p></summary> <p>A utility library delivering consistency, customization, performance, and extras.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-0.9.2.tgz">https://registry.npmjs.org/lodash/-/lodash-0.9.2.tgz</a></p> <p>Path to dependency file: Ryan-Rasmussen/package.json</p> <p>Path to vulnerable library: Ryan-Rasmussen/node_modules/grunt-connect-proxy-updated/node_modules/lodash/package.json</p> <p> 
Dependency Hierarchy: - grunt-connect-proxy-updated-0.2.1.tgz (Root Library) - :x: **lodash-0.9.2.tgz** (Vulnerable Library) </details> <details><summary><b>lodash-0.10.0.tgz</b></p></summary> <p>A utility library delivering consistency, customization, performance, and extras.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-0.10.0.tgz">https://registry.npmjs.org/lodash/-/lodash-0.10.0.tgz</a></p> <p>Path to dependency file: Ryan-Rasmussen/package.json</p> <p>Path to vulnerable library: Ryan-Rasmussen/node_modules/grunt-bower-task/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - grunt-bower-task-0.5.0.tgz (Root Library) - :x: **lodash-0.10.0.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Ryan-Rasmussen/commit/4ab6cb55863cc1731cd89a0da07290be9ef8799e">4ab6cb55863cc1731cd89a0da07290be9ef8799e</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> lodash node module before 4.17.5 suffers from a Modification of Assumed-Immutable Data (MAID) vulnerability via defaultsDeep, merge, and mergeWith functions, which allows a malicious user to modify the prototype of "Object" via __proto__, causing the addition or modification of an existing property that will exist on all objects. 
<p>Publish Date: 2018-06-07 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-3721>CVE-2018-3721</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-3721">https://nvd.nist.gov/vuln/detail/CVE-2018-3721</a></p> <p>Release Date: 2018-06-07</p> <p>Fix Resolution: 4.17.5</p> </p> </details> <p></p> <!-- 
<REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"3.7.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-htmlhint:0.9.13;htmlhint:0.9.13;jshint:2.8.0;lodash:3.7.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"4.17.5"},{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"3.10.1","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-usemin:3.1.1;lodash:3.10.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"4.17.5"},{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"0.9.2","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-connect-proxy-updated:0.2.1;lodash:0.9.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"4.17.5"},{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"0.10.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-bower-task:0.5.0;lodash:0.10.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"4.17.5"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2018-3721","vulnerabilityDetails":"lodash node module before 4.17.5 suffers from a Modification of Assumed-Immutable Data (MAID) vulnerability via defaultsDeep, merge, and mergeWith functions, which allows a malicious user to modify the prototype of \"Object\" via __proto__, causing the addition or modification of an existing property that will exist on all objects.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-3721","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
non_test
cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries lodash tgz lodash tgz lodash tgz lodash tgz lodash tgz the modern build of lodash modular utilities library home page a href path to dependency file ryan rasmussen package json path to vulnerable library ryan rasmussen node modules htmlhint node modules lodash package json dependency hierarchy grunt htmlhint tgz root library htmlhint tgz jshint tgz x lodash tgz vulnerable library lodash tgz the modern build of lodash modular utilities library home page a href path to dependency file ryan rasmussen package json path to vulnerable library ryan rasmussen node modules grunt usemin node modules lodash package json dependency hierarchy grunt usemin tgz root library x lodash tgz vulnerable library lodash tgz a utility library delivering consistency customization performance and extras library home page a href path to dependency file ryan rasmussen package json path to vulnerable library ryan rasmussen node modules grunt connect proxy updated node modules lodash package json dependency hierarchy grunt connect proxy updated tgz root library x lodash tgz vulnerable library lodash tgz a utility library delivering consistency customization performance and extras library home page a href path to dependency file ryan rasmussen package json path to vulnerable library ryan rasmussen node modules grunt bower task node modules lodash package json dependency hierarchy grunt bower task tgz root library x lodash tgz vulnerable library found in head commit a href found in base branch master vulnerability details lodash node module before suffers from a modification of assumed immutable data maid vulnerability via defaultsdeep merge and mergewith functions which allows a malicious user to modify the prototype of object via proto causing the addition or modification of an existing property that will exist on all objects publish date url a href cvss score details base score metrics 
exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree grunt htmlhint htmlhint jshint lodash isminimumfixversionavailable true minimumfixversion packagetype javascript node js packagename lodash packageversion packagefilepaths istransitivedependency true dependencytree grunt usemin lodash isminimumfixversionavailable true minimumfixversion packagetype javascript node js packagename lodash packageversion packagefilepaths istransitivedependency true dependencytree grunt connect proxy updated lodash isminimumfixversionavailable true minimumfixversion packagetype javascript node js packagename lodash packageversion packagefilepaths istransitivedependency true dependencytree grunt bower task lodash isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails lodash node module before suffers from a modification of assumed immutable data maid vulnerability via defaultsdeep merge and mergewith functions which allows a malicious user to modify the prototype of object via proto causing the addition or modification of an existing property that will exist on all objects vulnerabilityurl
0
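The prototype-pollution mechanism described in the CVE-2018-3721 record above can be illustrated with a minimal sketch. This is a deliberately naive deep-merge written for illustration only, not lodash's actual `merge`/`defaultsDeep` implementation; the function and variable names are made up for the example.

```javascript
// Minimal sketch of the vulnerability class behind CVE-2018-3721.
// A naive recursive merge that copies own keys without filtering
// "__proto__" will walk into Object.prototype and mutate it.
function naiveMerge(target, source) {
  for (const key of Object.keys(source)) {
    if (typeof source[key] === 'object' && source[key] !== null &&
        typeof target[key] === 'object' && target[key] !== null) {
      naiveMerge(target[key], source[key]);   // recurses into Object.prototype for "__proto__"
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// JSON.parse creates an OWN property literally named "__proto__",
// so attacker-controlled JSON can smuggle it past the loop above.
const payload = JSON.parse('{"__proto__": {"polluted": true}}');
naiveMerge({}, payload);

// Every plain object now appears to have the injected property.
console.log(({}).polluted); // prints: true
```

The fix in lodash 4.17.5 (the "Fix Resolution" above) is essentially to stop treating `__proto__` as an ordinary mergeable key.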
147,205
11,778,662,503
IssuesEvent
2020-03-16 16:39:24
HotelsDotCom/bull
https://api.github.com/repos/HotelsDotCom/bull
closed
100% Test Coverage
enhancement good first issue help wanted testing
**Description** As of today, the test coverage is **97.3%**; the purpose of this task is to reach **100%**. **How the coverage is calculated** The plugins in charge of the test coverage calculation are **[Jacoco](https://github.com/jacoco/jacoco)** and [SonarCloud](https://sonarcloud.io/dashboard?id=BULL). Jacoco is executed during each build and produces a report inside the target folder of each module; you can find it at `[module-name]/target/site/jacoco`. The SonarCloud report, instead, is updated during each release. **The purpose** The purpose is to identify all the "not covered" code areas and implement specific unit tests covering them. **Additional context** There are classes and packages excluded from the coverage calculation, such as enums, Java Beans, etc. The whole list of excluded items is: * [SonarCloud](https://github.com/HotelsDotCom/bull/blob/master/pom.xml#L73) * [Jacoco](https://github.com/HotelsDotCom/bull/blob/master/pom.xml#L464)
1.0
100% Test Coverage - **Description** As of today, the test coverage is **97.3%**; the purpose of this task is to reach **100%**. **How the coverage is calculated** The plugins in charge of the test coverage calculation are **[Jacoco](https://github.com/jacoco/jacoco)** and [SonarCloud](https://sonarcloud.io/dashboard?id=BULL). Jacoco is executed during each build and produces a report inside the target folder of each module; you can find it at `[module-name]/target/site/jacoco`. The SonarCloud report, instead, is updated during each release. **The purpose** The purpose is to identify all the "not covered" code areas and implement specific unit tests covering them. **Additional context** There are classes and packages excluded from the coverage calculation, such as enums, Java Beans, etc. The whole list of excluded items is: * [SonarCloud](https://github.com/HotelsDotCom/bull/blob/master/pom.xml#L73) * [Jacoco](https://github.com/HotelsDotCom/bull/blob/master/pom.xml#L464)
test
test coverage description as per today the test coverage is the purpose of this task is to reach how the coverage is calculated the plugin in charge of the test coverage calculation are along with jacoco it s executed during each build and produces a report inside the target folder of each module you can find it at target site jacoco instead the sonarcloud report is updated during each release the purpose the purpose is to identify all the not covered code areas and implement specific unit test for covering them additional context there are classes and packages excluded from the coverage calculation such as enum java bean etc the whole list of the excluded items are
1
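The coverage record above says Jacoco computes the figure during each build. Jacoco-style counters are (missed, covered) pairs, and the percentage shown is covered / (covered + missed). A quick sketch, with made-up module names and numbers chosen only so the result matches the record's 97.3%:

```javascript
// Sketch of how a Jacoco-style line-coverage percentage is derived.
// Counters are (missed, covered) pairs per module; the overall figure
// is covered / (covered + missed). All numbers below are invented.
function coveragePercent(counters) {
  const covered = counters.reduce((sum, c) => sum + c.covered, 0);
  const missed = counters.reduce((sum, c) => sum + c.missed, 0);
  return (100 * covered) / (covered + missed);
}

const moduleCounters = [
  { module: 'example-module-a', covered: 950, missed: 20 },
  { module: 'example-module-b', covered: 510, missed: 20 },
];

console.log(coveragePercent(moduleCounters).toFixed(1)); // prints: 97.3
```

Reaching 100% then means driving every module's `missed` count to zero, excluding only the classes listed in the pom.xml exclusions.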
2,440
2,525,857,517
IssuesEvent
2015-01-21 06:51:18
graybeal/ont
https://api.github.com/repos/graybeal/ont
opened
Allow multi-line fields in voc2rdf
1 star enhancement imported Priority-Low voc2rdf
_From [[email protected]](https://code.google.com/u/113886747689301365533/) on April 06, 2009 11:57:37_ (thanks John for this feedback) I suspect multi-line fields (that is, fields with carriage returns in them) will not be possible to insert. _Original issue: http://code.google.com/p/mmisw/issues/detail?id=117_
1.0
Allow multi-line fields in voc2rdf - _From [[email protected]](https://code.google.com/u/113886747689301365533/) on April 06, 2009 11:57:37_ (thanks John for this feedback) I suspect multi-line fields (that is, fields with carriage returns in them) will not be possible to insert. _Original issue: http://code.google.com/p/mmisw/issues/detail?id=117_
non_test
allow multi line fields in from on april thanks john for this feedback i suspect multi line fields that is fields with carriage returns in them will not be possible to insert original issue
0
251,150
21,427,744,679
IssuesEvent
2022-04-23 00:15:00
cosmos/cosmos-sdk
https://api.github.com/repos/cosmos/cosmos-sdk
closed
tests: TestNewAnyWithCustomTypeURLWithErrorNoAllocation flakey race test failure
Type: Tests C:Encoding
I've noticed the `TestNewAnyWithCustomTypeURLWithErrorNoAllocation` (race) test fail pretty frequently in CI. We need to investigate and fix. Test output: https://pastebin.com/rW5UnK0Q ref: https://github.com/cosmos/cosmos-sdk/runs/6053424807?check_suite_focus=true
1.0
tests: TestNewAnyWithCustomTypeURLWithErrorNoAllocation flakey race test failure - I've noticed the `TestNewAnyWithCustomTypeURLWithErrorNoAllocation` (race) test fail pretty frequently in CI. We need to investigate and fix. Test output: https://pastebin.com/rW5UnK0Q ref: https://github.com/cosmos/cosmos-sdk/runs/6053424807?check_suite_focus=true
test
tests testnewanywithcustomtypeurlwitherrornoallocation flakey race test failure i ve noticed the testnewanywithcustomtypeurlwitherrornoallocation race test fail pretty frequently in ci we need to investigate and fix test output ref
1
37,891
5,147,844,260
IssuesEvent
2017-01-13 09:11:10
varnishcache/varnish-cache
https://api.github.com/repos/varnishcache/varnish-cache
closed
varnishtest passes for vtc assert
b=bug c=varnishtest r=trunk
seen with printf debug in cache_esi_parse enabled ``` diff --git a/bin/varnishd/cache/cache_esi_parse.c b/bin/varnishd/cache/cache_esi_parse.c index d2b2306..39ea6bd 100644 --- a/bin/varnishd/cache/cache_esi_parse.c +++ b/bin/varnishd/cache/cache_esi_parse.c @@ -37,8 +37,9 @@ #include "vend.h" #include "vgz.h" -//#define Debug(fmt, ...) printf(fmt, __VA_ARGS__) -#define Debug(fmt, ...) /**/ +#include <stdio.h> +#define Debug(fmt, ...) printf(fmt, __VA_ARGS__) +//#define Debug(fmt, ...) /**/ struct vep_state; ``` -> ``` ######## tests/e00019.vtc ######## Assert error in vtc_log_emit(), vtc_log.c line 104: Condition(vtclog_left > l) not true. ... # top TEST tests/e00019.vtc passed (0.564) ``` The workaround is to raise `vtc_bufsiz`, but varnishtest should not pass for failed assertions.
1.0
varnishtest passes for vtc assert - seen with printf debug in cache_esi_parse enabled ``` diff --git a/bin/varnishd/cache/cache_esi_parse.c b/bin/varnishd/cache/cache_esi_parse.c index d2b2306..39ea6bd 100644 --- a/bin/varnishd/cache/cache_esi_parse.c +++ b/bin/varnishd/cache/cache_esi_parse.c @@ -37,8 +37,9 @@ #include "vend.h" #include "vgz.h" -//#define Debug(fmt, ...) printf(fmt, __VA_ARGS__) -#define Debug(fmt, ...) /**/ +#include <stdio.h> +#define Debug(fmt, ...) printf(fmt, __VA_ARGS__) +//#define Debug(fmt, ...) /**/ struct vep_state; ``` -> ``` ######## tests/e00019.vtc ######## Assert error in vtc_log_emit(), vtc_log.c line 104: Condition(vtclog_left > l) not true. ... # top TEST tests/e00019.vtc passed (0.564) ``` The workaround is to raise `vtc_bufsiz`, but varnishtest should not pass for failed assertions.
test
varnishtest passes for vtc assert seen with printf debug in cache esi pase enabled diff git a bin varnishd cache cache esi parse c b bin varnishd cache cache esi parse c index a bin varnishd cache cache esi parse c b bin varnishd cache cache esi parse c include vend h include vgz h define debug fmt printf fmt va args define debug fmt include define debug fmt printf fmt va args define debug fmt struct vep state tests vtc assert error in vtc log emit vtc log c line condition vtclog left l not true top test tests vtc passed the workaround is to raise vtc bufsiz but varnishtest should not pass for failed assertions
1
274,315
29,997,237,660
IssuesEvent
2023-06-26 06:47:27
SonarSource/sonar-dotnet
https://api.github.com/repos/SonarSource/sonar-dotnet
closed
FP S5332: Improve detection of namespace URIs
Type: False Positive Area: VB.NET Area: C# Area: Security Sprint: MMF-3253
URIs used for namespaces start with `http` and are not reported by S5332. The detection of namespace URIs can be improved: Copied from https://github.com/SonarSource/sonar-dotnet/issues/6141#issuecomment-1277250021 * [x] [XmlnsDictionary](https://learn.microsoft.com/dotnet/api/system.windows.markup.xmlnsdictionary) * [x] [XmlSerializerNamespaces](https://learn.microsoft.com/dotnet/api/system.xml.serialization.xmlserializernamespaces) * [x] [XmlQualifiedName constructor](https://learn.microsoft.com/de-de/dotnet/api/system.xml.xmlqualifiedname.-ctor?view=net-6.0#system-xml-xmlqualifiedname-ctor(system-string-system-string)) * [x] [XmlNamespaceManager Class (System.Xml)](https://learn.microsoft.com/en-us/dotnet/api/system.xml.xmlnamespacemanager?view=net-6.0) * [x] [XmlAttribute(String, String, String, XmlDocument) Constructor (System.Xml) | Microsoft Learn](https://learn.microsoft.com/en-us/dotnet/api/system.xml.xmlattribute.-ctor?view=net-6.0#system-xml-xmlattribute-ctor(system-string-system-string-system-string-system-xml-xmldocument)) * [x] [XmlElement(String, String, String, XmlDocument) Constructor (System.Xml) | Microsoft Learn](https://learn.microsoft.com/en-us/dotnet/api/system.xml.xmlelement.-ctor?view=net-6.0#system-xml-xmlelement-ctor(system-string-system-string-system-string-system-xml-xmldocument)) * [x] [XmlWriter.WriteStartElement Method (System.Xml)](https://learn.microsoft.com/en-us/dotnet/api/system.xml.xmlwriter.writestartelement?view=net-6.0#system-xml-xmlwriter-writestartelement(system-string-system-string)) * [x] [XmlWriter.WriteElementString Method (System.Xml) | Microsoft Learn](https://learn.microsoft.com/en-us/dotnet/api/system.xml.xmlwriter.writeelementstring?view=net-6.0#system-xml-xmlwriter-writeelementstring(system-string-system-string-system-string)) * [x] [XmlSerializationWriter.WriteElementStringRaw Method (System.Xml.Serialization) | Microsoft
Learn](https://learn.microsoft.com/en-us/dotnet/api/system.xml.serialization.xmlserializationwriter.writeelementstringraw?view=netcore-3.1) Update list of well-known namespaces * [x] https://referencesource.microsoft.com/#system.xml/System/Xml/XmlReservedNs.cs * [x] Add oasis-open.org namespaces (e.g. wss, see e.g. [peach](https://peach.sonarsource.com/security_hotspots?id=wcf&hotspots=AXddXtF0Po2w9IEVshSJ)): http://docs.oasis-open.org/wss/2004/01/ * [x] Add ws-i.org namespaces http://ws-i.org/ e.g. [peach](https://peach.sonarsource.com/security_hotspots?id=wcf&hotspots=AXZ4YUlp7R5umuxffeXo)
True
FP S5332: Improve detection of namespace URIs - URIs used for namespaces start with `http` and are not reported by S5332. The detection of namespace URIs can be improved: Copied from https://github.com/SonarSource/sonar-dotnet/issues/6141#issuecomment-1277250021 * [x] [XmlnsDictionary](https://learn.microsoft.com/dotnet/api/system.windows.markup.xmlnsdictionary) * [x] [XmlSerializerNamespaces](https://learn.microsoft.com/dotnet/api/system.xml.serialization.xmlserializernamespaces) * [x] [XmlQualifiedName constructor](https://learn.microsoft.com/de-de/dotnet/api/system.xml.xmlqualifiedname.-ctor?view=net-6.0#system-xml-xmlqualifiedname-ctor(system-string-system-string)) * [x] [XmlNamespaceManager Class (System.Xml)](https://learn.microsoft.com/en-us/dotnet/api/system.xml.xmlnamespacemanager?view=net-6.0) * [x] [XmlAttribute(String, String, String, XmlDocument) Constructor (System.Xml) | Microsoft Learn](https://learn.microsoft.com/en-us/dotnet/api/system.xml.xmlattribute.-ctor?view=net-6.0#system-xml-xmlattribute-ctor(system-string-system-string-system-string-system-xml-xmldocument)) * [x] [XmlElement(String, String, String, XmlDocument) Constructor (System.Xml) | Microsoft Learn](https://learn.microsoft.com/en-us/dotnet/api/system.xml.xmlelement.-ctor?view=net-6.0#system-xml-xmlelement-ctor(system-string-system-string-system-string-system-xml-xmldocument)) * [x] [XmlWriter.WriteStartElement Method (System.Xml)](https://learn.microsoft.com/en-us/dotnet/api/system.xml.xmlwriter.writestartelement?view=net-6.0#system-xml-xmlwriter-writestartelement(system-string-system-string)) * [x] [XmlWriter.WriteElementString Method (System.Xml) | Microsoft Learn](https://learn.microsoft.com/en-us/dotnet/api/system.xml.xmlwriter.writeelementstring?view=net-6.0#system-xml-xmlwriter-writeelementstring(system-string-system-string-system-string)) * [x] [XmlSerializationWriter.WriteElementStringRaw Method (System.Xml.Serialization) | Microsoft
Learn](https://learn.microsoft.com/en-us/dotnet/api/system.xml.serialization.xmlserializationwriter.writeelementstringraw?view=netcore-3.1) Update list of well-known namespaces * [x] https://referencesource.microsoft.com/#system.xml/System/Xml/XmlReservedNs.cs * [x] Add oasis-open.org namespaces (e.g. wss, see e.g. [peach](https://peach.sonarsource.com/security_hotspots?id=wcf&hotspots=AXddXtF0Po2w9IEVshSJ)): http://docs.oasis-open.org/wss/2004/01/ * [x] Add ws-i.org namespaces http://ws-i.org/ e.g. [peach](https://peach.sonarsource.com/security_hotspots?id=wcf&hotspots=AXZ4YUlp7R5umuxffeXo)
non_test
fp improve detection of namespace uris uris used for namespaces start with http and are not reported by the detection of namespace uris can be improved copied from update list of well known namespaces add oasis open org namespaces e g wss see e g add ws i org namespaces e g
0
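The improvement tracked in the S5332 record above — stop flagging `http://` literals that are really XML namespace identifiers — can be sketched as a simple allowlist check. The function names and the exact prefix list below are illustrative assumptions, not the analyzer's actual code; the prefixes just mirror the namespaces enumerated in the record (XmlReservedNs-style w3.org entries, oasis-open.org, ws-i.org).

```javascript
// Hypothetical sketch: treat well-known XML-namespace URI prefixes as
// non-sensitive so they are not reported as insecure "http" usage.
const NAMESPACE_PREFIXES = [
  'http://www.w3.org/',             // XmlReservedNs-style entries
  'http://schemas.xmlsoap.org/',
  'http://docs.oasis-open.org/wss/', // OASIS WSS namespaces
  'http://ws-i.org/',
];

function isLikelyNamespaceUri(literal) {
  return NAMESPACE_PREFIXES.some(prefix => literal.startsWith(prefix));
}

// Report a string literal only if it is a cleartext URL that is NOT a
// recognized namespace identifier.
function shouldReport(literal) {
  return literal.startsWith('http://') && !isLikelyNamespaceUri(literal);
}

console.log(shouldReport('http://docs.oasis-open.org/wss/2004/01/')); // prints: false
console.log(shouldReport('http://example.com/api'));                  // prints: true
```

A prefix allowlist is only a heuristic; the real fix in the issue also covers the XML APIs (XmlNamespaceManager, XmlWriter.WriteStartElement, etc.) where a string argument is known by position to be a namespace, so it never needs to guess from the URI text at all.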
335,915
30,106,454,659
IssuesEvent
2023-06-30 02:01:57
EddieHubCommunity/LinkFree
https://api.github.com/repos/EddieHubCommunity/LinkFree
closed
New Testimonial for Rupali Haldiya
testimonial
### Name rupali-codes ### Title Great partner ### Description I had the pleasure to work with Rupali as an Open Source Maintainer for LinksHub. She is a wonderful leader, receptive to new ideas, and always eager to learn and implement new skills, something that is very inspirational to see as a woman navigating the open source community. If you are looking for a self-directed learner and an all-around humble individual, I highly recommend hiring Rupali for your team.
1.0
New Testimonial for Rupali Haldiya - ### Name rupali-codes ### Title Great partner ### Description I had the pleasure to work with Rupali as an Open Source Maintainer for LinksHub. She is a wonderful leader, receptive to new ideas, and always eager to learn and implement new skills, something that is very inspirational to see as a woman navigating the open source community. If you are looking for a self-directed learner and an all-around humble individual, I highly recommend hiring Rupali for your team.
test
new testimonial for rupali haldiya name rupali codes title great partner description i had the pleasure to work with rupali as an open source maintainer for linkshub she is a wonderful leader receptive to new ideas and always eager to learn and implement new skills something that is very inspirational to see as a woman navigating the open source community if you are looking for a self directed learner and an all around humble individual i highly recommend hiring rupali for your team
1
324,939
24,026,190,422
IssuesEvent
2022-09-15 11:43:50
pmndrs/zustand
https://api.github.com/repos/pmndrs/zustand
closed
[documentation update] Dark mode
documentation
Core conversation here: https://github.com/pmndrs/zustand/discussions/1033 Add a dark mode to the new documentation site. This is a "nice to have" Blocked by https://github.com/pmndrs/zustand/issues/1215
1.0
[documentation update] Dark mode - Core conversation here: https://github.com/pmndrs/zustand/discussions/1033 Add a dark mode to the new documentation site. This is a "nice to have" Blocked by https://github.com/pmndrs/zustand/issues/1215
non_test
dark mode core conversation here add a dark mode to the new documentation site this is a nice to have blocked by
0
63,963
6,889,085,025
IssuesEvent
2017-11-22 09:09:38
geosolutions-it/decat_geonode
https://api.github.com/repos/geosolutions-it/decat_geonode
reopened
Profile page gives an error
bug Priority: Medium test
NoReverseMatch at /people/profile/TTX3-FORTH-ia/ Reverse for 'capabilities_user' with arguments '(u'TTX3-FORTH-ia',)' and keyword arguments '{}' not found. 1 pattern(s) tried: ['capabilities/user/(?P\w+)/$'] To reproduce: - log in - open the "Profile" page from the menu
1.0
Profile page gives an error - NoReverseMatch at /people/profile/TTX3-FORTH-ia/ Reverse for 'capabilities_user' with arguments '(u'TTX3-FORTH-ia',)' and keyword arguments '{}' not found. 1 pattern(s) tried: ['capabilities/user/(?P\w+)/$'] To reproduce: - log in - open the "Profile" page from the menu
test
profile page gives an error noreversematch at people profile forth ia reverse for capabilities user with arguments u forth ia and keyword arguments not found pattern s tried to reproduce log in open the profile page from the menu
1
171,820
6,494,968,889
IssuesEvent
2017-08-22 01:34:20
way-cooler/way-cooler
https://api.github.com/repos/way-cooler/way-cooler
closed
Allow backgrounds to dictate their geometry
Client Program Low Priority Tiling
As discussed in #234, the background program should be able to dictate where it would like to be drawn. If it wants to be drawn e.g. below the title bar (which it could do by querying it in D-Bus, hypothetically), then it can do that. It's the one dealing with the background image; it knows the correct thing to do.
1.0
Allow backgrounds to dictate their geometry - As discussed in #234, the background program should be able to dictate where it would like to be drawn. If it wants to be drawn e.g. below the title bar (which it could do by querying it in D-Bus, hypothetically), then it can do that. It's the one dealing with the background image; it knows the correct thing to do.
non_test
allow backgrounds to dictate their geometry as discussed in the background program should be able dictate where it would like to be drawn if it wants to drawn e g below the title bar which it can do so by querying it in d bus hypothetically then it can do that it s the one dealing with the background image it knows the correct thing to do
0
825,073
31,271,172,248
IssuesEvent
2023-08-22 00:10:26
Berreip/VSP_Issues_tracker
https://api.github.com/repos/Berreip/VSP_Issues_tracker
closed
[HIGH] L'historique de la visite n'affiche pas l'ensemble des animaux vus
bug high priority
Step to Reproduce: 1) lancer une visite 2) Créer un animal et lui attribuer des lésions 3) Créer un second animal et lui attribuer des lésions 4) terminer la visite 5) la ré-ouvrir et cliquer sur l'historique Expected: Les 2 animaux créés sont visible Observed: Seul le premier animal est visible
1.0
[HIGH] L'historique de la visite n'affiche pas l'ensemble des animaux vus - Step to Reproduce: 1) lancer une visite 2) Créer un animal et lui attribuer des lésions 3) Créer un second animal et lui attribuer des lésions 4) terminer la visite 5) la ré-ouvrir et cliquer sur l'historique Expected: Les 2 animaux créés sont visible Observed: Seul le premier animal est visible
non_test
l historique de la visite n affiche pas l ensemble des animaux vus step to reproduce lancer une visite créer un animal et lui attribuer des lésions créer un second animal et lui attribuer des lésions terminer la visite la ré ouvrir et cliquer sur l historique expected les animaux créés sont visible observed seul le premier animal est visible
0
51,411
6,157,803,050
IssuesEvent
2017-06-28 19:47:18
acstech/corkboard-sample
https://api.github.com/repos/acstech/corkboard-sample
closed
Connect SignUp Component with API
bug feature MEDIUM testable
Set up the SignUp component to the authentication and corkboard API to allow the user to be added from the front-end. **Test connection locally and then from the server?**
1.0
Connect SignUp Component with API - Set up the SignUp component to the authentication and corkboard API to allow the user to be added from the front-end. **Test connection locally and then from the server?**
test
connect signup component with api set up the signup component to the authentication and corkboard api to allow the user to be added from the front end test connection locally and then from the server
1
267,857
23,325,406,795
IssuesEvent
2022-08-08 20:38:31
episphere/questionnaire
https://api.github.com/repos/episphere/questionnaire
closed
End message when MENS1=YES is incorrect
Menstrual Cycle Survey RE-TEST THE FIX MARKUP ISSUE
MENS1 = YES MES2 = 06/07/2022 ![Screen Shot 2022-06-23 at 4 21 23 PM](https://user-images.githubusercontent.com/83971268/175393796-197797f7-e40c-450f-b2fc-b1c62ed48536.png) When MENS1 = YES, message should say, "Thank you for completing the survey."
1.0
End message when MENS1=YES is incorrect - MENS1 = YES MES2 = 06/07/2022 ![Screen Shot 2022-06-23 at 4 21 23 PM](https://user-images.githubusercontent.com/83971268/175393796-197797f7-e40c-450f-b2fc-b1c62ed48536.png) When MENS1 = YES, message should say, "Thank you for completing the survey."
test
end message when yes is incorrect yes when yes message should say thank you for completing the survey
1
817
2,590,968,809
IssuesEvent
2015-02-18 22:11:31
robcalcroft/YRM2015
https://api.github.com/repos/robcalcroft/YRM2015
closed
Fix incorrect tracks for speakers
defect
Could “Probability and Stochastic Processes” be one track, not two tracks? Likewise, “Combinatorics and Graph Theory” should be one track, not two tracks.
1.0
Fix incorrect tracks for speakers - Could “Probability and Stochastic Processes” be one track, not two tracks? Likewise, “Combinatorics and Graph Theory” should be one track, not two tracks.
non_test
fix incorrect tracks for speakers could “probability and stochastic processes” be one track not two tracks likewise “combinatorics and graph theory” should be one track not two tracks
0
297,162
25,604,398,524
IssuesEvent
2022-12-01 23:39:55
balena-os/meta-balena
https://api.github.com/repos/balena-os/meta-balena
closed
Supervisor unreachable after data reset test (generic-aarch64)
os/tests
https://jenkins.product-os.io/job/leviathan-v2-template/8715/consoleFull ``` jenkins-leviathan-v2-template-8715-worker-1 | dnsmasq-dhcp: DHCPREQUEST(brrnm9asba) 10.10.10.76 52:54:00:12:34:56 jenkins-leviathan-v2-template-8715-worker-1 | dnsmasq-dhcp: DHCPACK(brrnm9asba) 10.10.10.76 52:54:00:12:34:56 7775148 jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:06:06.011Z][worker-os] DUT has rebooted & is back online jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.025Z][worker-os] # Writing test file to data partition... jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.025Z][worker-os] # Clearing reset flag from data partition... jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.025Z][worker-os] # Waiting for engine service to be active... jenkins-leviathan-v2-template-8715-client-1 | # Waiting for supervisor service to be active... jenkins-leviathan-v2-template-8715-client-1 | # Waiting for DUT supervisor to be reachable on port 48484... jenkins-leviathan-v2-template-8715-client-1 | # Waiting for DUT supervisor to be reachable on port 48484... ... 
jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.035Z][worker-os] not ok 2 - Condition async () => { test.comment(`Waiting for DUT supervisor to be reachable on port 48484...`) return ( (await request({ method: 'GET', uri: `[http://${ip}:48484/ping`](http://%24%7Bip%7D:48484/ping%60), })) === 'OK' ); } timed out jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.035Z][worker-os] --- jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.035Z][worker-os] stack: | jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.035Z][worker-os] } timed out jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.036Z][worker-os] _waitUntil (lib/common/utils.js:103:11) jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.036Z][worker-os] _waitUntil (lib/common/utils.js:118:11) jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.036Z][worker-os] Object.waitUntil (lib/common/utils.js:121:3) jenkins-leviathan-v2-template-8715-client-1 | waitUntilSupervisorActive (/data/suite/tests/purge-data/index.js:21:2) jenkins-leviathan-v2-template-8715-client-1 | Proxy.run (/data/suite/tests/purge-data/index.js:168:5) jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.036Z][worker-os] test: data partition reset jenkins-leviathan-v2-template-8715-client-1 | ... jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.045Z][worker-os] jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.045Z][worker-os] Bail out! Condition async () => { jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.050Z][worker-os] Bail out! Condition async () => { jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.060Z][worker-os] Test suite completed. Tearing down now. jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.060Z][worker-os] Retrieving journalctl logs to the file /tmp/journalctl-8r8xs2ye.log ... 
jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:37.320Z][worker-os] Delete SSH key with label: zmdyczw9 jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:38.646Z][worker-os] Worker teardown ``` [consoleText.txt](https://github.com/balena-os/meta-balena/files/10123816/consoleText.txt)
1.0
Supervisor unreachable after data reset test (generic-aarch64) - https://jenkins.product-os.io/job/leviathan-v2-template/8715/consoleFull ``` jenkins-leviathan-v2-template-8715-worker-1 | dnsmasq-dhcp: DHCPREQUEST(brrnm9asba) 10.10.10.76 52:54:00:12:34:56 jenkins-leviathan-v2-template-8715-worker-1 | dnsmasq-dhcp: DHCPACK(brrnm9asba) 10.10.10.76 52:54:00:12:34:56 7775148 jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:06:06.011Z][worker-os] DUT has rebooted & is back online jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.025Z][worker-os] # Writing test file to data partition... jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.025Z][worker-os] # Clearing reset flag from data partition... jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.025Z][worker-os] # Waiting for engine service to be active... jenkins-leviathan-v2-template-8715-client-1 | # Waiting for supervisor service to be active... jenkins-leviathan-v2-template-8715-client-1 | # Waiting for DUT supervisor to be reachable on port 48484... jenkins-leviathan-v2-template-8715-client-1 | # Waiting for DUT supervisor to be reachable on port 48484... ... 
jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.035Z][worker-os] not ok 2 - Condition async () => { test.comment(`Waiting for DUT supervisor to be reachable on port 48484...`) return ( (await request({ method: 'GET', uri: `[http://${ip}:48484/ping`](http://%24%7Bip%7D:48484/ping%60), })) === 'OK' ); } timed out jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.035Z][worker-os] --- jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.035Z][worker-os] stack: | jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.035Z][worker-os] } timed out jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.036Z][worker-os] _waitUntil (lib/common/utils.js:103:11) jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.036Z][worker-os] _waitUntil (lib/common/utils.js:118:11) jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.036Z][worker-os] Object.waitUntil (lib/common/utils.js:121:3) jenkins-leviathan-v2-template-8715-client-1 | waitUntilSupervisorActive (/data/suite/tests/purge-data/index.js:21:2) jenkins-leviathan-v2-template-8715-client-1 | Proxy.run (/data/suite/tests/purge-data/index.js:168:5) jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.036Z][worker-os] test: data partition reset jenkins-leviathan-v2-template-8715-client-1 | ... jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.045Z][worker-os] jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.045Z][worker-os] Bail out! Condition async () => { jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.050Z][worker-os] Bail out! Condition async () => { jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.060Z][worker-os] Test suite completed. Tearing down now. jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:34.060Z][worker-os] Retrieving journalctl logs to the file /tmp/journalctl-8r8xs2ye.log ... 
jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:37.320Z][worker-os] Delete SSH key with label: zmdyczw9 jenkins-leviathan-v2-template-8715-client-1 | [2022-11-29T23:08:38.646Z][worker-os] Worker teardown ``` [consoleText.txt](https://github.com/balena-os/meta-balena/files/10123816/consoleText.txt)
test
supervisor unreachable after data reset test generic jenkins leviathan template worker dnsmasq dhcp dhcprequest jenkins leviathan template worker dnsmasq dhcp dhcpack jenkins leviathan template client dut has rebooted is back online jenkins leviathan template client writing test file to data partition jenkins leviathan template client clearing reset flag from data partition jenkins leviathan template client waiting for engine service to be active jenkins leviathan template client waiting for supervisor service to be active jenkins leviathan template client waiting for dut supervisor to be reachable on port jenkins leviathan template client waiting for dut supervisor to be reachable on port jenkins leviathan template client not ok condition async test comment waiting for dut supervisor to be reachable on port return await request method get uri ok timed out jenkins leviathan template client jenkins leviathan template client stack jenkins leviathan template client timed out jenkins leviathan template client waituntil lib common utils js jenkins leviathan template client waituntil lib common utils js jenkins leviathan template client object waituntil lib common utils js jenkins leviathan template client waituntilsupervisoractive data suite tests purge data index js jenkins leviathan template client proxy run data suite tests purge data index js jenkins leviathan template client test data partition reset jenkins leviathan template client jenkins leviathan template client jenkins leviathan template client bail out condition async jenkins leviathan template client bail out condition async jenkins leviathan template client test suite completed tearing down now jenkins leviathan template client retrieving journalctl logs to the file tmp journalctl log jenkins leviathan template client delete ssh key with label jenkins leviathan template client worker teardown
1
129,865
10,589,417,138
IssuesEvent
2019-10-09 06:05:04
elastic/cloud-on-k8s
https://api.github.com/repos/elastic/cloud-on-k8s
closed
TestMutationLessNodes is flaky
>flaky_test >test v1.0.0-beta1
``` 12:24:01 --- FAIL: TestMutationLessNodes/ES_cluster_health_should_eventually_be_green#01 (300.00s) 12:24:01 require.go:794: 12:24:01 Error Trace: utils.go:80 12:24:01 Error: Received unexpected error: 12:24:01 health is red 12:24:01 Test: TestMutationLessNodes/ES_cluster_health_should_eventually_be_green#01 ``` I'm not sure what's going on here. This test is supposed to test a downscale from 3 to 1 master. I think it happened once on my workstation: ``` data-integrity-check 1 p STARTED 4 3.5kb 10.233.65.17 test-mutation-less-nodes-6snf-es-masterdata-0 data-integrity-check 2 p UNASSIGNED data-integrity-check 0 p STARTED 0 283b 10.233.65.17 test-mutation-less-nodes-6snf-es-masterdata-0 ``` ``` curl -k -u elastic:$PASSWORD https://localhost:9200/_cluster/allocation/explain\?pretty { "index" : "data-integrity-check", "shard" : 2, "primary" : true, "current_state" : "unassigned", "unassigned_info" : { "reason" : "NODE_LEFT", "at" : "2019-09-18T08:59:21.568Z", "details" : "node_left [DUsKggcYQ9Gy-gyrfkIflA]", "last_allocation_status" : "no_valid_shard_copy" }, "can_allocate" : "no_valid_shard_copy", "allocate_explanation" : "cannot allocate because all found copies of the shard are either stale or corrupt", "node_allocation_decisions" : [ { "node_id" : "EzWbSghXTvObgJtlPRXu_Q", "node_name" : "test-mutation-less-nodes-6snf-es-masterdata-0", "transport_address" : "10.233.65.17:9300", "node_attributes" : { "ml.machine_memory" : "2147483648", "xpack.installed" : "true", "ml.max_open_jobs" : "20" }, "node_decision" : "no", "store" : { "in_sync" : false, "allocation_id" : "kKGUEcpNTGucBURVH5gKYQ" } } ] } ``` I have run this test a lot of time locally but I never managed to reproduce it more than one time.
2.0
TestMutationLessNodes is flaky - ``` 12:24:01 --- FAIL: TestMutationLessNodes/ES_cluster_health_should_eventually_be_green#01 (300.00s) 12:24:01 require.go:794: 12:24:01 Error Trace: utils.go:80 12:24:01 Error: Received unexpected error: 12:24:01 health is red 12:24:01 Test: TestMutationLessNodes/ES_cluster_health_should_eventually_be_green#01 ``` I'm not sure what's going on here. This test is supposed to test a downscale from 3 to 1 master. I think it happened once on my workstation: ``` data-integrity-check 1 p STARTED 4 3.5kb 10.233.65.17 test-mutation-less-nodes-6snf-es-masterdata-0 data-integrity-check 2 p UNASSIGNED data-integrity-check 0 p STARTED 0 283b 10.233.65.17 test-mutation-less-nodes-6snf-es-masterdata-0 ``` ``` curl -k -u elastic:$PASSWORD https://localhost:9200/_cluster/allocation/explain\?pretty { "index" : "data-integrity-check", "shard" : 2, "primary" : true, "current_state" : "unassigned", "unassigned_info" : { "reason" : "NODE_LEFT", "at" : "2019-09-18T08:59:21.568Z", "details" : "node_left [DUsKggcYQ9Gy-gyrfkIflA]", "last_allocation_status" : "no_valid_shard_copy" }, "can_allocate" : "no_valid_shard_copy", "allocate_explanation" : "cannot allocate because all found copies of the shard are either stale or corrupt", "node_allocation_decisions" : [ { "node_id" : "EzWbSghXTvObgJtlPRXu_Q", "node_name" : "test-mutation-less-nodes-6snf-es-masterdata-0", "transport_address" : "10.233.65.17:9300", "node_attributes" : { "ml.machine_memory" : "2147483648", "xpack.installed" : "true", "ml.max_open_jobs" : "20" }, "node_decision" : "no", "store" : { "in_sync" : false, "allocation_id" : "kKGUEcpNTGucBURVH5gKYQ" } } ] } ``` I have run this test a lot of time locally but I never managed to reproduce it more than one time.
test
testmutationlessnodes is flaky fail testmutationlessnodes es cluster health should eventually be green require go error trace utils go error received unexpected error health is red test testmutationlessnodes es cluster health should eventually be green i m not sure what s going on here this test is supposed to test a downscale from to master i think it happened once on my workstation data integrity check p started test mutation less nodes es masterdata data integrity check p unassigned data integrity check p started test mutation less nodes es masterdata curl k u elastic password index data integrity check shard primary true current state unassigned unassigned info reason node left at details node left last allocation status no valid shard copy can allocate no valid shard copy allocate explanation cannot allocate because all found copies of the shard are either stale or corrupt node allocation decisions node id ezwbsghxtvobgjtlprxu q node name test mutation less nodes es masterdata transport address node attributes ml machine memory xpack installed true ml max open jobs node decision no store in sync false allocation id i have run this test a lot of time locally but i never managed to reproduce it more than one time
1
27,002
12,504,865,978
IssuesEvent
2020-06-02 09:43:21
Financial-Times/origami-screencap-service
https://api.github.com/repos/Financial-Times/origami-screencap-service
closed
Biz-ops entry
maintenance service
Currently this only has an entry for the repository which was created a an automated importer --> https://biz-ops.in.ft.com/Repository/github%3AFinancial-Times%2Forigami-screencap-service We should add an entry which is for the system and contains the url `origami-screencap.ft.com` so that it is easily searchable via biz-ops --> https://biz-ops.in.ft.com/search?q=origami-screencap.ft.com
1.0
Biz-ops entry - Currently this only has an entry for the repository which was created a an automated importer --> https://biz-ops.in.ft.com/Repository/github%3AFinancial-Times%2Forigami-screencap-service We should add an entry which is for the system and contains the url `origami-screencap.ft.com` so that it is easily searchable via biz-ops --> https://biz-ops.in.ft.com/search?q=origami-screencap.ft.com
non_test
biz ops entry currently this only has an entry for the repository which was created a an automated importer we should add an entry which is for the system and contains the url origami screencap ft com so that it is easily searchable via biz ops
0
233,760
19,044,053,230
IssuesEvent
2021-11-25 04:11:28
Azuto-S/Calidad-Instagram
https://api.github.com/repos/Azuto-S/Calidad-Instagram
closed
[Publicación de un Post] Publicación de una fotografía satisfactorio - Pass
bug TestQuality
#### Precondition Haber iniciado sesion #### Steps to Reproduce: | Step | Action | Expected | Status | | -------- | -------- | -------- | -------- | | 1| Elegir la fotografía| <img src="https://bitmodern-testquality-server-storage.s3.us-west-2.amazonaws.com/attachment_Test_27056.png" alt="" />| Pass | | 2| Editar la fotografía| <img src="https://bitmodern-testquality-server-storage.s3.us-west-2.amazonaws.com/attachment_Test_27062.png" alt="" />| Pass | | 3| Dar en publicar| <img src="https://bitmodern-testquality-server-storage.s3.us-west-2.amazonaws.com/attachment_Test_27065.png" alt="" />| Pass |
1.0
[Publicación de un Post] Publicación de una fotografía satisfactorio - Pass - #### Precondition Haber iniciado sesion #### Steps to Reproduce: | Step | Action | Expected | Status | | -------- | -------- | -------- | -------- | | 1| Elegir la fotografía| <img src="https://bitmodern-testquality-server-storage.s3.us-west-2.amazonaws.com/attachment_Test_27056.png" alt="" />| Pass | | 2| Editar la fotografía| <img src="https://bitmodern-testquality-server-storage.s3.us-west-2.amazonaws.com/attachment_Test_27062.png" alt="" />| Pass | | 3| Dar en publicar| <img src="https://bitmodern-testquality-server-storage.s3.us-west-2.amazonaws.com/attachment_Test_27065.png" alt="" />| Pass |
test
publicación de una fotografía satisfactorio pass precondition haber iniciado sesion steps to reproduce step action expected status elegir la fotografía pass editar la fotografía pass dar en publicar pass
1
589,588
17,754,014,642
IssuesEvent
2021-08-28 11:31:40
woowa-techcamp-2021/store-6
https://api.github.com/repos/woowa-techcamp-2021/store-6
closed
새로고침(?) 아이콘과 실제 동작이 일치하지 않는 듯한 경험
bug low priority improve
## ⚠️ 버그 설명 새로고침 아이콘이 실제로 새로고침이 아닌 해당 카테고리의 디폴트 카테고리?가 되는거 같아서 아이콘의 형태와 기대 동작이 일치하지 않는 거 같다 ![스크린샷 2021-08-27 오후 2 23 51](https://user-images.githubusercontent.com/41738385/131076182-35108e46-43d7-4a77-ab1a-2f2612d03454.png) 해당 아이콘은 **현재 상태에서 새로고침**한다는 느낌을 주는데 막상 누르면 해당 카테고리/필터의 기본값으로 이동하는 결과가 나오는 거 같다. 일치하지 않는 느낌 ## 📑 완료 조건 - [x] 새로 고침 아이콘을 지운다. - [x] ALL 같은 아이콘을 추가하여 전체 카테고리를 선택할 수 있게 한다. ## :thought_balloon: 관련 Backlog > [대분류] - [중분류] - [Backlog 이름]
1.0
새로고침(?) 아이콘과 실제 동작이 일치하지 않는 듯한 경험 - ## ⚠️ 버그 설명 새로고침 아이콘이 실제로 새로고침이 아닌 해당 카테고리의 디폴트 카테고리?가 되는거 같아서 아이콘의 형태와 기대 동작이 일치하지 않는 거 같다 ![스크린샷 2021-08-27 오후 2 23 51](https://user-images.githubusercontent.com/41738385/131076182-35108e46-43d7-4a77-ab1a-2f2612d03454.png) 해당 아이콘은 **현재 상태에서 새로고침**한다는 느낌을 주는데 막상 누르면 해당 카테고리/필터의 기본값으로 이동하는 결과가 나오는 거 같다. 일치하지 않는 느낌 ## 📑 완료 조건 - [x] 새로 고침 아이콘을 지운다. - [x] ALL 같은 아이콘을 추가하여 전체 카테고리를 선택할 수 있게 한다. ## :thought_balloon: 관련 Backlog > [대분류] - [중분류] - [Backlog 이름]
non_test
새로고침 아이콘과 실제 동작이 일치하지 않는 듯한 경험 ⚠️ 버그 설명 새로고침 아이콘이 실제로 새로고침이 아닌 해당 카테고리의 디폴트 카테고리 가 되는거 같아서 아이콘의 형태와 기대 동작이 일치하지 않는 거 같다 해당 아이콘은 현재 상태에서 새로고침 한다는 느낌을 주는데 막상 누르면 해당 카테고리 필터의 기본값으로 이동하는 결과가 나오는 거 같다 일치하지 않는 느낌 📑 완료 조건 새로 고침 아이콘을 지운다 all 같은 아이콘을 추가하여 전체 카테고리를 선택할 수 있게 한다 thought balloon 관련 backlog
0
99,061
8,688,891,611
IssuesEvent
2018-12-03 17:11:32
ingresse/websdk
https://api.github.com/repos/ingresse/websdk
closed
Add: 'companyId' to multi-cookies
feature tested to deploy
## Cookies by Company We need to avoid the cross application error about user token issue. Before the apps do the requests to Ingresse API, we check if the company cookies was storaged. ### Describing the thing Put the `companyId` in SDK Preferences and as cookie's prefix. - Prefix: **`ing_`** - Definitions: - `companyId` + '_userId'; - `companyId` + '_token'; - `companyId` + '_jwt'; - `ingresseApiPreferences.setCompanyId(:companyId)`
1.0
Add: 'companyId' to multi-cookies - ## Cookies by Company We need to avoid the cross application error about user token issue. Before the apps do the requests to Ingresse API, we check if the company cookies was storaged. ### Describing the thing Put the `companyId` in SDK Preferences and as cookie's prefix. - Prefix: **`ing_`** - Definitions: - `companyId` + '_userId'; - `companyId` + '_token'; - `companyId` + '_jwt'; - `ingresseApiPreferences.setCompanyId(:companyId)`
test
add companyid to multi cookies cookies by company we need to avoid the cross application error about user token issue before the apps do the requests to ingresse api we check if the company cookies was storaged describing the thing put the companyid in sdk preferences and as cookie s prefix prefix ing definitions companyid userid companyid token companyid jwt ingresseapipreferences setcompanyid companyid
1
359,774
25,255,546,436
IssuesEvent
2022-11-15 17:44:57
angelolab/ark-analysis
https://api.github.com/repos/angelolab/ark-analysis
closed
Populations specified in post clustering notebook don't work with example dataset
documentation
**Describe the bug** The current populations specified for the post clustering notebook aren't specified in the `cell_meta_cluster` column of the example `cell_table`, which throws a `ValueError`. This happens in both plotting steps. **Expected behavior** Notebook should run seamlessly using the example dataset. **To Reproduce** Run notebook as is. <img width="1251" alt="Screen Shot 2022-11-10 at 11 45 20 AM" src="https://user-images.githubusercontent.com/31424707/201191475-f26dc0cb-372c-4ff5-b198-92847528500c.png">
1.0
Populations specified in post clustering notebook don't work with example dataset - **Describe the bug** The current populations specified for the post clustering notebook aren't specified in the `cell_meta_cluster` column of the example `cell_table`, which throws a `ValueError`. This happens in both plotting steps. **Expected behavior** Notebook should run seamlessly using the example dataset. **To Reproduce** Run notebook as is. <img width="1251" alt="Screen Shot 2022-11-10 at 11 45 20 AM" src="https://user-images.githubusercontent.com/31424707/201191475-f26dc0cb-372c-4ff5-b198-92847528500c.png">
non_test
populations specified in post clustering notebook don t work with example dataset describe the bug the current populations specified for the post clustering notebook aren t specified in the cell meta cluster column of the example cell table which throws a valueerror this happens in both plotting steps expected behavior notebook should run seamlessly using the example dataset to reproduce run notebook as is img width alt screen shot at am src
0
61,167
8,491,926,486
IssuesEvent
2018-10-27 17:45:13
dipakkr/A-to-Z-Resources-for-Students
https://api.github.com/repos/dipakkr/A-to-Z-Resources-for-Students
opened
Fix : Fix the Format of Contents in CheatSheet.md and make the file look more interactive
beginner bug documentation easy pick enhancement first-time-contributor first-timers-only good first issue
Send your patch and reference this issue.
1.0
Fix : Fix the Format of Contents in CheatSheet.md and make the file look more interactive - Send your patch and reference this issue.
non_test
fix fix the format of contents in cheatsheet md and make the file look more interactive send your patch and reference this issue
0
2,723
3,828,051,513
IssuesEvent
2016-03-31 02:38:18
dotnet/corefx
https://api.github.com/repos/dotnet/corefx
closed
Win7 run failed with coreclr bind error
2 - In Progress blocking-clean-ci Infrastructure
``` 10:08:22 Installing dotnet cli... 10:08:26 Restoring BuildTools version 1.0.25-prerelease-00228-02... 10:08:26 Failed to load the dll from d:\j\workspace\outerloop_win---9c9e7d59\Tools\dotnetcli\shared\Microsoft.NETCore.App\1.0.0-rc2-23928\coreclr.dll, HRESULT: 0x8007007E 10:08:26 Failed to bind to coreclr 10:08:26 ERROR: Could not restore build tools correctly. See 'd:\j\workspace\outerloop_win---9c9e7d59\init-tools.log' for more details. ``` This may be from dotnetcli update? cc @dagood
1.0
Win7 run failed with coreclr bind error - ``` 10:08:22 Installing dotnet cli... 10:08:26 Restoring BuildTools version 1.0.25-prerelease-00228-02... 10:08:26 Failed to load the dll from d:\j\workspace\outerloop_win---9c9e7d59\Tools\dotnetcli\shared\Microsoft.NETCore.App\1.0.0-rc2-23928\coreclr.dll, HRESULT: 0x8007007E 10:08:26 Failed to bind to coreclr 10:08:26 ERROR: Could not restore build tools correctly. See 'd:\j\workspace\outerloop_win---9c9e7d59\init-tools.log' for more details. ``` This may be from dotnetcli update? cc @dagood
non_test
run failed with coreclr bind error installing dotnet cli restoring buildtools version prerelease failed to load the dll from d j workspace outerloop win tools dotnetcli shared microsoft netcore app coreclr dll hresult failed to bind to coreclr error could not restore build tools correctly see d j workspace outerloop win init tools log for more details this may be from dotnetcli update cc dagood
0
78,609
15,036,403,178
IssuesEvent
2021-02-02 15:12:50
GeoNode/geonode
https://api.github.com/repos/GeoNode/geonode
opened
GNIP 81 - GeoNode Core Cleanup
code quality gnip
# GNIP 81 - GeoNode Core Cleanup ## Overview The following activities are considered at the moment: - removal of QGIS support - removal of GeoNetwork support - move of Django views / templates to a separate and pluggable module - ... This GNIP will be handled as an **epic** along with atteched issues for each specific topic. Each topic will be discussed separately in the following ways: 1. Analysis and effort estimation first. 2. Discussion and proposals. 3. Implementation with a single dedicated PR. ### Proposed By @afabiani @giohappy ### Assigned to Release This proposal is for GeoNode 3.2 ### State * [x] Under Discussion * [x] In Progress * [ ] Completed * [ ] Rejected * [ ] Deferred ### Motivation 1. Get rid of old, unused stuff 2. Make Core more modular and less monolitic 3. Envisage (where possible) to plug not-strictly needed stuff 4. Split GeoNode in different modules; e.g. someone might want to take only the geospatial engine w/ APIs, without the Django template views. ## Proposal Technical details for developers. ### Backwards Compatibility No backward compatible with old versions. The "model" won't change though (except for removed/deleted ones, like QGis support). ## Future evolution Transform GeoNode into a set of pluggable lightweight modules, allowing people to plugin their own stuff if needed and get rid of a huge amount of code that they will never use. ## Feedback Update this section with relevant feedbacks, if any. ## Voting Project Steering Committee: * Alessio Fabiani: * Francesco Bartoli: * Giovanni Allegri: * Simone Dalmasso: * Toni Schoenbuchner: * Florian Hoedt: ## Links Remove unused links below. * [Email Discussion]() * [Pull Request]() * [Mail Discussion]() * [Linked Issue]()
1.0
GNIP 81 - GeoNode Core Cleanup - # GNIP 81 - GeoNode Core Cleanup ## Overview The following activities are considered at the moment: - removal of QGIS support - removal of GeoNetwork support - move of Django views / templates to a separate and pluggable module - ... This GNIP will be handled as an **epic** along with atteched issues for each specific topic. Each topic will be discussed separately in the following ways: 1. Analysis and effort estimation first. 2. Discussion and proposals. 3. Implementation with a single dedicated PR. ### Proposed By @afabiani @giohappy ### Assigned to Release This proposal is for GeoNode 3.2 ### State * [x] Under Discussion * [x] In Progress * [ ] Completed * [ ] Rejected * [ ] Deferred ### Motivation 1. Get rid of old, unused stuff 2. Make Core more modular and less monolitic 3. Envisage (where possible) to plug not-strictly needed stuff 4. Split GeoNode in different modules; e.g. someone might want to take only the geospatial engine w/ APIs, without the Django template views. ## Proposal Technical details for developers. ### Backwards Compatibility No backward compatible with old versions. The "model" won't change though (except for removed/deleted ones, like QGis support). ## Future evolution Transform GeoNode into a set of pluggable lightweight modules, allowing people to plugin their own stuff if needed and get rid of a huge amount of code that they will never use. ## Feedback Update this section with relevant feedbacks, if any. ## Voting Project Steering Committee: * Alessio Fabiani: * Francesco Bartoli: * Giovanni Allegri: * Simone Dalmasso: * Toni Schoenbuchner: * Florian Hoedt: ## Links Remove unused links below. * [Email Discussion]() * [Pull Request]() * [Mail Discussion]() * [Linked Issue]()
non_test
gnip geonode core cleanup gnip geonode core cleanup overview the following activities are considered at the moment removal of qgis support removal of geonetwork support move of django views templates to a separate and pluggable module this gnip will be handled as an epic along with atteched issues for each specific topic each topic will be discussed separately in the following ways analysis and effort estimation first discussion and proposals implementation with a single dedicated pr proposed by afabiani giohappy assigned to release this proposal is for geonode state under discussion in progress completed rejected deferred motivation get rid of old unused stuff make core more modular and less monolitic envisage where possible to plug not strictly needed stuff split geonode in different modules e g someone might want to take only the geospatial engine w apis without the django template views proposal technical details for developers backwards compatibility no backward compatible with old versions the model won t change though except for removed deleted ones like qgis support future evolution transform geonode into a set of pluggable lightweight modules allowing people to plugin their own stuff if needed and get rid of a huge amount of code that they will never use feedback update this section with relevant feedbacks if any voting project steering committee alessio fabiani francesco bartoli giovanni allegri simone dalmasso toni schoenbuchner florian hoedt links remove unused links below
0
37,768
12,489,944,532
IssuesEvent
2020-05-31 21:18:30
the-benchmarker/web-frameworks
https://api.github.com/repos/the-benchmarker/web-frameworks
closed
CVE-2019-3888 (High) detected in undertow-core-1.4.28.Final.jar
security vulnerability
## CVE-2019-3888 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>undertow-core-1.4.28.Final.jar</b></p></summary> <p>Undertow</p> <p>Path to dependency file: /tmp/ws-scm/web-frameworks/java/act/pom.xml</p> <p>Path to vulnerable library: /root/.m2/repository/io/undertow/undertow-core/1.4.28.Final/undertow-core-1.4.28.Final.jar</p> <p> Dependency Hierarchy: - act-1.8.32.jar (Root Library) - :x: **undertow-core-1.4.28.Final.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/the-benchmarker/web-frameworks/commit/ab64b8404e01abede0aa4aa810306b3705409b30">ab64b8404e01abede0aa4aa810306b3705409b30</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A vulnerability was found in Undertow web server before 2.0.21. An information exposure of plain text credentials through log files because Connectors.executeRootHandler:402 logs the HttpServerExchange object at ERROR level using UndertowLogger.REQUEST_LOGGER.undertowRequestFailed(t, exchange) <p>Publish Date: 2019-06-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-3888>CVE-2019-3888</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2019-3888">https://nvd.nist.gov/vuln/detail/CVE-2019-3888</a></p> <p>Release Date: 2019-06-12</p> <p>Fix Resolution: 2.0.21</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2019-3888 (High) detected in undertow-core-1.4.28.Final.jar - ## CVE-2019-3888 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>undertow-core-1.4.28.Final.jar</b></p></summary> <p>Undertow</p> <p>Path to dependency file: /tmp/ws-scm/web-frameworks/java/act/pom.xml</p> <p>Path to vulnerable library: /root/.m2/repository/io/undertow/undertow-core/1.4.28.Final/undertow-core-1.4.28.Final.jar</p> <p> Dependency Hierarchy: - act-1.8.32.jar (Root Library) - :x: **undertow-core-1.4.28.Final.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/the-benchmarker/web-frameworks/commit/ab64b8404e01abede0aa4aa810306b3705409b30">ab64b8404e01abede0aa4aa810306b3705409b30</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A vulnerability was found in Undertow web server before 2.0.21. 
An information exposure of plain text credentials through log files because Connectors.executeRootHandler:402 logs the HttpServerExchange object at ERROR level using UndertowLogger.REQUEST_LOGGER.undertowRequestFailed(t, exchange) <p>Publish Date: 2019-06-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-3888>CVE-2019-3888</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2019-3888">https://nvd.nist.gov/vuln/detail/CVE-2019-3888</a></p> <p>Release Date: 2019-06-12</p> <p>Fix Resolution: 2.0.21</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve high detected in undertow core final jar cve high severity vulnerability vulnerable library undertow core final jar undertow path to dependency file tmp ws scm web frameworks java act pom xml path to vulnerable library root repository io undertow undertow core final undertow core final jar dependency hierarchy act jar root library x undertow core final jar vulnerable library found in head commit a href vulnerability details a vulnerability was found in undertow web server before an information exposure of plain text credentials through log files because connectors executeroothandler logs the httpserverexchange object at error level using undertowlogger request logger undertowrequestfailed t exchange publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
23,206
3,776,266,608
IssuesEvent
2016-03-17 16:08:24
obophenotype/upheno
https://api.github.com/repos/obophenotype/upheno
closed
HP logical defs that use molecular entities from FMA
Priority-Medium Status-Accepted Type-Defect
Originally reported on Google Code with ID 30 ``` We should not be using 'hemoglobin' etc from FMA ``` Reported by `cmungall` on 2014-05-26 23:13:28
1.0
HP logical defs that use molecular entities from FMA - Originally reported on Google Code with ID 30 ``` We should not be using 'hemoglobin' etc from FMA ``` Reported by `cmungall` on 2014-05-26 23:13:28
non_test
hp logical defs that use molecular entities from fma originally reported on google code with id we should not be using hemoglobin etc from fma reported by cmungall on
0
265,082
23,146,434,261
IssuesEvent
2022-07-29 01:42:28
MPMG-DCC-UFMG/F01
https://api.github.com/repos/MPMG-DCC-UFMG/F01
closed
Teste de generalizacao para a tag Servidores - Proventos de pensão - Catuti
generalization test development template-Síntese tecnologia informatica subtag-Proventos de Pensão tag-Servidores
DoD: Realizar o teste de Generalização do validador da tag Servidores - Proventos de pensão para o Município de Catuti.
1.0
Teste de generalizacao para a tag Servidores - Proventos de pensão - Catuti - DoD: Realizar o teste de Generalização do validador da tag Servidores - Proventos de pensão para o Município de Catuti.
test
teste de generalizacao para a tag servidores proventos de pensão catuti dod realizar o teste de generalização do validador da tag servidores proventos de pensão para o município de catuti
1
13,098
8,792,896,573
IssuesEvent
2018-12-21 17:45:05
jowein/forever
https://api.github.com/repos/jowein/forever
opened
CVE-2015-8472 High Severity Vulnerability detected by WhiteSource
security vulnerability
## CVE-2015-8472 - High Severity Vulnerability <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>libpngv1.2.2</b></p></summary> <p> <p>LIBPNG: Portable Network Graphics support, official libpng repository</p> <p>Library home page: <a href=https://github.com/hunter-packages/libpng.git>https://github.com/hunter-packages/libpng.git</a></p> </p> </details> </p></p> <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Library Source Files (4)</summary> <p></p> <p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p> <p> - /forever/pngset.c - /forever/pngrio.c - /forever/pngrtran.c - /forever/pngrutil.c </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Buffer overflow in the png_set_PLTE function in libpng before 1.0.65, 1.1.x and 1.2.x before 1.2.55, 1.3.x, 1.4.x before 1.4.18, 1.5.x before 1.5.25, and 1.6.x before 1.6.20 allows remote attackers to cause a denial of service (application crash) or possibly have unspecified other impact via a small bit-depth value in an IHDR (aka image header) chunk in a PNG image. NOTE: this vulnerability exists because of an incomplete fix for CVE-2015-8126. 
<p>Publish Date: 2016-01-21 <p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8472>CVE-2015-8472</a></p> </p> </details> <p></p> <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2015-8472 High Severity Vulnerability detected by WhiteSource - ## CVE-2015-8472 - High Severity Vulnerability <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>libpngv1.2.2</b></p></summary> <p> <p>LIBPNG: Portable Network Graphics support, official libpng repository</p> <p>Library home page: <a href=https://github.com/hunter-packages/libpng.git>https://github.com/hunter-packages/libpng.git</a></p> </p> </details> </p></p> <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Library Source Files (4)</summary> <p></p> <p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p> <p> - /forever/pngset.c - /forever/pngrio.c - /forever/pngrtran.c - /forever/pngrutil.c </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Buffer overflow in the png_set_PLTE function in libpng before 1.0.65, 1.1.x and 1.2.x before 1.2.55, 1.3.x, 1.4.x before 1.4.18, 1.5.x before 1.5.25, and 1.6.x before 1.6.20 allows remote attackers to cause a denial of service (application crash) or possibly have unspecified other impact via a small bit-depth value in an IHDR (aka image header) chunk in a PNG image. NOTE: this vulnerability exists because of an incomplete fix for CVE-2015-8126. 
<p>Publish Date: 2016-01-21 <p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8472>CVE-2015-8472</a></p> </p> </details> <p></p> <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve high severity vulnerability detected by whitesource cve high severity vulnerability vulnerable library libpng portable network graphics support official libpng repository library home page a href library source files the source files were matched to this source library based on a best effort match source libraries are selected from a list of probable public libraries forever pngset c forever pngrio c forever pngrtran c forever pngrutil c vulnerability details buffer overflow in the png set plte function in libpng before x and x before x x before x before and x before allows remote attackers to cause a denial of service application crash or possibly have unspecified other impact via a small bit depth value in an ihdr aka image header chunk in a png image note this vulnerability exists because of an incomplete fix for cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href step up your open source security game with whitesource
0
46,990
10,014,619,937
IssuesEvent
2019-07-15 17:59:17
joomla/joomla-cms
https://api.github.com/repos/joomla/joomla-cms
closed
[4.0] Draggable.js to ES6
J4 Issue No Code Attached Yet
Convert the draggable script to ES6. Already done here if anyone wants to open a new PR: https://github.com/joomla/joomla-cms/pull/25227/files#diff-5de13dc8b49ef8469c0f1c2f25cbbbba
1.0
[4.0] Draggable.js to ES6 - Convert the draggable script to ES6. Already done here if anyone wants to open a new PR: https://github.com/joomla/joomla-cms/pull/25227/files#diff-5de13dc8b49ef8469c0f1c2f25cbbbba
non_test
draggable js to convert the draggable script to already done here if anyone wants to open a new pr
0
691,860
23,714,270,382
IssuesEvent
2022-08-30 10:23:41
ZPTXDev/Quaver
https://api.github.com/repos/ZPTXDev/Quaver
closed
Clicking cancel button from search command doesn't clear the timeout and the search state
type:bug priority:p0 status:confirmed branch:next
**Describe the bug** What isn't working as intended, and what does it affect? https://github.com/ZPTXDev/Quaver/blob/next/src/components/buttons/clear.js **Severity** - [x] Critical - [ ] High - [ ] Medium - [ ] Low **Affected branches** - [ ] Stable - [x] Next **Steps to reproduce** Steps to reproduce the behavior. (e.g. click on a button, enter a value, etc. and see error) 1. Search any track with /search 2. Click cancel button **Expected behavior** What is expected to happen? Clear and delete the searchState **Actual behavior** What actually happens? Attach or add errors or screenshots here as well. Stays there until the timeout expires.
1.0
Clicking cancel button from search command doesn't clear the timeout and the search state - **Describe the bug** What isn't working as intended, and what does it affect? https://github.com/ZPTXDev/Quaver/blob/next/src/components/buttons/clear.js **Severity** - [x] Critical - [ ] High - [ ] Medium - [ ] Low **Affected branches** - [ ] Stable - [x] Next **Steps to reproduce** Steps to reproduce the behavior. (e.g. click on a button, enter a value, etc. and see error) 1. Search any track with /search 2. Click cancel button **Expected behavior** What is expected to happen? Clear and delete the searchState **Actual behavior** What actually happens? Attach or add errors or screenshots here as well. Stays there until the timeout expires.
non_test
clicking cancel button from search command doesn t clear the timeout and the search state describe the bug what isn t working as intended and what does it affect severity critical high medium low affected branches stable next steps to reproduce steps to reproduce the behavior e g click on a button enter a value etc and see error search any track with search click cancel button expected behavior what is expected to happen clear and delete the searchstate actual behavior what actually happens attach or add errors or screenshots here as well stays there until the timeout expires
0