added (string, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | created (timestamp[us], 2001-10-09 16:19:16 to 2025-01-01 03:51:31) | id (string, length 4 to 10) | metadata (dict) | source (string, 2 classes) | text (string, length 0 to 1.61M)
---|---|---|---|---|---
2025-04-01T04:10:41.706450
| 2017-09-07T17:58:13 |
256021236
|
{
"authors": [
"Freezon",
"MrSaints"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15211",
"repo": "MrSaints/Morphext",
"url": "https://github.com/MrSaints/Morphext/issues/31"
}
|
gharchive/issue
|
Make this plugin more compressed and secure
(function($,c,l,d){"use strict";d={animation:"fadeInDown",separator:",",speed:2e3,complete:$.noop};function a(b,f){var e=$(b).addClass(c),se=$.extend({},d,f),t=this,ph=[],ival,i;t.animate=function(){i=++i%ph.length;e.html('<span class="animated '+se.animation+'">'+ph[i]+"</span>");$.isFunction(se.complete)&&se.complete.call(t)};t.start=function(){ival=setInterval(t.animate,se.speed)};t.stop=function(){clearInterval(ival)};$.each(e.html().split(se.separator),function(k,v){ph.push($.trim(v))});i=-1;t.animate();t.start()}$.fn[c]=function(d){return this.each(function(){$.data(this,l+c)||$.data(this,l+c,new a(this,d))})}})(jQuery,"morphext","plugin_");
Can you elaborate @Freezon ?
I just rebuilt the minimal version with the layout and removed _init and _interval from the global scope and also removed the text variables for the wrapper.
Did you do it programmatically? If so, I wouldn't mind taking in a PR to automate the process. Otherwise, I'm not sure if it is worth saving a few extra bytes.
Closing due to inactivity.
|
2025-04-01T04:10:41.873415
| 2022-03-03T19:34:36 |
1158820785
|
{
"authors": [
"Mr-Technician",
"henon"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15212",
"repo": "MudBlazor/MudBlazor",
"url": "https://github.com/MudBlazor/MudBlazor/issues/4101"
}
|
gharchive/issue
|
TextField: Masks cause JS errors when using InputType.Email
Bug type
Component
Component name
MudTextField
What happened?
When using a mask and InputType.Email, a JS error is thrown in the console:
Expected behavior
No error should occur.
There probably needs to be some sort of check to prevent this from happening.
Reproduction link
https://try.mudblazor.com/snippet/wEcwYxOxLudGkQtq
Reproduction steps
Open the try.mudblazor link and click in the first input.
Relevant log output
No response
Version (bug)
6.0.7
Version (working)
No response
What browsers are you seeing the problem on?
Chrome
On what operating system are you experiencing the issue?
Windows
Pull Request
[ ] I would like to do a Pull Request
Code of Conduct
[X] I agree to follow this project's Code of Conduct
@henon I'm not sure we can work around this since it is a browser limitation.
Right. Then the only thing we can do is catch or avoid the exception
@henon Can users be warned that their masks won't work correctly?
Not sure how. Probably in the Documentation
But yeah sure, we can also console.log a warning in the JS part
|
2025-04-01T04:10:41.879353
| 2023-03-25T17:16:10 |
1640590758
|
{
"authors": [
"KBM-Kevin-Kessler",
"Leo-pepe",
"Tyme-Bleyaert"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15213",
"repo": "MudBlazor/MudBlazor",
"url": "https://github.com/MudBlazor/MudBlazor/issues/6532"
}
|
gharchive/issue
|
Wrong Links on MudBlazor Icons Web Page
Bug type
Docs (mudblazor.com)
Component name
No response
What happened?
On the Icons page in the docs two links overlap with the icons.
Expected behavior
The two links should be at the end of the page.
Reproduction link
https://mudblazor.com/features/icons#icons
Reproduction steps
Load page
Scroll down to about the letter T
Relevant log output
No response
Version (bug)
6.2.0
Version (working)
No response
What browsers are you seeing the problem on?
Firefox
On what operating system are you experiencing the issue?
Windows
Pull Request
[ ] I would like to do a Pull Request
Code of Conduct
[X] I agree to follow this project's Code of Conduct
I had a look at it. It seems like the fixed height is the problem.
I created a Pull Request for that.
Duplicate of #5493
|
2025-04-01T04:10:41.886187
| 2023-04-25T15:19:55 |
1683383161
|
{
"authors": [
"ScarletKuro",
"nima-ghomri"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15214",
"repo": "MudBlazor/MudBlazor",
"url": "https://github.com/MudBlazor/MudBlazor/issues/6740"
}
|
gharchive/issue
|
MudDropContainer stops working when I try to hide the original element view while dragging.
Bug type
Component
Component name
MudDropContainer
What happened?
By default, when there is a bunch of elements in a container and we try to drag one of them, the original view of that element stays intact while the ghost of the dragged element moves with the mouse cursor. But for reasons specific to my container, I want the space of the dragged element to be freed when the dragging starts, so I'm trying to collapse the view of the original element to free up space for the preview of the element's destination position.
Setting the display of the element to none makes the MudDropContainer stop working, but only in WebAssembly; it works perfectly in server-side Blazor. Here's the link to the example, which is just the MudBlazor docs example with ItemDraggingClass="d-none" added to the container.
https://try.mudblazor.com/snippet/mOmROoGJzopLvjvN
I want to know what the problem is and how to fix it.
Expected behavior
It should hide the original element, and disappear when the dragging is finished.
Reproduction link
https://try.mudblazor.com/snippet/mOmROoGJzopLvjvN
Reproduction steps
Create a solution with 2 projects, one WASM and the other server.
Copy and paste this code example in the index folder.
The server side version works exactly as it should, but the wasm version element is not getting dragged at all.
Relevant log output
No response
Version (bug)
6.2.2
Version (working)
No response
What browsers are you seeing the problem on?
Microsoft Edge
On what operating system are you experiencing the issue?
Windows
Pull Request
[ ] I would like to do a Pull Request
Code of Conduct
[X] I agree to follow this project's Code of Conduct
Note: in case someone works on a fix - it works perfectly fine in Firefox.
It doesn't work on Chrome either.
|
2025-04-01T04:10:41.902213
| 2021-07-09T06:14:32 |
940453218
|
{
"authors": [
"henon",
"mikes-gh",
"umeshvenkat"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15215",
"repo": "MudBlazor/MudBlazor",
"url": "https://github.com/MudBlazor/MudBlazor/pull/2187"
}
|
gharchive/pull-request
|
Added Aria-Label attribute to various controls.
Added Aria-Label attribute to various controls. It's part of MudComponentBase and is available for all controls inheriting from this class.
Fixed the merge conflict. The aria labels are good of course. However we need to think of a solution for allowing their internationalization. Any ideas?
It's failing due to the presence of an extra attribute (aria-label), which is required for the accessibility feature. Can we update the bUnit test to accommodate this new feature?
Actual HTML:
Expected HTML:
Sure, that is what I meant.
Couldn't find the test below:
MudBlazor.UnitTests.Components.ElementTests.MudElement_Should_Not_Attach_A_Null_Event
ElementTests contains only one test, "ShouldRenderAnAnchorAndThenAButton"; I fixed that one.
Four tests broke due to the added aria labels. Can you execute the test suite and add the labels to the tests so they pass again?
Tests are passing after the fix. Thanks.
Thanks for the Merge.
Need to think how we can allow internationalization. Soon will update on this.
We have discussed this on the team and arrived at the conclusion that internationalization would be solved if there is an AriaLabel property for everything a component needs, like for the DatePicker an AriaLabelNextMonth, AriaLabelPreviousMonth, etc.
Also, we think adding AriaLabel to MudComponentBase is not the best way, because it would add the label also to those components which need multiple labels, not just one. We think it is better to add an AriaLabel property to Button, Checkbox, etc individually and to the bigger components add all their sub-labels as individual properties.
@KamilBugnoKrk you seem to be an expert in the field. Can you take a look at this PR please?
@henon thanks for the feedback. I too was in the same mindset to introduce multiple attributes to complex controls like Datepicker. I will do the suggested changes and commit it.
Elements on the view/page with tab-index attribute provide better navigation using the keyboard.
Is it OK to add it at each component level, or to add it to the MudComponentBase?
@henon & @KamilBugnoKrk please provide your input here.
Thanks @KamilBugnoKrk Fundamentally I think AriaLabel on ComponentBase is not going to work for the reasons @henon stated. Also I am concerned this PR removes existing aria-labels and replaces them with less specific labels reducing the accessibility of the existing code.
Also, let's stick to increasing accessibility using one PR per control. I think it keeps the PR focused.
Thanks @henon, @KamilBugnoKrk and @mikes-gh for the review.
Shall we discard these changes and start implementing the accessibility for each control (or related controls) in separate PRs?
How about this: add the AriaLabel property to all components that have only a single label in this PR and start new PRs for the more complicated ones like DatePicker etc. Is that alright for you @mikes-gh ?
There are other changes that need reverting, so this PR will get messy if continued.
I would like to get @KamilBugnoKrk's opinion on the best way to implement Aria labels in a multilingual way, particularly when an Aria label is the result of a function. Should we use delegates? All things to discuss before starting a new PR so the direction is clear and we don't waste anyone's time.
https://github.com/Garderoben/MudBlazor/discussions/2233
I added the team decision required label. We need to reach a consensus on how to do the internationalization and whether or not we want to have Parameters for the labels or a localization service or both.
@henon @umeshvenkat I think we should concentrate on setting the for attribute for our controls internally (see https://github.com/Garderoben/MudBlazor/issues/2337) so that screen readers pick this up. Then we look at aria labels for controls where this won't work.
How about doing for in a new PR? Let's just add the single AriaLabel components Button, Checkbox etc. in this PR @umeshvenkat . And not in ComponentBase please, just as an independent property in every component that has a single label, ok?
For the internationalization issue I'll collaborate with you in a new PR and provide the necessary service code for I18N and a docs example that shows how to use it with resx files.
@henon I will submit a new PR adding a single AriaLabel attribute in components like Button, Textbox, Checkbox etc. and not in ComponentBase. "Internationalization" and "For" can be handled in separate PRs. Is this fine?
sure, thanks!
See #4074, it is resolved now
|
2025-04-01T04:10:41.906396
| 2023-12-18T20:37:33 |
2047411657
|
{
"authors": [
"mikes-gh"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15216",
"repo": "MudBlazor/MudBlazor",
"url": "https://github.com/MudBlazor/MudBlazor/pull/7928"
}
|
gharchive/pull-request
|
Build: Fix NRE warnings in ThemeProvider
Description
Fix possible NRE by coalescing to an empty string array.
Still not convinced this is the best solution but this is the way we want to go to avoid any API breakages.
How Has This Been Tested?
Types of changes
[x] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
Checklist:
[x] The PR is submitted to the correct branch (dev).
[x] My code follows the code style of this project.
[ ] I've added relevant tests.
Thanks for looking @JonBunator
|
2025-04-01T04:10:41.913169
| 2022-09-01T09:09:20 |
1358511553
|
{
"authors": [
"Mugen87"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15217",
"repo": "Mugen87/three-m2loader",
"url": "https://github.com/Mugen87/three-m2loader/issues/3"
}
|
gharchive/issue
|
Looking for M2 asset with embedded skin data.
Since Wrath of the Lich King, skin profiles are stored in separate .skin files. However, earlier M2 assets stored the data in the M2 itself. It would be good to know the filename of such an asset for testing (https://wow.tools/) so the below code path can be implemented and verified:
https://github.com/Mugen87/three-m2loader/blob/e71c7760f5b19c828a2befc0eaf8781db888d91d/M2Loader.js#L160-L164
Fixed via 89f443dbc26f6a5d86e15d5a14f9d1568f45ac2d.
|
2025-04-01T04:10:41.915548
| 2023-05-01T13:36:19 |
1690817718
|
{
"authors": [
"LamaAgent",
"MuhammedKpln"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15218",
"repo": "MuhammedKpln/peerdart",
"url": "https://github.com/MuhammedKpln/peerdart/issues/22"
}
|
gharchive/issue
|
How to use with multiple users?
How can I use it with multiple users?
Instead of spamming, try it out. Like I said, as long as WebRTC allows it you're good to go.
|
2025-04-01T04:10:41.919434
| 2022-01-09T14:56:55 |
1097217522
|
{
"authors": [
"Mukosame",
"kitoriaaa"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15219",
"repo": "Mukosame/Anime2Sketch",
"url": "https://github.com/Mukosame/Anime2Sketch/pull/18"
}
|
gharchive/pull-request
|
Add docker environment
Summary
Add Dockerfile and example Makefile. You can run the demo just by customizing the mount volumes for the input/output image directories in the Makefile.
#3
Build docker image
make docker-build
Run
Customize mount volumes for input/output directory on Makefile
docker run -it --rm --gpus all -v `pwd`:/workspace -v <input images dir>:/input -v <output images dir>:/output anime2sketch
Run container make docker-run
If you run CPU only, you have to change the Dockerfile CMD to CMD [ "python", "test.py", "--dataroot", "/input", "--load_size", "512", "--output_dir", "/output" ] and the Makefile docker-run target to docker run -it --rm -v `pwd`:/workspace -v `pwd`/images/input:/input -v `pwd`/images/output:/output anime2sketch
Hi @kitoriaaa , thanks a lot for your great commit!
I would just request one follow-up: would you mind adding the above instruction to README.md, and also add your name proudly to the updates (like https://github.com/Mukosame/Anime2Sketch/blame/master/README.md#L12)?
Hi @Mukosame, thanks for the feedback.
I have added the Docker instructions to README.md.
|
2025-04-01T04:10:41.941282
| 2022-10-26T20:29:28 |
1424655453
|
{
"authors": [
"kb-1000",
"peterix"
],
"license": "MS-PL",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15220",
"repo": "MultiMC/meta",
"url": "https://github.com/MultiMC/meta/pull/12"
}
|
gharchive/pull-request
|
Use version comparison rather than equality for the log4shell workaround
This will stay working when Mojang updates log4j again.
:+1:
|
2025-04-01T04:10:41.962879
| 2023-01-30T01:32:41 |
1561578908
|
{
"authors": [
"coveralls",
"nimmolo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15221",
"repo": "MushroomObserver/mushroom-observer",
"url": "https://github.com/MushroomObserver/mushroom-observer/pull/1319"
}
|
gharchive/pull-request
|
Create new panel helper
This consolidates a repetitive pattern of nested Bootstrap divs (most often used on show_name and show_obs) into a single helper, and implements it.
Desirable because it
helps eliminate typos and ensures consistency of the pattern
the css classes are about to change for Bootstrap 4
still allows rendering partials inside the block
allows separating the partial content from the panel wrap (needed for example in the identify-obs UX).
Before:
# layout.html.erb
render(partial: "partial")
# _partial.html.erb
<div class="panel panel-default unexpected" id="special_id">
<div class="panel-body modifiers">
<%= complicated_stuff_in_ruby %>
</div>
</div>
After:
# layout.html.erb
<%= panel_block(id: "special_id", class: "unexpected",
inner_class: "modifiers") do
render(partial: "partial")
end %>
# _partial.html.erb
<%= complicated_stuff_in_ruby %>
Coverage: 93.36% (+0.003%) from 93.357% when pulling 9489dba413dfc43a290dd602a374f2b144bccd09 on nimmo-panel-helper into ea581808e6b56fe750ce0d7369199b8a3c98173a on main.
|
2025-04-01T04:10:41.964784
| 2024-03-23T20:29:49 |
2204036399
|
{
"authors": [
"coveralls",
"nimmolo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15222",
"repo": "MushroomObserver/mushroom-observer",
"url": "https://github.com/MushroomObserver/mushroom-observer/pull/2057"
}
|
gharchive/pull-request
|
Quiet SCSS deprecation warnings in all environments
Not just production
coverage: 94.418%. remained the same
when pulling f9c71644b9319c1c8cd1599de5af32e08e7c0ed4 on nimmo-simplify-gemfile
into ad40264059c209f4e81a754cf3abda510db20cca on main.
|
2025-04-01T04:10:41.965659
| 2021-10-14T18:02:45 |
1026673426
|
{
"authors": [
"linsungc"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15223",
"repo": "MusicAsLanguage/mobileapp",
"url": "https://github.com/MusicAsLanguage/mobileapp/issues/65"
}
|
gharchive/issue
|
Update service endpoint reflecting the lessons / activities media name change
TODO: Update program.json once we have file names for lessons and activities
This should be done. Any further changes should be logged as separate issue.
|
2025-04-01T04:10:42.036107
| 2018-07-20T15:14:43 |
343146289
|
{
"authors": [
"Logics4",
"MylesIsCool"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15224",
"repo": "MylesIsCool/ViaVersion",
"url": "https://github.com/MylesIsCool/ViaVersion/pull/892"
}
|
gharchive/pull-request
|
Don't use Paper block placement patch in 1.12.
Apparently the bug that caused the block placement issues with Paper was fixed in 1.12 (according to Aikar from its development team). So, with this PR the patch to fix it won't run if the server is running Paper 1.12 or higher (which means it will only be used on 1.11.2 and lower server versions).
Cheers :)
|
2025-04-01T04:10:42.040753
| 2022-10-27T17:47:02 |
1426048887
|
{
"authors": [
"nitronit"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15225",
"repo": "MystenLabs/ed25519-unsafe-libs",
"url": "https://github.com/MystenLabs/ed25519-unsafe-libs/pull/9"
}
|
gharchive/pull-request
|
Update README.md
See the changes for comments.
For further comments see the paper:
Provably Secure Distributed Schnorr Signatures and a (t, n) Threshold Scheme for Implicit Certificates
http://cacr.uwaterloo.ca/techreports/2001/corr2001-13.ps
ping @kchalkias and @LoupVaillant
Great. Feel free to close the PR once your edits are done.
Thanks Kostas!
|
2025-04-01T04:10:42.053147
| 2024-05-17T05:21:18 |
2301849091
|
{
"authors": [
"0xfrost-wonder",
"hayes-mysten",
"nullbitx8",
"stefan-mysten"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15226",
"repo": "MystenLabs/sui",
"url": "https://github.com/MystenLabs/sui/issues/17790"
}
|
gharchive/issue
|
Sui Code Bug in Typescript SDK
Steps to Reproduce Issue
Install the sdk npm i @mysten/sui.js
Create a file test.js with the following contents:
import { getFullnodeUrl, SuiClient } from '@mysten/sui.js/client';
// use getFullnodeUrl to define Devnet RPC location
const rpcUrl = getFullnodeUrl('mainnet');
// create a client connected to devnet
const client = new SuiClient({ url: rpcUrl });
// get coins owned by an address
// replace <OWNER_ADDRESS> with actual address in the form of 0x123...
await client.getCoins({
owner: '0xb29c543ed7ad7169441c02fc14ffd57e486b7be7177e319de0e0933453a78d49',
});
Run the code with node test.js
Expected Result
Code runs.
Actual Result
Code crashes with the following
file:///home/dev/project/node_modules/@mysten/sui.js/dist/esm/client/http-transport.js:38
throw new Error(
^
Error: The current environment does not support fetch, you can provide a fetch implementation in the options for SuiHTTPTransport.
at SuiHTTPTransport.fetch (file:///home/dev/project/node_modules/@mysten/sui.js/dist/esm/client/http-transport.js:38:13)
at SuiHTTPTransport.request (file:///home/dev/project/node_modules/@mysten/sui.js/dist/esm/client/http-transport.js:46:28)
at SuiClient.getCoins (file:///home/dev/project/node_modules/@mysten/sui.js/dist/esm/client/client.js:42:33)
at file:///home/dev/project/src/testSuiClient.js:11:14
at ModuleJob.run (node:internal/modules/esm/module_job:197:25)
at async Promise.all (index 0)
at async ESMLoader.import (node:internal/modules/esm/loader:337:24)
at async loadESM (node:internal/process/esm_loader:88:5)
at async handleMainPromise (node:internal/modules/run_main:61:12)
System Information
OS: Linux, ubuntu
Package version: "@mysten/sui.js": "^0.54.1",
It appears that the default configuration of the sdk is using fetch, but not importing node-fetch or another implementation.
To fix this, I followed this
I installed node-fetch and added the following to the top of my script:
// fetch-polyfill.js
import fetch, {
Blob,
blobFrom,
blobFromSync,
File,
fileFrom,
fileFromSync,
FormData,
Headers,
Request,
Response,
} from 'node-fetch'
if (!globalThis.fetch) {
globalThis.fetch = fetch
globalThis.Headers = Headers
globalThis.Request = Request
globalThis.Response = Response
}
// index.js
import './fetch-polyfill'
// ...
Perhaps this should be documented, or the SDK should import node-fetch explicitly before using it.
This may be affecting other packages as well.
I ran into this issue when using the cetus sdk, see issue here
The easiest way to fix this is to just use a newer version of Node that supports fetch. I believe Node 16 was the last version that did not support fetch natively, and it reached end of life in October of last year, so upgrading Node would be my recommendation.
Thanks for the heads up!
Maybe adding Node 18 as a minimum requirement in the README or Docs would be helpful for newcomers.
I was chasing this down for a couple of hours
There is a note in the README for troubleshooting connection errors.
If you see errors like ECONNRESET or "socket hang up", run node -v to make sure your node version is v18.x.x. Refer to this [guide](https://blog.logrocket.com/how-switch-node-js-versions-nvm/) to switch node version.
Happy to add this error as one of the other examples of errors in there.
@nullbitx8 feel free to submit a PR! Thanks!
I also have this kind of issue even on node 20.7.0
|
2025-04-01T04:10:42.054736
| 2023-03-15T00:50:40 |
1624523191
|
{
"authors": [
"kchalkias",
"randall-Mysten"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15227",
"repo": "MystenLabs/sui",
"url": "https://github.com/MystenLabs/sui/issues/9317"
}
|
gharchive/issue
|
exchange-integration-guide is not updated
https://docs.sui.io/learn/exchange-integration-guide
As of today we have blake2 in both the main and 0.28 branches; sha3 is NOT used anymore.
Thanks for pointing this out. There are numerous places in the docs that we need to update for the changes in .28
|
2025-04-01T04:10:42.060181
| 2022-10-13T23:45:11 |
1408547920
|
{
"authors": [
"666lcz",
"imougadir"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15228",
"repo": "MystenLabs/sui",
"url": "https://github.com/MystenLabs/sui/pull/5242"
}
|
gharchive/pull-request
|
wallet-ext: use fullnode for executing transaction
This PR aims to add the ability to use the fullnode for executing transactions in the Sui wallet. The feature is gated through a feature flag called deprecate-gateway. Other notable changes in this PR include:
make the breaking change in https://github.com/MystenLabs/sui/pull/4919 backward compatible. We would like to do a wallet release before the next DevNet release, therefore it's important that the upcoming wallet release works with both the current DevNet(0.11.0) and 0.12.0
Add rpcAPIVersion to JsonRpcProvider to support multiple RPC versions. This is used together with the feature flag so that the wallet extension can work with multiple RPC versions by switching the feature flag on the server side without going through the chrome web store review
Add TypeTagSerializer that can convert a string to TypeTag, e.g., 0x2::sui::SUI into { "struct": { "address": "0x2", "module": "sui", "name": "SUI", "typeParams": [] } }. This makes sure the DApp Move calls can continue working regardless of whether LocalTxnDataSerializer is used or not (a rough sketch of this conversion follows this list).
Add gas selection to LocalTxnDataSerializer. If the gas payment object is not provided to the transaction, the SDK will select a gas coin that is not used in the transaction input and with balance greater than or equal to the gas budget.
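As a rough illustration of the string-to-TypeTag conversion mentioned above (written in Python for brevity; the actual serializer is part of the TypeScript SDK and also handles nested type parameters, which this sketch does not):
def parse_struct_type_tag(type_str):
    # Split e.g. "0x2::sui::SUI" into its address, module and name parts.
    address, module, name = type_str.split("::")
    return {
        "struct": {
            "address": address,
            "module": module,
            "name": name,
            "typeParams": [],  # nested generics like Coin<...> are out of scope here
        }
    }
# parse_struct_type_tag("0x2::sui::SUI")
# -> {"struct": {"address": "0x2", "module": "sui", "name": "SUI", "typeParams": []}}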
Testing
Tested common wallet functionalities with feature flag turned on and off:
send coins
mint/transfer nft
DApp: demo NFT app and https://gotbeef.app/
Agreed with all of the above.
|
2025-04-01T04:10:42.071285
| 2016-06-27T05:44:51 |
162378081
|
{
"authors": [
"KuerbisGnu",
"Mytherin",
"fernandorahn"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15230",
"repo": "Mytherin/Tibialyzer",
"url": "https://github.com/Mytherin/Tibialyzer/issues/109"
}
|
gharchive/issue
|
Suggestion Browse NPC
I think the browse NPC by name function is not useful; most players don't know the names of the NPCs to search for. Maybe you can change it to browsing NPCs by service: NPCs who sell potions, boat NPCs, and more.
Sorry if this isn't the right place to post it.
Tibialyzer is a wonderful program, great job.
All this is already included. You can find the boat with the city@ command.
You can find the Magic Store when you use the item@ command.
You can easily find the name of any shopkeeper in Tibia.
To add to what KuerbisGnu said, you can use the command [cityname]@[itemname] to find an NPC that buys or sells a specific item in a specific city; so if you want to find the NPC that sells mana potions in Edron you can type edron@mana potion.
|
2025-04-01T04:10:42.084633
| 2023-02-11T15:43:28 |
1580895291
|
{
"authors": [
"N00nDay",
"dominic-schmid"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15231",
"repo": "N00nDay/stwui",
"url": "https://github.com/N00nDay/stwui/issues/83"
}
|
gharchive/issue
|
Handle named form actions using Button component
I'm sorry if this is a trivial thing but I'm new to SvelteKit and can't figure out how to use named server-side form actions specified in +page.server.ts with Buttons from stwui.
I couldn't find anything about forms in the STWUI docs either. Is this not yet supported?
As per the official SvelteKit documentation, you can create a form and
specify the formaction for each button
leave the formaction blank and use the specified form action
leave the formaction blank and use the default form action
to have the submission handled by the server.
+page.server.ts
import type { Actions } from './$types';
export const actions: Actions = {
login: async ({cookies, request}) => {
const data = await request.formData();
const email = data.get("email");
const password = data.get("password");
console.log(`LOGIN: email: ${email}\npassword: ${password}`);
return {success: true};
},
register: async ({cookies, request}) => {
const data = await request.formData();
const email = data.get("email");
const password = data.get("password");
console.log(`REGISTER: email: ${email}\npassword: ${password}`);
return {success: true};
}
};
Vanilla Sveltekit: +page.svelte
As an example, each button calls a different action.
<form method="POST" action="?/login" use:enhance>
<button type="submit">Login</button>
<button type="submit" formaction="?/register">Register</button>
</form>
STWUI: +page.svelte
Using the Button for the form/default action works fine, but what if I want to use named actions? Where can I do this?
<form method="POST" use:enhance>
<Button type="primary" htmlType="submit">Login</Button>
<Button type="link">Register</Button>
</form>
Have you tried adding formaction to the Button component? All actions should be forwarded to the component automatically.
Well that was an easy fix, sorry 😄
Interesting, so components take in any props, even when they aren't directly exported. That's awesome!
Yeah, I can work on documenting this in the future as it shouldn't be left to assumption.
|
2025-04-01T04:10:42.086912
| 2022-12-22T01:59:56 |
1507136926
|
{
"authors": [
"LeonardsonCC",
"N1ck"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15232",
"repo": "N1ck/gifs-for-github",
"url": "https://github.com/N1ck/gifs-for-github/issues/57"
}
|
gharchive/issue
|
Add more GIF providers
I would like to get GIFs from other providers, like Tenor.
I actually made a fork and made it work, just would like to know if I should open a PR, or if it is out of scope for this project.
This is awesome @LeonardsonCC I think it would totally be within the scope of this project, we don't want people running out of GIFs 😂
It would be great if you could open a PR and we can see how we could include something like this in the extension 😃
Opened #60
|
2025-04-01T04:10:42.093157
| 2024-02-06T22:01:36 |
2121743280
|
{
"authors": [
"Nate-Louder"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15233",
"repo": "NACAGA/Tweeter-Frontend",
"url": "https://github.com/NACAGA/Tweeter-Frontend/issues/10"
}
|
gharchive/issue
|
Create Component: Post
Component Name:
Post
Other components needed for this component:
Like
Comment
Functionality
This component will display the information for a post including (groupname, username, date, likes, comments, and post content). This component will take in a post id and make requests to the backend to get the info for the post. Some of the requests it needs to make include (getPost, joinGroup, and toggleUserLike). If the comment button is clicked it will open a comment component and pass the post id into the comment's props where the comment component will handle it from there. If the user is not a member of the group the post is from, then the post will provide a button the user can click to join the group. If the user is a member of the group then the card itself should be clickable and route the user to that post on the group page.
Design Mockups
Figma Mockup
Requires the completion of #11 and create like component to move forward.
|
2025-04-01T04:10:42.103522
| 2017-03-30T15:22:13 |
218234241
|
{
"authors": [
"asedge",
"mltsy"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15234",
"repo": "NARKOZ/gitlab",
"url": "https://github.com/NARKOZ/gitlab/pull/268"
}
|
gharchive/pull-request
|
Add appropriate Content-Type header
This makes it easier to test using a Rack-based Gitlab mock API (Sinatra in my case).
Rack will assume POST requests include www-form-urlencoded data, but it won't assume the same of PUT or PATCH requests, and therefore won't parse the body unless a header is present. Granted, this header is obviously not required for Gitlab to accept the request, but it's the right way to construct the request in general, which should make it more reusable/compatible with testing frameworks, etc.
It seems like HTTParty or even Net::HTTP is making the decision to encode the data as form-encoded data, so why it doesn't automatically attach that header, I'm not sure. But that's further than I have time to investigate/fix at the moment :)
Note: I have not yet tested this patch with an actual Gitlab instance, but if all the requests are sent as form-encoded data (and as far as I know they all are, or the data is included in the query-string), it shouldn't cause any problems.
Thank you!
|
2025-04-01T04:10:42.131391
| 2024-01-18T18:22:47 |
2088830291
|
{
"authors": [
"dabail10",
"justin-richling",
"phillips-ad"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15235",
"repo": "NCAR/CUPiD",
"url": "https://github.com/NCAR/CUPiD/issues/45"
}
|
gharchive/issue
|
Compute global / hemispheric integrated quantities.
This is something the CVDP does. It creates global and hemispheric integrated quantities including:
globally averaged temperature
NH sea ice area
SH sea ice area
others?
For the sea ice we need:
NH / SH total sea ice extent (area where concentration is >= 15%)
NH / SH total sea ice area
NH / SH total sea ice volume
NH / SH total snow (on sea ice) volume
This uses tarea from the sea ice model (constant gridcell area from sea ice history files or grid files).
Convert aice to fraction if it is not already.
NH extent = where(aice.ge.0.15,tarea,0.) where tlat > 0.
NH area = sum(aice*tarea) for tlat > 0. x 1.0e-12
NH ice volume = sum(hi*tarea) for tlat > 0. x 1.0e-13
NH snow volume = sum(hs*tarea) for tlat > 0. x 1.0e-13
Is this already in python? @nusbaume @justin-richling
Does the ADF call the CVDP?
Here is some Python code, assuming that you have the sea ice files loaded:
ds_area = (ds.TAREA * ds.aice).where(ds.TLAT > 0.).sum(dim=['ni', 'nj'])
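Building on that one-liner, a hedged sketch of the NH quantities listed above, assuming an xarray Dataset with CICE-style variables (aice as a fraction, hi, hs, TAREA in m^2, TLAT); the variable and path names here are assumptions, not existing CUPiD code:
import xarray as xr
ds = xr.open_mfdataset("path/to/ice/history/*.nc")  # hypothetical path to sea ice history files
nh = ds.TLAT > 0.0  # northern-hemisphere mask
dims = ["ni", "nj"]
# Extent: total area of cells with concentration >= 15% (scaling factors follow the comment above)
nh_extent = ds.TAREA.where((ds.aice >= 0.15) & nh).sum(dim=dims) * 1.0e-12
# Area: concentration-weighted area
nh_area = (ds.aice * ds.TAREA).where(nh).sum(dim=dims) * 1.0e-12
# Ice volume (hi = grid-cell mean ice thickness) and snow-on-ice volume
nh_ice_volume = (ds.hi * ds.TAREA).where(nh).sum(dim=dims) * 1.0e-13
nh_snow_volume = (ds.hs * ds.TAREA).where(nh).sum(dim=dims) * 1.0e-13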
@dabail10 Yeah, we can call the CVDP from the ADF
Aha. I see that the ADF calls the CVDP NCL scripts.
Yeah, I should've mentioned that. It runs in the background of the ADF via NCL scripts. I'm working with Adam Phillips to eventually get the major parts of the CVDP and CVDP-LE into python.
@dabail10 the current plan is for a python-only CVDP release to happen in the latter half of 2024. That release will not have all the current CVDP/CVDP-LE capabilities in it. I will speak with you at a later date about what sea ice metrics we should carry over into the new python CVDP package.
Got it. I still think we should work on this one step asap. I will prototype something in python.
@klindsay28 has a number of functions for doing this type of calculation.
|
2025-04-01T04:10:42.226148
| 2022-12-02T14:09:01 |
1472924522
|
{
"authors": [
"kr15tyk"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15236",
"repo": "ND-iTC/Documents",
"url": "https://github.com/ND-iTC/Documents/issues/171"
}
|
gharchive/issue
|
Section <IP_ADDRESS> and <IP_ADDRESS>, FCS_(D)TLSS_EXT.2.1 and FCS_(D)TLSS_EXT.2.2, Tests Correction
Section <IP_ADDRESS>, FCS_DTLSS_EXT.2.1 and FCS_DTLSS_EXT.2.2, Tests
Section <IP_ADDRESS>, FCS_TLSS_EXT.2.1 and FCS_TLSS_EXT.2.2, Tests
Test 4: Both parts of the test are conditional based on whether support for (D)TLS 1.2 is claimed
Suggested change:
Move the [conditional] to after “Test 4”
Fixed in TLSWG-v3-editorial-fixes branch.
|
2025-04-01T04:10:42.229431
| 2023-01-06T01:11:54 |
1521702606
|
{
"authors": [
"amaltaro",
"btovar",
"klannon"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15237",
"repo": "NDCMS/lobster",
"url": "https://github.com/NDCMS/lobster/issues/656"
}
|
gharchive/issue
|
Analyze Lobster with dependency tool to understand how to plan Py3 migration
For converting from Py2 to Py3, it's helpful to know if we can work in subsets. However, you can't convert c.py to Py3 if it imports a and b from a.py and b.py which are still written in Py2. There are techniques for analyzing your tree of imports/dependencies. For example, Dario Mapelli used pydeps for the WMCore py3 migration.
See: https://cms-dmwm-test.docs.cern.ch/py2py3/tools/pydeps/
(Actually, that whole site is a really great resource, but note that we don't need to make Lobster compatible with both Py2 and Py3. We can just update it to Py3 and move forward, which is a bit simpler than what Dario was working on.)
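As a rough, hand-rolled illustration of the kind of import-tree analysis pydeps automates (not pydeps' own internals), something like the following could map which lobster modules depend on which others; the "lobster" package path is an assumption:
import ast
from pathlib import Path

def top_level_imports(py_file):
    # Return the set of top-level module names imported by one file.
    tree = ast.parse(Path(py_file).read_text(), filename=str(py_file))
    modules = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            modules.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.add(node.module.split(".")[0])
    return modules

# Modules whose imports are already Py3-clean can be converted first.
deps = {str(f): top_level_imports(f) for f in Path("lobster").rglob("*.py")}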
I don't know if it already exists somewhere else, but it would be helpful to take this opportunity and create a requirements.txt file with all the python dependencies. This will make management of dependencies easier, one could enable the GitHub feature to alert on security vulnerability and make it easier to build specific environments/images.
@mjcurran, could you take a first crack at this one? It would be helpful to know which packages in lobster_env.yaml are dependencies of other packages (i.e., not used directly by lobster).
Once we have that, we can look for either intermediate versions that both support py2 and py3, or maybe directly move to py3.
I already removed snakebite and elasticsearch, so it may be that some dependencies are not needed in lobster.
|
2025-04-01T04:10:42.257163
| 2024-04-23T12:28:33 |
2258751488
|
{
"authors": [
"glutjenkossink"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15239",
"repo": "NG-ZORRO/ng-zorro-antd",
"url": "https://github.com/NG-ZORRO/ng-zorro-antd/issues/8521"
}
|
gharchive/issue
|
Docs not working
Reproduction link
https://github.com/NG-ZORRO/ng-zorro-antd?tab=readme-ov-file#-usage
Steps to reproduce
Go to the NG-Zorro documentation and select any component (ex. Button).
Open your developer tools and check your console for any errors.
Try opening an example and see it not doing anything
Try interacting with certain components (ex. Drawer)
What is expected?
The components working properly and being able to see the example code.
What is actually happening?
There's a console error and none of the components are responding.
ERROR TypeError: t.hasAttribute is not a function
Environment
Info
ng-zorro-antd
17.4.0
Browser
Firefox, Chrome
Noticed that if you do a 'hard' refresh it will send you back to the homepage. After that you can go to a component's page and interact with it, but you can never manually refresh the page the normal way or you'll get the error again.
|
2025-04-01T04:10:42.267037
| 2020-10-12T17:56:26 |
719556548
|
{
"authors": [
"hsuanxyz",
"vazgen6",
"vthinkxie"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15240",
"repo": "NG-ZORRO/ng-zorro-antd",
"url": "https://github.com/NG-ZORRO/ng-zorro-antd/pull/5904"
}
|
gharchive/pull-request
|
feat(module:select): accept 0 value on enter
PR Checklist
Please check if your PR fulfills the following requirements:
[x] The commit message follows our guidelines: https://github.com/NG-ZORRO/ng-zorro-antd/blob/master/CONTRIBUTING.md#commit
[ ] Tests for the changes have been added (for bug fixes / features)
[x] Docs have been added / updated (for bug fixes / features)
PR Type
What kind of change does this PR introduce?
[ ] Bugfix
[x] Feature
[ ] Code style update (formatting, local variables)
[ ] Refactoring (no functional changes, no api changes)
[ ] Build related changes
[ ] CI related changes
[x] Documentation content changes
[ ] Application (the showcase website) / infrastructure changes
[ ] Other... Please describe:
What is the current behavior?
When using the nz-select component with an option where [nzValue]="0" and hitting enter, the value is not being saved, because this.activatedValue = 0 is a falsy value, so the if block that calls the onItemClick method is never entered.
Issue Number: N/A
What is the new behavior?
Optional @Input() was added to solve this problem. By passing [nzAcceptZeroValue]="true" to NzSelectComponent
Does this PR introduce a breaking change?
[ ] Yes
[x] No
Other information
I don't know Chinese; the Chinese docs were Google translated.
@vthinkxie I think it's a bug, if 0 is valid value, it should also be handled correctly by the keyboard.
@vazgen6 Thanks for the PR, no need for a new input here, just correct the check!
+ import { isNotNil } from 'ng-zorro-antd/core/util';
- if (this.activatedValue)
+ if (isNotNil(this.activatedValue))
Hi @vazgen6
thanks for your PR, I agree with @hsuanxyz
could you update your PR following @hsuanxyz 's suggestion?
I also agree with this. Initially I thought of doing something similar but was afraid to break existing behavior, but since the user can click on it they should be able to use enter to select too.
@hsuanxyz If this is a bug, should I update my commit message? Or let me know what is more accepted, if I amend with new message and force push will it cause any problems for you?
no problem, you can --amend and --force .
Done! I just fetched from upstream and rebased. Please let me know if there's something else I can do and thank you for your quick reply and support.
|
2025-04-01T04:10:42.296029
| 2023-09-18T13:26:30 |
1900941370
|
{
"authors": [
"FedorSteeman",
"bhsi-snm"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15241",
"repo": "NHMDenmark/Mass-Digitizer",
"url": "https://github.com/NHMDenmark/Mass-Digitizer/issues/410"
}
|
gharchive/issue
|
Tab get stuck when going back
Template for issues/tickets in DigiApp
What is the issue ?
Tab navigation gets stuck on the taxonomic name field when going backwards up the fields; if you don't enter anything into this box, it won't move beyond it.
This is a recurrence of a previous issue #200.
Why is it needed/relevant ?
it is relevant when digitisers want to correct a record and want to move up to previous fields.
What is the expected acceptable result.
The cursor should follow the sequence and move up regardless of text in the taxonomic name field.
Can no longer replicate in v1.1.30
|
2025-04-01T04:10:42.305302
| 2022-03-18T11:57:28 |
1173507934
|
{
"authors": [
"AngelaFaulding",
"cach1"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15242",
"repo": "NHSDigital/DataDictionaryPublication",
"url": "https://github.com/NHSDigital/DataDictionaryPublication/issues/421"
}
|
gharchive/issue
|
Retired Item Dates
#269 has been fixed but it contains an issue relating to the "Retired date", where does it come from?
28/2/22 - Discussion with James: drop the "Retired Date" from the profiles.
There is now no requirement to add a retired date, however this isn't showing correctly on the Orchestrator preview.
This will be checked when the retirement processes are walked through with James. This will be done once all the "create" processes are finished.
DMDS also need to agree on the business process of when a date is added, i.e. it will be added when a National Code/Default Code is retired, but what is required when an item is retired?
@cach1 - Will a date be added in the "Retired date" field for an item?
If so, will it be the implementation date for a standard and publication date (release month) for a DDCN and patch?
Wiki page started, but we need to confirm re the date and generally how to retire items, see #412
@AngelaFaulding for a dictionary data item (class, attribute, data element, business definition, supporting info) which is part of an Information Standard, it would be the Implementation Date of the standard.
For a DDCN, thinking about the sort of thing we put in there - I think this might vary a bit - the publication of a DDCN on our page, isn't really linked to the incorporation in the dictionary release - so if a DDCN with a retirement in went up on our page, and we couldn't have a release for several months - the DDCN would still be the 'current position'. So in theory, should it be the date that we publish the DDCN on the DDCN web page, and not necessarily the date the DDCN is reflected in live dictionary?
Patch - these are always linked to a release so should reflect the release month I would say (01/xx/xxxx)
Thanks @cach1.
This will work for ISNs and patches (we know what patch the item will be in).
We can test this out for a DDCN as during authoring we don't always know when the DDCN will be published.
Text to be added to the Wiki:
For an item which is part of an Information Standard, the retired date will be the Implementation Date of the standard
For an item which is part of a Data Dictionary Change Notice (DDCN), the retired date will be the 1st of the month that the DDCN is published on the website
For an item which is part of a patch, the retired date will be the 1st of the month of the release patch, i.e. 01/##/####
Issue can be closed as it has now been documented.
This needs looking at again.
For an item which is part of an Information Standard, the retired date will be the Implementation Date of the standard
This is OK.
For an item which is part of a Data Dictionary Change Notice (DDCN), the retired date will be the 1st of the month that the DDCN is published on the website
The author wouldn't always know when the DDCN will be published and what release it will go in
For an item which is part of a patch, the retired date will be the 1st of the month of the release patch, i.e. 01/##/####
The item wouldn't be retired on the 1st of the month as the release is always towards the end of the month
22/10/24 - James does not think the retired date is used anywhere other than in a Term retirement.
DMDS need to decide whether to use this, as it is not required and there is no history for this.
Wiki to be updated to remove all references to retired dates (unless it is a Term) and it needs adding to the Decision Log.
|
2025-04-01T04:10:42.354643
| 2018-10-09T16:52:41 |
368304082
|
{
"authors": [
"LHenrique42",
"sangamcse"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15243",
"repo": "NITSkmOS/Algorithms",
"url": "https://github.com/NITSkmOS/Algorithms/issues/423"
}
|
gharchive/issue
|
[Algo] Heap Sort [C]
https://en.wikipedia.org/wiki/Heapsort
GitMate.io thinks possibly related issues are https://github.com/NITSkmOS/Algorithms/issues/422 ([Algo] Heap Sort [CPP]), https://github.com/NITSkmOS/Algorithms/issues/147 ([Algo] Merge Sort [C]), https://github.com/NITSkmOS/Algorithms/issues/23 ([Algo]: Add Heapsort in C), https://github.com/NITSkmOS/Algorithms/issues/135 ([Algo] Shell Sort [C]), and https://github.com/NITSkmOS/Algorithms/issues/145 ([Algo] Selection Sort [C]).
Duplicate of https://github.com/NITSkmOS/Algorithms/issues/23
|
2025-04-01T04:10:42.361557
| 2023-06-15T15:09:20 |
1759046599
|
{
"authors": [
"riddhis5",
"stijnh"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15244",
"repo": "NLeSC/litstudy",
"url": "https://github.com/NLeSC/litstudy/issues/47"
}
|
gharchive/issue
|
Most Cited Papers and Correlating Authors to Topic Clusters Questions
Hi,
Is there any way of checking which countries / authors / institutes have been citing the papers within my csv file, as I would like to find out where the papers within my csv file are most prominently being cited.
I would also be interested in knowing if there is a way to correlate authors to topic clusters, in order to see which authors are most active in each of the clusters.
Thank you!
Every Document has a citations attribute that gives you a list of other papers that cited that particular document. You can use that to find out where papers are being cited. Not all data sources support it (I believe the Scopus source does if you use search_scopus or refine_scopus).
For your second question, it is possible but it will require some Python scripting. First, train a topic model to get a topic vector per document. Next, loop over all the documents, for each document get the topic vector and loop over all the authors. You can then calculate the average over all the topic vectors for each author. It's definitely possible, but it will require some Python programming work.
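A minimal sketch of that second suggestion, assuming docs is the document set, each document exposes an authors list with a name attribute, and doc_topics is the (n_docs x n_topics) matrix from the trained topic model; the attribute and variable names here are assumptions rather than the exact litstudy API:
import numpy as np
from collections import defaultdict

topic_sums = defaultdict(lambda: np.zeros(doc_topics.shape[1]))
doc_counts = defaultdict(int)

for i, doc in enumerate(docs):
    for author in (doc.authors or []):
        topic_sums[author.name] += doc_topics[i]  # accumulate this document's topic vector
        doc_counts[author.name] += 1

# Average topic vector per author; the argmax suggests the cluster an author is most active in.
author_topics = {name: vec / doc_counts[name] for name, vec in topic_sums.items()}
dominant_topic = {name: int(np.argmax(vec)) for name, vec in author_topics.items()}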
|
2025-04-01T04:10:42.374607
| 2021-02-04T10:15:40 |
801136532
|
{
"authors": [
"ArchangeGabriel",
"diabonas",
"dvzrv",
"trofi",
"wcawijngaards"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15245",
"repo": "NLnetLabs/unbound",
"url": "https://github.com/NLnetLabs/unbound/issues/418"
}
|
gharchive/issue
|
Segfault with glibc 2.33
Hi! I'm packaging unbound for Arch Linux. We have recently upgraded glibc to 2.33 in our testing repository and are now looking into an issue with unbound, which manifests as a segfault on start: https://bugs.archlinux.org/task/69532
I will try to provide more info later, if the original author of that ticket does not do that before me.
So it fails in initgroups, which is a call unbound makes at startup to drop user privileges. Unbound calls it with the username and the group id. The stack trace shows grouplist; is there something wrong with the user's setup for the unbound user name? Or what user is configured to be used by the config file? With the option username in unbound.conf, it can also be set with ./configure --with-username and defaults to 'unbound'.
That said, the code does not check the gid value passed to initgroups, and could pass -1 for the groupid. When I do this the result is an error for Invalid argument and it does not crash for that. So I do not see how it could crash there, what with unbound passing a nonNULL username. It just checks that on the line above that the username argument is not NULL.
The issue is that initgroups() in glibc 2.33 crashes if the function is called within a chroot. There is already an upstream glibc bug report for this, so hopefully the issue should get fixed without any necessary changes in Unbound.
As a workaround for systems with glibc 2.33, you can disable the chrooting in Unbound by setting chroot: "" in unbound.conf.
FWIW, I haven't been able to create a minimal chroot that makes glibc happy yet (e.g. by adding /etc/unbound/etc/nsswitch.conf, /etc/unbound/etc/group, ... if /etc/unbound is the chroot directory).
initgroups() should probably be called before a chroot(). It's unfortunate that glibc crashes in this scenario, but it will probably not work as expected anyway.
But if you drop user privileges, the chroot call may not be possible any more, because that needs root privilege.
Ah, good point! I somehow thought initgroups() does not call setgroups() internally (and only preloads the values into memory), but it does.
How about something like:
Before:
chroot();
initgroups();
After:
getgroups(); // uses libc, NSS facility
chroot();
setgroups(); // only calls kernel directly
I would prefer to not mess with group ids in unbound, if this is fixed in the glibc bug report (at some point). The aim is to drop privileges as much as possible, and messing with the group lists sounds counterproductive for security.
Additionally, getgroups gets the groups for the current user. So the setgroups call would then not actually drop group privileges, I guess.
Waiting on glibc fix makes sense.
I did not realize getgroups() is so limited, my apologies. For completeness I'll amend my suggestion slightly with s/getgroups()/getgrouplist()/. That's what the glibc implementation does https://sourceware.org/git/?p=glibc.git;a=blob;f=grp/initgroups.c;h=6164d16fc03affec29b3e76760e73689b21af435;hb=HEAD#l174 and what the FreeBSD manpages suggest: https://www.freebsd.org/cgi/man.cgi?query=initgroups&apropos=0&sektion=3#end.
getgrouplist(); // uses libc, NSS facility
chroot();
setgroups(); // only calls kernel directly
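For illustration only (this is not unbound's code), the proposed ordering maps to roughly the following Python sketch using the standard os module; the point is that the NSS-backed group lookup runs before chroot(), and only plain kernel calls run afterwards:
import os, pwd

def drop_privileges(username, chroot_dir):
    pw = pwd.getpwnam(username)                      # NSS lookup while /etc is still visible
    groups = os.getgrouplist(pw.pw_name, pw.pw_gid)  # also NSS-backed, so it must precede chroot
    os.chroot(chroot_dir)                            # needs root, so it happens before dropping ids
    os.chdir("/")
    os.setgroups(groups)                             # from here on, plain kernel calls only
    os.setgid(pw.pw_gid)
    os.setuid(pw.pw_uid)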
This has been fixed upstream and backported in Arch (https://github.com/archlinux/svntogit-packages/commit/88b2820a8b13b42531de683cffda921d46b8671b), should be closed now I think.
The group permission drop needs to have correct security, the old code has it, so it stays, now that it is fixed elsewhere. Thanks for the suggestion for fixed code, that is very helpful.
|
2025-04-01T04:10:42.381427
| 2016-07-27T12:55:55 |
167846767
|
{
"authors": [
"304NotModified",
"tishion"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15246",
"repo": "NLog/NLog",
"url": "https://github.com/NLog/NLog/issues/1568"
}
|
gharchive/issue
|
All Logging methods may throw exceptions?
Type : Question
NLog version: 4.3.6
Platform: .Net 3.5 / .Net 4.0 / .Net 4.5 / Mono 4 / Silverlight 4 / Silverlight 5 / Xamarin Android / Xamarin iOs
I noted that all the logging methods could throw exceptions when something goes wrong in the Target implementation.
For example in the FileTarget:
private void ReplaceFileContent(string fileName, byte[] bytes, bool firstAttempt)
{
try
{
using (FileStream fs = File.Create(fileName))
{
byte[] headerBytes = this.GetHeaderBytes();
if (headerBytes != null)
{
fs.Write(headerBytes, 0, headerBytes.Length);
}
fs.Write(bytes, 0, bytes.Length);
byte[] footerBytes = this.GetFooterBytes();
if (footerBytes != null)
{
fs.Write(footerBytes, 0, footerBytes.Length);
}
}
}
catch (DirectoryNotFoundException)
{
if (!this.CreateDirs || !firstAttempt)
{
throw;
}
Directory.CreateDirectory(Path.GetDirectoryName(fileName));
//retry.
ReplaceFileContent(fileName, bytes, false);
}
}
This function may throw an UnauthorizedAccessException, and that exception then propagates into the user's code; if the user doesn't catch it, the application will crash.
The API documentation doesn't mention this throwing behavior.
So is this a bug in NLog?
These exceptions should be caught at a higher level, depending on the throwExceptions option. If that isn't the case, then it's a bug ;)
Hi @304NotModified, thanks for the information. I agree with you, but I don't know what "the higher level" means here.
Do you mean these exceptions should be caught in the scope of user code? If so, I think we should add one more section about exception handling to the documentation, right?
If you mean the exceptions should be caught within the scope of the NLog framework, I am afraid this is definitely a bug.
I put only one file name in the config file, so NLog tries to create the log file in the current directory of the process. But the current directory can be changed by the process, for example to C:\windows\system; NLog then tries to create the log file in that folder, the process has no write permission for that directory, and the exception is thrown. Unfortunately, this exception is not handled by anyone: it is simply propagated into the user's code, the user did not catch it, and the user's application crashed.
The higher level is in NLog.
A short summary of the levels:
1. User code
2. Global NLog code (initialize, configure, routing)
3. Layout and Target NLog code.
The exceptions are caught at level 2 (if throwExceptions = false, the default).
Do you mean these exceptions should be caught in the scope of user code?
No this is not needed. Set throwExceptions = false
@304NotModified Thanks a lot! I will check this.
|
2025-04-01T04:10:42.383425
| 2019-02-09T12:11:13 |
408425871
|
{
"authors": [
"304NotModified"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15247",
"repo": "NLog/NLog",
"url": "https://github.com/NLog/NLog/issues/3118"
}
|
gharchive/issue
|
Structured logging page is gone? (docs)
fixed. Was renamed to "IsEnabled"
@bewen81 please be careful with renaming pages / titles, thanks! (I think this was a mistake ;) )
|
2025-04-01T04:10:42.390061
| 2015-02-02T15:20:04 |
56248957
|
{
"authors": [
"304NotModified",
"ilya-g",
"sbeaudoin"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15248",
"repo": "NLog/NLog",
"url": "https://github.com/NLog/NLog/issues/549"
}
|
gharchive/issue
|
How to override a configured rule?
Hi,
I configured NLog to write to the event log. By default, the configuration says to log only messages of level Error and above.
But for some messages I want to override this rule from the code and write explicitly to the event log. Is there a way I can do this?
Would you like to configure this in the config or programmatically?
I want to override it programmatically for some messages only.
From: Julian
Sent: Tuesday, 3 February 2015 15:28
To: NLog/NLog
Cc: Steve Beaudoin
Would you like to configure this in the config or programmatically?
You can use the When filter for this.
In code:
LogManager.Configuration.LoggingRules[0].Filters.Add(...);
The problem is that adding a filter surely has a performance impact, and this for only one line that will be added maybe once a month.
I thought there was a method I could call to override the default just one time.
Thank you for your help. I will see what I can do.
Sent from Windows Mail
From: Julian
Sent: Friday, 6 February 2015 14:56
To: NLog/NLog
Cc: Steve Beaudoin
You can use the When filter for this.
In code:
LogManager.Configuration.LoggingRules[0].Filters.Add(...);
@sbeaudoin In this situation you'd better use a separate Logger with a distinct name for such events. Then you can set up individual rules for this logger in the configuration.
I’ll do that. Thank you.
Sent from Windows Mail
From: ilya-g
Sent: Saturday, 7 February 2015 11:24
To: NLog/NLog
Cc: Steve Beaudoin
@sbeaudoin In this situation you'd better use a separate Logger with a distinct name for such events. Then you can set up individual rules for this logger in the configuration.
|
2025-04-01T04:10:42.392542
| 2019-05-21T15:43:19 |
446693228
|
{
"authors": [
"304NotModified",
"codecov-io",
"snakefoot"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15249",
"repo": "NLog/NLog",
"url": "https://github.com/NLog/NLog/pull/3411"
}
|
gharchive/pull-request
|
LoggingConfigurationParser - Recognize LoggingRule.RuleName property
See also #3409 and #3377
Codecov Report
:exclamation: No coverage uploaded for pull request base (release/4.6.4@df0c1b6). Click here to learn what that means.
The diff coverage is 0%.
@@ Coverage Diff @@
## release/4.6.4 #3411 +/- ##
===============================================
Coverage ? 80%
===============================================
Files ? 358
Lines ? 28380
Branches ? 3786
===============================================
Hits ? 22736
Misses ? 4554
Partials ? 1090
Thanks!
|
2025-04-01T04:10:42.528725
| 2022-01-28T14:51:32 |
1117493632
|
{
"authors": [
"robertbartel"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15250",
"repo": "NOAA-OWP/DMOD",
"url": "https://github.com/NOAA-OWP/DMOD/issues/122"
}
|
gharchive/issue
|
Determine if remotely hosted hydrofabrics may be used
Determine if using remotely hosted hydrofabrics should be supported.
If so, this will lead to additional issues, likely linked under #122.
Blocked by need for feedback from various stakeholders on how much benefit this provides compared to the resource costs, especially leading up to the associated milestone.
|
2025-04-01T04:10:42.531831
| 2017-06-02T11:19:37 |
233152821
|
{
"authors": [
"DelphiDie",
"NPBruce"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15251",
"repo": "NPBruce/valkyrie",
"url": "https://github.com/NPBruce/valkyrie/issues/499"
}
|
gharchive/issue
|
Descent QItems
Is there a possibility to remove a QItem which was granted by an event?
I have an event which awards the heroes an item by "add=QItemExample", because they need to know the stats of the item.
Depending on the outcome of the event the heroes may either keep the item, or they should lose it. Unfortunately "remove=QItemExample" does nothing here, and the item is still in the inventory.
By the way, is there an ETA for the implementation of the Descent items?
I can have "remove" remove the item from the inventory if the party has it. All D2E items should be done; are there things missing?
Strange, I seem to be only getting iron shields and leather armor
In the demo shop you only get those if you don't increase fame in the first quest (don't have a fame button in the shop). This is because at minimum fame and the filters I have on the two shop slots mean that is all that fits. I have the fame levels set so that each time you gain fame you go up one bracket and will get different items.
Thanks for the explanation Bruce, that explains it. I set the first threshold $%fameimpressive,=,10 to 10 and the first quest only yields 2 to 5 fame. Is there a list of the items, fame levels, traits and prices somewhere?
Have a look at the items.ini files for each of the expansions in content/d2e
|
2025-04-01T04:10:42.574610
| 2023-12-12T18:02:13 |
2038298438
|
{
"authors": [
"brtietz",
"dguittet"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15252",
"repo": "NREL/SAM",
"url": "https://github.com/NREL/SAM/issues/1661"
}
|
gharchive/issue
|
SAM_AdjustmentFactors missing variables required for PySAM
Describe the bug
Currently the AdjustmentFactors module is manually maintained, not regenerated by export_config. This was done because adjust used to have names with ":" that conflicted with the libSAMapi interface. Adjustment factors have since been modified and new variables have been added, but SAM_AdjustmentFactors and the PySAM adjustment factors haven't been modified to follow those changes.
The automatic generation is not possible with the code as-is for the following reason. In order to use hourly, period, or timeindex adjustment factors, the enabling variable is required, such as "adjust_en_hourly". These variables are however commented out: https://github.com/NREL/ssc/blob/75b4b1778d88234149bbb693d72914c2293ee721/ssc/common.cpp#L524
So they wouldn't make it to the PySAM interface.
Expected behavior
Adjustment factors need to have all the variables required for its use.
Propose manually modifying the SAM_AdjustmentFactors and PySAM files for now.
@dguittet should this be closed based on #1826 ?
|
2025-04-01T04:10:42.586658
| 2017-12-15T21:11:44 |
282549027
|
{
"authors": [
"ZLLentz",
"codecov-io"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15253",
"repo": "NSLS-II/bluesky",
"url": "https://github.com/NSLS-II/bluesky/pull/936"
}
|
gharchive/pull-request
|
ENH: Implement repeat plan
Description
Quick implementation of the repeat plan previously described in #929, with a refactoring of count to use repeat.
Motivation and Context
There are cases where we want to have a repeatable plan where we can leverage the kind of looping and delaying from count without rewriting the control flow. The logic in that plan isn't just a trivial for loop.
How Has This Been Tested?
count still works after being reimplemented using repeat
Misc Thoughts
I'm not in love with this implementation because I feel like it's cleaner to make a new plan with a simple for loop
This is better for the health of the library than implementing a per_step kwarg in count, and it accomplishes almost the same thing
It is up to the user to define a descriptive metadata dict and provide start and stop documents because the repeat plan doesn't do anything without being handed a plan. Fundamentally, a plan like this is not a count and each case needs to be treated differently.
I haven't pushed any docs updates related to this new stub plan
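For context, a stripped-down sketch of what such a repeat stub can look like; this is not the exact implementation in this PR, and it assumes the plan is passed in as a callable so it can be re-instantiated on every iteration:
import bluesky.plan_stubs as bps

def repeat(plan_factory, num=1, delay=None):
    # Run the plan produced by plan_factory `num` times, sleeping `delay` seconds between runs.
    for i in range(num):
        yield from plan_factory()
        if delay is not None and i < num - 1:
            yield from bps.sleep(delay)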
Codecov Report
Merging #936 into master will decrease coverage by 0.01%.
The diff coverage is 93.47%.
@@ Coverage Diff @@
## master #936 +/- ##
==========================================
- Coverage 90.38% 90.37% -0.02%
==========================================
Files 46 46
Lines 7623 7634 +11
==========================================
+ Hits 6890 6899 +9
- Misses 733 735 +2
| Impacted Files | Coverage Δ |
|---|---|
| bluesky/plans.py | 85.96% <100%> (-0.56%) :arrow_down: |
| bluesky/plan_stubs.py | 95.3% <92.5%> (-1.19%) :arrow_down: |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 3c38886...580f0d5. Read the comment docs.
@danielballan I plan to implement all your suggestions soon
|
2025-04-01T04:10:42.591539
| 2017-01-09T16:32:13 |
199597992
|
{
"authors": [
"Brudhu",
"arkilic",
"danielballan"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15254",
"repo": "NSLS-II/lightsource2-recipes",
"url": "https://github.com/NSLS-II/lightsource2-recipes/pull/74"
}
|
gharchive/pull-request
|
Update isstools and add missing deps.
Update to isstools v0.3 (just tagged) and add deps that were missing and being manually installed. Some of these deps aren't packaged yet:
[x] pysmbc -- Neither anaconda nor conda-forge package this yet. (Do not confuse it with pysmbclient, which is a separate package and is packaged by conda-forge.) @arkilic is working on a recipe.
[x] ftplib -- Neither anaconda nor conda-forge package this yet because it's a built-in haha
[Previous comment edited to fix crucial misspelling fplib -> ftplib.]
attn @Brudhu
One last open question: @arkilic points out that some of the content of this "analysis" package might only be used for collection. That looks possible. @Brudhu, please advise. Do you need parts of isstools for data retrieval / analysis or only for collection?
Dan, parts of isstools are used for data retrieval / analysis too.
I still need to fix part of the pizzaboxes data retrieval to use filestore. For now we are still parsing the files directly.
Great, then we can leave this as is. This new 08-id-iss-analysis package will be installed in both 'collection' and 'analysis' conda environments. If you ever develop separate packages that should only be installed in 'collection' environments, we'll create a separate package, 08-id-iss-collection.
Okay! Thanks!
Please keep track of all the requirements you add, like we discussed the other day 🥇
I will try to. :)
|
2025-04-01T04:10:42.608973
| 2018-09-19T04:28:55 |
361573970
|
{
"authors": [
"FeynmanDNA",
"scboesch"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15255",
"repo": "NUS-ALSET/achievements",
"url": "https://github.com/NUS-ALSET/achievements/issues/440"
}
|
gharchive/issue
|
Clicking solve on a Code Combat path activity should open the level in another tab
Users should be able to open up the next Code Combat level on a path that they haven't solved by clicking the Solve button. If the level has not been solved (after refreshing the profile), the message can say "Opening up this level in another tab."
If the level has already been solved, then the checkmark should be shown.
related to the feature request #465
If the CodeCombat profile is empty on the Profile page or in the "Enter your Code Combat username" question, clicking Solve in the CodeCombat Levels should display a popup message of:
Please implement a check for the CodeCombat profile, or the status check for the "Enter your Code Combat username" question. If the CodeCombat profile check passes, first refresh the achievements of that profile. Then either display the checkmark or redirect to the CodeCombat level.
|
2025-04-01T04:10:42.625522
| 2021-03-02T02:38:46 |
819545989
|
{
"authors": [
"jackey42",
"openbsod",
"tokk-nv",
"zinwalin"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15256",
"repo": "NVIDIA-AI-IOT/jetracer",
"url": "https://github.com/NVIDIA-AI-IOT/jetracer/issues/94"
}
|
gharchive/issue
|
jetracer basic motion error
sudo pip3 install --upgrade pip
sudo pip3 uninstall Adafruit-PureIO
sudo pip3 install Adafruit-PureIO
We already tried this, but the error is not solved.
Please help ;n;
Hi~!
I'm using Waveshare JetRacer.
-----Original Message-----
From: "Chitoku<EMAIL_ADDRESS>To<EMAIL_ADDRESS>Cc<EMAIL_ADDRESS><EMAIL_ADDRESS>Sent: 2021-03-10 (수) 16:19:42 (GMT+09:00)
Subject: Re: [NVIDIA-AI-IOT/jetracer] jetracer basic motion error (#94)
Hi jackey42,
Can you tell me what vehicle hardware you are using?
(Latrax? Tamiya? or Waveshare JetRacer? Waveshare JetRacer Pro?)
Use the Waveshare image, and deploy the patches for the specific hardware.
Hi jackey42,
As zinwalin suggested, please check out the Waveshare image which they also posted in this issue here.
https://github.com/NVIDIA-AI-IOT/jetracer/issues/96
Hello jackey42
Met same errors with JetRacer Pro. So my conditions are:
JetRacer Pro https://www.waveshare.com/wiki/JetRacer_Pro_AI_Kit
jetcard_nano-4gb-jp451 https://doc-0c-bc-docs.googleusercontent.com/docs/securesc/oca2mgdh5rj4h8hbtk0do6b37cs03g8n/rqmq8oajntfduih3euka329cmkam653k/1619391975000/08570363794112935001/15224068068018828316/1MX-z7ZCPvUzpN3nGhfZMAgENtK6VnBdh?e=download&authuser=0&nonce=4tsnrhoii3rl4&user=15224068068018828316&hash=8me5382sk3s71098iksrdem94gv256n9
Jetson Nano (A02) 4GB
branch ws/pro https://github.com/waveshare/jetracer/tree/ws/pro
With this branch I can get the basic motion notebook almost fully working except for backward motion. It can move forward like a normal JetRacer does, but moves backward with the 'donkeycar' scheme: 'Reverse on RC cars is a little tricky because the ESC must receive a reverse pulse, zero pulse, reverse pulse to start to go backwards.' as described at https://docs.donkeycar.com/guide/calibrate/
BR, Ed.
|
2025-04-01T04:10:42.643967
| 2020-09-07T07:02:02 |
694777250
|
{
"authors": [
"ZacheryAU",
"chirayuG-nvidia",
"zehuanw"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15257",
"repo": "NVIDIA-Merlin/HugeCTR",
"url": "https://github.com/NVIDIA-Merlin/HugeCTR/issues/36"
}
|
gharchive/issue
|
No reaction appeared after the training start
Hi professionals,
We tried the steps in the HugeCTR tutorial and picked DeepFM for a trial run. We successfully started the training, but nothing happened after the 'HugeCTR training start' text (we waited for several days).
We tried several network configs, which however only changed max_iter (the network architecture was not changed); same problem.
System: Ubuntu 18.04.4 LTS
GPU: GeForce RTX 2080 Ti
Driver Version: 440.44
CUDA Version: 10.2
Hi, can you please set "display": to 1 in solver parameters in the config file and check if anything is printed. Please ensure you enable "SM 75" when compiling HugeCTR, to work on RTX 2080 Ti.
still freezes on 'HugeCTR training start'
Thank you for trying, can you please build HugeCTR with debug flags and share entire log printed on screen when you start training.
cmake -DCMAKE_BUILD_TYPE=Debug -DSM=75 ..
Nothing appeared on the screen after 'HugeCTR training start'.
Some more information: we skipped making the 'build' folder and ran cmake directly, and we trimmed the size of the Criteo dataset with Python pandas because of our limited RAM. Not sure whether these matter.
I think DeepFM sample should not consume too much memory and if memory allocation failed it will throw exception. We just release our docker container on NGC: https://ngc.nvidia.com/catalog/containers/nvidia:hugectr. Would you like to have a try? It will help you if you have problem of environments.
I installed a newer HugeCTR from GitHub and the problem is solved. Thank you for your attention.
Could you please provide more guidance for this NGC docker container for the people who want to try it?
Thank you for your feedback! Please find the guidance on https://github.com/NVIDIA-Merlin/HugeCTR#getting-started. I would like to close this issue but please let us know if you still encounter problems.
|
2025-04-01T04:10:42.693842
| 2017-08-08T08:14:49 |
248635016
|
{
"authors": [
"Seakia",
"lesterlo"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15258",
"repo": "NVIDIA/DIGITS",
"url": "https://github.com/NVIDIA/DIGITS/issues/1768"
}
|
gharchive/issue
|
Can DIGITS be used to train a network that includes an LSTM?
Hi,
I want to train a model that includes a VGG net and an LSTM module. The LSTM module is connected to the VGG net to do activity recognition. Can DIGITS be used to train it?
As is well known, the special point is the clip marker input, and I have no idea about the clip marker input format needed to create the HDF5 data. Some open-source LSTM code uses a Python layer to create the input source, and it is a little complicated to understand. Can you give me some advice?
Thanks
NVIDIA DIGITS supports TensorFlow since version 6.0; could we use TensorFlow to train an LSTM in NVIDIA DIGITS?
|
2025-04-01T04:10:42.703530
| 2017-08-15T03:52:18 |
250215174
|
{
"authors": [
"BradLarson",
"kingreza"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15259",
"repo": "NVIDIA/DIGITS",
"url": "https://github.com/NVIDIA/DIGITS/issues/1780"
}
|
gharchive/issue
|
SqueezeNet Caffe model tuned in DIGITS
I have tuned a pre-trained SqueezeNet Caffe model using DIGITS. Everything seems to work fine except when I try to classify an image through DIGITS I get the following error:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/flask/app.py", line 1475, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/lib/python2.7/dist-packages/flask/app.py", line 1461, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/share/digits/digits/model/images/classification/views.py", line 291, in image_classification_model_classify_one
predictions, visualizations = job.train_task().infer_one(image, snapshot_epoch=epoch, layers=layers)
File "/usr/share/digits/digits/model/tasks/caffe_train.py", line 1018, in infer_one
layers=layers,
File "/usr/share/digits/digits/model/tasks/caffe_train.py", line 1063, in classify_one
predictions.append( (labels[i], scores[i]) )
IndexError: list index out of range
The same data DB works fine with AlexNet; however, it fails to classify through DIGITS for this new retrained model:
# please cite:
# @article{SqueezeNet,
# Author = {Forrest N. Iandola and Matthew W. Moskewicz and Khalid Ashraf and Song Han and William J. Dally and Kurt Keutzer},
# Title = {SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and $<$1MB model size},
# Journal = {arXiv:1602.07360},
# Year = {2016}
# }
layer {
name: "data"
type: "Data"
top: "data"
top: "label"
include {
phase: TRAIN
}
transform_param {
crop_size: 227
mean_value: 104
mean_value: 117
mean_value: 123
}
}
layer {
name: "data"
type: "Data"
top: "data"
top: "label"
include {
phase: TEST
}
transform_param {
crop_size: 227
mean_value: 104
mean_value: 117
mean_value: 123
}
}
layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
convolution_param {
num_output: 64
kernel_size: 3
stride: 2
weight_filler {
type: "xavier"
}
}
}
layer {
name: "relu_conv1"
type: "ReLU"
bottom: "conv1"
top: "conv1"
}
layer {
name: "pool1"
type: "Pooling"
bottom: "conv1"
top: "pool1"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layer {
name: "fire2/squeeze1x1"
type: "Convolution"
bottom: "pool1"
top: "fire2/squeeze1x1"
convolution_param {
num_output: 16
kernel_size: 1
weight_filler {
type: "xavier"
}
}
}
layer {
name: "fire2/relu_squeeze1x1"
type: "ReLU"
bottom: "fire2/squeeze1x1"
top: "fire2/squeeze1x1"
}
layer {
name: "fire2/expand1x1"
type: "Convolution"
bottom: "fire2/squeeze1x1"
top: "fire2/expand1x1"
convolution_param {
num_output: 64
kernel_size: 1
weight_filler {
type: "xavier"
}
}
}
layer {
name: "fire2/relu_expand1x1"
type: "ReLU"
bottom: "fire2/expand1x1"
top: "fire2/expand1x1"
}
layer {
name: "fire2/expand3x3"
type: "Convolution"
bottom: "fire2/squeeze1x1"
top: "fire2/expand3x3"
convolution_param {
num_output: 64
pad: 1
kernel_size: 3
weight_filler {
type: "xavier"
}
}
}
layer {
name: "fire2/relu_expand3x3"
type: "ReLU"
bottom: "fire2/expand3x3"
top: "fire2/expand3x3"
}
layer {
name: "fire2/concat"
type: "Concat"
bottom: "fire2/expand1x1"
bottom: "fire2/expand3x3"
top: "fire2/concat"
}
layer {
name: "fire3/squeeze1x1"
type: "Convolution"
bottom: "fire2/concat"
top: "fire3/squeeze1x1"
convolution_param {
num_output: 16
kernel_size: 1
weight_filler {
type: "xavier"
}
}
}
layer {
name: "fire3/relu_squeeze1x1"
type: "ReLU"
bottom: "fire3/squeeze1x1"
top: "fire3/squeeze1x1"
}
layer {
name: "fire3/expand1x1"
type: "Convolution"
bottom: "fire3/squeeze1x1"
top: "fire3/expand1x1"
convolution_param {
num_output: 64
kernel_size: 1
weight_filler {
type: "xavier"
}
}
}
layer {
name: "fire3/relu_expand1x1"
type: "ReLU"
bottom: "fire3/expand1x1"
top: "fire3/expand1x1"
}
layer {
name: "fire3/expand3x3"
type: "Convolution"
bottom: "fire3/squeeze1x1"
top: "fire3/expand3x3"
convolution_param {
num_output: 64
pad: 1
kernel_size: 3
weight_filler {
type: "xavier"
}
}
}
layer {
name: "fire3/relu_expand3x3"
type: "ReLU"
bottom: "fire3/expand3x3"
top: "fire3/expand3x3"
}
layer {
name: "fire3/concat"
type: "Concat"
bottom: "fire3/expand1x1"
bottom: "fire3/expand3x3"
top: "fire3/concat"
}
layer {
name: "pool3"
type: "Pooling"
bottom: "fire3/concat"
top: "pool3"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layer {
name: "fire4/squeeze1x1"
type: "Convolution"
bottom: "pool3"
top: "fire4/squeeze1x1"
convolution_param {
num_output: 32
kernel_size: 1
weight_filler {
type: "xavier"
}
}
}
layer {
name: "fire4/relu_squeeze1x1"
type: "ReLU"
bottom: "fire4/squeeze1x1"
top: "fire4/squeeze1x1"
}
layer {
name: "fire4/expand1x1"
type: "Convolution"
bottom: "fire4/squeeze1x1"
top: "fire4/expand1x1"
convolution_param {
num_output: 128
kernel_size: 1
weight_filler {
type: "xavier"
}
}
}
layer {
name: "fire4/relu_expand1x1"
type: "ReLU"
bottom: "fire4/expand1x1"
top: "fire4/expand1x1"
}
layer {
name: "fire4/expand3x3"
type: "Convolution"
bottom: "fire4/squeeze1x1"
top: "fire4/expand3x3"
convolution_param {
num_output: 128
pad: 1
kernel_size: 3
weight_filler {
type: "xavier"
}
}
}
layer {
name: "fire4/relu_expand3x3"
type: "ReLU"
bottom: "fire4/expand3x3"
top: "fire4/expand3x3"
}
layer {
name: "fire4/concat"
type: "Concat"
bottom: "fire4/expand1x1"
bottom: "fire4/expand3x3"
top: "fire4/concat"
}
layer {
name: "fire5/squeeze1x1"
type: "Convolution"
bottom: "fire4/concat"
top: "fire5/squeeze1x1"
convolution_param {
num_output: 32
kernel_size: 1
weight_filler {
type: "xavier"
}
}
}
layer {
name: "fire5/relu_squeeze1x1"
type: "ReLU"
bottom: "fire5/squeeze1x1"
top: "fire5/squeeze1x1"
}
layer {
name: "fire5/expand1x1"
type: "Convolution"
bottom: "fire5/squeeze1x1"
top: "fire5/expand1x1"
convolution_param {
num_output: 128
kernel_size: 1
weight_filler {
type: "xavier"
}
}
}
layer {
name: "fire5/relu_expand1x1"
type: "ReLU"
bottom: "fire5/expand1x1"
top: "fire5/expand1x1"
}
layer {
name: "fire5/expand3x3"
type: "Convolution"
bottom: "fire5/squeeze1x1"
top: "fire5/expand3x3"
convolution_param {
num_output: 128
pad: 1
kernel_size: 3
weight_filler {
type: "xavier"
}
}
}
layer {
name: "fire5/relu_expand3x3"
type: "ReLU"
bottom: "fire5/expand3x3"
top: "fire5/expand3x3"
}
layer {
name: "fire5/concat"
type: "Concat"
bottom: "fire5/expand1x1"
bottom: "fire5/expand3x3"
top: "fire5/concat"
}
layer {
name: "pool5"
type: "Pooling"
bottom: "fire5/concat"
top: "pool5"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layer {
name: "fire6/squeeze1x1"
type: "Convolution"
bottom: "pool5"
top: "fire6/squeeze1x1"
convolution_param {
num_output: 48
kernel_size: 1
weight_filler {
type: "xavier"
}
}
}
layer {
name: "fire6/relu_squeeze1x1"
type: "ReLU"
bottom: "fire6/squeeze1x1"
top: "fire6/squeeze1x1"
}
layer {
name: "fire6/expand1x1"
type: "Convolution"
bottom: "fire6/squeeze1x1"
top: "fire6/expand1x1"
convolution_param {
num_output: 192
kernel_size: 1
weight_filler {
type: "xavier"
}
}
}
layer {
name: "fire6/relu_expand1x1"
type: "ReLU"
bottom: "fire6/expand1x1"
top: "fire6/expand1x1"
}
layer {
name: "fire6/expand3x3"
type: "Convolution"
bottom: "fire6/squeeze1x1"
top: "fire6/expand3x3"
convolution_param {
num_output: 192
pad: 1
kernel_size: 3
weight_filler {
type: "xavier"
}
}
}
layer {
name: "fire6/relu_expand3x3"
type: "ReLU"
bottom: "fire6/expand3x3"
top: "fire6/expand3x3"
}
layer {
name: "fire6/concat"
type: "Concat"
bottom: "fire6/expand1x1"
bottom: "fire6/expand3x3"
top: "fire6/concat"
}
layer {
name: "fire7/squeeze1x1"
type: "Convolution"
bottom: "fire6/concat"
top: "fire7/squeeze1x1"
convolution_param {
num_output: 48
kernel_size: 1
weight_filler {
type: "xavier"
}
}
}
layer {
name: "fire7/relu_squeeze1x1"
type: "ReLU"
bottom: "fire7/squeeze1x1"
top: "fire7/squeeze1x1"
}
layer {
name: "fire7/expand1x1"
type: "Convolution"
bottom: "fire7/squeeze1x1"
top: "fire7/expand1x1"
convolution_param {
num_output: 192
kernel_size: 1
weight_filler {
type: "xavier"
}
}
}
layer {
name: "fire7/relu_expand1x1"
type: "ReLU"
bottom: "fire7/expand1x1"
top: "fire7/expand1x1"
}
layer {
name: "fire7/expand3x3"
type: "Convolution"
bottom: "fire7/squeeze1x1"
top: "fire7/expand3x3"
convolution_param {
num_output: 192
pad: 1
kernel_size: 3
weight_filler {
type: "xavier"
}
}
}
layer {
name: "fire7/relu_expand3x3"
type: "ReLU"
bottom: "fire7/expand3x3"
top: "fire7/expand3x3"
}
layer {
name: "fire7/concat"
type: "Concat"
bottom: "fire7/expand1x1"
bottom: "fire7/expand3x3"
top: "fire7/concat"
}
layer {
name: "fire8/squeeze1x1"
type: "Convolution"
bottom: "fire7/concat"
top: "fire8/squeeze1x1"
convolution_param {
num_output: 64
kernel_size: 1
weight_filler {
type: "xavier"
}
}
}
layer {
name: "fire8/relu_squeeze1x1"
type: "ReLU"
bottom: "fire8/squeeze1x1"
top: "fire8/squeeze1x1"
}
layer {
name: "fire8/expand1x1"
type: "Convolution"
bottom: "fire8/squeeze1x1"
top: "fire8/expand1x1"
convolution_param {
num_output: 256
kernel_size: 1
weight_filler {
type: "xavier"
}
}
}
layer {
name: "fire8/relu_expand1x1"
type: "ReLU"
bottom: "fire8/expand1x1"
top: "fire8/expand1x1"
}
layer {
name: "fire8/expand3x3"
type: "Convolution"
bottom: "fire8/squeeze1x1"
top: "fire8/expand3x3"
convolution_param {
num_output: 256
pad: 1
kernel_size: 3
weight_filler {
type: "xavier"
}
}
}
layer {
name: "fire8/relu_expand3x3"
type: "ReLU"
bottom: "fire8/expand3x3"
top: "fire8/expand3x3"
}
layer {
name: "fire8/concat"
type: "Concat"
bottom: "fire8/expand1x1"
bottom: "fire8/expand3x3"
top: "fire8/concat"
}
layer {
name: "fire9/squeeze1x1"
type: "Convolution"
bottom: "fire8/concat"
top: "fire9/squeeze1x1"
convolution_param {
num_output: 64
kernel_size: 1
weight_filler {
type: "xavier"
}
}
}
layer {
name: "fire9/relu_squeeze1x1"
type: "ReLU"
bottom: "fire9/squeeze1x1"
top: "fire9/squeeze1x1"
}
layer {
name: "fire9/expand1x1"
type: "Convolution"
bottom: "fire9/squeeze1x1"
top: "fire9/expand1x1"
convolution_param {
num_output: 256
kernel_size: 1
weight_filler {
type: "xavier"
}
}
}
layer {
name: "fire9/relu_expand1x1"
type: "ReLU"
bottom: "fire9/expand1x1"
top: "fire9/expand1x1"
}
layer {
name: "fire9/expand3x3"
type: "Convolution"
bottom: "fire9/squeeze1x1"
top: "fire9/expand3x3"
convolution_param {
num_output: 256
pad: 1
kernel_size: 3
weight_filler {
type: "xavier"
}
}
}
layer {
name: "fire9/relu_expand3x3"
type: "ReLU"
bottom: "fire9/expand3x3"
top: "fire9/expand3x3"
}
layer {
name: "fire9/concat"
type: "Concat"
bottom: "fire9/expand1x1"
bottom: "fire9/expand3x3"
top: "fire9/concat"
}
layer {
name: "drop9"
type: "Dropout"
bottom: "fire9/concat"
top: "fire9/concat"
dropout_param {
dropout_ratio: 0.5
}
}
layer {
name: "conv10"
type: "Convolution"
bottom: "fire9/concat"
top: "conv10"
convolution_param {
num_output: 1000
kernel_size: 1
weight_filler {
type: "gaussian"
mean: 0.0
std: 0.01
}
}
}
layer {
name: "relu_conv10"
type: "ReLU"
bottom: "conv10"
top: "conv10"
}
layer {
name: "pool10"
type: "Pooling"
bottom: "conv10"
top: "pool10"
pooling_param {
pool: AVE
global_pooling: true
}
}
layer {
name: "loss"
type: "SoftmaxWithLoss"
bottom: "pool10"
bottom: "label"
top: "loss"
#include {
# phase: TRAIN
#}
}
layer {
name: "accuracy_food"
type: "Accuracy"
bottom: "pool10"
bottom: "label"
top: "accuracy_food"
#include {
# phase: TEST
#}
}
If I'm not mistaken, I believe this is related to the same issue as reported in #1492, for which a solution was proposed in pull request #1536.
The core issue is that SqueezeNet is a fully convolutional network, with a pooling layer feeding into the softmax operator, instead of using a fully connected layer at the end like AlexNet. That leads to a different dimensionality in that last layer, which causes the DIGITS classification view to have problems. That pull request seemed to fix the issue for me when I tried it.
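A toy illustration of that dimensionality difference (hypothetical shapes, not DIGITS code): the fully connected head yields a flat score vector per image, while the global-pooled convolutional head yields a 4-D blob that has to be flattened before indexing scores[i]:
import numpy as np

fc_scores   = np.zeros((1, 1000))        # AlexNet-style fc8 output: (batch, classes)
conv_scores = np.zeros((1, 1000, 1, 1))  # SqueezeNet-style pool10 output: (batch, classes, 1, 1)

flat = conv_scores.reshape(conv_scores.shape[0], -1)  # back to (batch, classes)
assert flat.shape == fc_scores.shape                  # now scores[i] indexing works as expected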
|
2025-04-01T04:10:42.718243
| 2016-05-10T20:33:26 |
154100121
|
{
"authors": [
"gheinrich",
"lukeyeager",
"mpbrigham"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15260",
"repo": "NVIDIA/DIGITS",
"url": "https://github.com/NVIDIA/DIGITS/pull/733"
}
|
gharchive/pull-request
|
Update standard networks for Caffe
This pull request makes 3 updates to the Caffe standard networks:
Give unique names to the train/val Data layers in each network, and use stage for include rules instead of phase
Change LeNet to use the Data layer for input scaling during train/val, but still use a Power layer during deploy.
Update the batch sizes
All are now powers of two
AlexNet and GoogLeNet use the training batch sizes that were used in their original papers
The first change is purely cosmetic. The second may have a slight but negligible improvement in performance. I made the third change because cuDNN typically prefers batch sizes that are even powers of two.
I ran some experiments on the new networks to verify that the batch size changes didn't break anything.
OS: Ubuntu 16.04
NVcaffe: 0.14.5
cuDNN: 4.0.7
GPU0: GTX980 - Maxwell, 4GB
GPU1: K40 - Kepler, 12GB
Runtime (most are slightly improved - :+1:)
| Network | 980+cuDNN | 980 | K40+cuDNN | K40 |
|---|---|---|---|---|
| Old AlexNet (100/100) | 1m06s | 1m56s | 1m13s | 2m32s |
| New AlexNet (128/32) | 1m10s | 1m51s | 1m12s | 2m30s |
| Old GoogLeNet (24/24) | 2m32s | 6m53s | 4m57s | 8m37s |
| New GoogLeNet (32/16) | 2m24s | 6m50s | 4m25s | 8m35s |
Memory utilization (nothing runs out of memory - :+1:)
| Network | 980+cuDNN | 980 | K40+cuDNN | K40 |
|---|---|---|---|---|
| Old AlexNet (100/100) | 3572 | 3219 | 3809 | 3214 |
| New AlexNet (128/32) | 3559 | 2991 | 3699 | 2985 |
| Old GoogLeNet (24/24) | 2974 | 3056 | 3097 | 3050 |
| New GoogLeNet (32/16) | 3542 | 3387 | 4682 | 3381 |
Full data:
https://gist.github.com/lukeyeager/4ddd9e1388f8bd70d337b2c80dd0a035
That looks good to me. I suppose I'll need to update batch sizes for Torch too.
Hi Luke,
You mention a slight performance improvement for change #2. Is that due to power scaling in the data layer being faster than a scale layer? In any case, new users may find it a little confusing to see three power scaling operations rather than just one:
layer {
name: "train-data"
type: "Data"
top: "data"
top: "label"
include { stage: "train" }
data_param { batch_size: 64 }
}
layer {
name: "val-data"
type: "Data"
top: "data"
top: "label"
include { stage: "val" }
data_param { batch_size: 64 }
}
layer {
name: "scale"
type: "Power"
bottom: "data"
top: "scaled"
power_param { scale: 0.0125 }
}
Also, what does the comment "# 1/(standard deviation)" actually mean?
Is that due to power scaling in data layer being faster than scale layer?
Yes, that's the reason. It's handled in the multi-threaded data loader.
Also, what does the comment "# 1/(standard deviation)" actually mean?
The standard deviation for the MNIST dataset is ~80 per pixel (from a range of [0-255] per pixel).
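A quick way to check where the 0.0125 comes from, assuming you have the raw MNIST training pixels on hand (the file path below is just a placeholder):
import numpy as np

pixels = np.load("mnist_train_images.npy")  # placeholder path to uint8 pixels in [0, 255]
print(pixels.std())                         # roughly 80 for MNIST
print(1.0 / pixels.std())                   # roughly 0.0125, the Power layer's scale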
Thanks for the feedback, Luke. Seeing that training on the MNIST dataset is not very computationally expensive, I would still suggest the simpler network definition with a single scale layer, which is much clearer for new users.
Would it be helpful to change the comment # 1/(standard deviation) to # 1/(standard deviation on MNIST dataset)?
I'm comfortable with those changes, yes. Would you like to make a PR for it?
That's great, thank you. Just submitted #976 .
|
2025-04-01T04:10:42.729987
| 2020-12-28T16:15:41 |
775475910
|
{
"authors": [
"Jianbing-D",
"serser"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15261",
"repo": "NVIDIA/HugeCTR",
"url": "https://github.com/NVIDIA/HugeCTR/issues/198"
}
|
gharchive/issue
|
[Question] No registered 'Const' OpKernel for 'GPU' devices compatible with HugeCTR node
Finally got the HugeCTR embedding wrapper to run with the following,
emb = PluginEmbedding(vocabulary_size=voc_size, slot_num=slot_num,
embedding_vec_size=vec_size,
embedding_type=embedding_type,
gpu_count=gpu_count, initializer=initializer)
indices = tf.where(keys != -1)
sparse_indices = tf.reshape(keys, [-1, -1, 1])
value_tensors=tf.gather_nd(keys, indices)
out = emb(sparse_indices, value_tensors, sparse_indices.shape)
but a new problem appears,
[28d16h05m32s][HUGECTR][INFO]: max_vocabulary_size_per_gpu_=1106229
[28d16h05m32s][HUGECTR][INFO]: All2All Warmup Start
[28d16h05m32s][HUGECTR][INFO]: All2All Warmup End
[28d16h05m32s][HUGECTR][INFO]: gpu0 start to init embedding
[28d16h05m32s][HUGECTR][INFO]: gpu0 init embedding done
2020-12-28 16:05:35.604036: W tensorflow/core/grappler/utils/graph_view.cc:836] No registered 'Const' OpKernel for GPU devices compatible with node {{node inputs/ctr_SynCate1Cos/HugectrCreateEmbedding}}
(OpKernel was found, but attributes didn't match) Requested Attributes: dtype=DT_STRING, value=Tensor<type: string shape: [] values: plugin_embedding>, _device="/job:localhost/replica:0/task:0/device:GPU:0"
. Registered: device='XLA_CPU_JIT'; dtype in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, DT_INT8, DT_COMPLEX64, DT_INT64, DT_BOOL, DT_QINT8, DT_QUINT8, DT_QINT32, DT_BFLOAT16, DT_UINT16, DT_COMPLEX128, DT_HALF, DT_UINT32, DT_UINT64, DT_STRING]
device='XLA_GPU_JIT'; dtype in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, DT_INT8, DT_COMPLEX64, DT_INT64, DT_BOOL, DT_QINT8, DT_QUINT8, DT_QINT32, DT_BFLOAT16, DT_UINT16, DT_COMPLEX128, DT_HALF, DT_UINT32, DT_UINT64, DT_STRING]
device='TPU_SYSTEM'
device='CPU'
device='GPU'; dtype in [DT_VARIANT]
device='GPU'; dtype in [DT_BOOL]
device='GPU'; dtype in [DT_COMPLEX128]
device='GPU'; dtype in [DT_COMPLEX64]
device='GPU'; dtype in [DT_UINT64]
device='GPU'; dtype in [DT_INT64]
device='GPU'; dtype in [DT_QINT32]
device='GPU'; dtype in [DT_UINT32]
device='GPU'; dtype in [DT_QUINT16]
device='GPU'; dtype in [DT_QINT16]
device='GPU'; dtype in [DT_INT16]
device='GPU'; dtype in [DT_UINT16]
device='GPU'; dtype in [DT_QINT8]
device='GPU'; dtype in [DT_INT8]
device='GPU'; dtype in [DT_UINT8]
device='GPU'; dtype in [DT_DOUBLE]
device='GPU'; dtype in [DT_FLOAT]
device='GPU'; dtype in [DT_BFLOAT16]
device='GPU'; dtype in [DT_HALF]
device='GPU'; dtype in [DT_INT32]
device='XLA_CPU'; dtype in [DT_UINT8, DT_QUINT8, DT_UINT16, DT_INT8, DT_QINT8, DT_INT16, DT_INT32, DT_QINT32, DT_INT64, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_COMPLEX64, DT_COMPLEX128, DT_BOOL, DT_BFLOAT16]
device='XLA_GPU'; dtype in [DT_UINT8, DT_QUINT8, DT_UINT16, DT_INT8, DT_QINT8, DT_INT16, DT_INT32, DT_QINT32, DT_INT64, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_COMPLEX64, DT_COMPLEX128, DT_BOOL, DT_BFLOAT16]
ERROR:tensorflow:No registered 'Const' OpKernel for 'GPU' devices compatible with node node inputs/ctr_SynCate1Cos/HugectrCreateEmbedding (defined at <string>:273)
(OpKernel was found, but attributes didn't match) Requested Attributes: _XlaHasReferenceVars=false<EMAIL_ADDRESS>dtype=DT_STRING, value=Tensor<type: string shape: [] values: plugin_embedding>, _device="/job:localhost/replica:0/task:0/device:GPU:0"
. Registered: device='XLA_CPU_JIT'; dtype in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, DT_INT8, DT_COMPLEX64, DT_INT64, DT_BOOL, DT_QINT8, DT_QUINT8, DT_QINT32, DT_BFLOAT16, DT_UINT16, DT_COMPLEX128, DT_HALF, DT_UINT32, DT_UINT64, DT_STRING]
device='XLA_GPU_JIT'; dtype in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, DT_INT8, DT_COMPLEX64, DT_INT64, DT_BOOL, DT_QINT8, DT_QUINT8, DT_QINT32, DT_BFLOAT16, DT_UINT16, DT_COMPLEX128, DT_HALF, DT_UINT32, DT_UINT64, DT_STRING]
device='TPU_SYSTEM'
device='CPU'
device='GPU'; dtype in [DT_VARIANT]
device='GPU'; dtype in [DT_BOOL]
device='GPU'; dtype in [DT_COMPLEX128]
device='GPU'; dtype in [DT_COMPLEX64]
device='GPU'; dtype in [DT_UINT64]
device='GPU'; dtype in [DT_INT64]
device='GPU'; dtype in [DT_QINT32]
device='GPU'; dtype in [DT_UINT32]
device='GPU'; dtype in [DT_QUINT16]
device='GPU'; dtype in [DT_QINT16]
device='GPU'; dtype in [DT_INT16]
device='GPU'; dtype in [DT_UINT16]
device='GPU'; dtype in [DT_QINT8]
device='GPU'; dtype in [DT_INT8]
device='GPU'; dtype in [DT_UINT8]
device='GPU'; dtype in [DT_DOUBLE]
device='GPU'; dtype in [DT_FLOAT]
device='GPU'; dtype in [DT_BFLOAT16]
device='GPU'; dtype in [DT_HALF]
device='GPU'; dtype in [DT_INT32]
device='XLA_CPU'; dtype in [DT_UINT8, DT_QUINT8, DT_UINT16, DT_INT8, DT_QINT8, DT_INT16, DT_INT32, DT_QINT32, DT_INT64, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_COMPLEX64, DT_COMPLEX128, DT_BOOL, DT_BFLOAT16]
device='XLA_GPU'; dtype in [DT_UINT8, DT_QUINT8, DT_UINT16, DT_INT8, DT_QINT8, DT_INT16, DT_INT32, DT_QINT32, DT_INT64, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_COMPLEX64, DT_COMPLEX128, DT_BOOL, DT_BFLOAT16]
[[inputs/ctr_SynCate1Cos/HugectrCreateEmbedding]]
E1228 16:05:43.048715<PHONE_NUMBER>04416 task_run.py:305] No registered 'Const' OpKernel for 'GPU' devices compatible with node node inputs/ctr_SynCate1Cos/HugectrCreateEmbedding (defined at <string>:273)
(OpKernel was found, but attributes didn't match) Requested Attributes: _XlaHasReferenceVars=false<EMAIL_ADDRESS>dtype=DT_STRING, value=Tensor<type: string shape: [] values: plugin_embedding>, _device="/job:localhost/replica:0/task:0/device:GPU:0"
. Registered: device='XLA_CPU_JIT'; dtype in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, DT_INT8, DT_COMPLEX64, DT_INT64, DT_BOOL, DT_QINT8, DT_QUINT8, DT_QINT32, DT_BFLOAT16, DT_UINT16, DT_COMPLEX128, DT_HALF, DT_UINT32, DT_UINT64, DT_STRING]
device='XLA_GPU_JIT'; dtype in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, DT_INT8, DT_COMPLEX64, DT_INT64, DT_BOOL, DT_QINT8, DT_QUINT8, DT_QINT32, DT_BFLOAT16, DT_UINT16, DT_COMPLEX128, DT_HALF, DT_UINT32, DT_UINT64, DT_STRING]
device='TPU_SYSTEM'
device='CPU'
device='GPU'; dtype in [DT_VARIANT]
device='GPU'; dtype in [DT_BOOL]
device='GPU'; dtype in [DT_COMPLEX128]
device='GPU'; dtype in [DT_COMPLEX64]
device='GPU'; dtype in [DT_UINT64]
device='GPU'; dtype in [DT_INT64]
device='GPU'; dtype in [DT_QINT32]
device='GPU'; dtype in [DT_UINT32]
device='GPU'; dtype in [DT_QUINT16]
device='GPU'; dtype in [DT_QINT16]
device='GPU'; dtype in [DT_INT16]
device='GPU'; dtype in [DT_UINT16]
device='GPU'; dtype in [DT_QINT8]
device='GPU'; dtype in [DT_INT8]
device='GPU'; dtype in [DT_UINT8]
device='GPU'; dtype in [DT_DOUBLE]
device='GPU'; dtype in [DT_FLOAT]
device='GPU'; dtype in [DT_BFLOAT16]
device='GPU'; dtype in [DT_HALF]
device='GPU'; dtype in [DT_INT32]
device='XLA_CPU'; dtype in [DT_UINT8, DT_QUINT8, DT_UINT16, DT_INT8, DT_QINT8, DT_INT16, DT_INT32, DT_QINT32, DT_INT64, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_COMPLEX64, DT_COMPLEX128, DT_BOOL, DT_BFLOAT16]
device='XLA_GPU'; dtype in [DT_UINT8, DT_QUINT8, DT_UINT16, DT_INT8, DT_QINT8, DT_INT16, DT_INT32, DT_QINT32, DT_INT64, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_COMPLEX64, DT_COMPLEX128, DT_BOOL, DT_BFLOAT16]
[[inputs/ctr_SynCate1Cos/HugectrCreateEmbedding]]
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1375, in _do_call
return fn(*args)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1360, in _run_fn
target_list, run_metadata)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1453, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.NotFoundError: No registered 'Const' OpKernel for 'GPU' devices compatible with node {{node inputs/ctr_SynCate1Cos/HugectrCreateEmbedding}}
(OpKernel was found, but attributes didn't match) Requested Attributes: _XlaHasReferenceVars=false<EMAIL_ADDRESS>dtype=DT_STRING, value=Tensor<type: string shape: [] values: plugin_embedding>, _device="/job:localhost/replica:0/task:0/device:GPU:0"
. Registered: device='XLA_CPU_JIT'; dtype in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, DT_INT8, DT_COMPLEX64, DT_INT64, DT_BOOL, DT_QINT8, DT_QUINT8, DT_QINT32, DT_BFLOAT16, DT_UINT16, DT_COMPLEX128, DT_HALF, DT_UINT32, DT_UINT64, DT_STRING]
device='XLA_GPU_JIT'; dtype in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, DT_INT8, DT_COMPLEX64, DT_INT64, DT_BOOL, DT_QINT8, DT_QUINT8, DT_QINT32, DT_BFLOAT16, DT_UINT16, DT_COMPLEX128, DT_HALF, DT_UINT32, DT_UINT64, DT_STRING]
device='TPU_SYSTEM'
device='CPU'
device='GPU'; dtype in [DT_VARIANT]
device='GPU'; dtype in [DT_BOOL]
device='GPU'; dtype in [DT_COMPLEX128]
device='GPU'; dtype in [DT_COMPLEX64]
device='GPU'; dtype in [DT_UINT64]
device='GPU'; dtype in [DT_INT64]
device='GPU'; dtype in [DT_QINT32]
device='GPU'; dtype in [DT_UINT32]
device='GPU'; dtype in [DT_QUINT16]
device='GPU'; dtype in [DT_QINT16]
device='GPU'; dtype in [DT_INT16]
device='GPU'; dtype in [DT_UINT16]
device='GPU'; dtype in [DT_QINT8]
device='GPU'; dtype in [DT_INT8]
device='GPU'; dtype in [DT_UINT8]
device='GPU'; dtype in [DT_DOUBLE]
device='GPU'; dtype in [DT_FLOAT]
device='GPU'; dtype in [DT_BFLOAT16]
device='GPU'; dtype in [DT_HALF]
device='GPU'; dtype in [DT_INT32]
device='XLA_CPU'; dtype in [DT_UINT8, DT_QUINT8, DT_UINT16, DT_INT8, DT_QINT8, DT_INT16, DT_INT32, DT_QINT32, DT_INT64, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_COMPLEX64, DT_COMPLEX128, DT_BOOL, DT_BFLOAT16]
device='XLA_GPU'; dtype in [DT_UINT8, DT_QUINT8, DT_UINT16, DT_INT8, DT_QINT8, DT_INT16, DT_INT32, DT_QINT32, DT_INT64, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_COMPLEX64, DT_COMPLEX128, DT_BOOL, DT_BFLOAT16]
[[inputs/ctr_SynCate1Cos/HugectrCreateEmbedding]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "task_run.py", line 311, in <module>
tf.compat.v1.app.run()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 303, in run
_run_main(main, args)
File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 251, in _run_main
sys.exit(main(argv))
File "task_run.py", line 303, in main
task.train()
File "task_run.py", line 247, in train
self.model.train(self.train_files, self.valid_files)
File "/workdir/zhangbo97/auto-ml-repo-tf-upgrade-v2-target/dpsr-tf/model/dp_model.py", line 572, in train
tf.estimator.train_and_evaluate(estimator, train_spec, valid_spec)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/training.py", line 505, in train_and_evaluate
return executor.run()
File "/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/training.py", line 646, in run
return self.run_local()
File "/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/training.py", line 747, in run_local
saving_listeners=saving_listeners)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 349, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1175, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1208, in _train_model_default
saving_listeners)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1514, in _train_with_estimator_spec
_, loss = mon_sess.run([estimator_spec.train_op, estimator_spec.loss])
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/monitored_session.py", line 778, in run
run_metadata=run_metadata)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/monitored_session.py", line 1283, in run
run_metadata=run_metadata)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/monitored_session.py", line 1384, in run
raise six.reraise(*original_exc_info)
File "/usr/local/lib/python3.6/dist-packages/six.py", line 703, in reraise
raise value
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/monitored_session.py", line 1369, in run
return self._sess.run(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/monitored_session.py", line 1442, in run
run_metadata=run_metadata)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/monitored_session.py", line 1200, in run
return self._sess.run(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 968, in run
run_metadata_ptr)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1191, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1369, in _do_run
run_metadata)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1394, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: No registered 'Const' OpKernel for 'GPU' devices compatible with node node inputs/ctr_SynCate1Cos/HugectrCreateEmbedding (defined at <string>:273)
(OpKernel was found, but attributes didn't match) Requested Attributes: _XlaHasReferenceVars=false<EMAIL_ADDRESS>dtype=DT_STRING, value=Tensor<type: string shape: [] values: plugin_embedding>, _device="/job:localhost/replica:0/task:0/device:GPU:0"
. Registered: device='XLA_CPU_JIT'; dtype in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, DT_INT8, DT_COMPLEX64, DT_INT64, DT_BOOL, DT_QINT8, DT_QUINT8, DT_QINT32, DT_BFLOAT16, DT_UINT16, DT_COMPLEX128, DT_HALF, DT_UINT32, DT_UINT64, DT_STRING]
device='XLA_GPU_JIT'; dtype in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, DT_INT8, DT_COMPLEX64, DT_INT64, DT_BOOL, DT_QINT8, DT_QUINT8, DT_QINT32, DT_BFLOAT16, DT_UINT16, DT_COMPLEX128, DT_HALF, DT_UINT32, DT_UINT64, DT_STRING]
device='TPU_SYSTEM'
device='CPU'
device='GPU'; dtype in [DT_VARIANT]
device='GPU'; dtype in [DT_BOOL]
device='GPU'; dtype in [DT_COMPLEX128]
device='GPU'; dtype in [DT_COMPLEX64]
device='GPU'; dtype in [DT_UINT64]
device='GPU'; dtype in [DT_INT64]
device='GPU'; dtype in [DT_QINT32]
device='GPU'; dtype in [DT_UINT32]
device='GPU'; dtype in [DT_QUINT16]
device='GPU'; dtype in [DT_QINT16]
device='GPU'; dtype in [DT_INT16]
device='GPU'; dtype in [DT_UINT16]
device='GPU'; dtype in [DT_QINT8]
device='GPU'; dtype in [DT_INT8]
device='GPU'; dtype in [DT_UINT8]
device='GPU'; dtype in [DT_DOUBLE]
device='GPU'; dtype in [DT_FLOAT]
device='GPU'; dtype in [DT_BFLOAT16]
device='GPU'; dtype in [DT_HALF]
device='GPU'; dtype in [DT_INT32]
device='XLA_CPU'; dtype in [DT_UINT8, DT_QUINT8, DT_UINT16, DT_INT8, DT_QINT8, DT_INT16, DT_INT32, DT_QINT32, DT_INT64, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_COMPLEX64, DT_COMPLEX128, DT_BOOL, DT_BFLOAT16]
device='XLA_GPU'; dtype in [DT_UINT8, DT_QUINT8, DT_UINT16, DT_INT8, DT_QINT8, DT_INT16, DT_INT32, DT_QINT32, DT_INT64, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_COMPLEX64, DT_COMPLEX128, DT_BOOL, DT_BFLOAT16]
[[inputs/ctr_SynCate1Cos/HugectrCreateEmbedding]]
Stuck again, could you please help give a clue?
It seems HugectrCreateEmbedding had been successfully executed.
Could you please paste here what you did in your code? For example:
hugectr_tf_ops.init(...)
embedding_name = hugectr_tf_ops.create_embedding(...)
# Then what
One more thing, you are using the official docker, right? If so, then there should be only fprop_v3, whose inputs should be row_offsets, value_tensors, nnz_array.
We updated the embedding_plugin after the v2.3 release, and you can access the newest version from the master branch. For the version in the master branch, fprop_v4 can be used and it is faster than fprop_v3. Moreover, the procedure to process inputs for fprop_v4 is also simpler than that for fprop_v3.
If you want to use fprop_v4, set this argument to v4. And here are the steps to get inputs for this API:
# assume your original input keys with shape [batch_size, slot_num, max_nnz]
keys = ...
# reshape original keys to shape [batch_size * slot_num, max_nnz]
reshape_keys = np.reshape(keys, newshape=[batch_size * slot_num, max_nnz])
# choose valid keys
indices = tf.where(reshape_keys != -1)
values = tf.gather_nd(reshape_keys, indices)
# get row_indices from indices
row_indices = tf.transpose(indices, [1, 0])[0]
# then use row_indices, values as inputs of `fprop_v4`, and its `output_shape` should be set to `[batch_size, slot_num, embedding_vec_size]`
Sorry for the inconvenience and misleading of our documents.
Hi @Jianbing-D, sorry for the delay and thanks so much for the detailed reply. I tried to make a minimal case to reproduce the error but without success. Later I realized it has to be done the TF2 way (eager), not in a TF1-migrated way. I had to rewrite all the code in TF2 following the HugeCTR documentation, and I now face a new issue https://github.com/NVIDIA/HugeCTR/issues/200. Any advice is appreciated.
|
2025-04-01T04:10:42.738752
| 2020-06-11T22:35:01 |
637361501
|
{
"authors": [
"benfred"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15262",
"repo": "NVIDIA/NVTabular",
"url": "https://github.com/NVIDIA/NVTabular/issues/86"
}
|
gharchive/issue
|
[FEA] Configurable row-group size when writing parquet datasets
For best performance we should be writing out parquet datasets with a predefined row group size (say ~128MB per row group).
Right now the number of row groups is dependent on the number of times we call 'write_table' in the cudf parquet writer, which makes the row group sizes somewhat unpredictable in the datasets we write out.
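For illustration, a rough sketch of how a fixed row-group target could be applied when writing with pyarrow directly (the function name and the bytes-to-rows conversion are assumptions for illustration, not NVTabular's actual writer code):
import pyarrow as pa
import pyarrow.parquet as pq
def write_with_target_row_groups(table: pa.Table, path: str, target_bytes: int = 128 * 1024 * 1024):
    # Convert the ~128MB byte target into a row count using the table's in-memory size.
    bytes_per_row = max(1, table.nbytes // max(1, table.num_rows))
    rows_per_group = max(1, target_bytes // bytes_per_row)
    # row_group_size caps the number of rows written per row group in the output file.
    pq.write_table(table, path, row_group_size=rows_per_group)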
Talking to @rjzamora, it seems like it's writing out multiple partitions per write_table call - closing
|
2025-04-01T04:10:42.818801
| 2023-12-20T22:30:08 |
2051386384
|
{
"authors": [
"Quentin-Anthony",
"ksivaman"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15263",
"repo": "NVIDIA/TransformerEngine",
"url": "https://github.com/NVIDIA/TransformerEngine/pull/573"
}
|
gharchive/pull-request
|
Add GPT-NeoX "coming soon" to README
We at EleutherAI are going to integrate the TransformerEngine into our GPT-NeoX library, and wanted to go ahead and indicate this in the README.
Look forward to completing full support within the next month!
Thanks @Quentin-Anthony, this is great! Could you sign-off your commit?
Tired of wrestling with git commit histories. I'm just going to open a new PR with these changes on a clean branch.
|
2025-04-01T04:10:42.914626
| 2023-11-07T19:45:29 |
1982099724
|
{
"authors": [
"glowkey",
"nikkon-dev",
"rswhollan"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15264",
"repo": "NVIDIA/go-dcgm",
"url": "https://github.com/NVIDIA/go-dcgm/pull/45"
}
|
gharchive/pull-request
|
Updates for DCGM 3.3.0
Also includes fix/formatting for WatchPidFields from helinfan
Have you looked at testing/python3/internal_scripts/nvswitch_counter_monitor.py? It uses some of the fields you are deleting. Well, from the python equivalent of dcgm_fields.h, but that should follow dcgm_fields.h.
The deleted fields are no longer present in dcgm_fields.h. Most likely nvswitch_counter_monitor.py needs to be updated.
No, more than that: dcgm_fields.py has to match dcgm_fields.h. I did not see it in your merge request. Nik: this looks incomplete.
No, more than that: dcgm_fields.py has to match dcgm_fields.h. I did not see it in your merge request. Nik: this looks incomplete.
This is not the dcgm repo. The changes you are referring to are irrelevant for Go bindings.
|
2025-04-01T04:10:42.916935
| 2020-01-09T09:54:05 |
547368658
|
{
"authors": [
"chynphh",
"gongchenghhu",
"rafaelvalle"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15265",
"repo": "NVIDIA/mellotron",
"url": "https://github.com/NVIDIA/mellotron/issues/31"
}
|
gharchive/issue
|
The distinction between different speaker with mandarin dataset is not obvious.
When using multi-speaker data, the model cannot distinguish between male and female voices, and there are only slight differences between different speakers. The current training step is 32K. Is this normal? The language is Mandarin.
Is the loss still going down and model is not overfitting, i.e. generalization error is not increasing?
I also use multi-speaker data, including half of biaobei, a private male dataset (5000 sentences), and 4 other small private datasets, 6 speakers in total. When I use these datasets to train mellotron, I can't get a good alignment; the alignment looks like the figure below.
But when I only use the single-speaker dataset biaobei, the alignment is good.
Also, are 6 speakers enough?
@chynphh
|
2025-04-01T04:10:43.071658
| 2022-10-13T09:24:19 |
1407454895
|
{
"authors": [
"GMYL",
"GaryShen2008",
"nvliyuan"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15266",
"repo": "NVIDIA/spark-rapids-examples",
"url": "https://github.com/NVIDIA/spark-rapids-examples/issues/233"
}
|
gharchive/issue
|
Error Unknown CMake command "CPMFindPackage" when using mvn clean package -Pudf-native-examples
When running mvn clean package -Pudf-native-examples, the error Unknown CMake command "CPMFindPackage" occurs. How can it be solved?
Version: 22.06
Action: mvn clean package -Pudf-native-examples
Description: The environment is an intranet environment, where rapids-cmake and RAPIDS.cmake are downloaded manually, and the CMakeLists.txt file in the cpp directory is modified. The main modifications are as follows
# file(DOWNLOAD https://raw.githubusercontent.com/rapidsai/rapids-cmake/branch-22.06/RAPIDS.cmake ${CMAKE_BINARY_DIR}/RAPIDS.cmake)
file(COPY /home/ssd3/target/gmy/2206/RAPIDS-accelerated-UDFs/RAPIDS.cmake DESTINATION ${CMAKE_BINARY_DIR})
include(/home/ssd3/target/gmy/2206/RAPIDS-accelerated-UDFs/target/cpp-build/RAPIDS.cmake)
include(/home/ssd3/target/gmy/2206/RAPIDS-accelerated-UDFs/rapids-cmake/rapids-cmake.cmake)
include(/home/ssd3/target/gmy/2206/RAPIDS-accelerated-UDFs/rapids-cmake/rapids-cpm.cmake)
include(/home/ssd3/target/gmy/2206/RAPIDS-accelerated-UDFs/rapids-cmake/rapids-cuda.cmake)
include(/home/ssd3/target/gmy/2206/RAPIDS-accelerated-UDFs/rapids-cmake/rapids-export.cmake)
include(/home/ssd3/target/gmy/2206/RAPIDS-accelerated-UDFs/rapids-cmake/rapids-find.cmake)
Error message:
[WARNING]
[WARNING] Some problems were encountered while building the effective settings
[WARNING] expected START_TAG or END_TAG not TEXT (position: TEXT seen ...\n\t\n \ua0 \ua0 <i... @110:9) @ /home/ssd3/software/apache-maven-3.6.3/conf/settings.xml, line 110, column 9
[WARNING]
[INFO] Scanning for projects...
[INFO]
[INFO] ------------< com.nvidia:rapids-4-spark-udf-examples_2.12 >-------------
[INFO] Building RAPIDS Accelerator for Apache Spark UDF Examples 22.06.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[WARNING] The POM for org.slf4j:slf4j-api:jar:1.6.1 is invalid, transitive dependencies (if any) will not be available, enable debug logging for more details
[WARNING] The POM for commons-lang:commons-lang:jar:2.6 is invalid, transitive dependencies (if any) will not be available, enable debug logging for more details
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ rapids-4-spark-udf-examples_2.12 ---
[INFO] Deleting /home/ssd3/target/gmy/2206/RAPIDS-accelerated-UDFs/target
[INFO]
[INFO] --- maven-antrun-plugin:3.0.0:run (cmake) @ rapids-4-spark-udf-examples_2.12 ---
[INFO] Executing tasks
[INFO] [mkdir] Created dir: /home/ssd3/target/gmy/2206/RAPIDS-accelerated-UDFs/target/cpp-build
[INFO] [exec] -- CMAKE_BINARY_DIR--------------------------: /home/ssd3/target/gmy/2206/RAPIDS-accelerated-UDFs/target/cpp-build
[INFO] [exec] -- rapids-cmake-dir--------------------------: else!!
[INFO] [exec] -- CMAKE_CURRENT_LIST_DIR--------------------------: /home/ssd3/target/gmy/2206/RAPIDS-accelerated-UDFs/rapids-cmake
[INFO] [exec] -- The C compiler identification is GNU 4.8.5
[INFO] [exec] -- The CXX compiler identification is GNU 4.8.5
[INFO] [exec] -- The CUDA compiler identification is NVIDIA 11.0.194
[INFO] [exec] -- Detecting C compiler ABI info
[INFO] [exec] -- Detecting C compiler ABI info - done
[INFO] [exec] -- Check for working C compiler: /usr/bin/cc - skipped
[INFO] [exec] -- Detecting C compile features
[INFO] [exec] -- Detecting C compile features - done
[INFO] [exec] -- Detecting CXX compiler ABI info
[INFO] [exec] -- Detecting CXX compiler ABI info - done
[INFO] [exec] -- Check for working CXX compiler: /usr/bin/c++ - skipped
[INFO] [exec] -- Detecting CXX compile features
[INFO] [exec] -- Detecting CXX compile features - done
[INFO] [exec] -- Detecting CUDA compiler ABI info
[INFO] [exec] -- Detecting CUDA compiler ABI info - done
[INFO] [exec] -- Check for working CUDA compiler: /usr/local/cuda-11.0/bin/nvcc - skipped
[INFO] [exec] -- Detecting CUDA compile features
[INFO] [exec] -- Detecting CUDA compile features - done
[INFO] [exec] -- CUDA_VERSION_MAJOR: 11
[INFO] [exec] -- CUDA_VERSION_MINOR: 0
[INFO] [exec] -- CUDA_VERSION: 11.0
[INFO] [exec] -- Configuring incomplete, errors occurred!
[INFO] [exec] See also "/home/ssd3/target/gmy/2206/RAPIDS-accelerated-UDFs/target/cpp-build/CMakeFiles/CMakeOutput.log".
[INFO] [exec] CMake Error at /home/ssd3/target/gmy/2206/RAPIDS-accelerated-UDFs/rapids-cmake/cpm/find.cmake:92 (CPMFindPackage):
[INFO] [exec] Unknown CMake command "CPMFindPackage".
[INFO] [exec] Call Stack (most recent call first):
[INFO] [exec] CMakeLists.txt:91 (rapids_cpm_find)
[INFO] [exec]
[INFO] [exec]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 05:16 min
[INFO] Finished at: 2022-10-13T17:11:49+08:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:3.0.0:run (cmake) on project rapids-4-spark-udf-examples_2.12: An Ant BuildException has occured: exec returned: 1
[ERROR] around Ant part ...... @ 5:123 in /home/ssd3/target/gmy/2206/RAPIDS-accelerated-UDFs/target/antrun/build-main.xml
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
I tried once on my local machine, the building failed but didn't reproduce the same failure.
@GMYL
Seems you used [INFO] [exec] -- The CUDA compiler identification is NVIDIA 11.0.194.
I think the cudf 22.06 requires 11.5 according to https://github.com/rapidsai/cudf/blob/branch-22.06/CONTRIBUTING.md#general-requirements.
Can you use the docker container from the dockerfile?
I tried once on my local machine, the building failed but didn't reproduce the same failure.
@GMYL Seems you used [INFO] [exec] -- The CUDA compiler identification is NVIDIA 11.0.194. I think the cudf 22.06 requires 11.5 according to https://github.com/rapidsai/cudf/blob/branch-22.06/CONTRIBUTING.md#general-requirements.
Can you use the docker container from the dockerfile?
Yes, the current version of CUDA for the server environment is really 11.0+.
Currently my server is running spark-rapids-22.06 with CUDA 11.0+, spark-sql and some simple examples of rapids udf.
Now that I want to use RAPIDS-accelerated-UDFs, at first I just ran mvn package for the RAPIDS-accelerated-UDFs project and ran the StringWordCount.java example from the built jar, where the executor failed due to the missing libudfexamplesjni.so.
Then trying to run mvn clean package -Pudf-native-examples resulted in the above error; it needs to be noted that my server network is limited and I cannot access the Internet.
If the CUDA version is the cause of the problem, then I need to upgrade CUDA and try compiling again.
The Dockerfile has not been used yet.
@GaryShen2008
Reference link https://github.com/rapidsai/cudf/blob/branch-22.06/CONTRIBUTING.md#general-requirements
I re-checked and upgraded the software versions on the server.
The software version information on the server is now as follows:
Compilers:
gcc version 10.1.0
nvcc version 11.5.119
cmake version 3.24.2
CUDA/GPU:
CUDA 11.5
NVIDIA driver 495.29.05
GPU Tesla T4
Executing mvn clean package -Pudf-native-examples still produces
"Unknown CMake command "CPMFindPackage"." Is there any other way I can run the string_word_count example in the project?
Latest error message:
[WARNING] Some problems were encountered while building the effective settings
[WARNING] expected START_TAG or END_TAG not TEXT (position: TEXT seen ...\n\t\n \ua0 \ua0 <i... @110:9) @ /home/ssd3/software/apache-maven-3.6.3/conf/settings.xml, line 110, column 9
[WARNING]
[INFO] Scanning for projects...
[INFO]
[INFO] ------------< com.nvidia:rapids-4-spark-udf-examples_2.12 >-------------
[INFO] Building RAPIDS Accelerator for Apache Spark UDF Examples 22.06.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[WARNING] The POM for org.slf4j:slf4j-api:jar:1.6.1 is invalid, transitive dependencies (if any) will not be available, enable debug logging for more details
[WARNING] The POM for commons-lang:commons-lang:jar:2.6 is invalid, transitive dependencies (if any) will not be available, enable debug logging for more details
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ rapids-4-spark-udf-examples_2.12 ---
[INFO]
[INFO] --- maven-antrun-plugin:3.0.0:run (cmake) @ rapids-4-spark-udf-examples_2.12 ---
[INFO] Executing tasks
[INFO] [mkdir] Created dir: /home/ssd3/target/gmy/2206/spark-rapids-examples/examples/UDF-Examples/RAPIDS-accelerated-UDFs/target/cpp-build
[INFO] [exec] -- The C compiler identification is GNU 10.1.0
[INFO] [exec] -- The CXX compiler identification is GNU 10.1.0
[INFO] [exec] -- The CUDA compiler identification is NVIDIA 11.5.119
[INFO] [exec] -- Detecting C compiler ABI info
[INFO] [exec] -- Detecting C compiler ABI info - done
[INFO] [exec] -- Check for working C compiler: /usr/bin/cc - skipped
[INFO] [exec] -- Detecting C compile features
[INFO] [exec] -- Detecting C compile features - done
[INFO] [exec] -- Detecting CXX compiler ABI info
[INFO] [exec] -- Detecting CXX compiler ABI info - done
[INFO] [exec] -- Check for working CXX compiler: /usr/bin/c++ - skipped
[INFO] [exec] -- Detecting CXX compile features
[INFO] [exec] -- Detecting CXX compile features - done
[INFO] [exec] -- Detecting CUDA compiler ABI info
[INFO] [exec] -- Detecting CUDA compiler ABI info - done
[INFO] [exec] -- Check for working CUDA compiler: /usr/local/cuda-11.5/bin/nvcc - skipped
[INFO] [exec] -- Detecting CUDA compile features
[INFO] [exec] -- Detecting CUDA compile features - done
[INFO] [exec] -- CUDA_VERSION_MAJOR: 11
[INFO] [exec] -- CUDA_VERSION_MINOR: 5
[INFO] [exec] -- CUDA_VERSION: 11.5
[INFO] [exec] -- Configuring incomplete, errors occurred!
[INFO] [exec] See also "/home/ssd3/target/gmy/2206/spark-rapids-examples/examples/UDF-Examples/RAPIDS-accelerated-UDFs/target/cpp-build/CMakeFiles/CMakeOutput.log".
[INFO] [exec] CMake Error at /home/ssd3/target/gmy/2206/spark-rapids-examples/examples/UDF-Examples/RAPIDS-accelerated-UDFs/target/cpp-build/_deps/rapids-cmake-src/rapids-cmake/cpm/find.cmake:152 (CPMFindPackage):
[INFO] [exec] Unknown CMake command "CPMFindPackage".
[INFO] [exec] Call Stack (most recent call first):
[INFO] [exec] CMakeLists.txt:87 (rapids_cpm_find)
[INFO] [exec]
[INFO] [exec]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 05:46 min
[INFO] Finished at: 2022-10-14T16:45:40+08:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:3.0.0:run (cmake) on project rapids-4-spark-udf-examples_2.12: An Ant BuildException has occured: exec returned: 1
[ERROR] around Ant part ...... @ 5:167 in /home/ssd3/target/gmy/2206/spark-rapids-examples/examples/UDF-Examples/RAPIDS-accelerated-UDFs/target/antrun/build-main.xml
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
@GMYL Can you try using the Docker build?
Just follow the instructions here.
I succeeded in building branch-22.06 with one change of the cmake version from 3.20.5 to 3.23.1 here.
@GMYL Can you try using the Dockerfile to build?
Just follow the instructions here.
If you build branch-22.06, you need to upgrade the cmake version to at least 3.23.1 in the Dockerfile.
@GaryShen2008
Can you take a look at your target directory structure after a successful build?
@GaryShen2008 Can you take a look at your target directory structure after a successful build?
root@babba52ba9fc:/spark-rapids-examples/examples/UDF-Examples/RAPIDS-accelerated-UDFs/target# ll
total 640
drwxr-xr-x 10 root root 4096 Oct 14 14:33 ./
drwxr-xr-x 4 root root 4096 Oct 14 12:54 ../
drwxr-xr-x 2 root root 4096 Oct 14 12:54 antrun/
drwxr-xr-x 4 root root 4096 Oct 14 14:33 classes/
drwxr-xr-x 11 root root 4096 Oct 14 14:28 cpp-build/
drwxr-xr-x 2 root root 4096 Oct 14 14:33 dependency/
drwxr-xr-x 3 root root 4096 Oct 14 14:33 generated-sources/
drwxr-xr-x 2 root root 4096 Oct 14 14:33 maven-archiver/
drwxr-xr-x 3 root root 4096 Oct 14 14:33 maven-status/
drwxr-xr-x 3 root root 4096 Oct 14 14:33 native-deps/
-rw-r--r-- 1 root root 613835 Oct 14 14:33 rapids-4-spark-udf-examples_2.12-22.06.0-SNAPSHOT.jar
the error "Unknown CMake command "CPMFindPackage" seems that the CMakeLists.txt cannot include the related rapids-makefiles such as:
include(rapids-cmake)
include(rapids-cpm)
include(rapids-cuda)
include(rapids-export)
include(rapids-find)
Since you changed https://raw.githubusercontent.com/rapidsai/rapids-cmake/branch-22.10/RAPIDS.cmake
to https://raw.githubusercontent.com/rapidsai/rapids-cmake/branch-22.06/RAPIDS.cmake in the CMakeLists.txt, which is out of date, could you try the latest branch-22.10/RAPIDS.cmake file?
the correct output log should be :
.........
[INFO] [exec] -- CUDA_VERSION_MINOR: 5
[INFO] [exec] -- CUDA_VERSION: 11.5
[INFO] [exec] -- CPM: adding package<EMAIL_ADDRESS>(branch-22.10)
[INFO] [exec] -- Found CUDAToolkit: /usr/local/cuda/include (found version "11.5.119")
[INFO] [exec] -- Looking for pthread.h
[INFO] [exec] -- Looking for pthread.h - found
[INFO] [exec] -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
[INFO] [exec] -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
[INFO] [exec] -- Check if compiler accepts -pthread
[INFO] [exec] -- Check if compiler accepts -pthread - yes
[INFO] [exec] -- Found Threads: TRUE
[INFO] [exec] -- Found ZLIB: /usr/lib/x86_64-linux-gnu/libz.so (found version "1.2.11")
[INFO] [exec] -- CPM: cudf: adding package<EMAIL_ADDRESS>(jitify2)
[INFO] [exec] -- CPM: cudf: using local package<EMAIL_ADDRESS>
[INFO] [exec] -- CPM: cudf: adding package<EMAIL_ADDRESS>(1.17.2)
[INFO] [exec] -- Found Thrust: /spark-rapids-examples/examples/UDF-Examples/RAPIDS-accelerated-UDFs/target/cpp-build/_deps/thrust-src/thrust/cmake/thrust-config.cmake (found version "<IP_ADDRESS>")
[INFO] [exec] -- Found CUB: /spark-rapids-examples/examples/UDF-Examples/RAPIDS-accelerated-UDFs/target/cpp-build/_deps/thrust-src/dependencies/cub/cub/cmake/cub-config.cmake (found suitable version "<IP_ADDRESS>", minimum required is "<IP_ADDRESS>")
[INFO] [exec] -- CPM: cudf: adding package<EMAIL_ADDRESS>(branch-22.10)
[INFO] [exec] -- RMM: RMM_LOGGING_LEVEL = 'INFO'
[INFO] [exec] -- CPM: cudf: rmm: adding package<EMAIL_ADDRESS>(v1.8.5)
[INFO] [exec] -- Build spdlog: 1.8.5
[INFO] [exec] -- Build type: Release
[INFO] [exec] -- Generating install
[INFO] [exec] -- CPM: cudf: adding package<EMAIL_ADDRESS>(apache-arrow-9.0.0)
@GaryShen2008 @nvliyuan
Thanks for your help. It now compiles successfully. The main problem was my server's network: some dependencies could not be downloaded normally during the compilation process.
But a new problem arises when running the jar:
Caused by: java.lang.RuntimeException: java.io.FileNotFoundException: Could not locate native dependency amd64/Linux/libudfexamplesjni.so
I use Spark SQL to test StringWordCount with these SQL statements:
CREATE TEMPORARY FUNCTION wordcount AS 'com.nvidia.spark.rapids.udf.hive.StringWordCount';
select wordcount(rowkey) from perceive.hdfs_bayonet_vehiclepass where prtday between 20210309 and 20210309 group by rowkey limit 10;
There is no problem with the driver execution. The log displays:
*Expression HiveSimpleUDF#com.nvidia.spark.rapids.udf.hive.StringWordCount(rowkey#0) AS wordcount(rowkey)#158 will run on GPU
An error occurred during execution on the executor.
Jar package structure:
The error log is as follows:
22/10/19 15:06:01 ERROR TaskSetManager: Task 0 in stage 2.0 failed 4 times; aborting job
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 4 times, most recent failure: Lost task 0.3 in stage 2.0 (TID 15) (xx.xx.xx.xx executor 0): org.apache.spark.SparkException: Failed to execute user defined function (StringWordCount: (string) => int)
at com.nvidia.spark.rapids.GpuUserDefinedFunction.$anonfun$columnarEval$4(GpuUserDefinedFunction.scala:69)
at com.nvidia.spark.rapids.Arm.withResource(Arm.scala:28)
at com.nvidia.spark.rapids.Arm.withResource$(Arm.scala:26)
at org.apache.spark.sql.hive.rapids.GpuHiveSimpleUDF.withResource(hiveUDFs.scala:44)
at com.nvidia.spark.rapids.GpuUserDefinedFunction.$anonfun$columnarEval$2(GpuUserDefinedFunction.scala:57)
at com.nvidia.spark.rapids.Arm.withResource(Arm.scala:46)
at com.nvidia.spark.rapids.Arm.withResource$(Arm.scala:44)
at org.apache.spark.sql.hive.rapids.GpuHiveSimpleUDF.withResource(hiveUDFs.scala:44)
at com.nvidia.spark.rapids.GpuUserDefinedFunction.columnarEval(GpuUserDefinedFunction.scala:55)
at com.nvidia.spark.rapids.GpuUserDefinedFunction.columnarEval$(GpuUserDefinedFunction.scala:53)
at org.apache.spark.sql.hive.rapids.GpuHiveSimpleUDF.columnarEval(hiveUDFs.scala:44)
at com.nvidia.spark.rapids.RapidsPluginImplicits$ReallyAGpuExpression.columnarEval(implicits.scala:34)
at com.nvidia.spark.rapids.GpuAlias.columnarEval(namedExpressions.scala:109)
at com.nvidia.spark.rapids.RapidsPluginImplicits$ReallyAGpuExpression.columnarEval(implicits.scala:34)
at com.nvidia.spark.rapids.GpuExpressionsUtils$.columnarEvalToColumn(GpuExpressions.scala:93)
at com.nvidia.spark.rapids.GpuHashAggregateIterator.$anonfun$finalProjectBatch$5(aggregate.scala:538)
at com.nvidia.spark.rapids.RapidsPluginImplicits$MapsSafely.$anonfun$safeMap$1(implicits.scala:216)
at com.nvidia.spark.rapids.RapidsPluginImplicits$MapsSafely.$anonfun$safeMap$1$adapted(implicits.scala:213)
at scala.collection.immutable.List.foreach(List.scala:431)
at com.nvidia.spark.rapids.RapidsPluginImplicits$MapsSafely.safeMap(implicits.scala:213)
at com.nvidia.spark.rapids.RapidsPluginImplicits$AutoCloseableProducingSeq.safeMap(implicits.scala:248)
at com.nvidia.spark.rapids.GpuHashAggregateIterator.$anonfun$finalProjectBatch$4(aggregate.scala:535)
at com.nvidia.spark.rapids.Arm.withResource(Arm.scala:28)
at com.nvidia.spark.rapids.Arm.withResource$(Arm.scala:26)
at com.nvidia.spark.rapids.GpuHashAggregateIterator.withResource(aggregate.scala:181)
at com.nvidia.spark.rapids.GpuHashAggregateIterator.$anonfun$finalProjectBatch$1(aggregate.scala:534)
at com.nvidia.spark.rapids.Arm.withResource(Arm.scala:28)
at com.nvidia.spark.rapids.Arm.withResource$(Arm.scala:26)
at com.nvidia.spark.rapids.GpuHashAggregateIterator.withResource(aggregate.scala:181)
at com.nvidia.spark.rapids.GpuHashAggregateIterator.finalProjectBatch(aggregate.scala:510)
at com.nvidia.spark.rapids.GpuHashAggregateIterator.next(aggregate.scala:262)
at com.nvidia.spark.rapids.GpuHashAggregateIterator.next(aggregate.scala:181)
at com.nvidia.spark.rapids.ColumnarToRowIterator.$anonfun$fetchNextBatch$2(GpuColumnarToRowExec.scala:241)
at com.nvidia.spark.rapids.Arm.withResource(Arm.scala:28)
at com.nvidia.spark.rapids.Arm.withResource$(Arm.scala:26)
at com.nvidia.spark.rapids.ColumnarToRowIterator.withResource(GpuColumnarToRowExec.scala:187)
at com.nvidia.spark.rapids.ColumnarToRowIterator.fetchNextBatch(GpuColumnarToRowExec.scala:238)
at com.nvidia.spark.rapids.ColumnarToRowIterator.loadNextBatch(GpuColumnarToRowExec.scala:215)
at com.nvidia.spark.rapids.ColumnarToRowIterator.hasNext(GpuColumnarToRowExec.scala:255)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:349)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: java.io.FileNotFoundException: Could not locate native dependency amd64/Linux/libudfexamplesjni.so
at com.nvidia.spark.rapids.udf.java.NativeUDFExamplesLoader.ensureLoaded(NativeUDFExamplesLoader.java:34)
at com.nvidia.spark.rapids.udf.hive.StringWordCount.evaluateColumnar(StringWordCount.java:77)
at com.nvidia.spark.rapids.GpuUserDefinedFunction.$anonfun$columnarEval$4(GpuUserDefinedFunction.scala:59)
... 53 more
Caused by: java.io.FileNotFoundException: Could not locate native dependency amd64/Linux/libudfexamplesjni.so
at ai.rapids.cudf.NativeDepsLoader.createFile(NativeDepsLoader.java:210)
at ai.rapids.cudf.NativeDepsLoader.loadDep(NativeDepsLoader.java:181)
at ai.rapids.cudf.NativeDepsLoader.loadNativeDeps(NativeDepsLoader.java:129)
at com.nvidia.spark.rapids.udf.java.NativeUDFExamplesLoader.ensureLoaded(NativeUDFExamplesLoader.java:31)
... 55 more
Solved
|
2025-04-01T04:10:43.076531
| 2024-10-22T16:24:36 |
2605925393
|
{
"authors": [
"YanxuanLiu",
"rishic3"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15267",
"repo": "NVIDIA/spark-rapids-examples",
"url": "https://github.com/NVIDIA/spark-rapids-examples/pull/451"
}
|
gharchive/pull-request
|
Add tags to skip Triton cells for CI/CD
Triton:
After discussing with @YanxuanLiu @eordentlich, we are skipping the Triton portions of the DL notebooks for now, allowing us to run the notebooks in Docker on the pipeline (in conjunction with this PR).
Added "TRITON" cell tags to all Triton-related cells to enable using Jupyter TagPreprocessor to detect and skip Triton cells without modifying any code on the user-end.
Triton cells can be re-enabled when we have either safe Docker-in-Docker execution or we refactor to use PyTriton client (in progress).
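For reference, a minimal sketch of this kind of tag-based skipping using nbconvert's TagRemovePreprocessor (an illustration only; the exact preprocessor, configuration, and notebook paths used by the CI pipeline may differ):
import nbformat
from traitlets.config import Config
from nbconvert.preprocessors import TagRemovePreprocessor
# Drop any cell tagged "TRITON" before the notebook is executed in CI.
config = Config()
config.TagRemovePreprocessor.remove_cell_tags = {"TRITON"}
config.TagRemovePreprocessor.enabled = True
notebook = nbformat.read("example_notebook.ipynb", as_version=4)  # hypothetical path
stripped, _ = TagRemovePreprocessor(config=config).preprocess(notebook, {})
nbformat.write(stripped, "example_notebook_no_triton.ipynb")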
Bugs:
Avoiding use of relative paths for datasets to address CI failures in text_classification_tf and feature_columns_tf.
Test passed in my latest test, thanks!
Hi @rishic3 @eordentlich. Will you merge these changes into the main branch? Our CI/CD pipelines will be based on the main branch.
|
2025-04-01T04:10:43.080327
| 2024-05-13T22:40:55 |
2293966044
|
{
"authors": [
"amahussein",
"nartal1"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15268",
"repo": "NVIDIA/spark-rapids-tools",
"url": "https://github.com/NVIDIA/spark-rapids-tools/issues/1011"
}
|
gharchive/issue
|
[FEA] Handle error correctly when Java is not installed while running user-tools
Is your feature request related to a problem? Please describe.
Java is a prerequisite for running user-tools, but sometimes users may run user-tools without Java installed in their environment, which causes the tool to fail without providing the context or steps to install Java.
It would be nice to handle this gracefully and point the user to the page for installing Java/JDK.
Currently, it fails with below error:
Processing...⣯2024-05-13 15:18:04,523 ERROR rapids.tools.profiling: Failed to download dependencies Error invoking CMD <java -XX:+UseG1GC -Xmx50g -cp /home/user1/spark-rapids-tools/user_tools/prof_20240513221801_607A727F/work_dir/rapids-4-spark-tools_2.12-24.02.4.jar:/home/user1/spark-rapids-tools/user_tools/prof_20240513221801_607A727F/work_dir/spark-3.5.0-bin-hadoop3/jars/* com.nvidia.spark.rapids.tool.profiling.ProfileMain --output-directory /home/user1/spark-rapidstools/user_tools/prof_20240513221801_607A727F --platform onprem --auto-tuner --num-threads 6 --csv /home/user1/eventlogs/>:
| /bin/bash: java: command not found
2024-05-13 15:18:04,524 ERROR root: Profiling. Raised an error in phase [Execution]
Preferred solution:
Catch the error up front, before running the java command, and print the error message below along with the URL for the user to install Java:
Cannot find Java/JDK - please install Eclipse Temurin JDK 8 https://adoptium.net/temurin/releases/?package=jdk&version=8
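A minimal sketch of what such an up-front check could look like (illustrative only; the function name and its exact placement in argprocessor.py are assumptions):
import shutil
def require_java() -> None:
    # Fail fast with installation guidance instead of surfacing a shell error later.
    if shutil.which('java') is None:
        raise RuntimeError(
            'Cannot find Java/JDK - please install Eclipse Temurin JDK 8 '
            'https://adoptium.net/temurin/releases/?package=jdk&version=8')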
Thanks @nartal1
Are you going to add that check as part of the argprocessor.py validations?
Are you going to add that check as part of the argprocessor.py validations?
Yes @amahussein, That's the plan.
|
2025-04-01T04:10:43.088553
| 2024-10-23T20:20:15 |
2609751762
|
{
"authors": [
"amahussein",
"nartal1"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15269",
"repo": "NVIDIA/spark-rapids-tools",
"url": "https://github.com/NVIDIA/spark-rapids-tools/pull/1391"
}
|
gharchive/pull-request
|
user-tools should add xms argument to java cmd
Signed-off-by: Ahmed Hussein <EMAIL_ADDRESS>
Fixes #1382
Upon investigation, it was revealed that the min heap size could impact the runtime significantly. (see the linked issue for details of the performance impact)
This code change aims at setting the xms java argument to 50% of the max heap size.
pass xms to the java cmd
update the runtime report to list jvm info along with jvm arguments related to heap:
runtime.jvm.*
runtime.jvm.arg*
sample runtime.properties file. The lines that are generated by the changes in this PR are marked with >
#RAPIDS Accelerator for Apache Spark's Build/Runtime Information
#Wed Oct 23 20:26:21 UTC 2024
build.scala.version=2.12.15
build.hadoop.version=3.3.6
build.spark.version=3.5.0
> runtime.os.version=6.8.0-39-generic
> runtime.jvm.version=1.8.0_422
build.version=24.08.3-SNAPSHOT
runtime.spark.version=3.4.2
> runtime.jvm.arg.heap.min=50g
> runtime.jvm.name=OpenJDK 64-Bit Server VM
> runtime.jvm.arg.gc.UseG1GC=
> runtime.os.name=Linux
> runtime.jvm.arg.heap.max=100g
build.java.version=1.8.0_422
Details
This pull request updates the handling of JVM heap arguments in the Spark RAPIDS tool.
The most important change is setting Xms for the jar CLI.
Introduces enhancements to the RuntimeUtil class by adding JVM and OS information to the runtime properties, extracting JVM heap arguments, and ensuring these arguments are correctly set in the user tools.
Enhancements to RuntimeUtil:
core/src/main/scala/org/apache/spark/sql/rapids/tool/util/RuntimeUtil.scala: Added imports for ManagementFactory and Scala collection conversions.
core/src/main/scala/org/apache/spark/sql/rapids/tool/util/RuntimeUtil.scala: Added methods to include JVM and OS information in the runtime properties and to extract JVM heap arguments. [1] [2]
Updates to JVM heap arguments handling:
user_tools/src/spark_rapids_pytools/rapids/rapids_tool.py: Updated the _re_evaluate_platform_args method to set both minimum and maximum heap size arguments for the JVM.
user_tools/src/spark_rapids_tools/utils/util.py: Added logic to calculate and include the minimum heap size in the tool resources.
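As a rough illustration of that calculation (a sketch only; the function and variable names are assumptions, not the actual util.py code):
def jvm_heap_args(max_heap_gb: int) -> list:
    # The minimum heap is set to 50% of the maximum heap, matching this change.
    min_heap_gb = max(1, max_heap_gb // 2)
    return [f'-Xms{min_heap_gb}g', f'-Xmx{max_heap_gb}g']
# e.g. jvm_heap_args(100) -> ['-Xms50g', '-Xmx100g'], consistent with the sample properties above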
Related and followups:
An internal ticket was filed to update the java CLI in the CI/CD job.
There is an internal ticket to update the user guide to emphasize the importance of setting the heap arguments.
Thanks @amahussein for investigating and putting up the fix for this. Just a nit.
Thanks @amahussein for investigating and putting up the fix for this. Just a nit.
Thanks @nartal1 !
Removed the debugging code.
|
2025-04-01T04:10:43.089732
| 2024-06-08T05:33:26 |
2341488797
|
{
"authors": [
"mythrocks",
"razajafri"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15270",
"repo": "NVIDIA/spark-rapids",
"url": "https://github.com/NVIDIA/spark-rapids/issues/11031"
}
|
gharchive/issue
|
Fix tests failures in multiple files
FAILED ../../../../integration_tests/src/main/python/hive_delimited_text_test.py::test_read_compressed_hive_text
FAILED ../../../../integration_tests/src/main/python/get_json_test.py::test_get_json_object_quoted_question
FAILED ../../../../integration_tests/src/main/python/time_window_test.py::test_grouped_sliding_window_array
FAILED ../../../../integration_tests/src/main/python/datasourcev2_read_test.py::test_read_all_types_count
FAILED ../../../../integration_tests/src/main/python/orc_write_test.py::test_orc_do_not_lowercase_columns
FAILED ../../../../integration_tests/src/main/python/expand_exec_test.py::test_expand_pre_project
FAILED ../../../../integration_tests/src/main/python/logic_test.py::test_logical_with_side_effect
FAILED ../../../../integration_tests/src/main/python/json_matrix_test.py::test_scan_json_strings
FAILED ../../../../integration_tests/src/main/python/repart_test.py::test_hash_repartition_exact
FAILED ../../../../integration_tests/src/main/python/misc_expr_test.py::test_raise_error
All these tests pass with ANSI mode disabled.
|
2025-04-01T04:10:43.092524
| 2021-04-15T18:33:56 |
859148377
|
{
"authors": [
"jlowe",
"revans2",
"sameerz"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15271",
"repo": "NVIDIA/spark-rapids",
"url": "https://github.com/NVIDIA/spark-rapids/issues/2146"
}
|
gharchive/issue
|
[FEA] Improve handling of shuffles during integration tests
Is your feature request related to a problem? Please describe.
While working on #2145 I noticed that the existing repartition test was not actually testing anything on the GPU. The test was supposed to be testing that a partition and shuffle matched the CPU, but the plan optimized out the GPU shuffle since the other parts of the plan were not columnar. The test should have failed since the plan was not columnar in any way, but it went ahead and compared CPU to CPU which isn't helpful.
Describe the solution you'd like
There should be a mechanism to avoid ignoring shuffles that are not on the GPU or force shuffles to be translated to the GPU during planning.
When doing the review of tests that have this problem, focus on tests that are shuffle related.
I was asked in standup how many tests are failing when we add back in the check for the exchange to be on the GPU for all of the tests.
It turns out that we have 8 scala tests that fail, and 209 python tests. The 209 number is a bit misleading because we have parametrized most of the tests there for types so it is actually a much smaller number of specific tests that are failing in this way.
I'll take a crack at fixing the tests, but I may have to take a closer look at the python tests because I would not expect them to have a shuffle on the CPU normally.
|
2025-04-01T04:10:43.097196
| 2023-11-16T20:45:11 |
1997747336
|
{
"authors": [
"andygrove",
"mattahrens",
"razajafri",
"tgravescs"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15272",
"repo": "NVIDIA/spark-rapids",
"url": "https://github.com/NVIDIA/spark-rapids/issues/9753"
}
|
gharchive/issue
|
[AUDIT] Investigate SPARK-45592 - AQE and InMemoryTableScanExec correctness bug
Describe the bug
SPARK-45592 - AQE and InMemoryTableScanExec correctness bug fixes a correctness bug by introducing a new CoalescedHashPartitioning. When I tried to reproduce this bug with the spark rapids plugin it didn't occur, which is good, but I'm not sure if that was by accident or if this is truly something we don't need to worry about.
We have our own GPUCustomShuffleReaderExec that has the output partitioning overridden, but it doesn't check nearly as much as what Spark does in AQeShuffleReader in newer versions.
I'm filing this for us to investigate more to make sure we aren't missing anything in our AQE handling.
I tested this against Spark 3.5.0 and was unable to repro the bug on the GPU. The bug is reproducible on the CPU when I did the :paste
@NVnavkumar also tested this but saw an interesting thing where the bug wasn't reproducing even on the CPU when he did a :load but was able to see the bug when using :paste
Based on @tgravescs 's suggestion we can retarget this for 24.04
I think that we do need to update our version of the shuffle reader to match Spark. I updated locally to match 3.5.0 (before the fix) to try and reproduce the issue on the GPU but found that we fall back to CPU with:
! <TableCacheQueryStageExec> cannot run on GPU because GPU does not currently support the operator class org.apache.spark.sql.execution.adaptive.TableCacheQueryStageExec
I will create a PR to update the shuffle reader to match Spark 3.5.1
FYI: https://github.com/NVIDIA/spark-rapids/pull/10518
|
2025-04-01T04:10:43.099787
| 2022-04-19T03:25:20 |
1207667950
|
{
"authors": [
"HaoYang670"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15273",
"repo": "NVIDIA/spark-rapids",
"url": "https://github.com/NVIDIA/spark-rapids/pull/5277"
}
|
gharchive/pull-request
|
Support multiple datatypes in TypeSig.withPsNote()
Signed-off-by: remzi <EMAIL_ADDRESS>
close #5203
Add a new API for withPSNote which can mark multiple datatypes with the same note.
build
build
build
build
build
|
2025-04-01T04:10:43.101777
| 2022-04-29T21:48:25 |
1221585180
|
{
"authors": [
"jlowe",
"viadea"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15274",
"repo": "NVIDIA/spark-rapids",
"url": "https://github.com/NVIDIA/spark-rapids/pull/5408"
}
|
gharchive/pull-request
|
[DOC] Add rebase mode notes for databricks doc [skip ci]
Add rebase mode notes for databricks doc.
By default, those 2 parameters are LEGACY in Databricks (at least 9.1 ML and 10.4 ML).
spark.sql.legacy.parquet.datetimeRebaseModeInWrite
spark.sql.legacy.parquet.int96RebaseModeInWrite
If we are writing a parquet file with date/timestamp/int96 values, the fallback below will happen:
!Output <InsertIntoHadoopFsRelationCommand> cannot run on GPU because LEGACY rebase mode for dates and timestamps is not supported; LEGACY rebase mode for int96 timestamps is not supported
Minimum repro is:
import scala.collection.Seq
import java.sql.Date
Seq(java.sql.Timestamp.valueOf("1500-01-01 00:00:00")).toDF("ts").write.format("parquet").mode("overwrite").save("/tmp/testparquet_legacy")
We need to manually set them back to "EXCEPTION", which is the default value in Apache Spark.
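For example, the defaults can be restored per session; this is a sketch in PySpark (the equivalent spark.conf.set calls work in Scala, and spark is assumed to be an active SparkSession):
# Restore the Apache Spark defaults so the GPU parquet write is not disabled.
spark.conf.set("spark.sql.legacy.parquet.datetimeRebaseModeInWrite", "EXCEPTION")
spark.conf.set("spark.sql.legacy.parquet.int96RebaseModeInWrite", "EXCEPTION")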
build
|
2025-04-01T04:10:43.103310
| 2020-10-06T06:28:16 |
715383034
|
{
"authors": [
"razajafri"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15275",
"repo": "NVIDIA/spark-rapids",
"url": "https://github.com/NVIDIA/spark-rapids/pull/910"
}
|
gharchive/pull-request
|
[WIP] Cache plug to write InternalRow to a CachedBatch
This PR creates a CachedBatch from InternalRow using ParquetFileWriter and necessary Spark code
I wanted to create a PR on top of the existing PR that I have open, but that is not allowed, so this PR also shows irrelevant changes from the other PR.
|
2025-04-01T04:10:43.120081
| 2024-01-15T21:01:29 |
2082674221
|
{
"authors": [
"NV-DM",
"gebdag"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15276",
"repo": "NVIDIAGameWorks/rtx-remix",
"url": "https://github.com/NVIDIAGameWorks/rtx-remix/issues/298"
}
|
gharchive/issue
|
Manhunt - Unstable light/shadow accumulation upon player/camera movement
Tested with newest builds (Build #44 [32fedd0] for Bridge and dvxk Build #385 [b37638b])
Tested on a 4090
546.33 drivers
Steam version of Manhunt
crosire's d3d8to9
In Manhunt the lighting doesn't seem to accumulate properly, leading to a shaky appearance of shadows in motion and in some cases even when things are still. It seems like the shadows are being completely redrawn upon motion so they are constantly fading in for a second. This is especially evident with modded in emissive lights but also happens with other light sources. I haven't seen this type of behavior in other games so I assume it is an issue with Manhunt specifically.
Attach files!
NvRemixBridge.log
d3d9.log
manhunt_d3d9.log
rtx.conf.txt
mod.zip (mod.usda)
To Reproduce
Steps to reproduce the behavior:
Load up any map in the game and move the camera/player. I have attached my mod.usda above which adds lights to the game, since the light translation doesn't work well out of the box. This should make the issue clear.
Expected behavior
Shadows keep fading in upon moving the camera/player instead of staying "solid".
Example video (watch in full res and full screen to get a good impression):
https://www.youtube.com/watch?v=P_63ZURij6I
REMIX-2632 for tracking.
|
2025-04-01T04:10:43.137611
| 2024-02-02T12:27:53 |
2114820936
|
{
"authors": [
"0ddly",
"CodaCM",
"Infinite-Mouse",
"KewkLW",
"MartinPihrtJR",
"NV-LL",
"mbgamer05",
"sambvfx"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15277",
"repo": "NVIDIAGameWorks/rtx-remix",
"url": "https://github.com/NVIDIAGameWorks/rtx-remix/issues/399"
}
|
gharchive/issue
|
[Toolkit bug]: RTX Remix toolkit crashes on startup
Contact Details (Optional)
No response
Describe the bug
While starting the app, the RTX Remix loading box pops up, then nothing happens next. (see video) Looking at the logs reveals that the app crashed.
https://github.com/NVIDIAGameWorks/rtx-remix/assets/64539948/fc5d9c6f-696f-4987-9916-34925cdf144a
Looking at the log myself, most of the errors seem to be caused by
OSError: [WinError 87] The parameter is incorrect: 'C:'
and later on by
FileNotFoundError: [WinError 206] The filename or extension is too long: 'C:\Program Files\Oculus\Support\oculus-runtime'
Log of crash:
kit_20240202_131950.log
How do you reproduce the bug?
Launch RTX Remix from NVIDIA Omniverse
RTX Remix crashes
What is the expected behavior?
The toolkit should start up.
Version
2024.1.1
Hey @MartinPihrtJR , thanks for reporting this. Something is definitely strange here. Are you still having this issue?
@sambvfx yes, the issue still persists. Tried launching it yesterday (after clean install), with no success. I also tried switching the GPU driver to studio, with no luck at all.
I believe I'm having the same issue. I'll start a new bug if needed but here's the rather huge log.
kit_20240213_154842.log
This is from a fresh install and trying to run.
Thanks for the logs, I've filed as REMIX-2793 internally for us to dive in and figure out what's going on.
Using the newest version of the toolkit, I'm getting the same issue. I don't know what's causing it, and it's just quitting itself without an error message of any kind.
UPDATE
After updating to version 2024.2.1 the issue persists. Application starts up to splashscreen, then crashes with the crashreporter running in the background (I don't even know if it sends the crashdump to nvidia servers).
Looking at the logs myself, the main issue is still the OSError: [WinError 87] The parameter is incorrect: 'C:' error, while trying to load a python module called OmniGeospatialSchema. The app then tries to load even more modules, which fail due to the error above.
Later on in the logs, the error Error: [WinError 206] The filename or extension is too long: 'C:\Program Files\Oculus\Support\oculus-runtime'. also still appears. I tried uninstalling the Oculus runtime and running Remix again (on the 2024.1.1 version, while debugging and trying anything to make it work previously), but this did not fix the issue; the code just referenced another folder (I think it was a media codec or something).
I will try to provide any logs that might help, and I am looking forward to resolving the issue.
kit_20240229_154242.log
kit_20240314_131713.log
I noticed in this that
"py-spy.exe' exited with code 1"
normally that means it is missing a parameter, which could be causing the crash, but I don't know why
Uninstall "OpenCL, OpenGL, and Vulkan Compatibility Pack" which comes preinstalled on Windows 11 (unsure about Windows 10.) Recently learnt of this from someone on Discord, now just trying to spread the word as it seems to work with Laptops (the main systems affected by this.)
@0ddly thanks for the tip. I've tried to find the compatibility pack on my system, but nothing has shown up. Seems like I do not have it installed.
Hello! Will you please test this on the latest release and let us know if the issue persists? Thank you!
Hello @NV-LL . I am unfortunately on a traineeship in another country, with only my laptop. May I ask you what is the latest version of RTX Remix right now? I've tested on version 2024.2.1 and 2024.1.1. If there is a newer version, I will test the issue as soon as possible (approx end of month).
@MartinPihrtJR the Remix app is now on version 2024.3.0, available for update on the Omniverse launcher.
@NV-LL thanks for the info! I will take a look as soon as I get home, and update you with results.
UPDATE
Updated result
Hi @NV-LL, I've tested the problem on the latest version of RTX Remix. The results are below...
I've updated my RTX Remix to the latest version being 2024.3.1. The app still does not work, with the same result as in the original issue. App launches a splashscreen, which disappears after a while and nothing more happens. I've included the crashlog from the kit itself.
My view on the errors
Looking at the log myself, the errors seem to start after a read fail on line 2342 (not sure if related), shortly followed by the OSError: [WinError 87] The parameter is incorrect: 'C:' error, leading to a cascade of more and more errors forming as a result of unsuccessful loading of modules. This is just an assumption; only the developers at NVIDIA know how this thing works under the hood... :)
Question
I want to completely uninstall RTX Remix & Omniverse to do a fully clean install (maybe some files from the previous versions were left somewhere causing the errors). Is there a guide for doing so?
Files
kit_20240530_211714.log
Hi @MartinPihrtJR! The Omniverse app has a Launcher Cleanup Tool that will remove all installed Omni apps. We recommend making a backup of all your data and projects first - the tool will allow you to keep your data, but backups help prevent a worst-case scenario. Then you can uninstall the launcher itself via Windows. Let us know if a clean reinstall helps!
I am having a similar issue where it crashes with the error remix::create d3d9" has failed. error code: 1.
I got an issue that looks like the situation in the video, but maybe for a different reason. While starting the app, there's actually a window hidden behind the RTX Remix loading box that says "Failed to create any gpu devices for compatibility mode." and I have no idea what to do with it. I am using a laptop with an Intel integrated GPU and an RTX A3000 Laptop GPU (not compatible with Remix), and I plugged in an RTX 3080 with 10GB VRAM externally. By the way, the sample runs normally on my RTX 3080. Waiting for a fix.
|
2025-04-01T04:10:43.155054
| 2023-10-06T14:40:28 |
1930320068
|
{
"authors": [
"floatplane",
"stevekuznetsov"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15278",
"repo": "NWACus/avy",
"url": "https://github.com/NWACus/avy/issues/426"
}
|
gharchive/issue
|
Add online/offline detection to ObservationUploader
Follow-on work from #424 - we should pause the upload queue when offline, and resume it when online.
@floatplane I think we had this at some point in the past already - see useAppState(). The other half would be a configuration option for tanstack-query.
I went with NetInfo since it seems to be blessed by Expo
|
2025-04-01T04:10:43.159992
| 2024-09-12T18:17:58 |
2523056205
|
{
"authors": [
"AmandaDoyle",
"alexrichey"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15279",
"repo": "NYCPlanning/data-engineering",
"url": "https://github.com/NYCPlanning/data-engineering/issues/1131"
}
|
gharchive/issue
|
Specify Shapefile Layer to Distribute to Socrata
Many products like DCM are packaged up into a shapefile with multiple layers (each layer being a dataset on its own).
We can either figure out how to specify the layer to upload in the call to socrata itself.
Or delete the other layers from the zip.
Treat the shapefile as an assembly similar to a zipfile (I mean, technically it is) and "unpackage" it into a single shapefile. This might push us toward defining things at the product level, since this is a shapefile of the entire product.
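A rough sketch of the "unpackage" option using Fiona/GeoPandas (illustrative only; the package path and layer names are made up, and this assumes the multi-layer product is in a format Fiona can list layers for, such as a geopackage or file geodatabase):
import fiona
import geopandas as gpd
src = "dcm_package.gpkg"  # hypothetical multi-layer package
for layer in fiona.listlayers(src):
    # Write each layer out as its own standalone shapefile for distribution.
    gpd.read_file(src, layer=layer).to_file(f"{layer}.shp")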
Not a useful enhancement with new Socrata data structure
|
2025-04-01T04:10:43.175966
| 2012-10-17T17:11:57 |
7662713
|
{
"authors": [
"kbaribeau",
"mrchimp"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15280",
"repo": "NYTimes/backbone.stickit",
"url": "https://github.com/NYTimes/backbone.stickit/issues/15"
}
|
gharchive/issue
|
Cursor jumps to end of input when using Stickit + Html5 email input type and webkit
Hey all,
I just started using stickit and I love it, BUT, I have this issue.
If my template is using the html5 email input type like so:
<input type="email" name="email" class="span12" required />
In my view I have these bindings:
bindings:
"input[name=email]":
modelAttr: "email"
Typing initially into the textbox is fine, but when I try to change it, my cursor gets moved to the end of the line.
Here's a short video: https://dl.dropbox.com/u/80354/stickit-thing.swf
I was able to reproduce this in Safari, Chrome, and the iOS Simulator, but not Firefox.
Thanks!
Just leaving this here in case it helps somebody... I was having the same issue. It turned out I was calling this.stickit() on my view every time the model was validated, which was on each keypress... If you call it that much, you're gonna have a bad time.
|
2025-04-01T04:10:43.232082
| 2023-09-16T00:15:55 |
1899223544
|
{
"authors": [
"Amnish04",
"Namatuzio"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15281",
"repo": "Namatuzio/tiller",
"url": "https://github.com/Namatuzio/tiller/issues/10"
}
|
gharchive/issue
|
Cannot process a text file
When trying to process a single file to generate html, the program throws an error suggesting it's not a text file.
python .\main.py .\examples\example2.txt
Suggestion:
The way you are checking for file extension could cause potential bugs
as there could be multiple dots in the file name.
Instead use
if(file.split(".")[-1] == "txt"):
Use -1 index instead of 1.
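An alternative sketch that avoids manual splitting (assuming file holds the filename; process_text_file is a hypothetical handler, not part of the project):
import os
# os.path.splitext returns ('example.v2', '.txt') even when the name has multiple dots.
if os.path.splitext(file)[1].lower() == ".txt":
    process_text_file(file)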
Ah strange, I could've sworn I had this in an earlier version, nonetheless, thanks for the catch. Fixed.
|
2025-04-01T04:10:43.253909
| 2020-08-25T15:40:30 |
685591341
|
{
"authors": [
"Nandaka",
"mugi-jiichan"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15282",
"repo": "Nandaka/PixivUtil2",
"url": "https://github.com/Nandaka/PixivUtil2/issues/774"
}
|
gharchive/issue
|
Pixiv Update - Basically nothing works anymore
Prerequisites
[ x ] Did you read FAQ section?
[ x ] Did you test with the latest releases or commit ? release
Description
Hi,
apparently pixiv has changed its API within the last 36 hours. I had to relog into Pixiv via browser and had to change my "insecure" password. My account is not blocked or anything. I can access and surf on the site, look at images, use search etc.
Steps to Reproduce
Start any option and proceed as intended with any option
Expected behavior: [What you expected to happen]
pixivUtil parses API/Website and starts to process/download.
Actual behavior: [What actually happened]
"1. Download by member_id" turns into an infinite loop:
Traceback (most recent call last):
File "PixivArtistHandler.pyc", line 77, in process_member
File "PixivBrowserFactory.pyc", line 723, in getMemberPage
File "PixivBrowserFactory.pyc", line 615, in getMemberInfoWhitecube
File "json_init_.pyc", line 357, in loads
File "json\decoder.pyc", line 337, in decode
File "json\decoder.pyc", line 355, in raw_decode
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Error at processing Artist Info: (<class 'json.decoder.JSONDecodeError'>, JSONDecodeError('Expecting value: line 1 column 1 (char 0)'), <traceback object at 0x00D96DC8>)
'2. Download by image_id' stops with an error, but returns to the menu:
Image ID (69900869): Expecting value: line 1 column 1 (char 0)
Stack Trace: (<class 'json.decoder.JSONDecodeError'>, JSONDecodeError('Expecting value: line 1 column 1 (char 0)'), <traceback object at 0x0B93B4A8>)
Traceback (most recent call last):
File "PixivImageHandler.pyc", line 71, in process_image
File "PixivBrowserFactory.pyc", line 586, in getImagePage
File "PixivBrowserFactory.pyc", line 615, in getMemberInfoWhitecube
File "json_init_.pyc", line 357, in loads
File "json\decoder.pyc", line 337, in decode
File "json\decoder.pyc", line 355, in raw_decode
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Both '5. Download from bookmarked artists (/bookmark.php?type=user)' and '8. Download new illust from bookmarked members (/bookmark_new_illust.php)' stop and exit the tool:
Image ID (69900869): Expecting value: line 1 column 1 (char 0)
Stack Trace: (<class 'json.decoder.JSONDecodeError'>, JSONDecodeError('Expecting value: line 1 column 1 (char 0)'), <traceback object at 0x0DF55428>)
Traceback (most recent call last):
File "PixivUtil2.py", line 662, in process_image
File "PixivBrowserFactory.pyc", line 471, in getImagePage
File "PixivBrowserFactory.pyc", line 500, in getMemberInfoWhitecube
File "json_init_.pyc", line 357, in loads
File "json\decoder.pyc", line 337, in decode
File "json\decoder.pyc", line 355, in raw_decode
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
I haven't tested '3. Download by tags' nor '7. Download from tags list', which were my other go-to options.
Versions
20200824
202000424
You can get this information from executing PixivUtil2.py --help.
Latest version available in https://github.com/Nandaka/PixivUtil2/releases
looks good from my side, try to update your cookies?
Hi. I did that because I had to relog with the browser. The config file has the current PHPSESSION id. But I'll try again and clear the pixiv cookie first.
Shoot... I copied a very old cookie from an old local Firefox profile using a 3rd party tool after I reset the password in the Portable Firefox from the flash drive ... typical PEBKAC error. >_________________<
Thank you for the quick response. Sorry for making a fuss.
|
2025-04-01T04:10:43.260354
| 2020-09-04T09:16:21 |
692947763
|
{
"authors": [
"Nandaka",
"ReisenII"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15283",
"repo": "Nandaka/PixivUtil2",
"url": "https://github.com/Nandaka/PixivUtil2/issues/784"
}
|
gharchive/issue
|
Unable to download from FANBOX
Prerequisites
[x] Did you read FAQ section?
[x] Did you test with the latest releases or commit ?
Description
Unable to download from supported creators on FANBOX (f1) but no problem with downloading from pixiv users.
Steps to Reproduce
Open latest version of the downloader
Login as usual through email and password set on the config.ini and cookie obtained from PHPSESSID
Select f1 with any number of max pages
Expected behavior: Log into FANBOX and download my supported creators
Actual behavior:
Input: f1
Max Page = 0
Not logged in to FANBOX, trying to update FANBOX cookie...
Could not update FANBOX cookie string.
Traceback (most recent call last):
File "PixivUtil2.py", line 1144, in main
File "PixivUtil2.py", line 886, in main_loop
File "PixivUtil2.py", line 642, in menu_fanbox_download_from_list
File "PixivBrowserFactory.pyc", line 834, in fanboxGetArtistList
File "PixivBrowserFactory.pyc", line 382, in fanbox_is_logged_in
Exception: Not logged in to FANBOX
press enter to exit.
Versions
v20200827-beta1
I guess you need to copy the cookies from fanbox.cc? Should be a similar method as for pixiv, it just uses FANBOXSESSID
Oh yeah it worked, I should've tried to look for the cookie before posting this, I'm dummy
Thanks a lot!
|
2025-04-01T04:10:43.279018
| 2024-05-15T20:33:59 |
2298792500
|
{
"authors": [
"hammy4815",
"stevengj"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15284",
"repo": "NanoComp/photonics-opt-testbed",
"url": "https://github.com/NanoComp/photonics-opt-testbed/pull/66"
}
|
gharchive/pull-request
|
New smoothing kernel updated Focusing2D Julia example
I added the changes made in PR #65 regarding the new smoothing kernel.
With these changes, the optimized device achieves an objective value of 175, compared to the value of 168.
cc @smartalecH
With the new kernel, does it do a good job if you start with something that has the correct topology and just do shape optimization (β=∞)?
I'm trying out with a few different smoothing radii, but so far it seems to look similar to before, getting stuck in the g~60s.
|
2025-04-01T04:10:43.293619
| 2016-05-16T15:37:39 |
155056120
|
{
"authors": [
"aitboudad",
"coveralls"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15285",
"repo": "Narzerus/angular-permission",
"url": "https://github.com/Narzerus/angular-permission/pull/248"
}
|
gharchive/pull-request
|
[ui-router] allow v0.3
https://github.com/angular-ui/ui-router/releases/tag/0.3.0
Coverage remained the same at 100.0% when pulling 95a7afc316e7ef44798c91e08443531e16a6ac33 on aitboudad:patch-1 into c542e971ee00550322ebe5804b45536b426f8407 on Narzerus:master.
Coverage remained the same at 100.0% when pulling 774171ddccd18b6a8f9768c4a08282698ab3e779 on aitboudad:patch-1 into c542e971ee00550322ebe5804b45536b426f8407 on Narzerus:master.
|
2025-04-01T04:10:43.312301
| 2022-12-28T20:51:10 |
1513157730
|
{
"authors": [
"GhidorahRex",
"lab313ru"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15286",
"repo": "NationalSecurityAgency/ghidra",
"url": "https://github.com/NationalSecurityAgency/ghidra/issues/4855"
}
|
gharchive/issue
|
[Sleigh] Operand manipulation results in "goto" low-level error
This is a valid operand description (jump destination; pointers are 24-bit, but 32-bit also doesn't work):
addr_pc: $imm16 is imm16 { export *:1 imm16; }
:JMP addr_pc is (op=0x4C); addr_pc { goto addr_pc; }
And this one is not (leads to low-level error):
addr_pc: $imm16 is imm16 { off16:3 = imm16; export *:1 off16; }
:JMP addr_pc is (op=0x4C); addr_pc { goto addr_pc; }
In the first case Sleigh generates the following p-code:
BRANCH *[ram]0x92cc:1
In the second case Sleigh generates the following p-code:
$U2380:3 = COPY 0x92cc:3
$U2400:1 = LOAD ram($U2380:3)
BRANCH $U2400:1
And the error is:
Low-level Error: Could not find op at target address: (unique,0x00002400)
I wasn't able to fix my operand definition to generate correct p-code. Even a simple modification of the goto target address triggers the same error.
There is no such error in 10.1.5 RELEASE.
Here is an example: https://github.com/achan1989/ghidra-65816/
You need brackets around a varnode in this case. So the correct syntax would be goto [addr_pc];. However, there's very little reason for you to export the immediate as a varnode. You're better off exporting a constant value (export *[const]:2 imm16) in this instance. See, for example, the answer to this question: https://github.com/NationalSecurityAgency/ghidra/discussions/4851
Thanks, it works, but the decompiler shows something like that and doesn't automatically disassembles the destination:
/* WARNING: Could not emulate address calculation at 0x008213 */
/* WARNING: Treating indirect jump as call */
(**(code **)((uint3)bVar3 << 0x10 | 0x92cc))();
Should it be processed using an analyzer?
Thanks, it works!
|
2025-04-01T04:10:43.314731
| 2019-04-28T18:37:03 |
438086471
|
{
"authors": [
"Kushagra3911",
"Ristovski",
"eLeCtrOssSnake",
"ghost",
"ryanmkurtz",
"saruman9"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15287",
"repo": "NationalSecurityAgency/ghidra",
"url": "https://github.com/NationalSecurityAgency/ghidra/issues/530"
}
|
gharchive/issue
|
How can I patch a .exe
I'm trying to patch a .exe. After I have hex edited and I press O, I'm not able to save it as an exe binary; if I select binary it is saved as test.exe.bin, and I cannot get it to work
test.exe.bin file is actually test.exe, you should rename it. See also #19.
Duplicate of #19. You have to import the file as a raw binary if you want the export to still run. We currently do not support going from a loaded memory image back into a runnable file.
@ryanmkurtz Does "currently" mean this will become a feature somewhere down the line?
Still a real bug.............
Select the PE option; PE stands for Portable Executable
|
2025-04-01T04:10:43.316148
| 2021-05-25T19:59:40 |
901298047
|
{
"authors": [
"rmmayo",
"sudo-may"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15288",
"repo": "NationalSecurityAgency/skills-service",
"url": "https://github.com/NationalSecurityAgency/skills-service/issues/627"
}
|
gharchive/issue
|
SkillsDisplay: Back button/history entries are lost on reload for Firefox and some version of chrome
see NationalSecurityAgency/skills-client#127
PR: #634
@sudo-may - I noticed this was still assigned to me, but I believe it's been merged and you have already tested this? Leaving it open for now just in case.
:+1:
|
2025-04-01T04:10:43.401212
| 2019-05-02T09:29:17 |
439499765
|
{
"authors": [
"patricklx",
"vtrifonov"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15289",
"repo": "NativeScript/android-runtime",
"url": "https://github.com/NativeScript/android-runtime/issues/1363"
}
|
gharchive/issue
|
memory leak java <-> javascript when java returns []array
Environment
Provide version numbers for the following components (information can be retrieved by running tns info in your project folder or by inspecting the package.json of the project):
CLI: 5.3.1
Cross-platform modules: 5.3.1
Android Runtime: 5.3.1
iOS Runtime (if applicable):
Plugin(s):
Describe the bug
We observe a memory leak when we have the following call thread:
this was to reproduce the ble pattern, but I tested again and I see the leak with only getValue. Looks like it only happens when byte[] is returned. Same issue when String[] is returned.
https://github.com/patricklx/ns-memory-leak/blob/master/app/App_Resources/Android/src/main/java/my/Test.java#L25
I did a few memory snapshots, and what is being left over is a lot of
system / FunctionTemplateInfo and system / ObjectTemplateInfo
NOTE:
we do not have a memory leak if the getValue does not return the value
markingMode: none, does not help either.
Expected behavior
GC should collect the java array
Sample project
https://github.com/patricklx/ns-memory-leak
this can also be observed with the ble plugin,
since its the same pattern:
https://github.com/EddyVerbruggen/nativescript-bluetooth/blob/master/src/android/TNS_BluetoothGattCallback.ts#L212
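For reference, a minimal repro sketch of the pattern described above, in NativeScript JavaScript. This is an assumption-laden illustration, not code taken from the linked project: it assumes my.Test (from the linked Test.java) has a no-argument constructor and a getValue() method returning byte[], and it simply forces repeated Java-to-JS marshalling of the returned array:
// Hypothetical sketch: my.Test and getValue() are assumed from the linked Test.java.
const test = new my.Test();
setInterval(() => {
    const data = test.getValue(); // byte[] marshalled into a JS wrapper on every call
    // the reference is dropped right away, so GC should be able to reclaim the wrapper
}, 100);
With this loop running, heap snapshots can be compared over time to confirm whether the array wrappers accumulate.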
@vtrifonov can you also have a look at this one? It's a huge issue for us. We have bluetooth connected to a few devices that send many events
@patricklx what tool did you use to take and compare the memory snapshots?
@vtrifonov I used the chrome devtools memory snapshots
We found a problem when creating Array Wrappers which seems to leak memory because of the ObjectTemplates being created every time. However arrays need more memory than a string as we attach some callbacks to them so that they can work in the JS.
|
2025-04-01T04:10:43.417146
| 2016-02-19T10:20:52 |
134827627
|
{
"authors": [
"KristinaKoeva",
"bolismauro"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15290",
"repo": "NativeScript/ios-runtime",
"url": "https://github.com/NativeScript/ios-runtime/issues/518"
}
|
gharchive/issue
|
Debugger without tns CLI
Hello Everyone,
I'm developing a library that uses the NativeScript ios-runtime.
I don't need the layer that has been built on top (e.g., views, layout ...) and therefore I won't use the tns CLI tool.
I've create a project that basically leverage the ios-runtime to run some JS scripts.
The way in which I've debugged these scripts is by using console.log
statements but it is becoming more and more unmanageable.
Since my knowledge about tns CLI is very limited I'd like to understand from you if there is a way to run the debugger (either in Safari or in the Native Script inspector) without using the tns CLI.
Thank you very much!
Mauro
Hey Mauro,
First of all if you are not going to use the command line tool I suspect that you are not going to use the default project template either. So in order to debug your application please make sure that you've included the TNSDebugging.h. This header file (among other things) sets up the communication channel required in order to attach the NativeScript AppInspector to your application. Its main entry point is the TNSEnableRemoteInspector method so call it before you call the [runtime executeModule:@"./"];. We've done the exact same thing in our template project so you could use it for guidance.
Then follow the steps:
Pass the --nativescript-debug-brk environment variable to instruct your application that it should wait for a debugger before executing anything.
Download the NativeScript AppInspector application, available as an npm package.
Deploy your application on an iOS Simulator/Device. In order to debug an application on an iOS Device you should forward the 18181 debugging port from your localhost to the device. Usbmux (available as an npm package) has a useful tool called iproxy that allows you to forward localhost ports to the device. Once installed you should just type the following command in your Terminal window
iproxy 18181 18181 .
To open the NativeScript AppInspector application execute the following command in your Terminal window open [path to installed inspector package]/NativeScript Inspector.app --args [path to installed inspector package]/WebInspectorUI/Main.html Title. On my machine this command looks like this
open node_modules/tns-ios-inspector/NativeScript\ Inspector.app --args node_modules/tns-ios-inspector/WebInspectorUI/Main.html Title
Hope this helps,
Kristina
Hi!
Thanks for your help, I managed to have a working debugger following your instructions.
Unfortunately I've encountered another problem..
My javascript file, let's call it app.js, is created starting from different javascript files.
The AppInspector obviously shows the source code as a single file and it is quite hard to debug and find functions.
I'm trying to leverage source maps to avoid this problem, but it seems that the AppInspector is not able to load them.
I've dug in the AppInspector source code and it seems that the SourceMapManager tries to load the source map (app.js.map) but the function NetworkAgent.loadResource(frameIdentifier, sourceMapURL, sourceMapLoaded.bind(this)); fails to do that.
The error the callback receives is
'Network' domain was not found
I honestly don't have any idea on how I can fix this or if source maps are supported at all in this context.
Thanks again!
Mauro
Hey,
The communication between the runtime and the inspector frontend is divided into a number of domains. We have implemented some of the domains, like the Page domain that is responsible for the content of your application files. So every Page.getResourceContent call goes directly to the runtime and returns the right content of the file.
The Network domain is not yet implemented in the runtime. We have decided that the implementation of the Network domain should be shared among the iOS and Android runtimes, so we are trying to move its functionality from the runtimes to the NativeScript modules. You could take a look at this branch for reference. The idea behind it is that you could register a domain dispatcher that would handle every call for that specific domain. So when you register a Network domain you would be the one responsible for handling the loadResource method. Since this is still work in progress and is going to change over time I suggest you wait for the official version and then we could figure out a way to support your scenario without the necessity of having Native Script modules.
There is also an option to modify the iOS runtime and register a native Network domain but I really don't recommend this approach.
Regards,
Kristina
Thank you for your answer!
Everything is crystal clear now. I have one more question for you: Is there any ETA for the implementation of this Network module?
Maybe starting from your implementation we can figure out a way to make it work also in our scenario
Thank you
We've just merged a runtime implementation for the source maps. Take a look at #525 .
Bear in mind that this is not tested yet and will be officially released with our 1.7 release expected later this month
Sorry for the very late answer but I've had the chance to test this only today.
So basically I've updated my tns-ios to version 1.7.0.
I've followed the steps and the debugger works as expected with the exception of the source maps.
My ObjC code is the following
extern char startOfMetadataSection __asm("section$start$__DATA$__TNSMetadata");
// You need to call this method only once in your app.
[TNSRuntime initializeMetadata:&startOfMetadataSection];
[TNSRuntimeInspector setLogsToSystemConsole:YES];
// Tell NativeScript where to look for the app folder. Its existence is optional, though.
runtime = [[TNSRuntime alloc] initWithApplicationPath:[[NSBundle mainBundle] bundlePath]];
// Schedule the runtime on the runloop of the thread you'd like promises and other microtasks to run on.
[runtime scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSRunLoopCommonModes];
TNSEnableRemoteInspector(argc, argv);
// Run main file
[runtime<EMAIL_ADDRESS>
While here you can find the bundle.js, with a test js code, and the source map
https://gist.github.com/bolismauro/9c5cda21e7630749016c
the sourcemap is located in the same folder of the javascript file.
I can debug the code but the inspector shows it without applying the source maps.
I also tried to embed source maps in the bundle.js directly but nothing has changed.
I don't know if this can be useful: In the resources tab of the inspector I see two folders: app, which is empty, and Extra Scripts, which contains bundle.js and nothing else.
Let me know how I can help you understand this behaviour!
Many Thanks
Hey,
The NativeScript Application source code resides in the app folder and the debugger enumerates all resources that are within that folder. Later when the debugger frontend asks for a particular resource - your source map for example - the backend implementation goes through all the resources that have already been enumerated and returns a resource content if it exists.
So I suggest you move your application code in the app folder.
|
2025-04-01T04:10:43.423157
| 2018-04-20T08:14:48 |
316169799
|
{
"authors": [
"NickIliev",
"dotnetdreamer",
"tdermendjiev"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15291",
"repo": "NativeScript/ios-runtime",
"url": "https://github.com/NativeScript/ios-runtime/issues/917"
}
|
gharchive/issue
|
Marshing Objective-C throwing error
From @dotnetdreamer on April 19, 2018 15:6
Please, provide the details below:
Did you verify this is a real problem by searching the [NativeScript Forum]
Yes
Tell us about the problem
Currently I have generated a typed definitions file for the AFNetworking library, which can be found at https://github.com/AFNetworking/AFNetworking. Now when I try to use one method like the following, the application crashes. I debugged the code and got the errors shown below:
Usage of the method:
let request: NSMutableURLRequest = AFHTTPRequestSerializer.serializer()
.multipartFormRequestWithMethodURLStringParametersConstructingBodyWithBlockError(
method, url, parameters, function (formData: AFMultipartFormData) {
const nsUrlPath = NSURL.alloc().initWithString(nsNetworkingFormDataObject.path);
formData.appendPartWithFileURLNameFileNameMimeTypeError(
nsUrlPath, nsNetworkingFormDataObject.name,
nsNetworkingFormDataObject.fileName, nsNetworkingFormDataObject.mimeType);
});
Now when i run the application it is throwing following error:
13:68: JS ERROR TypeError: formData.appendPartWithFileURLNameFileNameMimeTypeError is not a function.
(In 'formData.appendPartWithFileURLNameFileNameMimeTypeError(nsUrlPath, nsNetworkingFormDataObject.name, nsNetworkingFormDataObject.fileName, nsNetworkingFormDataObject.mimeType)', 'formData.appendPartWithFileURLNameFileNameMimeTypeError' is undefined)
file:///app/tns_modules/nativescript-networking/NSNetworking.js:13:68: JS ERROR TypeError: formData.appendPartWithFileURLNameFileNameMimeTypeError is not a function.
(In 'formData.appendPartWithFileURLNameFileNameMimeTypeError(nsUrlPath, nsNetworkingFormDataObject.name, nsNetworkingFormDataObject.fileName, nsNetworkingFormDataObject.mimeType)',
'formData.appendPartWithFileURLNameFileNameMimeTypeError' is undefined)
When i debugged the code i see this:
https://ibb.co/duAVNS
Which platform(s) does your issue occur on?
iOS
Please provide the following version numbers that your issue occurs with:
CLI: 3.4.0
Copied from original issue: NativeScript/NativeScript#5706
Any update? I need to make this work for the plugin. Any other workaround?
Hello @dotnetdreamer,
As you can see here, AFMultipartFormData is a protocol, not a class, and protocol methods cannot be called directly from the JavaScript instance (more on this here). That said, the method should be invoked this way:
AFMultipartFormData.prototype.appendPartWithFileURLNameFileNameMimeTypeError.call(formData, nsUrlPath...);
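Applied to the snippet from the question, the call would look roughly like this. It is only a sketch that reuses the variable names from the code above; the exact parameter list (in particular whether a trailing error argument is expected) is an assumption worth verifying against the generated typings:
// Sketch based on the snippet in the question; variable names come from that code.
AFMultipartFormData.prototype.appendPartWithFileURLNameFileNameMimeTypeError.call(
    formData,                              // the instance conforming to the protocol becomes `this`
    nsUrlPath,
    nsNetworkingFormDataObject.name,
    nsNetworkingFormDataObject.fileName,
    nsNetworkingFormDataObject.mimeType
);
The key point is that the method is looked up on the protocol's prototype and invoked with call(), passing the conforming instance (formData) as the receiver.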
|
2025-04-01T04:10:43.428211
| 2018-07-30T05:14:17 |
345613748
|
{
"authors": [
"svzi",
"tsonevn"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15292",
"repo": "NativeScript/nativescript-angular",
"url": "https://github.com/NativeScript/nativescript-angular/issues/1463"
}
|
gharchive/issue
|
Issue with Angular Tab Navigation Template on Android
From @svzi on July 27, 2018 16:34
Tell us about the problem
The tabs are placed above the action-bar as shown in this screenshot:
https://ibb.co/jSkXUT
Which platform(s) does your issue occur on?
Android
Please provide the following version numbers that your issue occurs with:
CLI: 4.1.2
Cross-platform modules: 4.1.2
Runtime(s): 4.1.2
Plugin(s):
"dependencies": {
"@angular/animations": "~6.0.6",
"@angular/common": "~6.0.6",
"@angular/compiler": "~6.0.6",
"@angular/core": "~6.0.6",
"@angular/forms": "~6.0.6",
"@angular/http": "~6.0.6",
"@angular/platform-browser": "~6.0.6",
"@angular/platform-browser-dynamic": "~6.0.6",
"@angular/router": "~6.0.6",
"nativescript-angular": "~6.0.6",
"nativescript-theme-core": "~1.0.4",
"reflect-metadata": "~0.1.10",
"rxjs": "~6.1.0",
"tns-core-modules": "4.1.1",
"zone.js": "^0.8.4"
},
"devDependencies": {
"@angular/compiler-cli": "~6.1.0-beta.3",
"@ngtools/webpack": "6.1.0-rc.0",
"babel-traverse": "6.26.0",
"babel-types": "6.26.0",
"babylon": "6.18.0",
"codelyzer": "~4.3.0",
"lazy": "1.0.11",
"nativescript-dev-sass": "~1.6.0",
"nativescript-dev-typescript": "~0.7.0",
"nativescript-dev-webpack": "~0.14.0",
"tslint": "~5.10.0",
"typescript": "~2.7.2"
}
Please tell us how to recreate the issue in as much detail as possible.
Just start a new NS project based on the tab view starter template.
Is there code involved? If so, please share the minimal amount of code needed to recreate the problem.
Just remove androidTabsPosition="bottom" from line 1 in file app.component.html.
Copied from original issue: NativeScript/NativeScript#6131
Thanks for moving it!
|
2025-04-01T04:10:43.451688
| 2017-02-23T14:41:29 |
209779489
|
{
"authors": [
"tsonevn"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15293",
"repo": "NativeScript/nativescript-imagepicker",
"url": "https://github.com/NativeScript/nativescript-imagepicker/issues/74"
}
|
gharchive/issue
|
Image Path Not Found. Native Exception Error Nativescript Android
From @maniacube on February 23, 2017 14:28
Tell us about the problem
When we are uploading images using "nativescript-imagepicker" & "nativescript-background-http", selecting images from the gallery, binding to the imageView control and then uploading works fine on iOS, but we are getting a native exception error on Android. We have done it with the following code.
selected.getImage().then(function(imagesource){
let folder = fsModule.knownFolders.documents();
let path = fsModule.path.join(folder.path, "Test"+new Date().getTime()+".png");
let saved = imagesource.saveToFile(path, "png");
if(saved){
console.log("New image path:"+path);
let newPath="file://"+path;
}
});
Please help us to resolve it. Thanks.
Platform(s)
Android
Copied from original issue: NativeScript/NativeScript#3699
Hi @maniacube,
I tested your case with the sample-ImageUpload sample app; however, I was unable to reproduce any issues while using the nativescript-imagepicker and nativescript-background-http plugins.
Regarding that, it would help if you could give us more info about your problem and the exception which you receive.
|
2025-04-01T04:10:43.475687
| 2021-06-29T12:34:45 |
932602695
|
{
"authors": [
"DIYgod",
"kallydev",
"nzhl"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15294",
"repo": "NaturalSelectionLabs/RSS3",
"url": "https://github.com/NaturalSelectionLabs/RSS3/issues/12"
}
|
gharchive/issue
|
Demo needed
Still quite confused about the final implementation of RSS3, a runnable demo should be helpful~
If this is not possible at the current stage, some user stories from real life can also help
This is a simple example for publishing content on an RSS3 endpoint network.
import hexbytes
import requests
from eth_keys import keys
if __name__ == "__main__":
endpoint = "https://rss3-hub-playground-6raed.ondigitalocean.app"
pk = "0x47e18d6c386898b424025cd9db446f779ef24ad33a26c499c87bb3d9372540ba"
pk = keys.PrivateKey(hexbytes.HexBytes(pk))
address = pk.public_key.to_checksum_address()
print(address)
r = requests.get(f"{endpoint}/{address}")
print(r.json())
This is a simple example of fetching content from the RSS3 endpoint network.
import hexbytes
import requests
from eth_keys import keys
if __name__ == "__main__":
endpoint = "https://rss3-hub-playground-6raed.ondigitalocean.app"
pk = "0x47e18d6c386898b424025cd9db446f779ef24ad33a26c499c87bb3d9372540ba"
pk = keys.PrivateKey(hexbytes.HexBytes(pk))
address = pk.public_key.to_checksum_address()
print(address)
r = requests.get(f"{endpoint}/{address}")
print(r.json())
Pretty cool, seems people use the public key as their id to publish content, and the private one to denote ownership? Any node API provided?
Yeah, you can try it through RSS3-Hub, RSS3-SDK-for-JavaScript, RSS3-Python-SDK and some upcoming libraries
|
2025-04-01T04:10:43.505754
| 2021-10-07T15:25:35 |
1020175345
|
{
"authors": [
"AndreyNag",
"CryptoPumpkin",
"ro-tex"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15295",
"repo": "NebulousLabs/docker-sia",
"url": "https://github.com/NebulousLabs/docker-sia/issues/63"
}
|
gharchive/issue
|
Server is unable to create the Sia node
Docker host running on an Unraid server. The host seems to mess up the contract manager file every once in a while. This is the third time it has happened in 1 year of running the host.
Stack Trace or error message
Sia Daemon v1.5.6
Git Revision 0b253ab18
Loading...
(1/9) Loading siad...
(2/9) Loading gateway...
(3/9) Loading consensus...
(4/9) Loading transaction pool...
(5/9) Loading wallet...
(6/9) Loading feemanager...
(7/9) Loading host...
ERROR: [server is unable to create the Sia node; unable to create host; [error during contract manager startup; error while loading contract manager atomic data; error loading the contract manager settings file; unable to read persisted json object from disk: unexpected end of JSON input]; [error while loading contract manager atomic data; error loading the contract manager settings file; unable to read persisted json object from disk: unexpected end of JSON input]]
[server is unable to create the Sia node; unable to create host; [error during contract manager startup; error while loading contract manager atomic data; error loading the contract manager settings file; unable to read persisted json object from disk: unexpected end of JSON input]; [error while loading contract manager atomic data; error loading the contract manager settings file; unable to read persisted json object from disk: unexpected end of JSON input]]
Expected Behavior
Load up and run as normal
Observed Behavior
Error during host loading.
How to reproduce it (as minimally and precisely as possible)
Random occurrence. No idea what is causing it. No idea what would be the proper fix without deleting the contract manager file.
Environment
Docker running on Unraid
Sia version:
v1.5.6
OS:
Unraid (Linux)
I have the same error:
(6/8) Loading host... (7/8) Loading renter... ERROR: [server is unable to create the Sia node; unable to create renter; unable to read persisted json object from disk: loading a file with a bad checksum] [server is unable to create the Sia node; unable to create renter; unable to read persisted json object from disk: loading a file with a bad checksum]
Hello @AndreyNag!
This seems to be an issue related to the Sia node itself. The best course of action would be to ask on the Sia Foundation's discord or open an issue at https://github.com/siafoundation/siad. As mentioned in the README, this image has been replaced by ghcr.io/siafoundation/siad and is no longer maintained.
|
2025-04-01T04:10:43.572040
| 2023-12-18T15:16:43 |
2046895240
|
{
"authors": [
"CKolkey",
"MoffunW",
"Yinameah",
"juliekoubova",
"saggitar",
"wrightjjw"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15302",
"repo": "NeogitOrg/neogit",
"url": "https://github.com/NeogitOrg/neogit/issues/1047"
}
|
gharchive/issue
|
Stage operation fails
Description
When trying to Stage some hunk, I get an error :
git apply --cached:
error: C:\Users\Cibrario\code\pygris\pygris/src/pygris/macros/rf_pll.py: does not exist in index
An error occured.
Given the mix of / and \, I can guess it's Windows-related?
Neovim version
NVIM v0.9.4
Build type: RelWithDebInfo
LuaJIT 2.1.1696883897
Operating system and version
Windows 11
Steps to reproduce
Open Neogit buffer, go over an unstaged hunk and type s.
Expected behavior
the hunk should be staged
Actual behavior
the error occurs
Minimal config
-- NOTE: See the end of this file if you are reporting an issue, etc. Ignore all the "scary" functions up top, those are
-- used for setup and other operations.
local M = {}
local base_root_path = vim.fn.fnamemodify(debug.getinfo(1, "S").source:sub(2), ":p:h") .. "/.min"
function M.root(path)
return base_root_path .. "/" .. (path or "")
end
function M.load_plugin(plugin_name, plugin_url)
local package_root = M.root("plugins/")
local install_destination = package_root .. plugin_name
vim.opt.runtimepath:append(install_destination)
if not vim.loop.fs_stat(package_root) then
vim.fn.mkdir(package_root, "p")
end
if not vim.loop.fs_stat(install_destination) then
print(string.format("> Downloading plugin '%s' to '%s'", plugin_name, install_destination))
vim.fn.system({
"git",
"clone",
"--depth=1",
plugin_url,
install_destination,
})
if vim.v.shell_error > 0 then
error(string.format("> Failed to clone plugin: '%s' in '%s'!", plugin_name, install_destination),
vim.log.levels.ERROR)
end
end
end
---@alias PluginName string The plugin name, will be used as part of the git clone destination
---@alias PluginUrl string The git url at which a plugin is located, can be a path. See https://git-scm.com/book/en/v2/Git-on-the-Server-The-Protocols for details
---@alias MinPlugins table<PluginName, PluginUrl>
---Do the initial setup. Downloads plugins, ensures the minimal init does not pollute the filesystem by keeping
---everything self contained to the CWD of the minimal init file. Run prior to running tests, reproducing issues, etc.
---@param plugins? table<PluginName, PluginUrl>
function M.setup(plugins)
vim.opt.packpath = {} -- Empty the package path so we use only the plugins specified
vim.opt.runtimepath:append(M.root(".min")) -- Ensure the runtime detects the root min dir
-- Install required plugins
if plugins ~= nil then
for plugin_name, plugin_url in pairs(plugins) do
M.load_plugin(plugin_name, plugin_url)
end
end
vim.env.XDG_CONFIG_HOME = M.root("xdg/config")
vim.env.XDG_DATA_HOME = M.root("xdg/data")
vim.env.XDG_STATE_HOME = M.root("xdg/state")
vim.env.XDG_CACHE_HOME = M.root("xdg/cache")
-- NOTE: Cleanup the xdg cache on exit so new runs of the minimal init doesn't share any previous state, e.g. shada
vim.api.nvim_create_autocmd("VimLeave", {
callback = function()
vim.fn.system({
"rm",
"-r",
"-f",
M.root("xdg")
})
end
})
end
-- NOTE: If you have additional plugins you need to install to reproduce your issue, include them in the plugins
-- table within the setup call below.
M.setup({
plenary = "https://github.com/nvim-lua/plenary.nvim.git",
telescope = "https://github.com/nvim-telescope/telescope.nvim",
diffview = "https://github.com/sindrets/diffview.nvim",
neogit = "https://github.com/NeogitOrg/neogit"
})
-- WARN: Do all plugin setup, test runs, reproductions, etc. AFTER calling setup with a list of plugins!
-- Basically, do all that stuff AFTER this line.
require("neogit").setup({}) -- For instance, setup Neogit
Yeah, looks windows related. Kinda odd, because I haven't messed with anything related to staging/paths in a bit. I'll take a peek and see if anything looks suspicious.
Sure. Let me know if you need me to test something.
(And thanks for the work, really cool plugin)
I can repro this as well. It seems like the path to the git repo uses Windows-style backslashes, but the in-repo part of the path uses UNIX forward slashes (c:\myrepo\foo/bar/baz). Maybe just normalizing the path to a single standard would fix it?
Yea, though it's not totally clear where the unix path separators are coming from. If anyone feels like spelunking:
The function that handles staging on the status buffer
git.index.* - used to stage hunks
git.status.stage() - used to stage files
Parsing git status - this is where the filepaths probably come from
Looks like it's using the name attribute for each item.
I can repro this as well. It seems like the path to the git repo uses Windows-style backslashes, but the in-repo part of the path uses UNIX forward slashes (c:\myrepo\foo/bar/baz). Maybe just normalizing the path to a single standard would fix it?
This usually isn't a problem - you can even try mixing dir separators in cmd and it will take. Possibly there could be an issue in how git interprets it, not super sure how that's translated under the hood.
I don't ever stage hunks, but I'd love to do a bit of spelunking. Unfortunately I'm away from a Windows machine until later next week.
So, apparently, it's not just about the Unix/windows path.
I quickly added absolute_path = vim.fs.normalize(absolute_path) after
https://github.com/NeogitOrg/neogit/blob/801143ee4db4121fc11a6468daae6680ba9fab51/lua/neogit/lib/git/status.lua#L13
The error look like :
git apply --cached:
error: C:/Users/Cibrario/code/pygris/pygris/src/pygris/devkit.py: does not exist in index
So apparently there is something else to it. Trying to dig a bit further, but I'm no lua expert ...
I'm curious if the command history is helpful here: try hitting $ after that fails, then find the command and expand it with Tab.
This is all I get. Not sure if there is a way to have more details.
But I noticed that if I try to stage an untracked file, it seems to work just fine. The issue only seems to appear with an already tracked file ... Could not find any other interesting information yet ...
Hey all.
I faced the same problem.
Downgrading the version I found out that the problem occurs after commit d3e3160.
And one commit before (159c545) works fine. I think the problem is here:
local path = Path:new(item.absolute_path):make_relative(git_root)
Can anyone fix this please?
I fixed it by converting the output of the command that gets the initial repo directory to a proper Windows style path (by default git returned a path with unix style path separators which confused plenary.path)
Seems to solve the issue for me.
https://github.com/NeogitOrg/neogit/pull/1164
|
2025-04-01T04:10:43.588341
| 2023-11-19T09:13:15 |
2000765767
|
{
"authors": [
"aszepieniec",
"dan-da"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15303",
"repo": "Neptune-Crypto/neptune-core",
"url": "https://github.com/Neptune-Crypto/neptune-core/issues/71"
}
|
gharchive/issue
|
Mutator Sets: Merkleize the entire SWBF
The current Mutator Set design conflates two things that are conceptually distinct. Both of them are referred to by the name "active window".
The sliding window, which is the interval in the infinite bit array in which a given addition record defines $k$ locations. The sliding window is different for each batch of $b$ addition records.
The plain, which is the tailing portion of the sliding window Bloom filter (SWBF) that is represented explicitly and not compressed with a Merkle mountain range.
The reason why the current sliding window was represented explicitly by the MutatorSetAccumulator is the implicit assumption that most transactions spend relatively new UTXOs. As a consequence, a lot of work related to the Merkle mountain range can be saved by not Merkleizing the tail where most of the activity is expected to happen. This is an assumption though; future data might or might not bear it out. And even if it does, it is by no means clear that the optimal portion of the SWBF to be represented explicitly matches exactly with the current sliding window.
This issue entails rewriting the Mutator Set such that:
[ ] Blocks only contain a fully-Merkleized SWBF
[ ] Users can optionally represent an arbitrarily large tailing part of the SWBF explicitly.
The realization that the tailing part of the SWBF might as well be Merkleized opens up intriguing lines of thought:
From the point of view of the users wishing fast transactions at low cost, there is now more free space in the block and thus less need to pay high fees for quick confirmation.
From the point of view of the passive verifier, it reduces the cost. This allows for a larger active window, which (is necessary to support a larger batch size, which) increases anonymity.
From the point of view of the user owning and receiving UTXOs, it increases the cost of maintaining synchronized UTXO membership proofs. The user must immediately (as opposed to after some delay) compute and maintain MMR membership proofs. Moreover synchronization is more costly. A node receiving a UTXO needs to know the chunks of the SWBF where the UTXO points, and this data is no longer present in the block that confirms it. Instead, the node must store this data locally (and keep it up-to-date), or assemble it from the previous WINDOW_SIZE/STEP_SIZE blocks, or query peers for it. Note: the memory and operational cost of keeping the active window in memory and up-to-date is identical to the current cost profile; this latter point concerns only synchronization.
From the point of view of the miner, it increases the cost as well. The miner now has to do more work related to MMRs. But we can require the miner to do more work; he is getting a reward for it.
From the point of view of the passive verifier, it reduces the cost. This allows for a larger active window, which (is necessary to support a larger batch size, which) increases anonymity.
Can you quantify how much it increases anonymity vs status quo?
Every removal record (transaction input) gives the outside observer a fuzzy timestamp on the matching addition record (transaction output). This fuzzy timestamp identifies a range of plausible addition records. One way to quantify the anonymity is to look at the information entropy of the uniform distribution over this range. By this metric, an anonymity of 4 bits corresponds to a ring size of 16.
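To make that concrete: for a uniform distribution over $n$ equally plausible addition records the entropy is $H = \log_2 n$, so $n = 16$ gives $H = 4$ bits and, likewise, $n = 32$ gives $H = 5$ bits.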
The current parameter set ($w = 2^{20}, s = 2^{12}, b = 2^3$, for window size, step size, and batch size, respectively) generates this graph, which captures the frequency with which a given entropy is observed. (Note that every removal record generates a probability distribution, and that two such probability distributions may have distinct entropies.)
It says that the minimum entropy is 3 (corresponding to batch size 8). However, on (geometric) average the entropy is slightly more than 5, corresponding to ring size 32.
If we multiply $w$, $s$, and $b$ by $2^t$, then the shape of the graph is identical and the only difference is that the horizontal axis has shifted to the left by $t$ steps.
If we multiply $w$ and $s$ by $2^t$ but not $b$ then the shape is similar but
the average entropy does increase (but less than when we multiply $b$ also),
the minimum entropy stays the same,
the number of indices per removal record shrinks.
I'm starting on this task momentarily. This post tracks todos and status.
[ ] separate plain from sliding window
[ ] accumulator: fully Merkleized SWBF
[ ] archive: fully Merkleized + explicit plain
[ ] adapt block structure
[ ] peer-to-peer query&response for ranges of the SWBF
I started implementing this on branch aszepieniec/71_merkleize_entire_swbf but am now reconsidering this refactor. The branch compiles, but the tests don't work. Unless there is a compelling argument to continue, I will be abandoning this branch.
The reason for second-guessing this refactor is that the interface of the Mutator Set needs to be amended. Previously, add took a mutator set accumulator and an addition record; now it takes a mutator set accumulator, a dictionary of chunks of the SWBF along with their membership proofs, and an addition record. But this dictionary is part of the information needed to apply the addition record, so at least conceptually it is part of the mutator set accumulator.
Phrased differently: we might as well replace the entire mutator set accumulator with its hash. In order to add or remove an item we would then need to supply its preimage, in addition to the addition record or removal record. Having to include this preimage defeats the purpose of reducing it to a single hash, because you always need it.
As for the original motivation that led to this effort --
The reason why the the current sliding window was represented explicitly by the MutatorSetAccumulator is the implicit assumption that most transactions spend relatively new UTXOs. As a consequence, a lot of work related to the Merkle mountain range can be saved by not Merkleizing the tail where most of the activity is expected to happen. This is an assumption though; future data might or might not bear it out. And even if it does, it is by no means clear that the optimal portion of the SWBF to be represented explicitly matches exactly with the current sliding window.
-- I have to disagree: the reason why the current sliding window is part of the MutatorSetAccumulator is because you need this data to produce membership proofs. Recall that a mutator set membership proof contains a membership proof for every index showing that it lives in SWBF. If the index lives in the active window and the active window is represented explicitly, then this SWBF-membership-proof is empty. If the index lives in the inactive part, then this SWBF-membership-proof is an MMR membership proof. But the point is that currently, whenever an addition record is applied, all indices are guaranteed to live in the active window so the production of SWBF-membership-proofs is guaranteed to succeed without supplying supplementary witnesses. In the proposed refactor, you need to supply supplementary witnesses that defeat the purpose of Merkleization.
On top of that, managing these supplementary witnesses in a blockchain client, when they are not part of the block, is quite a hassle.
|
2025-04-01T04:10:43.594823
| 2023-04-02T18:22:02 |
1651045762
|
{
"authors": [
"NerdyPuzzle",
"SanMiguelZ"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15304",
"repo": "NerdyPuzzle/MCreator-Nerdys_Geckolib_Plugin",
"url": "https://github.com/NerdyPuzzle/MCreator-Nerdys_Geckolib_Plugin/issues/15"
}
|
gharchive/issue
|
Crash: Game crashed whilst rendering overlay
I got an error which caused the game to crash every time I tried opening minecraft with my mod enabled.
MCreator Version: 2023.1
Generator Version: 1.19.2
Geckolib Plugin Version: 4.7.1
Crash Report: https://pastebin.com/PvehkB0w
Thanks!
something is wrong with your animation file or model file, not a plugin issue.
Okay, thank you.
Hi again, I deleted all of the animations and the error still hasn't been fixed. If you can, could you tell me what's wrong with it?
|
2025-04-01T04:10:43.630997
| 2023-10-19T10:07:25 |
1951792491
|
{
"authors": [
"anvpetrov",
"ffilippopoulos"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15305",
"repo": "NetApp/trident",
"url": "https://github.com/NetApp/trident/issues/859"
}
|
gharchive/issue
|
Allow users to specify the maximum number of volumes per node
Describe the solution you'd like
Allow users to specify the maximum number of volumes per node.
Describe alternatives you've considered
Spread constraints is the only other solution to manipulate scheduling of volumes, but works based on topology features, like zones, and cannot guarantee the maximum number of volumes per node.
Additional context
This will be in line with Kubernetes CSI docs: https://kubernetes-csi.github.io/docs/volume-limits.html and will be following best practices.
Kubernetes will respect this limit as long the CSI driver advertises it. To support volume limits in a CSI driver, the plugin must fill in max_volumes_per_node in NodeGetInfoResponse.
It is recommended that CSI drivers allow for customization of volume limits. That way cluster administrators can distribute the limits of the same storage backends (e.g. iSCSI) across different drivers, according to their individual needs.
Defaulting max_volumes_per_node to 0 should maintain the current controller behaviour and users could simply use a flag in cases where they need to change that value. It should really be down to the cluster administrators to configure the maximum volumes allowed per CSI driver on a node based on their workloads needs and their hardware.
We've raised a similar issue in the past: https://github.com/NetApp/trident/issues/710
Hello
We need this feature too.
|
2025-04-01T04:10:43.654040
| 2017-08-24T19:23:26 |
252704441
|
{
"authors": [
"jeyrschabu"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15306",
"repo": "Netflix/SimianArmy",
"url": "https://github.com/Netflix/SimianArmy/pull/304"
}
|
gharchive/pull-request
|
Adding support for a dry run functionality when in Leashed Mode
Added an interface to a DryRunnable Janitor
Allowing Janitor in Leashed Mode to mark resources
Marking a resource in Leashed mode doesn't generate an event
A dry run cleanup should not actually cleanup the resource
Added additional logging
Dry run allows exposing failures in leashed mode and addressing them before enabling janitor
|
2025-04-01T04:10:43.689823
| 2017-03-29T12:24:42 |
217858533
|
{
"authors": [
"amoolya19",
"jayantmishra",
"sameekb",
"v1r3n"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15307",
"repo": "Netflix/conductor",
"url": "https://github.com/Netflix/conductor/issues/141"
}
|
gharchive/issue
|
How to create a HTTPTask using conductor-client
Hi Team,
I would like to create a HTTPTask using conductor-client apis. But the Type enum within WorkflowTask contains all the types barring HTTP.
public static enum Type {
SIMPLE, DYNAMIC, FORK_JOIN, FORK_JOIN_DYNAMIC, DECISION, JOIN, SUB_WORKFLOW, EVENT, WAIT, USER_DEFINED;
private static Set<String> systemTasks = new HashSet<>();
static {
systemTasks.add(Type.SIMPLE.name());
systemTasks.add(Type.DYNAMIC.name());
systemTasks.add(Type.FORK_JOIN.name());
systemTasks.add(Type.FORK_JOIN_DYNAMIC.name());
systemTasks.add(Type.DECISION.name());
systemTasks.add(Type.JOIN.name());
systemTasks.add(Type.SUB_WORKFLOW.name());
systemTasks.add(Type.EVENT.name());
systemTasks.add(Type.WAIT.name());
//Do NOT add USER_DEFINED here...
}
public static boolean is(String name) {
return systemTasks.contains(name);
}
}
Pls help me in defining both TaskDef and WorkflowTask for HTTPTask. Also, do I need to provide a Worker implementation for this?
Regards,
Sameek
Hi Sameek,
The setter takes a string and you can pass HTTP as the task type (Case sensitive).
Hi Viren,
Thanks for the response.
But I am unable to set the input parameters of the HTTPTask.
I understand there has to be an input parameter named http_request. But what should be the mapping of the parameter? I've tried HttpTask.Input, which didn't work, with the message "Invalid input expression for http_request , paramPathComponents.size=2, expression=HttpTask.Input"
I feel we have to set the mapping as HttpTask.Input.uri or likewise. But HttpTask.Input contains multiple parameters like uri, method, headers, body, accept, vipAddress and contentType. How do I map all these to http_request?
Though the documentation speaks on HTTPTask sample payload but there's nothing on how to create the task using conductor-client apis. Javadoc would also have helped in this case.
A sample code snippet to create the HTTPTask using conductor-client will be helpful because I am afraid that there will be future showstoppers after I am able to define the mapping correctly.
Hi Sameek,
Take a look at the kitchensink workflow that has a task with http:
https://netflix.github.io/conductor/metadata/kitchensink/
Here is the snippet of an http task:
"name": "search_elasticsearch",
"taskReferenceName": "get_es_1",
"inputParameters": {
"http_request": {
"uri": "http://localhost:9200/conductor/_search?size=10",
"method": "GET"
}
},
"type": "HTTP"
}
Hi Viren,
I've understood this JSON to define a HTTPTask. But I am not creating this json manually. Instead I am using conductor-client and conductor-common apis to define tasks and a workflow by using classes like:
com.netflix.conductor.common.metadata.tasks.TaskDef
com.netflix.conductor.common.metadata.workflow.WorkflowTask
The WorkflowTask class accepts a Map<String, Object> as inputParameters. The comment just above the declaration says:
//Key: Name of the input parameter. MUST be one of the keys defined in TaskDef (e.g. fileName)
//Value: mapping of the parameter from another task (e.g. task1.someOutputParameterAsFileName)
I understand key should be http_request in this case. But what should be the value in this case, so that the HTTPTask gets created properly? Passing HttpTask.Input as value didn't work.
Also, TaskDef accepts List<String> as inputKeys. I've put http_request in this list. Is that correct? Or anything else needs to be done?
I am looking forward to help/guidance in using conductor-client/common apis to define a HTTPTask as I am not creating the task definition json manually.
@sameekb, have you tried the output params for the http task? In the documentation there are 3 parameters categorised as output, but I'm not sure how to use them. Help on this if you have any idea.
Hi @amoolya19 ,
Yes, was able to process the output of the HTTPTask. While defining the task where you want to process the HTTPTask output, define the parameter value in the json as ${name_of_httpTask.output} .
If the HTTP response is a JSON, remember that Conductor unmarshals the json and passes the response as nested Map objects. Before processing the response, just do a sysout on the response to understand the response that conductor passes to the next task.
However, if the response cannot be parsed as JSON or Text, a string representation is stored as a text value.
Hi @sameekb ,
Thank you. i was able to get the output. Also have you tried HTTP Task Output with response parameter? i could not find any examples on this. let me know if you have succeeded in this.
Hi @amoolya19,
Do you mean request parameter?
@sameekb,
No. response.
@amoolya19
response is the output of the HTTPTask which includes header, body and status code. If the response of the called service is a JSON, conductor unmarshals it and passes it as nested Map objects to the task processing the response. You've to programatically traverse through the Map objects to reach to the data you are interested in.
Before writing the traversal logic, just print the response in console/log. Once printed you'll get the JSON equivalent of the nested Map objects. Once the structure is known to you, coding the traversal logic will be easy for you.
If the response is not JSON (say a XML), a string representation of the response is stored as a text value
Hope it helps!!
@sameekb
once i executed my workflow, i took the json from UI and i was able to see whatever my microservices were retruning in body. but dint get to know how to use response tag to capture the response from http task.
is my understanding correct?
what i am asking is how to use reponse tag in json. for ex
"http_request": {
"uri": "http://localhost:8081/",
"method": "GET"
"response": ? ${name_of_httpTask.output}?
}
@amoolya19
No need to define response within http_request as it's the output of the HTTPTask. Instead define a simple task which will process the response of the HTTPTask. While defining the above simple task define an inputParameter with the value ${name_of_httpTask.output}. In the corresponding Worker implementation you need to execute the following line
task.getInputData().get("inputParameterName")
to get the response object in terms of nested maps if the HTTPResponse is a JSON
@sameekb,
Thank you. yes i have done the way you have explained. but just was wondering if i could use response tag.
Also when i tried to input, like below
[{ "name": "test_workflow", "description": "test http workflow", "version": 1, "tasks": [{ "name": "test_task1", "taskReferenceName": "tt0", "inputParameters": { "http_request": { "uri": "http://localhost:8081/2", "method": "GET" } }, "type": "HTTP" }, { "name": "test_task1", "taskReferenceName": "tt1", "inputParameters": { "http_request": { "uri": "http://localhost:8081/tt1", "method": "GET", "body" : "${tt0.output.number}" } }, "type": "HTTP" } ] "schemaVersion": 2 }]
output of tt0 is
{
"number": 2
}
now i try to put this in body for tt1 but it fails here.
"taskType": "HTTP",
"status": "FAILED",
"inputData": {
"http_request": {
"uri": "http://localhost:8081/tt1",
"method": "GET",
"body": null
}
},
"referenceTaskName": "tt1",
body is null.
@amoolya19
Both the task names are test_task1. Change the name of the second one to test_task2.
Second task method is GET, but you are setting the parameter in body. Parameters in body should be used while using POST requests.
If localhost:8081 is a GET service, the URL in the second task should be http://localhost:8081/${test_task1.output.number}. Remove the body in the second task.
If the service is a POST service, in body you've to mention request_param_name : request_param_value . Request param name should be the name of the parameter that the service accepts. Request param value should be in this case ${test_task1.output.number}
@sameekb ,
i have actually tried with post as well but no luck.
i changed only request_param_name : request_param_value as you suggested. but still the value is getting passed. i dont want to pass the parameter through url to the second task. i want it to be in body.
[Screenshot: Task1]
[Screenshot: Task2]
As you can see in the screenshots, task 1 gives output {"number": 2}, but in task 2 the input is
"body": {
"number": null
}
i have created 2 seperate tasks.
my workflow json looks like
[{ "name": "test_workflow", "description": "test http workflow", "version": 1, "tasks": [{ "name": "test_task1", "taskReferenceName": "tt1", "inputParameters": { "http_request": { "uri": "http://localhost:8081/2", "method": "GET" } }, "type": "HTTP" }, { "name": "test_task2", "taskReferenceName": "tt2", "inputParameters": { "http_request": { "uri": "http://localhost:8081/tt1", "method": "POST", "body": { "number": "${tt0.output.number}" } } }, "type": "HTTP" }], "schemaVersion": 2 }]
@amoolya19
Is localhost:8081 a GET service or Post service? In test_task1, the URI is getting invoked as GET whereas in test_task2 it is getting invoked as POST.
Since, it gives out {"number": 2} in test_task1, I assume its GET service.
If localhost:8081 designed to handle both GET and POST, then only test_task2 will work which sends out a POST request. The uri in test_task2 should be http://localhost:8081/ if you want to send a POST request.
In the body of test_task2 you've mentioned '${tt0.output.number}'. Now I don't see any task with tt0 as name or taskReferenceName. Whenever you use placeholders you've to use name and not taskReferenceName.
So, if you want pass the output of test_task1, your body should look like :
"body": { "number": "${test_task1.output.number}" }
If ${test_task1.output.number} doesn't work, then you've to introduce a simple task with input parameter having value ${test_task1.output} between test_task1 and test_task2. So, the simple task will have the entire response in terms of nested maps. Then you've to implement a corresponding worker which will traverse through the response, extract the of value of number and populate the value in another output variable. This output variable of the simple task will be an input to the test_task2. The input variable in test_task2 should have the value "${name_of_simple_task.output.name_of_output_parameter}".
@sameekb
sorry to confuse you.
my WS has 2 endpoints. one is get and other is post. task1 calls
@RequestMapping(value ="/{id}", method=RequestMethod.GET , produces={"application/json"})
public ResponseBody home(@PathVariable("id") int number) { //something }
task2 calls
@RequestMapping(value ="/tt1", method=RequestMethod.POST , produces={"application/json"})
public ResponseBody tt1(@RequestBody ResponseBody response) { //something}
i even added below but even count is null.
"outputParameters": {
"count": "${test_task1.output.number}"
},
could you please elaborate on "implement a corresponding worker".
@amoolya19
Pls visit the below URL for Worker implementation
[Conductor Worker](https://netflix.github.io/conductor/worker/)
@sameekb ,
will try that. btw i was using taskreference name looking at kitchen sink example.
{
"name": "search_elasticsearch",
"taskReferenceName": "get_es_1",
"inputParameters": {
"http_request": {
"uri": "http://localhost:9200/conductor/_search?size=10",
"method": "GET"
}
},
"type": "HTTP"
},
{
"name": "task_30",
"taskReferenceName": "task_30",
"inputParameters": {
"statuses": "${get_es_1.output..status}",
"workflowIds": "${get_es_1.output..workflowId}"
},
"type": "SIMPLE"
}
Will try the simple task.
Yeah. It should be taskReferenceName
Since I used the same name and taskReferenceName in my POC, I didn't face a problem with this.
Relevant to this thread and might help:
https://github.com/Netflix/conductor/issues/149
@sameekb ,
Able to solve this. Thank you so much for your patience and time.
#149
Welcome @amoolya19 .
Thanks @v1r3n, your pointer to thread #149 really helped a lot. I could achieve the same result, but this time without any simple task.
@smkb80, did you get a solution to the problem you mentioned, of defining an HTTPTask only via the client APIs (specifically registering http_request and its other attributes such as uri, method, etc.)? If yes, please let me know the workaround.
Thanks
|
2025-04-01T04:10:43.698009
| 2021-06-13T05:00:28 |
919716781
|
{
"authors": [
"berngp",
"hantsy",
"paulbakker"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15308",
"repo": "Netflix/dgs-framework",
"url": "https://github.com/Netflix/dgs-framework/issues/400"
}
|
gharchive/issue
|
bug: SnakeYaml android classified jar is not found
With the latest DGS platform, when using Gradle as the build tool, an error occurs when building the application.
Could not find snakeyaml-1.28-android.jar (org.yaml:snakeyaml:1.28).
I have to exclude snakeyaml to overcome this issue.
https://github.com/hantsy/spring-graphql-sample/blob/master/dgs-codegen/build.gradle
But the maven project does not report this issue.
https://github.com/hantsy/spring-graphql-sample/blob/master/dgs
Expected behavior
implementation "com.netflix.graphql.dgs:graphql-dgs-spring-boot-starter"
Actual behavior
implementation "com.netflix.graphql.dgs:graphql-dgs-spring-boot-starter", {
exclude group: 'org.yaml', module: 'snakeyaml'
}
We have both Java and Kotlin examples that run on every single PR as part of our build pipeline. I also see that the dependency you mentioned is available. This was probably a transient issue resolving the artifact.
https://github.com/Netflix/dgs-examples-java
https://github.com/Netflix/dgs-examples-kotlin
When building the project, the error only occurred in my Gradle project; it did not happen in the Maven project.
I am not sure what happened during the Gradle dependency resolution.
Closing because haven't seen this issue and can't reproduce it.
@paulbakker Check my gradlew build --scan result of the dgs-framework on Windows here: https://scans.gradle.com/s/vb3kn46n5whgk
And this error only occurred under Windows/Gradle.
I also tried to create some Maven projects; they did not report this error.
@paulbakker I have to add this change to the Gradle build script of the dgs-mocking module to make sure it builds successfully under Windows:
implementation("com.github.javafaker:javafaker:1.+") {
exclude("org.yaml", "snakeyaml")
}
I'm not sure how Gradle resolved the duplicated deps. com.github.javafaker:javafaker depends on a lower version of snakeyaml, but Gradle aligned it with the higher version of snakeyaml declared in spring-boot-dependencies. I am not sure why it used the android variant, though.
In Maven, when resolving a project's dependencies, the lower version in the dependency tree is ignored and the higher version is chosen directly.
|
2025-04-01T04:10:43.699855
| 2022-04-18T20:41:45 |
1207433182
|
{
"authors": [
"berngp",
"kilink"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15309",
"repo": "Netflix/dgs-framework",
"url": "https://github.com/Netflix/dgs-framework/pull/992"
}
|
gharchive/pull-request
|
Avoid passing in empty ChainedInstrumentation
When no Instrumentation beans are supplied, avoid passing in a ChainedInstrumentation
instance containing an empty list of Instrumentations. Doing so would result in some
unnecessary overhead, as the builder's internal checkInstrumentationDefaultState method
will attempt to add its own default instrumentation.
Note that I did actually see this show up in flame graphs: in a case where I had no instrumentation registered, time was being spent in ChainedInstrumentation's internals, looking up state in a HashMap, etc. This is because by passing in an empty ChainedInstrumentation, we end up with the default DataLoaderDispatcherInstrumentation instance wrapped unnecessarily in a ChainedInstrumentation.
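For readers unfamiliar with the graphql-java builder, here is a minimal sketch of the kind of guard this PR describes. The factory class and method names are assumptions for illustration, not the actual DGS source; ChainedInstrumentation and GraphQL.Builder.instrumentation() are the only graphql-java APIs assumed here.

import graphql.GraphQL;
import graphql.execution.instrumentation.ChainedInstrumentation;
import graphql.execution.instrumentation.Instrumentation;
import graphql.schema.GraphQLSchema;
import java.util.List;

// Hypothetical factory: only wrap in ChainedInstrumentation when there is
// actually more than one Instrumentation to chain.
public final class GraphQLFactory {

    public static GraphQL create(GraphQLSchema schema, List<Instrumentation> instrumentations) {
        GraphQL.Builder builder = GraphQL.newGraphQL(schema);
        if (instrumentations.size() == 1) {
            builder.instrumentation(instrumentations.get(0));
        } else if (!instrumentations.isEmpty()) {
            builder.instrumentation(new ChainedInstrumentation(instrumentations));
        }
        // When the list is empty, no instrumentation is set, so the builder's
        // default handling applies without an extra ChainedInstrumentation wrapper.
        return builder.build();
    }
}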
Thanks @kilink
|
2025-04-01T04:10:43.700879
| 2014-04-24T14:28:16 |
32153152
|
{
"authors": [
"bpollack",
"tbak"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15310",
"repo": "Netflix/eureka",
"url": "https://github.com/Netflix/eureka/issues/109"
}
|
gharchive/issue
|
Do we want to upgrade to Java 7?
I'm going through the code to crush as many generic Checkstyle issues as I can, and noticed that a lot of lines longer than 120 characters would be fine if we went to Java 7 and used the diamond operator. How do you feel about requiring Java 7?
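For example (illustrative only, not actual Eureka code), the diamond operator removes the repeated type arguments that push such lines past 120 characters:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: how the diamond operator shortens long generic declarations.
public class DiamondExample {
    // Java 6: type arguments repeated on both sides of the assignment.
    Map<String, List<String>> instancesJava6 = new HashMap<String, List<String>>();
    // Java 7: the diamond operator infers them, trimming the line length.
    Map<String, List<String>> instancesJava7 = new HashMap<>();
}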
We completed migration to Java 7.
|
2025-04-01T04:10:43.706081
| 2016-10-14T14:18:12 |
183061947
|
{
"authors": [
"neilschelly"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15311",
"repo": "Netflix/lemur",
"url": "https://github.com/Netflix/lemur/issues/450"
}
|
gharchive/issue
|
Cannot install application
Updating gulp-protractor dependency
Following the quickstart instructions at http://lemur.readthedocs.io/en/latest/quickstart/index.html, I ran into issues at make develop because several dependencies couldn't be resolved. Several errors like this appeared in the output:
npm ERR! TypeError: Cannot read property 'latest' of undefined
npm ERR! at next (/usr/share/npm/lib/cache.js:687:35)
npm ERR! at /usr/share/npm/lib/cache.js:675:5
npm ERR! at saved (/usr/share/npm/node_modules/npm-registry-client/lib/get.js:142:7)
npm ERR! at /usr/lib/nodejs/graceful-fs/polyfills.js:133:7
npm ERR! at Object.oncomplete (fs.js:107:15)
npm ERR! If you need help, you may report this log at:
npm ERR! <http://github.com/isaacs/npm/issues>
npm ERR! or email it to:
npm ERR<EMAIL_ADDRESS>
npm ERR! System Linux 3.13.0-92-generic
npm ERR! command "/usr/bin/nodejs" "/usr/bin/npm" "install"
npm ERR! cwd /home/lemur/lemur
npm ERR! node -v v0.10.25
npm ERR! npm -v 1.3.10
npm ERR! type non_object_property_load
npm list yielded this output at the bottom:
npm ERR! missing<EMAIL_ADDRESS>required by<EMAIL_ADDRESS>npm ERR! missing<EMAIL_ADDRESS>required by<EMAIL_ADDRESS>npm ERR! missing<EMAIL_ADDRESS>required by<EMAIL_ADDRESS>
npm ERR! missing<EMAIL_ADDRESS>required by<EMAIL_ADDRESS>npm ERR! missing<EMAIL_ADDRESS>required by<EMAIL_ADDRESS>npm ERR! missing<EMAIL_ADDRESS>required by<EMAIL_ADDRESS>npm ERR! missing<EMAIL_ADDRESS>required by<EMAIL_ADDRESS>npm ERR! missing<EMAIL_ADDRESS>required by<EMAIL_ADDRESS>npm ERR! missing<EMAIL_ADDRESS>required by<EMAIL_ADDRESS>npm ERR! missing<EMAIL_ADDRESS>required by<EMAIL_ADDRESS>npm ERR! missing<EMAIL_ADDRESS>required by<EMAIL_ADDRESS>npm ERR! missing<EMAIL_ADDRESS>required by<EMAIL_ADDRESS>npm ERR! missing<EMAIL_ADDRESS>required by<EMAIL_ADDRESS>npm ERR! missing<EMAIL_ADDRESS>required by<EMAIL_ADDRESS>npm ERR! missing<EMAIL_ADDRESS>required by<EMAIL_ADDRESS>npm ERR! not ok code 0
This was on a brand new, bare installation of Ubuntu 14.04 on a VM running in VirtualBox/Vagrant. I traced it to a dependency problem that you'd only run into if you started with no eligible versions of protractor in your npm package cache.
lemur depends on gulp-protractor 0.0.11 explicitly
gulp-protractor 0.0.11 depends on protractor at any version (*)
The latest versions of protractor (@4) require much newer versions of nodejs (>4) according to the Compatibility section of https://www.npmjs.com/package/protractor.
gulp-protractor 0.0.12 constrains some of these dependencies better, adds a debug option, and fixes a few typos in comments and metadata: https://github.com/mllrsohn/gulp-protractor/compare/0.0.11...0.0.12
I'm unsure what testing or work is required for this change to be considered, but I can issue a pull request from my own master that resolves this: https://github.com/neilschelly/lemur/
|
2025-04-01T04:10:43.708923
| 2016-09-07T12:43:29 |
175492839
|
{
"authors": [
"brharrington",
"ewiseblatt"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15312",
"repo": "Netflix/spectator",
"url": "https://github.com/Netflix/spectator/pull/319"
}
|
gharchive/pull-request
|
Added StackdriverWriter to push Spectator Registry data into Stackdriver
@brharrington please review
@cfieber, FYI
This requires the earlier PR I sent you, but I made it a disjoint PR.
This is the gist of how I'm going to push things into Stackdriver. It uses the earlier PR to get and filter the metrics to push, then transforms the data into Stackdriver calls.
In practice, a configuration bean in kork will set up a scheduler that calls this writer (and also optionally sets up the web endpoint for external polling).
@brharrington This is a major overhaul. Removing the wrong impression that measurements weren't unique let me simplify the logic. However, I added more sophistication to preparing the data for Stackdriver and to working around some bugs and limitations in Stackdriver I've come across, and found some workarounds for Spectator as well.
This needs PR 318 to make Travis happy.
@rimey
Could you have a look from a stackdriver perspective?
I'm going to try moving this into kork instead. Waiting for confirmation before closing this.
Restarting build after merging #318.
|
2025-04-01T04:10:43.710556
| 2018-04-28T05:22:02 |
318601805
|
{
"authors": [
"alirezavalizade",
"jrsquared"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15313",
"repo": "Netflix/vizceral",
"url": "https://github.com/Netflix/vizceral/pull/124"
}
|
gharchive/pull-request
|
babel-preset-es2015 => babel-preset-env - import lodash with correct way
babel-preset-es2015 => babel-preset-env
Import lodash the correct way (benchmarks).
Importing lodash modules individually is a pain and feels really messy. The same bundle-size effect can be had with a webpack plugin; since we are already using webpack, there's no reason not to just do it in webpack.
|
2025-04-01T04:10:43.723659
| 2023-06-03T15:28:05 |
1739622494
|
{
"authors": [
"enitrat"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15314",
"repo": "NethermindEth/CairoByExample",
"url": "https://github.com/NethermindEth/CairoByExample/pull/12"
}
|
gharchive/pull-request
|
feat: upgradeable contract
Closes #11
@Julio4 take a look and let me know if you find it easy to understand :)
Thanks for the valuable feedback @julio4 !
|
2025-04-01T04:10:43.727508
| 2022-06-09T15:38:42 |
1266328166
|
{
"authors": [
"D-DePablos"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15315",
"repo": "NethermindEth/juno",
"url": "https://github.com/NethermindEth/juno/pull/207"
}
|
gharchive/pull-request
|
Feature/cairo cli integration
Changes:
Initial CLI implementation for juno, juno-cli, which integrates commands from the Starknet Feeder Gateway.
Types of changes
[x] New feature (non-breaking change which adds functionality)
Testing
Requires testing
[ ] Yes
[x] No
Currently the implementation is simple enough to compare directly to pure HTTP requests.
Further comments
See idea behind PR here https://www.notion.so/nethermind/Cairo-CLI-Implementation-f524ca0c7f404b8eb151842c5f6d2065
And living docs here
OK, I'm just going to let this be. I have tried "go mod tidy" and forcing it to load the modules back in, but it's just not working. Would appreciate some guidance when anybody has the time!
Closed in favour of https://github.com/NethermindEth/juno/pull/215
|
2025-04-01T04:10:43.758931
| 2022-02-07T11:58:52 |
1125891064
|
{
"authors": [
"sursi",
"unicoder88"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15316",
"repo": "Nets-eCom/Magento2_easy",
"url": "https://github.com/Nets-eCom/Magento2_easy/issues/32"
}
|
gharchive/issue
|
Vanilla flow bug - order addresses are empty
Preconditions:
module version 1.4.6
Handle customer data: no
Split billing and shipping addresses: no
Checkout flow: embedded
Enable Auto-capture: no
After placing an order, the order billing and shipping addresses are empty. The reason is a bug somewhere in \Dibs\EasyCheckout\Model\Checkout::placeOrder:
// When embedded flow is used we let nets handle customer data, if redirect flow is used then we handle it.
// when the solution is hosted, the checkout url is not matching our checkout url!
$weHandleConsumerData = false;
if ($this->getHelper()->getCheckoutUrl() !== $dibsPayment->getCheckoutUrl()) {
$weHandleConsumerData = true;
}
In my case this renders to:
$weHandleConsumerData = false;
if ("/easycheckout" !== "/checkout") {
$weHandleConsumerData = true;
}
So it ignores the order address in Easy and uses the Magento quote billing/shipping addresses :(
Hi All,
If Handle customer data is set to No, the customer is expected to enter the same address.
Sincerely,
Surjan
|
2025-04-01T04:10:43.825110
| 2018-11-23T11:47:44 |
383795942
|
{
"authors": [
"JuliaSprenger",
"coveralls",
"pep8speaks"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15317",
"repo": "NeuralEnsemble/python-neo",
"url": "https://github.com/NeuralEnsemble/python-neo/pull/606"
}
|
gharchive/pull-request
|
[WIP] Array Annotations (improved)
This is a PR which resolves the current merge conflicts and failing tests occurring in #472. In addition it contains a small number of documentation changes.
Hello @JuliaSprenger! Thanks for submitting the PR.
There are no PEP8 issues in the file neo/core/analogsignal.py !
In the file neo/core/basesignal.py, following are the PEP8 issues :
Line 13:69: W291 trailing whitespace
Line 42:78: W291 trailing whitespace
Line 44:73: W291 trailing whitespace
Line 63:68: W291 trailing whitespace
Line 65:58: W291 trailing whitespace
Line 93:61: W291 trailing whitespace
In the file neo/core/dataobject.py, following are the PEP8 issues :
Line 82:60: W503 line break before binary operator
Line 251:82: E226 missing whitespace around arithmetic operator
In the file neo/core/epoch.py, following are the PEP8 issues :
Line 33:51: E241 multiple spaces after ','
Line 37:1: E302 expected 2 blank lines, found 1
Line 116:17: W504 line break after binary operator
In the file neo/core/event.py, following are the PEP8 issues :
Line 103:17: W504 line break after binary operator
There are no PEP8 issues in the file neo/core/irregularlysampledsignal.py !
In the file neo/core/spiketrain.py, following are the PEP8 issues :
Line 42:17: W504 line break after binary operator
Line 215:27: W504 line break after binary operator
Line 281:17: W504 line break after binary operator
Line 291:17: W504 line break after binary operator
Line 292:17: W504 line break after binary operator
Line 298:17: W504 line break after binary operator
Line 299:17: W504 line break after binary operator
In the file neo/io/brainwaresrcio.py, following are the PEP8 issues :
Line 297:13: E722 do not use bare 'except'
Line 356:13: E722 do not use bare 'except'
Line 580:32: W504 line break after binary operator
Line 904:23: W504 line break after binary operator
In the file neo/test/coretest/test_analogsignal.py, following are the PEP8 issues :
Line 286:21: W503 line break before binary operator
Line 287:21: W503 line break before binary operator
Line 288:21: W503 line break before binary operator
Line 448:61: E226 missing whitespace around arithmetic operator
Line 448:83: E226 missing whitespace around arithmetic operator
Line 610:43: E226 missing whitespace around arithmetic operator
Line 614:56: E226 missing whitespace around arithmetic operator
In the file neo/test/coretest/test_analogsignalarray.py, following are the PEP8 issues :
Line 625:33: W504 line break after binary operator
In the file neo/test/coretest/test_dataobject.py, following are the PEP8 issues :
Line 161:26: W504 line break after binary operator
In the file neo/test/coretest/test_epoch.py, following are the PEP8 issues :
Line 168:17: W504 line break after binary operator
Line 169:17: W504 line break after binary operator
Line 409:58: E226 missing whitespace around arithmetic operator
In the file neo/test/coretest/test_event.py, following are the PEP8 issues :
Line 353:17: W504 line break after binary operator
Line 520:45: E226 missing whitespace around arithmetic operator
Line 530:72: E226 missing whitespace around arithmetic operator
Line 539:83: E226 missing whitespace around arithmetic operator
In the file neo/test/coretest/test_generate_datasets.py, following are the PEP8 issues :
Line 680:5: E265 block comment should start with '# '
In the file neo/test/coretest/test_irregularysampledsignal.py, following are the PEP8 issues :
Line 110:74: E231 missing whitespace after ','
Line 234:25: E126 continuation line over-indented for hanging indent
Line 238:25: E126 continuation line over-indented for hanging indent
Line 342:90: E226 missing whitespace around arithmetic operator
Line 798:17: W503 line break before binary operator
Line 799:17: W503 line break before binary operator
Line 800:17: W503 line break before binary operator
In the file neo/test/coretest/test_spiketrain.py, following are the PEP8 issues :
Line 1932:17: W504 line break after binary operator
In the file neo/test/generate_datasets.py, following are the PEP8 issues :
Line 122:50: E226 missing whitespace around arithmetic operator
Line 148:52: E226 missing whitespace around arithmetic operator
Line 193:13: W504 line break after binary operator
Line 217:13: W504 line break after binary operator
Line 290:48: E226 missing whitespace around arithmetic operator
Line 291:50: E226 missing whitespace around arithmetic operator
Line 312:100: E501 line too long (111 > 99 characters)
Line 322:100: E501 line too long (121 > 99 characters)
Line 442:25: W504 line break after binary operator
In the file neo/test/tools.py, following are the PEP8 issues :
Line 123:21: W504 line break after binary operator
Line 124:17: W504 line break after binary operator
Line 510:20: E126 continuation line over-indented for hanging indent
Line 511:17: E131 continuation line unaligned for hanging indent
Coverage increased (+1.5%) to 49.568% when pulling 9217af135865d872f8dda91198b420793a4d1707 on JuliaSprenger:bjoern_arrayanno into 078912b85042e255c41bbdc8f8e4a330e761cf8f on NeuralEnsemble:master.
|
2025-04-01T04:10:43.925053
| 2023-08-05T20:34:02 |
1837930074
|
{
"authors": [
"Metroidude",
"workbee49"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:15318",
"repo": "NextJSTemplates/startup-nextjs",
"url": "https://github.com/NextJSTemplates/startup-nextjs/issues/7"
}
|
gharchive/issue
|
Hot Module Refresh Support
Hello,
Hot module refresh is one of my favorite features. I spent a while modifying the template to match my needs, but I can't seem to get hot module refresh working. Can you assist? Shouldn't Next.js ship with it by default? Maybe I just didn't enable it correctly.
@Metroidude did you resolve the issue?
Yes, thank you. Newbie mistake. I was compiling for production instead of using npm run dev
|