added (string, date 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | created (timestamp[s], date 2001-10-09 16:19:16 to 2025-01-01 03:51:31) | id (string, length 4 to 10) | metadata (dict) | source (string, 2 classes) | text (string, length 0 to 1.61M) |
---|---|---|---|---|---|
2025-04-01T06:36:38.467025 | 2024-01-18T04:32:44 | 2087637140 | {
"authors": [
"JoviDeCroock",
"deathemperor"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3",
"repo": "0no-co/GraphQLSP",
"url": "https://github.com/0no-co/GraphQLSP/issues/180"
} | gharchive/issue | Field used but still warned as not used
Describe the bug
maxHP is used on line 39 but is still reported as not used.
This only happens on root query data; it works fine in a Fragment. However, id is not reported as not used even though neither PokemonList nor PokemonItem uses it.
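To make the report easier to picture, here is a minimal hypothetical sketch of the kind of setup described (the PokemonItem/maxHP names come from the issue itself; the query, fragment, and consuming code are assumptions, not the reporter's actual reproduction):

```ts
import { graphql } from 'gql.tada';

// Hypothetical fragment: it does not select maxHP.
const PokemonItemFragment = graphql(`
  fragment PokemonItem on Pokemon {
    name
  }
`);

// The root query selects maxHP directly, alongside the fragment spread.
const PokemonsQuery = graphql(
  `
  query Pokemons {
    pokemons {
      maxHP
      ...PokemonItem
    }
  }
`,
  [PokemonItemFragment],
);

// maxHP is consumed here, yet the LSP reportedly still flags it as unused
// when the selection lives on the root query data.
function renderHp(data: { pokemons: Array<{ maxHP: number | null }> | null }) {
  return data.pokemons?.map((p) => p.maxHP);
}
```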
Reproduction
No response
gql.tada version
gql.tada 1.0.2
Validations
[X] I can confirm that this is a bug report, and not a feature request, RFC, question, or discussion, for which GitHub Discussions should be used
[X] Read the docs.
[X] Follow our Code of Conduct
@JoviDeCroock I was using LSP version 1.0.0 when creating this issue, but it still happens in 1.0.3
That does not happen for me on latest; when you upgrade the LSP you have to restart your TSServer.
I did however discover a different bug where overlapping fields can become a nuisance, fixing that in https://github.com/0no-co/GraphQLSP/pull/182
can confirm it's fixed in 1.0.5
this is not fixed though:
However, id is not reported as not used even though neither PokemonList nor PokemonItem uses it.
@deathemperor id and __typename are reserved fields for normalised caches and such
@JoviDeCroock updated my previous comment. The issue should still remain open.
That link reproduces nothing for me 😅
My bad, that repo doesn't reproduce it, but our production repo does. We'll take a closer look at it.
|
2025-04-01T06:36:38.477226 | 2023-02-24T12:53:25 | 1598576771 | {
"authors": [
"0sugo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5",
"repo": "0sugo/portfolio_mobile_view",
"url": "https://github.com/0sugo/portfolio_mobile_view/pull/5"
} | gharchive/pull-request | Desktop version 1
In this PR I did the following:
Created a new desktop version of my personal portfolio that was originally in mobile version using a media query.
I made sure the site is responsive by adding a break-point at 768px.
@NduatiKagiri Buttons on my end seem to be center-aligned. Kindly, may I know at what resolution they misbehave?
|
2025-04-01T06:36:38.490549 | 2022-12-21T01:58:45 | 1505589195 | {
"authors": [
"Dominik1999",
"bobbinth"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6",
"repo": "0xPolygonMiden/examples",
"url": "https://github.com/0xPolygonMiden/examples/issues/37"
} | gharchive/issue | Delete obsolete branches
The repo currently has 8 branches and it seems like at least 3 of them are obsolete. Ideally, we should be deleting branches as soon as the related PR is merged.
done
|
2025-04-01T06:36:38.509541 | 2024-10-28T06:00:56 | 2617390616 | {
"authors": [
"bitwalker",
"bobbinth",
"greenhat"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7",
"repo": "0xPolygonMiden/miden-vm",
"url": "https://github.com/0xPolygonMiden/miden-vm/issues/1547"
} | gharchive/issue | Support ability to specify advice data via MASM
In some situations it may be desirable to specify some data which a given program assumes to be available in the advice provider. One example of this is read-only data output by the compiler, but there could be many other examples. Currently, such data needs to be loaded separately into the VM, which introduces extra complexity.
One way around this is to allow users to define data which is to be loaded into the advice provider before a given program starts executing. The syntax for this in MASM could look like so:
advent.FOO.0x9dfb1fc9f2d5625a5a304b9012b4de14df5cf6e0155cdd63a27c25360562587a
642174d286a4f38e4d2e09a79d048fe7c89dec9a03fce29cbe10d32aa18a1dc4
bb48645fa4ffe141f9a139aef4fa98aec50bded67d45a29e545e386b79d8cefe
0f87d6b3c174fad0099f7296ded3abfef1a282567c4182b925abd69b0ed487c3
c251ce5e4e2da760658f29f6a8c54788d52ae749afd1aef6531bf1457b8ea5fb
end
Here, advent specifies that we want to add an entry to the advice map. The key for the entry would be the word defined by the 0x9dfb1fc9f2d5625a... value. The data of the entry would be the list of field elements defined by the hex-encoded string. We also provide a way to specify a label FOO by which the key can be referred to from the code. For example:
begin
push.FOO
end
Would push the key 0x9dfb1fc9f2d5625a... onto the stack.
Upon assembly, this data would be added to the MastForest. For this, we'd need to add a single AdviceMap property to the MastForest struct - e.g., something like this:
pub struct MastForest {
/// All of the nodes local to the trees comprising the MAST forest.
nodes: Vec<MastNode>,
/// Roots of procedures defined within this MAST forest.
roots: Vec<MastNodeId>,
/// All the decorators included in the MAST forest.
decorators: Vec<Decorator>,
/// Advice map to be loaded into the VM prior to executing procedures from this MAST forest.
advice_map: AdviceMap,
}
Then, when the VM starts executing a given MAST forest, it'll copy the contents of the advice map into its advice provider (we can also use a slightly more sophisticated strategy to make sure that the map is copied only once).
Open questions
While the above approach should work, there are a few things we need to clarify before implementing it:
In the above example FOO refers to a full word. All our constants currently refer to single elements. Ideally, we should be able to tell by looking at the constant name whether it is for a full word or a single element. So, maybe we should come up with some simple scheme here to differentiate them?
Should the key handle FOO be accessible outside of the module it was defined in? It seems like it would be a good idea, but then we need to be able to apply some kind of visibility modifiers to advent.
How should we handle conflicting keys during assembly and execution?
If we encounter two entries with the same key but different data during assembly, this should probably be an error.
But what to do if we start executing a MAST forest which wants to load data into the advice provider but an entry with the same key but different data is already in the advice map? Should we error out? Silently replace the existing data with the new one? Anything else?
...
This change is beneficial to #1544 since I was thinking of a way to convey the notion that MastForest(code) requires the rodata loaded into the advice provider before it can be executed.
The MASM-facing part (syntax, parsing, etc.) of the implementation would take me quite a lot of time since I'm not familiar with the code, but the VM-facing part I believe I can do in a reasonable amount of time. If @bitwalker is ok with it, I can take a stab at it.
Upon assembly, this data would be added to the MastForest. For this, we'd need to add a single AdviceMap property to the MastForest struct - e.g., something like this:
pub struct MastForest {
/// All of the nodes local to the trees comprising the MAST forest.
nodes: Vec<MastNode>,
/// Roots of procedures defined within this MAST forest.
roots: Vec<MastNodeId>,
/// All the decorators included in the MAST forest.
decorators: Vec<Decorator>,
/// Advice map to be loaded into the VM prior to executing procedures from this MAST forest.
advice_map: AdviceMap,
}
Then, when the VM starts executing a given MAST forest, it'll copy the contents of the advice map into its advice provider (we can also use a slightly more sophisticated strategy to make sure that the map is copied only once).
I've taken a look and here are my findings on what needs to be done to implement this:
Move the AdviceMap type from processor to core;
Handle the AdviceMap when merging MAST forests (join with other AdviceMaps?);
Serialization/deserialization of the MastForest should handle the AdviceMap as well, but it'll break storing the rodata separately in the Package (roundtrip serialization would not work). We could put rodata in AdviceMap on the compiler side as well and not store it separately in the Package. @bitwalker is it ok?
Open questions
While the above approach should work, there are a few things we need to clarify before implementing it:
How should we handle conflicting keys during assembly and execution?
If we encounter two entries with the same key but different data during assembly, this should probably be an error.
Yes, I think it should be an error. From the rodata perspective, the digest is a hash of the data itself, so if the data is different, the digest will be different as well. From the MASM perspective, this might mean key/digest re-use, which does not seem like something a user might want, so failing early is a good thing to do.
But what to do if we start executing a MAST forest which wants to load data into the advice provider but an entry with the same key but different data is already in the advice map? Should we error out? Silently replace the existing data with the new one? Anything else?
If the user code treats the advice provider as some sort of dictionary, that's a valid use case. I'm not sure if it should be an error.
Handle the AdviceMap when merging MAST forests (join with other AdviceMaps?);
Yes, I think merging would work fine here. If there is a conflict (two entries with the same key by different data), we'd error out here as well.
Serialization/deserialization of the MastForest should handle the AdviceMap as well, but it'll break storing the rodata separately in the Package (roundtrip serialization would not work). We could put rodata in AdviceMap on the compiler side as well and not store it separately in the Package. @bitwalker is it ok?
Yeah - I think once we have this support for advice map entries in MastForest, there is no need to store rodata separately in the package.
In the above example FOO refers to a full word. All our constants currently refer to single elements. Ideally, we should be able to tell by looking at the constant name whether it is for a full word or a single element. So, maybe we should come up with some simple scheme here to differentiate them?
The parser already knows how to parse various sizes of constants, including single words, or even arbitrarily large data (the size of the data itself indicates which type it is).
Should the key handle FOO be accessible outside of the module it was defined in? It seems like it would be a good idea, but then we need to be able to apply some kind of visibility modifiers to advent.
These would be effectively globally visible symbols, and while unlikely, you can have conflicting keys, so I think any attempt to make it seem like these can be scoped should be avoided.
How should we handle conflicting keys during assembly and execution?
I'm not sure how we handle this during execution today actually, presumably we just clobber the data if two things are loaded with the same key into the advice map?
During assembly I think it has to be an error. It might be possible to skip the error if the data is the same, but I think it's still an open question whether or not you would want to know about the conflicting key regardless.
I'm questioning a bit whether it makes sense to define this stuff in Miden Assembly;
I think the keyword has to be something more readable, advent - even knowing what it is supposed to be - still took me a second to figure out what it meant. Personally, I'd choose something more like advice.init or adv_map.init or something.
I've taken a look and here are my findings on what needs to be done to implement this:
Move the AdviceMap type from processor to core;
Handle the AdviceMap when merging MAST forests (join with other AdviceMaps?);
We'll need to catch conflicting keys (different values for the same key, but fine if the keys overlap with the same value), but a straight merge of the two maps should be fine otherwise.
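For illustration only, a small sketch of that merge policy (written in TypeScript purely for brevity; the real implementation would live in the Rust AdviceMap/MastForest code, and the names here are assumptions):

```ts
// Merge two advice maps. Identical key/value pairs are fine; the same key
// mapping to different data is treated as a conflict and rejected.
type AdviceMap = Map<string, readonly bigint[]>;

function mergeAdviceMaps(a: AdviceMap, b: AdviceMap): AdviceMap {
  const merged = new Map(a);
  for (const [key, values] of b) {
    const existing = merged.get(key);
    if (existing === undefined) {
      merged.set(key, values);
    } else if (
      existing.length !== values.length ||
      existing.some((v, i) => v !== values[i])
    ) {
      throw new Error(`conflicting advice map entries for key ${key}`);
    }
    // Same key, same data: nothing to do.
  }
  return merged;
}
```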
Serialization/deserialization of the MastForest should handle the AdviceMap as well, but it'll break storing the rodata separately in the Package (roundtrip serialization would not work). We could put rodata in AdviceMap on the compiler side as well and not store it separately in the Package. @bitwalker is it ok?
Once we can write our rodata to the MastForest directly, we won't need to do it in the Package anymore, so that sounds fine to me!
@greenhat For now, I would focus purely on the implementation around the MastForest/processor (what you've suggested AIUI), don't worry about the AST at all. That's all we need for the compiler anyway, while we figure out how to handle the frontend aspect in the meantime.
|
2025-04-01T06:36:38.529513 | 2022-02-21T04:52:51 | 1145320840 | {
"authors": [
"0xdanelia",
"TechnologyClassroom"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9",
"repo": "0xdanelia/rxr",
"url": "https://github.com/0xdanelia/rxr/issues/1"
} | gharchive/issue | Missing LICENSE
I see you have no LICENSE file for this project. The default is copyright.
I would suggest releasing the code under the GPL-3.0-or-later or AGPL-3.0-or-later license so that others are encouraged to contribute changes back to your project.
License file added
|
2025-04-01T06:36:38.539633 | 2021-07-04T12:19:15 | 936445338 | {
"authors": [
"100PXSquared",
"Jacbo1"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10",
"repo": "100PXSquared/VisTrace",
"url": "https://github.com/100PXSquared/VisTrace/issues/11"
} | gharchive/issue | [bug] dll version not injecting to client since 0.2.3
The dll version has not injected into the client since 0.2.3. Only the workshop version works after that, which restricts its use to singleplayer unless you can convince a server owner to add it.
Not a bug, however injection can be re-added easily as a backup.
Added in e0baa6f1658b192cd4d0a9d9edf3e6f92dd2b143
|
2025-04-01T06:36:38.643103 | 2020-11-01T11:16:17 | 733940829 | {
"authors": [
"khawkins98",
"zachleat"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11",
"repo": "11ty/11ty-website",
"url": "https://github.com/11ty/11ty-website/pull/833"
} | gharchive/pull-request | Add sites: EMBl.org and Visual Framework
This PR adds two projects:
EMBL.org laboratory website
The code for this is linked, but is access restricted due to "internal policy". Not sure if a public codebase is a requirement?
The Visual Framework
An in-development component library, which is sponsored by the EMBL.org project
Thanks for maintaining the dashboard, it's really neat!
Thank you!
|
2025-04-01T06:36:38.647899 | 2024-07-15T15:50:19 | 2409074688 | {
"authors": [
"pauleveritt",
"uncenter",
"zachleat"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:12",
"repo": "11ty/eleventy-plugin-template-languages",
"url": "https://github.com/11ty/eleventy-plugin-template-languages/issues/12"
} | gharchive/issue | JSX and TSX
Hey Zach, looking for any contributors? I wouldn't mind pitching in to add JSX and TSX as languages, as well as general maintenance. I could also help organize a group that pitches in.
I think JSX and TSX should be separate as TSX adds some TypeScript pushups that will frustrate people.
Yeah, absolutely! Be awesome to simplify the docs on these pages (or at least provide simplified instructions): https://www.11ty.dev/docs/languages/jsx/ https://www.11ty.dev/docs/languages/typescript/
@zachleat Here's a PR in a fork with tests and writeup.
I propose I toot about it, you quote-toot it, and see if we can get some non-Zach eyeballs on it.
Once settled, merged, and released, I can do a PR for docs changes.
Why not PR it to this repository?
@uncenter I did, yesterday.
Did you mean to open two duplicate PRs?
I did, I wanted to treat JSX as different from TSX. The latter has a bit more ceremony (and Zach is doing TS stuff ATM.)
If you'd prefer, I can combine them.
|
2025-04-01T06:36:38.944891 | 2023-10-23T17:30:39 | 1957661763 | {
"authors": [
"1bl4z3r",
"jamesbraza"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:14",
"repo": "1bl4z3r/hermit-V2",
"url": "https://github.com/1bl4z3r/hermit-V2/issues/30"
} | gharchive/issue | Request: ability to disable last modified timestamp
I see this is part of single.html in v1.0.2, where it has an if .Lastmod conditional.
Any chance we can expose a new config parameter to turn off this behavior? I like the default "creation only" timestamp. Alternately, maybe a post front matter parameter could be useful too
Hi @jamesbraza,
If you don't supply Lastmod in the post's front matter, it will not show the Modified section.
Let me know if you were able to resolve it.
For me, my front matter is (it has no Lastmod):
---
title: "Foo"
date: 2023-04-23
draft: false
tags: ["article"]
---
Even explicitly adding Lastmod: false did not disable the "Modified" text. What am I missing?
Hi again,
It seems to be very strange indeed.
In this post, I have supplied Lastmod as follows.
title: "Typography"
slug : 'typography'
date: 2023-07-22T14:36:33+05:30
draft: false
featuredImg: ""
description : 'Integer lobortis vulputate mauris quis maximus. Vestibulum ac eros porttitor, auctor sem sed, tincidunt nulla. In sit amet tincidunt ex.'
tags:
- Demo
- Typography
Lastmod : 2023-08-15T15:36:33+05:30
I get :
Now if I remove:
title: "Typography"
slug : 'typography'
date: 2023-07-22T14:36:33+05:30
draft: false
featuredImg: ""
description : 'Integer lobortis vulputate mauris quis maximus. Vestibulum ac eros porttitor, auctor sem sed, tincidunt nulla. In sit amet tincidunt ex.'
tags:
- Demo
- Typography
I don't get the Lastmod section.
Please re-check your config to ensure everything is properly formatted, and/or do a full recheck of all the files to see if Lastmod is given somewhere.
Lol I am trying to figure it out, it's got me stumped too
One question, elsewhere I see .Params.tags, but with .Lastmod it's not using Params. Why is it not .Params.Lastmod?
Ahh I see. I have enableGitInfo = true in my config, which seems to globally enable .Lastmod: https://github.com/1bl4z3r/hermit-V2/blob/1ad173d2ab6817d7ca033b28b507df5ba8e08be6/hugo.toml#L32
Any chance you can add some capability to stop enableGitInfo from opting pages into .Lastmod?
To properly explain your previous query,
There were some changes in Hugo where while defining custom local page variables (i.e. Page Variables whose scope is within the page itself), we can ignore .Params as it is implied that we are trying to fetch local page variables. You can definitely put in .Params.Lastmod and the output would be exactly the same.
It was something to differentiate from inbuilt Page variables and custom Page variables. I am still unsure if we can access custom Page variables from other pages or not.
For enableGitInfo, I am quite unsure how to properly implement this, so that it would not break the core functionality.
Ohkay, here's a big brain moment.
What can be done is: in each page, a new Page Variable could be set up whose only job in the world is to enable/disable the [Modified:] section. It won't matter whether .Lastmod should be shown or not for the post.
If .GitInfo is true and .LastmodEnabler is false, Modified section is not shown
If .GitInfo is true and .LastmodEnabler is true, Modified section is shown, Date fetched from git
If .GitInfo is false and .LastmodEnabler is true, Modified section is shown, with each page having a dedicated .Lastmod
If .GitInfo is false and .LastmodEnabler is false, Modified section is not shown
{{ if .Page.Params.LastmodEnabler }}
  {{/* show the Modified section */}}
{{ end }}
Let me know if you want this to be implemented.
I like what you propose! It:
Is simple and intuitive
Allows for both global configuration and per-page configuration
Solves my problem here too haha
Sounds good to me
Cool cool cool
It shouldn't take me more than 1 business day to implement.
Okay sound good, ahha no need to provide business days here, it's FOSS babyyyy
Implemented. Same is updated on #last-modified-date
If IgnoreLastmod is not provided or IgnoreLastmod=false, then:
If enableGitInfo = true, then the Git hash will be shown in [...] after Date.
If enableGitInfo = false, then:
If Lastmod is not provided or Lastmod has the same value as Date, an error will be thrown.
If Lastmod is provided and is different from Date, the value of Lastmod will be displayed in [...] after Date.
Closing this issue. Re-open if required.
Not Fixed yet
This is finalfinalfinal. As usual, details updated in #last-modified-date
If ShowLastmod:true, then:
If enableGitInfo = true, then the Git hash will be shown in [...] after Date.
If enableGitInfo = false, then:
If Lastmod is not provided or Lastmod has the same value as Date, an error will be thrown.
If Lastmod is provided and is different from Date, the value of Lastmod will be displayed in [...] after Date.
If ShowLastmod is not provided, it defaults to false; this is equivalent to providing ShowLastmod:false.
And I was wrong.
Any Page Variable should be called via .Page.Params. If you omit .Page or .Site, . by default has global scope.
.Lastmod is an inbuilt Hugo variable attached to GitInfo, hence it has global scope.
|
2025-04-01T06:36:39.022748 | 2022-03-12T07:45:25 | 1167211980 | {
"authors": [
"282857341",
"jjhHan",
"jjhhan",
"pcl1121",
"puppy2000"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:15",
"repo": "282857341/nnFormer",
"url": "https://github.com/282857341/nnFormer/issues/57"
} | gharchive/issue | trainer class is None when training with my own data
how to solve this problem?
Please make sure the trainer file exists in the path:
nnFormer/training/network_training/
Another possible reason is that the trainer file exists in the above path, but the Class name in the trainer is not the same as the trainer file name.
do you run nnformer on your own dataset successfully?
What does that mean? I use my own data, which only has foreground and background.
I ran python inference_synapse.py
but I got this
open the dice_pre.txt
I got
Is the number of classes not set properly?
so what should I do
Please make sure the trainer file exists in the path:
nnFormer/training/network_training/
Another possible reason is that the trainer file exists in the above path, but the Class name in the trainer is not the same as the trainer file name.
|
2025-04-01T06:36:39.035846 | 2017-07-28T17:48:42 | 246415595 | {
"authors": [
"asilvestre87",
"kxxxo",
"nfacha",
"tonydspaniard"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:16",
"repo": "2amigos/yii2-leaflet-markercluster-plugin",
"url": "https://github.com/2amigos/yii2-leaflet-markercluster-plugin/issues/3"
} | gharchive/issue | Not working - JS Error
Uncaught TypeError: Cannot read property 'trim' of undefined
at trim (Util.js:121)
at splitWords (Util.js:127)
at NewClass.on (Events.js:49)
at NewClass.initialize (leaflet.markercluster-src.js:51)
at new NewClass (Class.js:22)
at Object.L.markerClusterGroup (leaflet.markercluster-src.js:1080)
at map_init (map:678)
at HTMLDocument.<anonymous> (map:683)
at fire (jquery.js:3187)
at Object.fireWith [as resolveWith] (jquery.js:3317)
Followed the documentation in the readme; it looks outdated though.
Same error after install
@nfacha @kxxxo Big apologies... I do not have much time to fix the bug my self right now, but I'll fix it as soon as i can.
Same error...
I need to change
$cluster = new MarkerCluster([
'jsonUrl' => Yii::$app->controller->createUrl('projects/json')
]);
for this:
$cluster = new MarkerCluster([
'url' => Yii::$app->urlManager->createUrl('projects/json')
]);
Because jsonUrl isn't a known property, it throws this exception:
Setting unknown property: dosamigos\leaflet\plugins\markercluster\MarkerCluster::jsonUrl
and
Yii::$app->controller->createUrl('projects/json') throws: Calling unknown method: app\controllers\ProjectsController::createUrl()
but the error (Uncaught TypeError: Cannot read property 'trim' of undefined) is still not fixed
|
2025-04-01T06:36:39.124917 | 2021-11-09T21:36:15 | 1049144011 | {
"authors": [
"sgibson91"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:17",
"repo": "2i2c-org/infrastructure",
"url": "https://github.com/2i2c-org/infrastructure/pull/819"
} | gharchive/pull-request |
Running dask_test_notebook.ipynb test notebook...
Hub https://staging.us-central1-b.gcp.pangeo.io not healthy! Stopping further deployments. Exception was 'NoneType' object has no attribute 'splitlines'.
I'm going to deploy manually and skip the tests for now. The actual config update was applied successfully and worked as expected when I tested on staging.
|
2025-04-01T06:36:39.129369 | 2021-10-01T15:13:20 | 1013466571 | {
"authors": [
"choldgraf",
"sgibson91"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:18",
"repo": "2i2c-org/pilot-hubs",
"url": "https://github.com/2i2c-org/pilot-hubs/pull/726"
} | gharchive/pull-request | Recovering PR #706: Supporting native JupyterHub OAuthenticator alongside Auth0 for our hubs
Summary
This PR is a reconstruction of PR #706 which failed on merge due to JSON schema validation errors. In addition to reconstructing that, this PR also aims to address the JSON schema validation errors and extends the validate() function in deployer to also validate the secret config, if it exists.
fixes https://github.com/2i2c-org/pilot-hubs/issues/625
Changes to config/hubs/schema.yaml
I have borrowed a pattern from @damianavila to make the auth0.connection property conditional on auth0.enabled which is now a fully fledged property of auth0.
I have also set the default value of auth0.enabled to be true so that we don't have to go through every *.cluster.yaml file and add the enabled key for auth0.
I have deployed this to Pangeo staging (partnered with #707) and it works! So marking this as ready for review :)
Just a quick question - is this basically just the same PR as before, but now the original bug that we tripped has been fixed? If so, and if you've already tested it out, I'd be +1 on merging unless you want fresh eyes on any new stuff in particular, since the last PR already had some approves
I've updated the comment @yuvipanda pointed out, so I will merge once tests pass :)
|
2025-04-01T06:36:39.177246 | 2023-05-23T22:07:32 | 1722854624 | {
"authors": [
"34736384",
"Dartv",
"DenSwitch",
"Drekaelric",
"GatoTristeY",
"GraveUypo",
"LouisD69",
"MisakaMikoto2333",
"Qepz",
"RyanDVasconcelos",
"SalenGency",
"UnlishedTen83",
"WenchenWang",
"copyvw",
"cybik",
"dioni04",
"hayzar-s",
"hotpot1026",
"jamespmnd",
"kuznetsov-ns",
"lunimater",
"markuskusxyren",
"mio12333",
"narnian19",
"nodaSnowball",
"nofuma99",
"pocasolta01",
"rlawjdtn8890",
"t0xic0der",
"theloraxofdeath",
"twiGGyAJOfficial",
"xzf0509",
"yangtryyds"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:24",
"repo": "34736384/genshin-fps-unlock",
"url": "https://github.com/34736384/genshin-fps-unlock/issues/137"
} | gharchive/issue | Doesn't work with 3.7.0
hi. it doesn't work with the new update 3.7.0. please update it :)
yeah same problem here, pls fix i cant play this game at 60fps after using this godly fps unlocker
Same here.
+1 on this. Hope tool can be updated for 3.7 and beyond
same cant use it
same problem
Same here. It's too awful playing again with 60 fps
same here
Same here
same here
same here
same here please fix IT thank you
A dark day for gamers...
Yep, doesn't work anymore. Gross how they would go out of their way to sabotage something that literally doesn't hurt anyone in any way. If there's no way around this, i'm quitting this game early.
Just needs the new memory pattern to scan for. Here's the issue
Does not work for me as well :(
Same here. I hope an update will be released soon. My eyes can't handle 60FPS anymore.. they're bleeding..
Someone else has a problem!
https://github.com/34736384/genshin-fps-unlock/issues/139#issue-1723272551
Seeing everyone in a state of panic and confusion because of the tool malfunction really cracks me up. Does mihoyo even realize how poorly their game is designed? They really need to reflect on themselves.
Calm down, I'll update it soon.
When running 2.1.0.exe, the Setup installation does not go smoothly and problems occur. Please fix it quickly!
Someone else has the problem! #139 (comment)
I think the problem is with your PC. Also, the maintainer of this project has no obligation to provide you with a solution. Please mind your manners.
Chrome blocked it because it appeared as a malicious code when downloading it.
works fine. thanks
thank you for fixing it. i can't live without the extra smoothness anymore.
is this program part of hutao? or another independent program?
I downloaded the new patch but i'm still locked at 60 fps, do i need to do something else besides replace the .exe?
Same here, still locked at 60. Tried changing different settings in program and game, but no luck.
new one stopped working also
|
2025-04-01T06:36:39.181105 | 2018-06-08T09:46:53 | 330591831 | {
"authors": [
"dolmen",
"xuri"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:25",
"repo": "360EntSecGroup-Skylar/excelize",
"url": "https://github.com/360EntSecGroup-Skylar/excelize/issues/230"
} | gharchive/issue | Follow best practices for Git commit messages
@xuri Could you follow the established best practices in Git commit messages? That would help external contributors to follow the changes happening.
https://chris.beams.io/posts/git-commit/
Commit d96440edc480976e3ec48958c68e67f7a506ad32 breaks many rules:
many unrelated changes in a single commit
commit message does not follow the standard format "1 summary line + 1 empty line + details"
By the way, I expect that the content of CONTRIBUTING.md also applies to project maintainers.
https://github.com/360EntSecGroup-Skylar/excelize/blob/master/CONTRIBUTING.md#commit-messages
Thanks for your suggestion, I will follow best practices in the future code commit.
|
2025-04-01T06:36:39.191193 | 2024-10-29T14:54:02 | 2621505832 | {
"authors": [
"hugop95",
"pp0rtal"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:26",
"repo": "360Learning/mongo-bulk-data-migration",
"url": "https://github.com/360Learning/mongo-bulk-data-migration/pull/19"
} | gharchive/pull-request | fix: fixes conflict-related issues
Fixes #18
Each commit fixes one issue.
Commit 1: Rollbacking a specific array element.
Commit 2: Rollbacking a specific array element (nested).
Commit 3: Rollbacking a number-indexed object.
Explanation
Rollbacking a specific array element
Initial document
{
array: ['a', 'b']
}
Migration query:
{
$set: {
'array.1': 'new b'
}
}
Generated rollback query:
{
"$set": {
"_id": "6720f5a6efc5558c18b3a795",
"array": [
"a",
"b"
]
},
"$unset": {
"array.1": 1
}
}
Proposed fix
For a field X to be added to the $unset section, there must not be any key K in the $set section such that `${K}.` appears in X (in the example above, array.1 contains array., so it must not be added to $unset).
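A minimal sketch of that check, assuming hypothetical helper names (the library's actual rollback builder may be structured differently):

```ts
// A field is only eligible for $unset if no $set key already restores a
// containing path (the rule above: no set key K with `${K}.` appearing in it).
function shouldUnset(field: string, setKeys: string[]): boolean {
  return !setKeys.some((key) => field.includes(`${key}.`));
}

// From the example: the rollback $set restores "array" in full,
// so "array.1" is filtered out of $unset.
shouldUnset('array.1', ['_id', 'array']);  // false
shouldUnset('newField', ['_id', 'array']); // true
```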
Rollbacking a specific array element (nested)
Initial document
{
array: [
null,
{
nestedArray: [null, 'b']
},
],
};
Migration query:
{
$set: {
'array.1.nestedArray.1': 'new b'
}
}
Generated rollback query:
{
"$set": {
"_id": "6720f5daf91fae9d26b6990f",
"array": [
null
],
"array.1.nestedArray": [
null,
"b"
]
},
"$unset": {
"array.1.nestedArray.1": 1
}
}
The issue comes from the fact that we always set the complete array when we should be updating specific elements sometimes.
It's better in terms of performance to set an array completely rather than individually set its elements, so let's ensure that this remains the standard behavior when possible.
Proposed fix
After flattening the backup document, while iterating on its keys/values, the idea is to add an additional check when we encounter an array to restore: check if among the properties set during the update, a "deeper" key exists.
If that's the case, we should use
rollbackSet[`${nestedPathToArray}.${index}`] = value;
rather than
rollbackSet[nestedPathToArray][Number(index)] = value;
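Sketched out, the decision described above might look like this (hypothetical variable names; the library's real code differs):

```ts
// updatedKeys: dotted paths set by the migration's $set.
// nestedPathToArray / index: position of the array element being restored.
function restoreArrayElement(
  rollbackSet: Record<string, unknown>,
  updatedKeys: string[],
  nestedPathToArray: string,
  index: string,
  value: unknown,
): void {
  const hasDeeperUpdate = updatedKeys.some((key) =>
    key.startsWith(`${nestedPathToArray}.`),
  );
  if (hasDeeperUpdate) {
    // A deeper key was updated: restore only this element via its dotted path.
    rollbackSet[`${nestedPathToArray}.${index}`] = value;
  } else {
    // Default (and faster) behavior: restore the whole array at once.
    if (!Array.isArray(rollbackSet[nestedPathToArray])) {
      rollbackSet[nestedPathToArray] = [];
    }
    (rollbackSet[nestedPathToArray] as unknown[])[Number(index)] = value;
  }
}
```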
Rollbacking a number-indexed object
Initial document
{
object: {
0: 'a',
1: 'b'
},
};
Migration query:
{
$set: {
'object.1': 'new b'
}
}
Generated rollback query:
{
"$set": {
"_id": "6720f61a1f16e5169e89423e",
"object": "b"
},
"$unset": {
"object.1": 1
}
}
Individual array element setting has the same syntax as this case.
Fix proposed
Replace rollbackSet[nestedPathToArray] = value; with rollbackSet[key] = value;.
The rollbackSet[nestedPathToArray] = value; line was not covered by tests, so I think unexpected side effects from that change are minimal.
Checklist
[x] Test new version on dev (locally).
[x] Test new version on staging (locally).
Hi @LucVidal360 @pp0rtal 👋
As @pp0rtal mentioned, I think it would be a good idea to create a RC and test this version in the CI if possible. I have tested it locally and things work as expected, but an additional check would be welcomed.
@LucVidal360 It sure was! 😁
@hugop95 Thanks a lot for this contribution! :rocket:
|
2025-04-01T06:36:39.199992 | 2016-11-10T19:03:40 | 188590915 | {
"authors": [
"eBucher",
"ezquire",
"freqlabs"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:27",
"repo": "370-Alexa-Project/CS370_Echo_Demo",
"url": "https://github.com/370-Alexa-Project/CS370_Echo_Demo/issues/65"
} | gharchive/issue | Utterance Issues
Here are a list of known utterance issues that seem reasonable for a user to say, but Alexa is thinking that the user is trying to invoke a different intent. More will be added as they are found.
Utterance: Clubs
Expected Intent: ClubsCategoryIntent
Actual Intent: AllCategoryIntent
Utterance: how much is it
Expected Intent: GetFeeDetailsIntent
Actual Intent: AllCategoryIntent
3
Utterance: Tell me the sports events
Expected Intent: SportsCategoryIntent
Actual Intent: NextEventIntent
Ok, so I think this issue can be rolled into the other one. Any objections?
4
Utterance: Where is it
Expected Intent: LocationDetailIntent
Actual Intent: AllCategoryIntent
I think we need versions of the detail intent utterances that don't have slots.
5
Utterance: What's happening Monday
Expected Intent: GetEventsOnDateIntent
Actual Intent: AllCategoryIntent
6
Utterance: What's tomorrow
Expected Intent: GetEventsOnDateIntent
Actual Intent: NextEventIntent
I'm closing this because all the quirks listed here have been fixed. The tree is currently undergoing some big changes, so a new issue may be started for new quirks that are encountered.
|
2025-04-01T06:36:39.215978 | 2020-12-08T13:56:29 | 759466495 | {
"authors": [
"123321ssd",
"3arthqu4ke",
"notperry1"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:28",
"repo": "3arthqu4ke/PingBypass-Client",
"url": "https://github.com/3arthqu4ke/PingBypass-Client/issues/2"
} | gharchive/issue | Server.json
Pingbypass client doesn't create the server.json file for some reason
The Server.json should be created on your VPS by the PingBypass. The Client creates no such file.
this is the client not the server xd
|
2025-04-01T06:36:39.219811 | 2020-05-24T16:28:54 | 623898024 | {
"authors": [
"DAMO238",
"TonyCrane",
"aliPMPAINT"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:29",
"repo": "3b1b/manim",
"url": "https://github.com/3b1b/manim/pull/1102"
} | gharchive/pull-request | Added multithreading support
Reason for adding
To enable faster rendering of scenes on cpu's with a high amount of cores vs single core speed.
Allow users to opt in using the new "-j" or "--threads" flag followed by the integer number of threads to use (note that -j is picked because -t is taken and the syntax of gcc is -j4 for 4 cores).
Working Example
My CPU does not benefit massively from it, but using 2 threads gave me a roughly 10-20% speed boost (measured using the time command).
Threads can only work on separate scenes, so enabling extra threads when processing one scene will do nothing.
Makes use of the inbuilt threading library, so there is no need to add more dependencies.
Dude, you made lots of people angry...
Just one little piece of advice: Be careful
Seems outdated, so I'm closing this.
|
2025-04-01T06:36:39.274207 | 2016-03-18T01:50:19 | 141751883 | {
"authors": [
"hafizur-rahman"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:30",
"repo": "3scale/3scale_ws_api_for_python",
"url": "https://github.com/3scale/3scale_ws_api_for_python/pull/23"
} | gharchive/pull-request | Fix hardcoded timeout
Note: was introduced as a typo from https://github.com/3scale/3scale_ws_api_for_python/pull/21
@vdel26 Please include this fix as well. Thanks.
|
2025-04-01T06:36:39.310152 | 2024-06-24T15:19:31 | 2370515883 | {
"authors": [
"olli-gold",
"saschaszott"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:31",
"repo": "4Science/DSpace",
"url": "https://github.com/4Science/DSpace/issues/462"
} | gharchive/issue | SolrDedupServiceImpl: potential bug found in method cleanIndex
Bug Description
We found a potential bug in method cleanIndex:
https://github.com/4Science/DSpace/blob/04c7aa622a94e048a056dee462d8220592764971/dspace-api/src/main/java/org/dspace/app/deduplication/service/impl/SolrDedupServiceImpl.java#L637
Should this check be modified to i != null?
As far as I understand the code I would not say this is a bug. If there is an object in the index, which does not exist in the database, it's removed from the index. If it exists it should not be removed, as long as the index should not be erased completely. If it's expected to erase all of the index, this check is not needed at all and all objects should be unindexed (but I guess it does not make sense to erase the index by iterating through all objects).
So I don't think this is a bug and would guess it's intended as it is.
@olli-gold , thanks. That makes sense. I'll close the issue as won't fix.
@olli-gold , I have to reopen this issue.
If i is null, then in
https://github.com/4Science/DSpace/blob/04c7aa622a94e048a056dee462d8220592764971/dspace-api/src/main/java/org/dspace/app/deduplication/service/impl/SolrDedupServiceImpl.java#L642
the null value is passed as second method parameter.
At the end, this will result in a NullPointerException in
https://github.com/4Science/DSpace/blob/04c7aa622a94e048a056dee462d8220592764971/dspace-api/src/main/java/org/dspace/app/deduplication/service/impl/SolrDedupServiceImpl.java#L533
because of null object access in item.getID() and item.getType(). Do you agree?
Oh, yes, you are right. This is obviously a bug, which needs to be fixed.
|
2025-04-01T06:36:39.319334 | 2023-03-08T21:38:26 | 1615982014 | {
"authors": [
"clarabakker"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:32",
"repo": "4dn-dcic/Benchmark",
"url": "https://github.com/4dn-dcic/Benchmark/pull/2"
} | gharchive/pull-request | pruned and updated EC2 spreadsheet
Updated EC2 spreadsheet using https://instances.vantage.sh/
Hm, I wanted the process to be reproducible, so I avoided manually adding entries. The indexing needed to be changed for this, which is responsible for the missing names (and can be fixed). I'll see what will help.
|
2025-04-01T06:36:39.362871 | 2016-11-14T21:16:39 | 189228600 | {
"authors": [
"metaprime",
"ravenstorm767",
"seattle255"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:33",
"repo": "4pr0n/ripme",
"url": "https://github.com/4pr0n/ripme/issues/361"
} | gharchive/issue | Future of RipMe: separation of program + ripper logic
@4pr0n seems to not be maintaining this project on a regular basis anymore. It's totally understandable. I myself haven't had any time lately to dedicate to any side hobby projects, let alone this one. For the most part, RipMe still continues to work for me. (I still use it multiple times a week.)
As discussed in #247, it seems many people feel like the project is dead -- and/or for their scenarios, it is not working as well any more, and their scenarios are not being updated to fix the problems.
@Wiiplay123 @i-cant-git - since you two have also been contributors to the project and commented on #247, I wonder if we can discuss a potential future for this project? Also if you could also share if you know of any currently-maintained projects that can be used as alternatives?
I think this style of catch-all project is the sort of thing that is unmaintainable in the long term except by a seriously dedicated effort. The problem is that there's essentially no limit to the number of rippers that could be included in this project's source code. Things have gotten really bloated here, and everyone is depending on official updates from a single source to add new rippers. It's hard to know how to prioritize maintenance.
Questions like what rippers are people using most can only be answered by how loudly people complain about the broken ones. There's a lot of more obscure sites that are supported in-box with RipMe (I contributed some of them), and maybe some of the more common ones go by the wayside when trying to support so many.
I've been thinking lately that this project is really in two distinct parts:
There's the core of the project which provides the structure, interface, and framework to use to define the rippers.
There's the rippers themselves, which all more or less follow a consistent algorithm of starting at a URL, navigating through some HTML, extracting image links from the HTML, and queuing those images to be ripped.
I've been thinking it might be a worthwhile effort to separate the two concerns. Keep all of part #1 in the same repo, and expose rippers as a plug-in model. Move the rippers into another repo, maybe keep just the core of the rippers maintained by the main project (in a separate repo), and make it easy for users to define their own locally on their machine. Add a way to add new ripper sources (github repos, local sources, links to individual definition files) in the RipMe UI.
Rippers could even be written in a non-compiled scripting language like JavaScript (since the JDK has a built-in ScriptEngine for JavaScript), or if we can separate concerns well enough, define the logic in a simple description language like JSON. If we could do that, individuals could maintain their own rippers, and we could provide links to known ripper sources, as well as including a few of those sources in the default application configuration.
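To make the idea concrete, here is a purely hypothetical sketch of what such a declarative ripper definition could look like (none of these fields exist in RipMe today; this is just one possible shape for a JSON/JS-style plug-in definition):

```ts
// Hypothetical ripper definition: everything here is illustrative only.
interface RipperSpec {
  name: string;
  // Which URLs this ripper handles.
  urlPattern: string;
  // CSS selector and attribute for image links on a gallery page.
  imageSelector: string;
  imageAttribute: string;
  // Optional rewrite from thumbnail URL to full-size URL.
  thumbToFull?: { find: string; replace: string };
  // Optional selector for the "next page" link, for multi-page galleries.
  nextPageSelector?: string;
}

const exampleRipper: RipperSpec = {
  name: "foobar-gallery",
  urlPattern: "https?://(www\\.)?foobar\\.com/gallery/.*",
  imageSelector: "div.gallery a.thumb",
  imageAttribute: "href",
  thumbToFull: { find: "/thumb/", replace: "/" },
  nextPageSelector: "a.next",
};
```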
Pages that host image content generally look like one of the following:
Images are embedded directly in the page (action: download the embedded images)
Thumbnails link to full-size images (action: download the images at the links)
Thumbnails link to another page like 1 (action: load the linked page and then use action 1)
Thumbnails link to another gallery page like 2 or 3 (action: load the page and then use action 2 or 3).
Thumbnails link through an ad-wall which redirects to an image or a page like 1 or 2 (I'm not sure if we currently have any rippers which automate getting through the ad wall)
The sites we are interested in making rippers for are either one of the above, or a social style of website where users aggregate content by linking to pages like the above.
For sites formatted like 1 and 2 (example: 4chan threads are formatted like 2), AND where all content is on a single page (whether the content is embedded or linked), there are already many tools which download content from any arbitrary website (no specific ripper logic would really be needed in that case, and actually significantly restricts the usefulness of the ripper). Here's a recommendation for a download manager that can deal with that kind of website (Firefox only, unfortunately, but since RipMe users are using an external program to do our image downloads, I'm sure that's okay): http://www.downthemall.net/
For me, that covers a lot of sites I'm interested in that don't already have rippers, and also covers a lot of sites that do already have rippers. In that case, the rippers are probably redundant.
For sites like 1 and 2 where all the content isn't on a single page, we need to supply some logic to navigate from page to page, and otherwise, the generic techniques for 1 and 2 can be automatically applied once we get to a page where they apply.
Sometimes it is possible to construct the URL of the image from the thumbnail in a gallery in style 3. The e-hentai ripper is an example of this. Following a technique like that saves us from loading a ton of additional pages, which saves time and keeps the ripper from getting blocked because it made too many requests in a short period of time (DDOS detection or REST API limiting).
One place a program like this helps a lot is for sites like Tumblr and Instagram that deliberately make it difficult for a user to download the content by either blocking right-clicking or by obscuring the image in the web page somehow that makes it difficult or impossible to right-click and save. But, because those images are downloaded into the page, it is possible for us to get those links and download them to save on the user's computer. The location to find the URL on the page is usually easily extracted with some simple HTML-traversal logic. This is the sort of automation we strive to allow with RipMe.
I think the biggest use-case for this application is mostly for websites that host community-generated content in large or even indefinitely-sized albums, especially when that content is spread out over many pages: Reddit (subreddits, user profiles), Imgur (mainly because of heavy use in Reddit), Tumblr, Instagram.
Those are just some thoughts. There's likely to be more.
Summary of action items:
To reduce Ripper maintenance, enable automatic detection of page styles 1 and 2 and do the right thing in those cases. Then, remove rippers with only that basic logic. Possibly, add a whitelist of URL patterns known to be page styles 1 and 2, so that the user never needs to know there's no longer a dedicated Ripper for those pages.
Page styles 3 and 4 could be automatically detected and ripped, but we should be careful to add delays so that the Ripper doesn't get blocked for requesting too many pages at once. Rippers that use the gallery to deduce the actual image URLs should be kept. This style of ripper logic would likely be easy to encode as a simple RegEx like s/(.*foobar\.com.*)\/thumb(\/.*\.jpg)/\1\2/ -- remove the /thumb/ from the path.
Page style 5 would be difficult to detect automatically without trying to navigate the pages but we might be able to add logic to automatically click through different kinds of ad-walls like adf.ly. Once we get to the other side of the ad-wall we could try to automatically detect the type of page and do the right thing.
After that, any Rippers which meaningfully improve performance or reliability could be added to one of the Ripper galleries (either the separate repo maintained by the RipMe project maintainers, or a third party ripper repo).
Even for the well-known gallery types, it's still nice to automatically detect an album name from the page. I think currently, we don't ever try to detect an album name and instead let the ripper decide it. Still nice to let the ripper decide if it wants to, but detecting the title from the page would be nice as well, as long as we're going ahead with automatic detection.
Additionally, I realized that if we release at least some rippers via separate repos, especially as non-compiled scripting or spec code, we don't have to re-release (and force users to re-install) a new version of RipMe for every small fix or update to the rippers. We just download the new ripper definitions and continue with the same version of the software.
I'd propose moving to version 2.0 if we refactor the API this way, and make sure that we use SemVer with respect to the ripper definition interface to ensure compatibility of rippers with a particular version of RipMe.
@4pr0n on a related note, would you consider adding some of us who have previously contributed to this project as contributors so that we can manage issues/PRs and not have to hijack this project on a fork or simply leave the project to die?
I nominate myself as a maintainer :)
This looks great. Has the project been forked?
@seattle255 In fact, @4pr0n just added me as a collaborator (I requested via PM over on Reddit) so I guess I'm at least partially taking over management of the project.
First things first is to bring the project up-to-date for various changes to the websites that have partially or completely broken the rippers. Before making any big changes we need to fix some things. Time to start merging some pull requests! (Although that might need to wait until I'm free after the holidays.)
okay, first of all, i have pretty much zero knowledge on how all of this works but i have been using this tool for a long time now and appreciate how much work and effort you guys put into it.
Im basically an end user who barely knows how to tinker with the configs. pretty much all ive done in this project is to suggest websites to be added and etc.
I am wondering how this would affect me, or pretty much a significant amount of other users. my interpretation is that you guys are planning on lets say, making a separate "game' with the ripper with rippers for other sites acting as additional "dlcs" or whatever analogy fits better
anyway good luck to you guys and whatever youre all planning to do and ill gladly help test them out on all the sites ive used ripme on and other possible sites as well
@ravenstorm767 - There's nothing to worry about. The program would work the same way as before, with some new features, as described above. Anything would either just work, or would work like installing a plug-in, to support new websites.
I do hear you that making the project more complicated for the users is a non-goal. I'll reconsider some of what I've proposed to ensure that the project stays simple and easy to use.
The main motivation here is to reorganize the code to make things a bit easier to maintain. As you've probably noticed, there's a lot of interest in a large number of websites for this program to support, and a serious lack of man-hours to support this project. The less that needs maintaining in the core project, the easier it will be on the maintainers.
If we could make a large number of websites "just work" without specific support, that would be a huge step forward in reducing the cost to maintain this project, with no loss of features.
Until I get started, I won't know how feasible these changes will be. Now that the project has an additional maintainer, it will be much easier to keep the project healthy.
Noticed this early comment by @4pr0n: https://github.com/4pr0n/ripme/issues/8#issuecomment-40295011 laying out plans for a generic ripper.
|
2025-04-01T06:36:39.395638 | 2017-06-22T21:42:06 | 237982466 | {
"authors": [
"UndeadSec",
"iFireTech"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:34",
"repo": "4w4k3/BeeLogger",
"url": "https://github.com/4w4k3/BeeLogger/issues/41"
} | gharchive/issue | Problem with BeeLogger
This is my problem
sh: 1: wine: not found
Traceback (most recent call last):
File "bee.py", line 161, in
main()
File "bee.py", line 134, in main
os.rename('dist/k.exe', 'dist/' + name)
OSError: [Errno 2] No such file or directory
Probably you don't have the repo.
See: https://docs.kali.org/general-use/kali-linux-sources-list-repositories
|
2025-04-01T06:36:39.426188 | 2024-08-23T05:15:06 | 2482327327 | {
"authors": [
"Rootsalman"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:35",
"repo": "5up-okamura/react-native-draggable-gridview",
"url": "https://github.com/5up-okamura/react-native-draggable-gridview/issues/13"
} | gharchive/issue | Getting multiple touch issue when trying to take a screenShot
Whenever I am trying to take a screenshot I am getting an error like (TypeError: Cannot read property 'x' of undefined, js engine: hermes).
please help me with this.
@5up-okamura please respond.
|
2025-04-01T06:36:39.436228 | 2021-10-06T19:50:40 | 1019092507 | {
"authors": [
"will-holley"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:36",
"repo": "721labs/partial-common-ownership",
"url": "https://github.com/721labs/partial-common-ownership/pull/15"
} | gharchive/pull-request | Add math lib to PartialCommonOwnership721.sol
In working on https://github.com/721labs/partial-common-ownership/issues/9, I learned that the non-deterministic test results are caused by Solidity rounding down during integer division. This PR seeks to fix this by adding a Math library that provides representational support for floating point numbers.
👋🏻 Adding a Math library ended up being unnecessary to pass the failing tests (which failed as a result of broken tests, not broken logic).
|
2025-04-01T06:36:39.448424 | 2017-11-24T15:51:23 | 276652426 | {
"authors": [
"CunningFatalist",
"blecua84"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:37",
"repo": "7leads/ngx-cookie-service",
"url": "https://github.com/7leads/ngx-cookie-service/issues/7"
} | gharchive/issue | Get a cookie from a defined path
Hi guys!
Great module! It's very useful!
I have a question for you. I'm going to receive a cookie in my application from an external CRM. It could be stored with a path and domain. Using your library, I've tried to retrieve it using the "get" method with the cookie's name, but I can't get it if it was created with a specified path.
Can I retrieve a cookie specifying the path that was used to create the cookie?
Thanks you in advance!
Jose
Hello,
I am very sorry for the late answer. I am quite busy at the moment and guess that this won't change until the end of the year 😀
Have you managed to solve your issue yet?
I will try to answer your question to the best of my knowledge. It is possible to get cookies from different paths, if you're doing this on the server side. It is more hacky and not recommended, if you're using JavaScript. Sources:
https://stackoverflow.com/questions/945862/retrieve-a-cookie-from-a-different-path
https://www.sitepoint.com/community/t/access-cookie-of-different-more-specific-path-but-same-domain/6475/4
I hope this helps.
Cheers
PS: Again, if you solved your issue I would appreciate if you could share your solution here, so that others can use your experience for future reference 👍
|
2025-04-01T06:36:39.479609 | 2024-07-12T05:41:28 | 2404798740 | {
"authors": [
"Vad1mo",
"albertollamaso"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:38",
"repo": "8gears/n8n-helm-chart",
"url": "https://github.com/8gears/n8n-helm-chart/pull/105"
} | gharchive/pull-request | Fix: OCI registry when releasing helm chart
Installation of the helm chart uses this path in the OCI registry: oci://8gears.container-registry.com/library/n8n, but with the latest release it is required to add an extra path segment /n8n.
When trying:
helm pull oci://8gears.container-registry.com/library/n8n --version 0.24.0
Error: 8gears.container-registry.com/library/n8n:0.24.0: not found
it works when
helm pull oci://8gears.container-registry.com/library/n8n/n8n --version 0.24.0
But that's not the path used in the helm chart documentation. CF: https://artifacthub.io/packages/helm/open-8gears/n8n/
You can also confirm that latest release 0.24.0 is not available in the helm chart. CF: https://artifacthub.io/packages/helm/open-8gears/n8n/?modal=changelog
Summary by CodeRabbit
Chores
Updated Helm chart push configuration to a new location within the container registry.
@albertollamaso thank you, for correcting that typo.
new release is out
Thanks @Vad1mo for the quick action on this. I am now able to pull release version 0.24.0 using the proper OCI URL.
Perhaps it is not showing in the UI: https://artifacthub.io/packages/helm/open-8gears/n8n/?modal=changelog
Not sure if an extra step is required in artifacthub.io to be honest.
|
2025-04-01T06:36:39.483278 | 2016-07-27T13:35:01 | 167855230 | {
"authors": [
"florianpreusner",
"mathielen"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:39",
"repo": "8p/GuzzleBundle",
"url": "https://github.com/8p/GuzzleBundle/pull/57"
} | gharchive/pull-request | Update Logger.php
Q
A
Bug fix?
yes
New feature?
no
BC breaks?
no
Deprecations?
no
License
MIT
$context['request'] could be null (i.e. when the requested URL could not be resolved - DNS-wise)
Which results in:
Type error: Argument 1 passed to EightPoints\Bundle\GuzzleBundle\Log\LogResponse::__construct() must be an instance of Psr\Http\Message\ResponseInterface, null given
Nice! Thanks for the fix!
Going to merge it after Travis CI tests and create a new bugfix version (5.0.1).
|
2025-04-01T06:36:39.512723 | 2020-07-11T06:20:43 | 655146269 | {
"authors": [
"muthuspark"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:45",
"repo": "9sphere/text-detector",
"url": "https://github.com/9sphere/text-detector/issues/2"
} | gharchive/issue | Adjust the discarded region size to be relative to the size of the image
The previous value of 15 works for images smaller than 1000 × 1000 pixels, but if the image is larger then the discarded region size needs to be larger too.
fixed.
|
2025-04-01T06:36:39.521181 | 2022-11-16T00:15:34 | 2104400752 | {
"authors": [
"rbeucher"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:46",
"repo": "ACCESS-NRI/COSIMA-recipes-workflow",
"url": "https://github.com/ACCESS-NRI/COSIMA-recipes-workflow/issues/58"
} | gharchive/issue | Identify Core Datasets needed to run the COSIMA-recipes
We need to make sure that all data required to run the recipe is available.
@max-anu, I believe you have made a list somewhere.
I am especially concerned about the things that are stored in hh5.
We need to clarify what is needed to run the recipes and what we can/will support or not.
|
2025-04-01T06:36:39.523676 | 2024-08-29T01:34:27 | 2493270551 | {
"authors": [
"aidanheerdegen",
"blimlim"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:47",
"repo": "ACCESS-NRI/access-esm1.5-configs",
"url": "https://github.com/ACCESS-NRI/access-esm1.5-configs/pull/84"
} | gharchive/pull-request | Historical - Increase jobfs to 1500MB
This pull request increases the jobfs requested for the historical configuration to 1500MB from the default 800MB, with the goal of avoiding jobfs exceeded errors.
Closes historical half of #83.
Note that jobfs is shared over nodes
https://opus.nci.org.au/display/Help/PBS+Directives+Explained#PBSDirectivesExplained--ljobfs=<10GB>
On 48-CPU nodes, ACCESS-ESM1.5 uses 8 nodes. If the jobfs is used during setup, it will only be used on the root node, so the usage will be concentrated on a single node.
Dale wrote a very nice explainer of JOBSFS in case anyone is interested
https://climate-cms.org/posts/2022-11-10-jobfs.html
|
2025-04-01T06:36:39.530679 | 2020-09-29T00:32:54 | 710663068 | {
"authors": [
"SamuelJoly",
"acedrew"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:48",
"repo": "ACE-IoT-Solutions/ace-svg-react",
"url": "https://github.com/ACE-IoT-Solutions/ace-svg-react/issues/6"
} | gharchive/issue | how to use data from the query
Hi! I am very new to Grafana, JavaScript and SQL. I am now trying to develop an interface on ACE.SVG. I have no trouble with my query. However, I can't find a way to use it properly with this plugin. Could someone give me a quick example or explanation on how to get my data?
I have seen this demo, but this is very unclear to me. Especially "let buffer = data.series[0].fields[1].values.buffer;".
options.animateLogo = (svgmap, data) => {
  let buffer = data.series[0].fields[1].values.buffer;
  let valueCount = buffer.length
  let chartData = [];
  for (let i=0; i<valueCount; i+=(Math.floor(valueCount / 4)-1)) {
    chartData.push(buffer[i])
  }
  let minData = chartData.reduce((acc, val) => {
    return Math.min(acc, val);
Thank you for helping a beginner!
@SamuelJoly
The data variable is how you access the Grafana data frame API.
Grafana has great documentation for that API here: https://grafana.com/docs/grafana/latest/developers/plugins/data-frames/
All the code above is just sampling values from the time series, specifically 4 values, at even spacing, then getting the max and min from that set of 4 values.
Also remember you can use your browser's developer tools and console.log() to dig into any of this.
Here's pseudocode with literal values:
let buffer = [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,16,17,18,19] // The first series in the data frame, the second column (value rather than timestamp) and the values in the form of a single buffer, or array
let valueCount = buffer.length // this is just capturing the length of the series for clarity
let chartData = [] // this is just making an empty list for us to put the 4 samples we want into
for 0 through 3 as i, get the value at the address i times 1/4 the length of the list, and add that value to the chartData array
let minData equal the smallest value in the chartData Array we built
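If it helps, a runnable JavaScript version of that same sampling logic could look like this (assuming the data variable passed into the panel options, as in the snippet above; this is my sketch, not an official ACE.SVG example):
let buffer = data.series[0].fields[1].values.buffer; // values of the first series
let valueCount = buffer.length;
let step = Math.max(1, Math.floor(valueCount / 4) - 1); // roughly 4 evenly spaced samples
let chartData = [];
for (let i = 0; i < valueCount; i += step) {
  chartData.push(buffer[i]);
}
let minData = chartData.reduce((acc, val) => Math.min(acc, val), Infinity); // smallest sample
let maxData = chartData.reduce((acc, val) => Math.max(acc, val), -Infinity); // largest sample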
Does any of that help?
Thank you! This makes it very clear!
Might have other questions later ;)
|
2025-04-01T06:36:39.538330 | 2017-09-25T17:38:16 | 260357747 | {
"authors": [
"mccoy20",
"sterlingbaldwin",
"zshaheen"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:49",
"repo": "ACME-Climate/acme_processflow",
"url": "https://github.com/ACME-Climate/acme_processflow/issues/37"
} | gharchive/issue | acme_diags did not produce output, waiting for user input on UVCDAT_ANONYMOUS_LOG ?
The acme_diags did not produce output in the latest run on 9/22 (/p/cscratch/acme/mccoy20/test_2017-09-14-Chris)
It seems that it may be waiting for user input, here is the output of acme_diag_set_1980_1984_8d193.err
[mccoy20@acme1 run_scripts]$ more acme_diag_set_1980_1984_8d193.err
Traceback (most recent call last):
...
File "/export/mccoy20/anaconda2/envs/workflow/lib/python2.7/site-packages/cdms2/init.py", line 6, in
cdat_info.pingPCMDIdb("cdat", "cdms2") # noqa
File "/export/mccoy20/anaconda2/envs/workflow/lib/python2.7/site-packages/cdat_info/cdat_info.py", line 205, in pingP
CMDIdb
askAnonymous(val)
File "/export/mccoy20/anaconda2/envs/workflow/lib/python2.7/site-packages/cdat_info/cdat_info.py", line 164, in askAn
onymous
"(you can also set the environment variable UVCDAT_ANONYMOUS_LOG to yes or no)? [yes]/no: ")
EOFError: EOF when reading a line
@mccoy20 @sterlingbaldwin I think Renata's correct, I've had this issue before. Just set the environment variable beforehand. So either export UVCDAT_ANONYMOUS_LOG=False or export UVCDAT_ANONYMOUS_LOG=True.
Fixed in the new nightly.
|
2025-04-01T06:36:39.612242 | 2020-05-07T12:08:05 | 614005565 | {
"authors": [
"jshier",
"matthewmayer"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:50",
"repo": "AFNetworking/AFNetworking",
"url": "https://github.com/AFNetworking/AFNetworking/pull/4565"
} | gharchive/pull-request | Use the latest patch version in README
Goals :soccer:
Make README up to date
Implementation Details :construction:
Specifying 4.0 instead of the full patch version caused problems, e.g. as at https://stackoverflow.com/questions/61655457/cocoapods-could-not-find-compatible-versions-for-pod-afnetworking/61657341#61657341
Testing Details :mag:
Documentation update only
Thanks for the PR! However, these are correct: all of the package managers should properly update to the latest version with the current version strings, it's usually just a matter of "update" vs. "install" actions. I like to avoid having to update the README for every version bump. Thanks anyway!
|
2025-04-01T06:36:39.613999 | 2019-03-30T11:30:28 | 427273155 | {
"authors": [
"AFathi",
"asam139",
"hpayami",
"vade"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:51",
"repo": "AFathi/ARVideoKit",
"url": "https://github.com/AFathi/ARVideoKit/pull/79"
} | gharchive/pull-request | Update to Swift 5.0
The migration was easy. It only raises a couple of warnings.
This is great, just works, thank you!
I manually added it to my project and it worked.
Thanks for your code.
@asam139, I have added your changes under the swift_5 branch.
Thanks for contributing to ARVideoKit!
|
2025-04-01T06:36:39.632292 | 2024-11-19T21:26:20 | 2673576959 | {
"authors": [
"kscott-1"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:52",
"repo": "AI-sandbox/snputils",
"url": "https://github.com/AI-sandbox/snputils/pull/8"
} | gharchive/pull-request | redo pvar indexing logic & lazily load pvar file to save memory
This fixes #7 and also adds some lazy functionality from polars. I did scrap this together rather quickly, so I suggest pulling down, running tests, and building any additional fixes on top of this.
I see the issue - should be fixed in the third commit
|
2025-04-01T06:36:39.642273 | 2023-10-03T12:56:47 | 1924095378 | {
"authors": [
"kaivalmehta"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:53",
"repo": "AIBauchi/PyDS-A",
"url": "https://github.com/AIBauchi/PyDS-A/pull/47"
} | gharchive/pull-request | added avl tree implementation
PR Description
This code implements AVL trees, with insertion and deletion. It also adds different types of traversals.
Related Issue
Closes: #46
Changes Made
List the changes you made in this pull request:
Created the file for the AVL tree data structure and implemented several operations in it.
Testing
Manual Testing
I tested the code with several examples of insertions, testing and covering all the edge cases. One of the test cases is given in the code as an example usage.
Author
Kaival Mehta (kaivalmehta)
@Tinny-Robot If the code is okay can you merge the request? Or do i need to make any changes?
I have made the changes, kindly check it
|
2025-04-01T06:36:39.666297 | 2023-05-23T18:12:41 | 1722560287 | {
"authors": [
"AKD-01",
"rt-001"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:54",
"repo": "AKD-01/blogweet",
"url": "https://github.com/AKD-01/blogweet/pull/176"
} | gharchive/pull-request | Avoid Blank Posts and Add Default Photo #50
No Title No Post
No Post
No Title
@rt-001 where is default photo added?
Because of no response from your side for more than 2 weeks, I closed the PR.
@rt-001 where is default photo added?
@AKD-01 I did it in the last PR, but you mentioned that this reduces the user experience. I apologize for not replying earlier, but I completed my work.
Ohhk, if you have completed your work, you may raise a new pr.
|
2025-04-01T06:36:39.681067 | 2020-10-19T11:29:20 | 724525299 | {
"authors": [
"jimkont",
"mgns"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:55",
"repo": "AKSW/RDFUnit",
"url": "https://github.com/AKSW/RDFUnit/pull/104"
} | gharchive/pull-request | New CLI and ability to generate testcases from ontology only
In this branch I implemented the possibility to generate test case descriptions from an ontology. I switched from the Apache command line parser to picocli, bumped dependencies and (that might be problematic) did some code reformatting (in an attempt to clean up the code I used the IntelliJ Reformat Code feature).
Description
Almost every file has been touched in this branch, as I ran code reformatting on the project, which changed the indentation of lines (quite a lot of whitespace-only changes) and the order of imports. Let me know how we could handle this.
Following changes have been made:
introduced subcommands for validate and generate
bumped dependencies
create fat jar package (including all dependencies) using maven-assembly-plugin, new rdfunit-distribution project
prefix read from ontology vann:preferredNamespacePrefix if not given as parameter
Fixes (partly due to dependency updates):
Jena UUID generation
Motivation and Context
Generation of test case description needed for a customer.
How Has This Been Tested?
The command line args for validate work as before, a new subcommand has been introduced for the feature. shell scripts have been adapted accordingly, so there is no change.
Screenshots (if appropriate):
Types of changes
[ ] Bug fix (non-breaking change which fixes an issue)
[x] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to change)
Checklist:
[?] My code follows the code style of this project.
[x] My change requires a change to the documentation.
[-] I have updated the documentation accordingly.
[-] I have added tests to cover my changes.
[-] All new and existing tests passed.
Thanks a lot for your contribution @mgns !
The whitespace changes make it hard to review this change. I trust this is good and would be happy to merge as is, but it would make the git history easier to read if the whitespace changes could be separated from the new features.
One possible way could be the following: take the latest master and run the whole project through the same tool you used to create the whitespace changes, and make a merge request with that alone. Once we merge that, rerun the same tool again on your branch and then rebase / merge on latest master. We can do a squash merge in the last step to make the current changes visible.
If this approach doesn't work we could merge it as is, wdyt?
Obsolete
Obsolete
|
2025-04-01T06:36:39.696031 | 2024-03-26T12:36:35 | 2208130287 | {
"authors": [
"TomXD1234"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:56",
"repo": "AMereBagatelle/fabricskyboxes",
"url": "https://github.com/AMereBagatelle/fabricskyboxes/issues/104"
} | gharchive/issue | Fabricskyblock not working on vulkan
I installed FabricSkyBoxes for an amazing sky, and my pack has a custom sky, but it is still not working with Vulkan, so what do I do?
causes
I have found a fork of the skybox mod named FabricSkyBoxes Interop that works with your mod; it works fine after installing this fork.
|
2025-04-01T06:36:39.700792 | 2019-12-12T23:20:59 | 537265881 | {
"authors": [
"dbrewer333"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:57",
"repo": "AMoo-Miki/homebridge-tuya-lan",
"url": "https://github.com/AMoo-Miki/homebridge-tuya-lan/issues/129"
} | gharchive/issue | Tuya-lan-find gives error when attempting to launch
I've run homebridge successfully for some months now, and recently installed the homebridge-tuya-lan module in order to control some Feit color lightbulbs that are not HomeKit compatible. I modified config.js of homebridge to include the tuya platform, but added no devices to it, since I don't yet have the id and key of the Feit bulbs. Homebridge continues to launch successfully, except to complain of there being no configured devices for tuya. When I attempt to run tuya-lan-find, it gives the following error:
/usr/local/lib/node_modules/homebridge-tuya-lan/bin/cli.js:175
let {address, port} = proxy.httpServer.address();
^
TypeError: Cannot read property 'address' of undefined
at proxy.listen (/usr/local/lib/node_modules/homebridge-tuya-lan/bin/cli.js:175:44)
at /usr/local/lib/node_modules/homebridge-tuya-lan/node_modules/http-mitm-proxy/lib/proxy.js:62:14
at /usr/local/lib/node_modules/homebridge-tuya-lan/node_modules/http-mitm-proxy/lib/ca.js:130:14
at /usr/local/lib/node_modules/homebridge-tuya-lan/node_modules/http-mitm-proxy/node_modules/async/dist/async.js:3888:9
at /usr/local/lib/node_modules/homebridge-tuya-lan/node_modules/http-mitm-proxy/node_modules/async/dist/async.js:473:16
at iterateeCallback (/usr/local/lib/node_modules/homebridge-tuya-lan/node_modules/http-mitm-proxy/node_modules/async/dist/async.js:988:17)
at /usr/local/lib/node_modules/homebridge-tuya-lan/node_modules/http-mitm-proxy/node_modules/async/dist/async.js:969:16
at /usr/local/lib/node_modules/homebridge-tuya-lan/node_modules/http-mitm-proxy/node_modules/async/dist/async.js:3885:13
at /usr/local/lib/node_modules/homebridge-tuya-lan/node_modules/mkdirp/index.js:47:53
at FSReqCallback.oncomplete (fs.js:161:21)
I'm running homebridge on Mac OS 10.15.1 on a 16" MacBook Pro.
I ran homebridge in debug mode and saw nothing pop up associated with tuya-lan-find in the console logs.
I found it necessary to launch tuya-lan-find as root in order for it to start the proxy server.
|
2025-04-01T06:36:39.714316 | 2018-12-30T12:47:32 | 394869533 | {
"authors": [
"attenzione",
"volemont"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:58",
"repo": "ANXS/hostname",
"url": "https://github.com/ANXS/hostname/pull/18"
} | gharchive/pull-request | Fix deprecation warning
Fixes
[DEPRECATION WARNING]: State 'installed' is deprecated. Using state 'present' instead.. This
feature will be removed in version 2.9. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
Please release this to Galaxy
|
2025-04-01T06:36:39.800316 | 2023-02-03T19:50:40 | 1570360326 | {
"authors": [
"coveralls",
"jonluca",
"philsturgeon"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:59",
"repo": "APIDevTools/json-schema-ref-parser",
"url": "https://github.com/APIDevTools/json-schema-ref-parser/pull/305"
} | gharchive/pull-request | feat: add reference resolution option to allow root level dereferencing
This helps solve https://github.com/APIDevTools/json-schema-ref-parser/issues/199 by adding a new flag, externalReferenceResolution, that allows for reference resolution at the root level
@philsturgeon can you give me admin/bypass permissions on this repo?
You got it.
Pull Request Test Coverage Report for Build<PHONE_NUMBER>
22 of 29 (75.86%) changed or added relevant lines in 6 files are covered.
1 unchanged line in 1 file lost coverage.
Overall coverage decreased (-0.2%) to 95.79%
Changes Missing Coverage | Covered Lines | Changed/Added Lines | %
lib/refs.ts | 2 | 3 | 66.67%
lib/index.ts | 1 | 7 | 14.29%
Files with Coverage Reduction | New Missed Lines | %
lib/index.ts | 1 | 96.76%
Totals
Change from base Build<PHONE_NUMBER>: -0.2%
Covered Lines: 3212
Relevant Lines: 3311
💛 - Coveralls
|
2025-04-01T06:36:39.802465 | 2020-03-17T07:02:40 | 582799019 | {
"authors": [
"JamesMessinger",
"nkthanh98"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:60",
"repo": "APIDevTools/swagger-cli",
"url": "https://github.com/APIDevTools/swagger-cli/issues/41"
} | gharchive/issue | Can't not bundle in Travis CI
I tried to bundle the doc in Travis but it wasn't successful. It seems it was a JS error.
Note: the build folder existed.
This is error
$ swagger-cli bundle -o build/swagger.bundle.yaml -t yaml swagger.yaml
Cannot read property 'mkdir' of undefined
This is .travis.yml
language: node_js
node_js:
- 8
before_install:
- npm install swagger-cli
- export PATH=$(npm bin):$PATH
script:
- swagger-cli validate swagger.yaml
after_success:
- swagger-cli bundle -o build/swagger.bundle.yaml -t yaml swagger.yaml
deploy:
provider: pages
skip_cleanup: true
github_token: $GH_TOKEN
local_dir: build
on:
branch: master
Node 8 is no longer supported. Change the node_js setting in your Travis CI file to 10 and it'll work
|
2025-04-01T06:36:39.830163 | 2016-10-13T03:41:18 | 182688138 | {
"authors": [
"ANWSY",
"alvaromb"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:61",
"repo": "APSL/react-native-keyboard-aware-scroll-view",
"url": "https://github.com/APSL/react-native-keyboard-aware-scroll-view/issues/68"
} | gharchive/issue | the code in android made the app die
if(Platform.OS === 'android') {
try {
this.scrollToFocusedInputWithNodeHandle(currentlyFocusedField)
} catch (e) {
}
} else {
UIManager.viewIsDescendantOf(
currentlyFocusedField,
this.getScrollResponder().getInnerViewNode(),
(isAncestor) => {
if (isAncestor) {
// Check if the TextInput will be hidden by the keyboard
UIManager.measureInWindow(currentlyFocusedField, (x, y, width, height) => {
if (y + height > frames.endCoordinates.screenY) {
this.scrollToFocusedInputWithNodeHandle(currentlyFocusedField)
}
})
}
}
)
}
Please, in the meantime disable the automatic scrolling under Android:
enableAutoAutomaticScroll={(Platform.OS === 'ios') ? true : false}
Does that mean this component doesn't work on the Android platform?
For example, will a statement such as 'this.scrollToFocusedInputWithNodeHandle(currentlyFocusedField)' make the app die?
There is no UIManager.viewIsDescendantOf in Android yet. The problem is that if you have multiple scroll views in the same scene, all will listen to the keyboard event but only one contains the TextInput children, so the ones without it will crash. The descendant method avoided the crash under iOS, but I haven't had time yet to send a PR for Android.
|
2025-04-01T06:36:39.930049 | 2024-01-23T06:04:30 | 2095303402 | {
"authors": [
"tracyn-arm",
"zxros10"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:62",
"repo": "ARM-software/armnn",
"url": "https://github.com/ARM-software/armnn/issues/756"
} | gharchive/issue | Gather(ND) dim error
gather_test.tar.gz
Execute command:
./aarch64_build/tests/ExecuteNetwork -N -I 100 -c GpuAcc -m gather_test/gather_dim_test_float32.tflite --reuse-buffers --tflite-executor parser
Warning: No input files provided, input tensors will be filled with 0s.
Info: ArmNN v33.1.0
arm_release_ver of this libmali is 'g6p0-01eac0', rk_so_ver is '5'.
Info: Initialization time: 23.86 ms.
Error: Failed to parse operator #4 within subgraph #0 error: Operation has invalid output dimensions: 3 Output must be an (4 + 1 - 1) -D tensor at function ParseGather [/home/arm-user/source/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:4786]
The Gather operator in the Compute Library used by Arm NN will build the output shape in a specific way:
https://github.com/ARM-software/ComputeLibrary/blob/c2a79a4b8c51ce835eaf984f3a1370447b3282c4/arm_compute/core/utils/misc/ShapeCalculator.h#L1684
The docs for arm_compute::misc::shape_calculator::compute_gather_shape() are:
/** Calculate the gather output shape of a tensor
*
* @param[in] input_shape Input tensor shape
* @param[in] indices_shape Indices tensor shape. Only supports for 2d and 3d indices
* @param[in] actual_axis Axis to be used in the computation
*
* @note Let input_shape be (X,Y,Z) and indices shape (W,O,P) and axis 1
* the new shape is computed by replacing the axis in the input shape with
* the indice shape so the output shape will be (X,W,O,P,Z)
*
* @return the calculated shape
*/
In the failing case provided, we have:
* @note Let input_shape be [1,40,20,4] and indices shape [1] and axis 3
* the new shape is computed by replacing the axis in the input shape with
* the indice shape so the output shape will be [1,40,20,1]
This results in the generated error where: [1,40,20] != [1,40,20,1] as Arm NN is conforming to the requirements of the library it uses and failing on the output tensor shape that was set in the model.
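As a rough Python illustration of the shape rule described above (my sketch, not the actual Compute Library implementation):
def compute_gather_shape(input_shape, indices_shape, axis):
    # Replace the dimension at `axis` with the whole indices shape
    return input_shape[:axis] + indices_shape + input_shape[axis + 1:]
print(compute_gather_shape([1, 40, 20, 4], [1], 3))  # -> [1, 40, 20, 1], not [1, 40, 20]
So the model's declared output shape of [1,40,20] is one dimension short of what the library computes.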
|
2025-04-01T06:36:39.937284 | 2023-03-28T10:57:10 | 1643695514 | {
"authors": [
"chetan-rathore",
"hrw"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:63",
"repo": "ARM-software/bsa-acs",
"url": "https://github.com/ARM-software/bsa-acs/issues/135"
} | gharchive/issue | Is there scenario document for SystemReady SR?
docs/ directory has scenario documents for SystemReady ES and IR. There is no such one for SystemReady SR (or LS).
I am trying to create new version of my xBSA checklist page and to add there info which (S)BSA ACS tests need to pass for each entry.
Scenarios cover several tags used by BSA ACS which are not mentioned in BSA specification (DEN0094).
Hi @hrw,
It's a nice suggestion. Generally, as ES and SR are very close in terms of the BSA rules that need to run on ES or SR systems, the ES scenario document is valid for SR also, but with the current names of the documents it might seem that the SR guide is missing.
We will discuss internally on two approaches.
Have only two scenario documents (one for systems that use Device Tree and one for systems using ACPI) + one .md file that reflects which rules are applicable for which band and at what levels (something closer to the table mentioned in your checklist page)
Have separate scenario documents for each band.
We will keep you updated on the same.
Thanks,
ACS team
Hi @hrw,
The ACS design makes the test layer agnostic to the PAL layer. The majority of the test algorithm is the same across systems with a device tree, an ACPI table, or even when running in a baremetal environment.
Based on the ACS design, a single test scenario document will be sufficient.
Further a testcase checklist will be added, which will cover
Which test is required for which platforms (IR, ES, SR, baremetal)
Which tests are verified at ACS end and which are not due to the required hardware not being available?
We are planning to upstream the changes by this month end.
Thanks,
ACS team
Hi @hrw,
As part of BSA ACS 1.0.5 release, we have made slight changes to documentation as discussed. (https://github.com/ARM-software/bsa-acs/commit/b30c93dbf6c239d9df1505d8364a0bffbe58a2f6)
Single test scenario document covering test algorithm for a test
testcase checklist which indicates for which Systemready band a test is required to run.
Thanks,
ACS team
Thanks @hrw for raising this, we are closing this as changes are merged.
|
2025-04-01T06:36:39.941563 | 2022-03-22T15:35:00 | 1176967784 | {
"authors": [
"chetan-rathore",
"gowthamsiddarthd",
"sunnywang-arm"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:64",
"repo": "ARM-software/bsa-acs",
"url": "https://github.com/ARM-software/bsa-acs/issues/24"
} | gharchive/issue | System hangs at test 861 : PCIe Unaligned access
In the verbose messages (the output of bsa -v 1), test 861 keeps printing the same message for checking bdf 000000.
[ 601.380221] 861 : PCIe Unaligned access START
[ 601.386824]
[ 601.386824] Calculated config address is 28c0600010
[ 601.386824] The BAR value of bdf 060000 is 3011
[ 601.386824] Calculated config address is 28c0000010
[ 601.386824] The BAR value of bdf 000000 is 0
[ 601.386824] Calculated config address is 28c0000010
[ 601.386824] The BAR value of bdf 000000 is 0
[ 601.386824] Calculated config address is 28c0000010
[ 601.386824] The BAR value of bdf 000000 is 0
[ 601.386824] Calculated config address is 28c0000010
It looks like the problem is that there is no break in while loop in https://github.com/ARM-software/bsa-acs/blob/1cc33fea036e4a34dae7f75366e685226a647417/test_pool/pcie/operating_system/test_os_p061.c#L50
The same test (405) in sbsa doesn't have this issue because it has a condition check to break the loop in https://github.com/ARM-software/sbsa-acs/blob/5ccf09073bd4c17cab96bc338e2a3a314bf3a078/test_pool/pcie/test_p005.c#L84
Therefore, we may need to make the change below to fix this issue.
Move the line "next_bdf:" to above the line "while (count--) {"
Remove "count--;" for the places that would run "goto next_bdf;"
This issue has been solved with the PR: https://github.com/ARM-software/bsa-acs/pull/27
Hi @sunnywang-arm,
The fix is merged with #27.
Thanks,
ACS team
|
2025-04-01T06:36:40.138536 | 2016-06-09T16:23:36 | 159450812 | {
"authors": [
"jupe",
"kjbracey-arm",
"yogpan01"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:65",
"repo": "ARMmbed/mbed-trace",
"url": "https://github.com/ARMmbed/mbed-trace/pull/35"
} | gharchive/pull-request | Fix #34: Do not try to use a va_list twice
If the prefix function was set, mbed_vtracef would pass ap to vsnprintf to
determine the output length, then use ap again for the actual output,
running off the real arguments.
Create a copy of ap for the initial scan. (Thank you to C99, who added
va_copy. Would have been stuck without it.)
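For reference, the general pattern looks roughly like this (an illustrative sketch with made-up names, not the actual mbed_vtracef code):
#include <stdarg.h>
#include <stdio.h>
/* Measure the needed length with a copy of the va_list, then format for real
 * with the original one, which has not been consumed yet. */
static int format_with_length_probe(char *out, size_t out_size, const char *fmt, va_list ap)
{
    va_list ap_copy;
    va_copy(ap_copy, ap);                           /* C99 va_copy */
    int needed = vsnprintf(NULL, 0, fmt, ap_copy);  /* first pass: length only */
    va_end(ap_copy);
    if (needed < 0)
        return needed;
    return vsnprintf(out, out_size, fmt, ap);       /* second pass: actual output */
}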
missing test :)
Verified with mbed-client-testapp on Linux , works fine
+1
thanks :)
Expanded the existing prefix function test to take 2 arguments.
|
2025-04-01T06:36:40.144030 | 2020-04-22T12:41:28 | 604726170 | {
"authors": [
"danh-arm",
"yanesca"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:66",
"repo": "ARMmbed/mbedtls",
"url": "https://github.com/ARMmbed/mbedtls/issues/3226"
} | gharchive/issue | Migrate release announcements
Background
Release announcements are currently sent to a list of emails originating from the website, but which faces several issues (we don't have a mechanism to extend it anymore and sending out the announcements has an unacceptable delay).
Task Breakdown
[x] create a child page for the release process in confluence for announcement emails and document the remaining subtasks there
[x] ask @danh-arm about the process and create a new release mailing list (eg<EMAIL_ADDRESS>
[x] create an official email address for the purpose of sending the announcements from that (eg<EMAIL_ADDRESS>
[x] compose a last message to the old list with an announcement about the move and point them to the new announcement list and invite them to the developer lists as well. Have the text reviewed by @danh-arm or @yanesca
[x] send out the last message (on how to do that, see "release announcement" on the Release Process confluence page)
[ ] draft an email template for a release email that is sent out 1 or 2 weeks prior the actual release
[x] revise and improve the template for the second release email. (for a sample release email see "release announcement" on the Release Process confluence page)
[x] have both of the templates reviewed by @danh-arm and @yanesca
This is linked to #3231 in that we want to announce that change at the same time.
The release announcements have been going out on the new announcement list for more than 6 months now. Sending an advance announcement wasn't part of the process and not necessary for the new release process.
Closing this issue as completed.
|
2025-04-01T06:36:40.167298 | 2023-03-13T00:20:51 | 1620568476 | {
"authors": [
"ASDAlexander77",
"Edouard127"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:67",
"repo": "ASDAlexander77/TypeScriptCompiler",
"url": "https://github.com/ASDAlexander77/TypeScriptCompiler/issues/22"
} | gharchive/issue | Cannot build config tsc debug: MLIR not found
I'm currently trying to build the project to work on my side, but I can't get past config_tsc_debug.bat
I'm building using Visual Studio 16 2019
C:\Users\Administrator\Desktop\BetterElytraBot\TypeScriptCompiler\__build\tsc>cmake ../../tsc -G "Visual Studio 16 2019" -A x64 -DCMAKE_BUILD_TYPE=Debug -Wno-dev
-- Selecting Windows SDK version 10.0.19041.0 to target Windows 10.0.19045.
-- MLIR_DIR is C:/Users/Administrator/Desktop/BetterElytraBot/TypeScriptCompiler/3rdParty/llvm/debug/lib/cmake/mlir
-- CMAKE_VS_PLATFORM_TOOLSET_HOST_ARCHITECTURE was x64 and set to x64
CMake Error at CMakeLists.txt:62 (find_package):
Could not find a package configuration file provided by "MLIR" with any of
the following names:
MLIRConfig.cmake
mlir-config.cmake
Add the installation prefix of "MLIR" to CMAKE_PREFIX_PATH or set
"MLIR_DIR" to a directory containing one of the above files. If "MLIR"
provides a separate development package or SDK, be sure it has been
installed.
-- Configuring incomplete, errors occurred!
See also "C:/Users/Administrator/Desktop/BetterElytraBot/TypeScriptCompiler/__build/tsc/CMakeFiles/CMakeOutput.log".
See also "C:/Users/Administrator/Desktop/BetterElytraBot/TypeScriptCompiler/__build/tsc/CMakeFiles/CMakeError.log".
The folder /llvm/debug/lib/mlir does not exist
you need to run prepare_3rdParty.bat and ensure that it finished
then u need to open "tsc" folder and run 2 bats: config_tsc_debug.bat and then build_tsc_debug.bat
then u need to open "tsc" folder and run 2 bats: config_tsc_debug.bat and then build_tsc_debug.bat
I found out that I needed more than 120 GB of storage to build it so I made some space and restarted
This is how it looks now
But it looks like there's still some missing folders
I did find an error and a warning from prepare_3rdParty.bat
CMake Warning (dev) at C:/Program Files/CMake/share/cmake-3.24/Modules/GNUInstallDirs.cmake:243 (message):
Unable to determine default CMAKE_INSTALL_LIBDIR directory because no
target architecture is known. Please enable at least one language before
including GNUInstallDirs.
Call Stack (most recent call first):
C:/Users/Administrator/Desktop/BetterElytraBot/TypeScriptCompiler/3rdParty/llvm-project/llvm/cmake/modules/LLVMInstallSymlink.cmake:5 (include)
tools/llvm-ar/cmake_install.cmake:48 (include)
tools/cmake_install.cmake:39 (include)
cmake_install.cmake:69 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Error at tools/mlir/lib/Dialect/MemRef/Transforms/cmake_install.cmake:37 (file):
file INSTALL cannot find
"C:/Users/Administrator/Desktop/BetterElytraBot/TypeScriptCompiler/__build/llvm/debug/lib/MLIRMemRefTransforms.lib":
No such file or directory.
Call Stack (most recent call first):
tools/mlir/lib/Dialect/MemRef/cmake_install.cmake:38 (include)
tools/mlir/lib/Dialect/cmake_install.cmake:66 (include)
tools/mlir/lib/cmake_install.cmake:40 (include)
tools/mlir/cmake_install.cmake:55 (include)
tools/cmake_install.cmake:45 (include)
cmake_install.cmake:69 (include)
I built the project again
Here are the errors
CMake Warning (dev) at C:/Program Files/CMake/share/cmake-3.24/Modules/GNUInstallDirs.cmake:243 (message):
Unable to determine default CMAKE_INSTALL_LIBDIR directory because no
target architecture is known. Please enable at least one language before
including GNUInstallDirs.
Call Stack (most recent call first):
C:/Users/Administrator/Desktop/BetterElytraBot/TypeScriptCompiler/3rdParty/llvm-project/llvm/cmake/modules/LLVMInstallSymlink.cmake:5 (include)
tools/llvm-ar/cmake_install.cmake:48 (include)
tools/cmake_install.cmake:39 (include)
cmake_install.cmake:69 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Error at tools/mlir/lib/Dialect/MemRef/Transforms/cmake_install.cmake:37 (file):
file INSTALL cannot find
"C:/Users/Administrator/Desktop/BetterElytraBot/TypeScriptCompiler/__build/llvm/debug/lib/MLIRMemRefTransforms.lib":
No such file or directory.
Call Stack (most recent call first):
tools/mlir/lib/Dialect/MemRef/cmake_install.cmake:38 (include)
tools/mlir/lib/Dialect/cmake_install.cmake:66 (include)
tools/mlir/lib/cmake_install.cmake:40 (include)
tools/mlir/cmake_install.cmake:55 (include)
tools/cmake_install.cmake:45 (include)
cmake_install.cmake:69 (include)
try to delete the __build folder and try again
try to delete the __build folder and try again
I tried that multiple times
The 4 first builds were all different
And the 2 latests builds ended up with the same content
I'm not exactly sure what exactly could cause this, maybe I have to use Visual Studio 17 2022 instead of 16 2019 but I can't get the compiler to be recognized with this version
try to use "ninja" build tools
Are you building on linux ?
I will try to build the project in linux to see if it works with ninja
|
2025-04-01T06:36:40.173901 | 2022-09-07T22:59:04 | 1365287730 | {
"authors": [
"jhkennedy"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:68",
"repo": "ASFHyP3/hyp3-testing",
"url": "https://github.com/ASFHyP3/hyp3-testing/pull/53"
} | gharchive/pull-request | Add autoRIFT golden test pairs for L4,5,7,9
The landsat-4 pair here failed with an OutOfMemoryError: Container killed due to memory usage for all 3 attempts (the log just abruptly ends which is indicative of a memory failure; confirmed by looking at the batch job)
https://hyp3-test-api.asf.alaska.edu/jobs/5efd7f58-0cdf-4663-bed7-68d7b00b16ee
I'll dig up a L4 pair we know runs.
@forrestfwilliams any thoughts as to why the L4 pair failed? Looks to be during the FFT step.
Submitted all the proposed pairs here to production:
https://hyp3-api.asf.alaska.edu/jobs?name=Golden test update for l457
{
"jobs": [
{
"execution_started": true,
"job_id": "75e52782-505b-4127-9107-66218c6691af",
"job_parameters": {
"granules": [
"LC09_L1GT_215109_20220125_20220125_02_T2",
"LC09_L1GT_215109_20220210_20220210_02_T2"
]
},
"job_type": "AUTORIFT",
"name": "Golden test update for l457",
"priority": 9996,
"request_time": "2022-11-03T00:36:26+00:00",
"status_code": "PENDING",
"user_id": "jhkennedy"
},
{
"execution_started": true,
"job_id": "87278854-a560-4bab-8004-4f79c1f5e73f",
"job_parameters": {
"granules": [
"LT05_L1GS_001013_19920425_20200915_02_T2",
"LT05_L1GS_001013_19920628_20200914_02_T2"
]
},
"job_type": "AUTORIFT",
"name": "Golden test update for l457",
"priority": 9998,
"request_time": "2022-11-03T00:36:26+00:00",
"status_code": "PENDING",
"user_id": "jhkennedy"
},
{
"execution_started": true,
"job_id": "d0e55049-c7cd-444b-87b9-6fef0d31b276",
"job_parameters": {
"granules": [
"LE07_L1TP_063018_20040911_20200915_02_T1",
"LE07_L1TP_063018_20040810_20200915_02_T1"
]
},
"job_type": "AUTORIFT",
"name": "Golden test update for l457",
"priority": 9997,
"request_time": "2022-11-03T00:36:26+00:00",
"status_code": "PENDING",
"user_id": "jhkennedy"
},
{
"execution_started": true,
"job_id": "879efe81-adf2-48cb-89db-f4bd88c577e4",
"job_parameters": {
"granules": [
"LT04_L1TP_063018_19880627_20200917_02_T1",
"LT04_L1TP_063018_19880627_20200917_02_T1"
]
},
"job_type": "AUTORIFT",
"name": "Golden test update for l457",
"priority": 9999,
"request_time": "2022-11-03T00:36:26+00:00",
"status_code": "PENDING",
"user_id": "jhkennedy"
}
],
"next": "https://hyp3-api.asf.alaska.edu/jobs?name=Golden+test+update+for+l457&start_token=eyJqb2JfaWQiOiAiZWEyNzkyMzgtM2M0Yi00YWI0LThkOWUtZWQ0OWZhZGZiYzkzIiwgInVzZXJfaWQiOiAiamhrZW5uZWR5IiwgInJlcXVlc3RfdGltZSI6ICIyMDIyLTAyLTE4VDIyOjI5OjQyKzAwOjAwIn0%3D"
}
Whoops, I accidentally submitted the L4 pair with the same reference and secondary scenes. Here's the correct pair, as in the suggestion:
https://hyp3-api.asf.alaska.edu/jobs/4a74c527-f8d4-4ddf-b2ad-a3385f814153
|
2025-04-01T06:36:40.197695 | 2019-10-22T04:58:18 | 510429747 | {
"authors": [
"ang-zeyu",
"damithc"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:70",
"repo": "AY1920S1-CS2103T-T11-4/main",
"url": "https://github.com/AY1920S1-CS2103T-T11-4/main/issues/71"
} | gharchive/issue | Highly active tP noted -- Kudos!
Guys, it looks like your tP is maintaining a high level of coding activity so far and all members are actively contributing (as per the tP Dashboard) :+1:
Note: hope you can work together to help less-active team members, if they need help.
Keep up the good work!
noted 👍
|
2025-04-01T06:36:40.213491 | 2021-11-07T12:01:41 | 1046728764 | {
"authors": [
"codecov-commenter",
"g4ryy",
"nniiggeell"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:71",
"repo": "AY2122S1-CS2103-T16-3/tp",
"url": "https://github.com/AY2122S1-CS2103-T16-3/tp/pull/286"
} | gharchive/pull-request | Update documentation
Changes include:
Fix issues in user guide
Update user stories in developer guide
Add additional NFRs in developer guide
Add Effort appendix in developer guide
Enlarge the storage class diagram
Readjusted the model component in the UI class diagram
Add PPP
Update Documentation.md to remove traces of AB-3
Update index.md to remove traces of address book
Codecov Report
Merging #286 (ca45524) into master (abc7c90) will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #286 +/- ##
=========================================
Coverage 76.94% 76.94%
Complexity 998 998
=========================================
Files 140 140
Lines 2646 2646
Branches 356 356
=========================================
Hits 2036 2036
Misses 530 530
Partials 80 80
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update abc7c90...ca45524. Read the comment docs.
LGTM
|
2025-04-01T06:36:40.220176 | 2021-11-07T07:24:13 | 1046679827 | {
"authors": [
"codecov-commenter",
"yeo-yiheng"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:72",
"repo": "AY2122S1-CS2103T-T13-2/tp",
"url": "https://github.com/AY2122S1-CS2103T-T13-2/tp/pull/314"
} | gharchive/pull-request | Fix bold
Fixes #312
Codecov Report
Merging #314 (3d7a044) into master (122cdc2) will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #314 +/- ##
=========================================
Coverage 75.97% 75.97%
Complexity 683 683
=========================================
Files 94 94
Lines 1998 1998
Branches 223 223
=========================================
Hits 1518 1518
Misses 414 414
Partials 66 66
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 122cdc2...3d7a044. Read the comment docs.
|
2025-04-01T06:36:40.225308 | 2021-10-20T13:54:22 | 1031442293 | {
"authors": [
"kflim",
"limzk126"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:73",
"repo": "AY2122S1-CS2103T-W10-1/tp",
"url": "https://github.com/AY2122S1-CS2103T-W10-1/tp/pull/72"
} | gharchive/pull-request | Update functionality of finding people
IMPORTANT (Can skip reviewing files changed first)
The functionality of find has been drastically changed to include flags for finding based on any detail (e.g. find -n for names). It will not work without flags but I am open to changing back to the original way of finding by name without a flag. Also, the keywords for find must be spaced apart, otherwise, it is taken as one keyword
It will no longer be possible to find multiple people at once just by matching any of the keywords for this implementation so "find -n Alex Bernice" will not give any results for the basic list because "Alex Bernice" does not match any name as an abbreviation. However, if the keywords match the names of multiple people, it would still return more than one person. There would probably need to be a certain character or flag to specify finding people with other abbreviations (which I think is unnecessary since there are tags and tasks for grouping but I would be open to making this option possible).
Also, the "subsequence" is relative to each part of a person's detail (e.g. name). So, an example would be "Harry James Potter" and "Harry J Potter" or "h P" would work because there is some part of the name that starts with one of the keywords and the order of the keywords relative to the name is the same, regardless of case. For this reasons, "a J P" would not be valid. It also preserves the functionality of finding by full name.
Let me know if the sequence is supposed to be one keyword and just match the names by each character in the subsequence and have the same order as the different parts of the name (e.g. ay -> alex yeoh) because I think a pure subsequence to match any part of a person's name could end up in matching many people's names that should not have matched unless it is case-sensitive (e.g. ex -> alex yeoh). Its also consistent with the way I implemented the find function for other details like tags and tasks because I think one word as a subsequence for many tags or tasks is not ideal. So, this is why I implemented finding by names this way but I could see a special case for names
Also, I'm not sure if everything needs to be abbreviated, like phone number, so do let me know as well.
Less important
However, this order that I've been using does not matter for tags or tasks since it is not as important as for the other details, though the keywords will still have to all match someone's tags or tasks, following the idea for finding by name (or its abbreviation).
LGTM. Might want to consider adding for description too.
Some descriptions might be long and random so I think it might be doable, but I think I would have to go back to full word matches to reduce the number of matches. If everyone thinks it's ok, then I'll implement it.
LGTM. I think how the keywords are matched are similar to IntelliJ's search which I think is fine.
|
2025-04-01T06:36:40.269556 | 2022-04-05T07:38:49 | 1192751218 | {
"authors": [
"KwanHW",
"codecov-commenter"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:74",
"repo": "AY2122S2-CS2103-W17-1/tp",
"url": "https://github.com/AY2122S2-CS2103-W17-1/tp/pull/322"
} | gharchive/pull-request | Update PPP - Hao Wei
Changes
This PR amends the PPP by adding recent contributions to the program as well as standardizing the program description.
Related Issues
None.
Codecov Report
Merging #322 (f74f3c0) into master (ed8ca54) will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #322 +/- ##
=========================================
Coverage 72.28% 72.28%
Complexity 1247 1247
=========================================
Files 160 160
Lines 3842 3842
Branches 452 452
=========================================
Hits 2777 2777
Misses 985 985
Partials 80 80
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update ed8ca54...f74f3c0. Read the comment docs.
|
2025-04-01T06:36:40.288161 | 2022-10-26T09:37:24 | 1423729718 | {
"authors": [
"HakkaNgin",
"codecov-commenter"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:75",
"repo": "AY2223S1-CS2103-F13-3/tp",
"url": "https://github.com/AY2223S1-CS2103-F13-3/tp/pull/69"
} | gharchive/pull-request | Enhance new meeting parser
NOTE:
This is a major change that affects how CreateMeetingCommand and CreateMeetingCommandParser function, although the expected outcomes are the same. Need to re-write the corresponding tests.
What?
Modified CreateMeetingCommandParser such that it handles more of the validation of the user input instead of letting CreateMeetingCommand and Meeting handle the validation.
Why?
Previously most of the validation takes place in the CreateMeetingCommand and Meeting classes -- in CreateMeetingCommandParser, the parse(arguments) method only checks whether the arguments are empty, while passing the trimmed input arguments wholesale to the CreateMeetingCommand class for processing, which defeats the purpose of a parser. Thus it is preferable to restructure the code such that CreateMeetingCommandParser adheres to the necessary functionalities of a parser. Refer to #68
How?
I have moved the splitting of the trimmed user input (names of Person(s) to meet, meeting description, meeting date and time, meeting location) and the validation of date and time [essentially processes that do not require the model] forward to CreateMeetingCommandParser, leaving CreateMeetingCommand to use the model to handle the validation of the person/ contact, as well as the creation and validation of the new Meeting. Meeting will take in the information of the new meeting passed in as the arguments of its constructor without further validation.
Codecov Report
Base: 67.82% // Head: 62.18% // Decreases project coverage by -5.64% :warning:
Coverage data is based on head (ba0370d) compared to base (057cbb3).
Patch coverage: 38.98% of modified lines in pull request are covered.
Additional details and impacted files
@@ Coverage Diff @@
## master #69 +/- ##
============================================
- Coverage 67.82% 62.18% -5.65%
+ Complexity 556 518 -38
============================================
Files 102 105 +3
Lines 1930 1962 +32
Branches 209 215 +6
============================================
- Hits 1309 1220 -89
- Misses 564 677 +113
- Partials 57 65 +8
Impacted Files | Coverage Δ
...u/address/logic/commands/DeleteMeetingCommand.java | 0.00% <0.00%> (ø)
.../seedu/address/logic/parser/AddressBookParser.java | 62.96% <0.00%> (-2.43%) :arrow_down:
...dress/logic/parser/DeleteMeetingCommandParser.java | 0.00% <0.00%> (ø)
src/main/java/seedu/address/model/Model.java | 100.00% <ø> (ø)
...rc/main/java/seedu/address/model/ModelManager.java | 74.07% <0.00%> (-9.88%) :arrow_down:
...el/meeting/exceptions/ImpreciseMatchException.java | 0.00% <0.00%> (ø)
...main/java/seedu/address/model/meeting/Meeting.java | 71.05% <42.85%> (-8.70%) :arrow_down:
...u/address/logic/commands/CreateMeetingCommand.java | 17.64% <45.45%> (-76.95%) :arrow_down:
...dress/logic/parser/CreateMeetingCommandParser.java | 82.60% <77.77%> (-17.40%) :arrow_down:
...edu/address/logic/commands/EditMeetingCommand.java | 57.14% <100.00%> (-38.10%) :arrow_down:
... and 11 more
Help us with your feedback. Take ten seconds to tell us how you rate us. Have a feature suggestion? Share it here.
:umbrella: View full report at Codecov.
:loudspeaker: Do you have feedback about the report comment? Let us know in this issue.
|
2025-04-01T06:36:40.304366 | 2022-10-18T08:21:52 | 1412782527 | {
"authors": [
"anthonyhoth",
"codecov-commenter"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:76",
"repo": "AY2223S1-CS2103T-T14-3/tp",
"url": "https://github.com/AY2223S1-CS2103T-T14-3/tp/pull/79"
} | gharchive/pull-request | Added Edit Record Functionality
Added EditRecordCommand, EditRecordCommandParser for editing of a patient's records
Added test classes for EditRecordCommand, EditRecordCommandParser
Updated test class for DeleteRecordCommandParser
Added Exception classes for duplicate or missing Record
Enabled assertions in build.gradle
Renamed Record commands
Resolves #73 , #75
Codecov Report
Merging #79 (2510761) into master (f0fa370) will increase coverage by 0.94%.
The diff coverage is 73.83%.
@@ Coverage Diff @@
## master #79 +/- ##
============================================
+ Coverage 69.85% 70.80% +0.94%
- Complexity 535 567 +32
============================================
Files 94 98 +4
Lines 1712 1819 +107
Branches 172 196 +24
============================================
+ Hits 1196 1288 +92
+ Misses 465 463 -2
- Partials 51 68 +17
Impacted Files | Coverage Δ
.../java/seedu/address/logic/commands/AddCommand.java | 85.71% <ø> (ø)
...seedu/address/logic/commands/AddRecordCommand.java | 86.66% <ø> (ø)
...edu/address/logic/commands/ClearRecordCommand.java | 100.00% <ø> (ø)
...du/address/logic/commands/DeleteRecordCommand.java | 85.71% <ø> (ø)
...eedu/address/logic/commands/FindRecordCommand.java | 81.81% <ø> (ø)
...eedu/address/logic/commands/ListRecordCommand.java | 100.00% <ø> (ø)
.../seedu/address/logic/parser/AddressBookParser.java | 68.00% <0.00%> (-2.84%) :arrow_down:
src/main/java/seedu/address/model/Model.java | 100.00% <ø> (ø)
...el/person/exceptions/DuplicateRecordException.java | 0.00% <0.00%> (ø)
...del/person/exceptions/RecordNotFoundException.java | 0.00% <0.00%> (ø)
... and 9 more
:mega: We’re building smart automated test selection to slash your CI/CD build times. Learn more
|
2025-04-01T06:36:40.309455 | 2023-10-20T07:26:24 | 1953700431 | {
"authors": [
"Cloud7050"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:77",
"repo": "AY2324S1-CS2103-W14-3/tp",
"url": "https://github.com/AY2324S1-CS2103-W14-3/tp/issues/95"
} | gharchive/issue | Combine sample data, examples into ProductionData
There are some classes that handle sample data that end users will see, such as SampleContactsUtil. There are also a fair few locations with help text that include many arbitrary strings of valid data. We could combine these into some form of ProductionData file, similar to how we now have a TestData. Note that such a file should likely be in main/ and not test/.
The class may contain mostly valid data, but in case invalid data is needed, it could contain that too. Then, we could import these values into TestData for use in our actual tests, replacing any existing values that serve the same purpose (it may turn out to be good to ensure the values we show to users are actually valid/invalid).
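A minimal sketch of what such a holder could look like (class and constant names here are illustrative, not decided):
// Hypothetical ProductionData in main/, holding the strings shown to end users.
public final class ProductionData {
    public static final String VALID_NAME = "Alex Yeoh";
    public static final String VALID_PHONE = "87438807";
    public static final String INVALID_PHONE = "phone!"; // kept in case help text needs an invalid example
    private ProductionData() {} // constants only, no instances
}
TestData could then import these constants instead of duplicating its own copies.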
This may be considered a subissue of #76.
Nice to have, but won't negatively impact grading if not done. Culling this issue.
|
2025-04-01T06:36:40.310329 | 2023-10-23T02:23:06 | 1956227436 | {
"authors": [
"LimJH2002",
"lyuanww"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:78",
"repo": "AY2324S1-CS2103T-F08-1/tp",
"url": "https://github.com/AY2324S1-CS2103T-F08-1/tp/issues/73"
} | gharchive/issue | A revamped GUI for the existing UI
Note: The whole UI will be done by v1.3b.
Also revamping success messages, etc.
|
2025-04-01T06:36:40.313192 | 2023-10-24T08:04:06 | 1958724289 | {
"authors": [
"dickongwd",
"lilozz2",
"zhyuhan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:79",
"repo": "AY2324S1-CS2103T-F11-4/tp",
"url": "https://github.com/AY2324S1-CS2103T-F11-4/tp/pull/68"
} | gharchive/pull-request | add PersonCreator class
Fixes #63 and #64
Should we just name it as PersonBuilder? I think that's the convention for this builder pattern.
What do you guys think about following the pattern described here.
The Person constructor will be private, and PersonBuilder will be a static nested class inside Person (so that it has access to the Person constructor). Any Person object will be created only through PersonBuilder. Optional attributes can be added by calling the corresponding with... method. The final collection of fields will then be used to create a Person using the build method.
Good idea! I will reference this in another issue.
|
2025-04-01T06:36:40.316362 | 2023-11-03T18:04:45 | 1976695659 | {
"authors": [
"Carlintyj",
"nus-pe-script"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:80",
"repo": "AY2324S1-CS2103T-T17-4/tp",
"url": "https://github.com/AY2324S1-CS2103T-T17-4/tp/issues/174"
} | gharchive/issue | [PE-D][Tester B] Improvement on the warning message returned
Command that I input: add /nMary p/11111 c/1111 t/1233
Response: Invalid command format!
add: Adds a person to the address book. Parameters: n/NAME p/PHONE [e/EMAIL] [a/ADDRESS] [th/TELEHANDLE] [t/TAG]... [c/COURSE]...
Example: add n/John Doe p/98765432<EMAIL_ADDRESS>a/311, Clementi Ave 2, #02-25 th/@Johnnnnyyy t/Friend c/CS2100 c/CS2103T c/IS1108
Perhaps specifying which type of input the user entered in an invalid format would be more useful. Currently, I am not sure whether the input type of phone, course or tag is wrong in the case provided above.
Labels: severity.Low type.DocumentationBug
original: NgChunMan/ped#1
Note taken on feature suggestion.
|
2025-04-01T06:36:40.320008 | 2024-11-01T16:15:57 | 2629368506 | {
"authors": [
"RezwanAhmed123",
"zi-yii"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:81",
"repo": "AY2425S1-CS2103-F12-1/tp",
"url": "https://github.com/AY2425S1-CS2103-F12-1/tp/issues/182"
} | gharchive/issue | AddClaim unable to take in parameters in any order
Should we implement AddClaim such that parameters can be in any order (as stated in the original user guide)? There is a contradiction between the user guide and the AddClaim parser.
When the order of parameters is changed, the wrong error message is shown as well.
Possible solution 1: Remove the point on "Parameters can be in any order" from the user guide
Possible solution 2: Implement ability to take in parameters in any order for AddClaim
I personally think solution 1 would be better
i think solution 1 is also best, but if u wanna explore solution 2, maybe we can see how it is done in the "Add" command
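For reference, one common approach there (sketched below with a made-up helper and prefixes, not the project's actual parser classes) is to tokenize the arguments by prefix first, so the position of each parameter no longer matters:

import java.util.HashMap;
import java.util.Map;

public class PrefixTokenizer {
    // Maps each known prefix (e.g. "n/", "a/") to its value, regardless of order.
    public static Map<String, String> tokenize(String args, String... prefixes) {
        Map<String, String> values = new HashMap<>();
        for (String prefix : prefixes) {
            int start = args.indexOf(prefix);
            if (start < 0) {
                continue; // prefix not present
            }
            int valueStart = start + prefix.length();
            int end = args.length();
            for (String other : prefixes) {
                int next = args.indexOf(other, valueStart);
                if (next >= 0 && next < end) {
                    end = next; // value runs until the next prefix
                }
            }
            values.put(prefix, args.substring(valueStart, end).trim());
        }
        return values;
    }
}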
i fixed it as such in my most recent PR
|
2025-04-01T06:36:40.322086 | 2024-10-28T15:34:45 | 2618780541 | {
"authors": [
"KengHian",
"Quasant"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:82",
"repo": "AY2425S1-CS2103-F13-4/tp",
"url": "https://github.com/AY2425S1-CS2103-F13-4/tp/issues/148"
} | gharchive/issue | [Alek] ummatch [non-natural num] [non-natural num] feedback not specific
I think this is not really a format issue?
This error message does not make it clear that the issue is with the index
|
2025-04-01T06:36:40.324229 | 2024-11-06T04:18:04 | 2636994433 | {
"authors": [
"KrashKart"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:83",
"repo": "AY2425S1-CS2103T-F14a-4/tp",
"url": "https://github.com/AY2425S1-CS2103T-F14a-4/tp/issues/240"
} | gharchive/issue | Standardize FindCommand logic
Currently, if we find by multiple keywords for the same field with find, CC will OR the predicates. However, when finding by different fields (name and tag, for example), CC will AND both predicates.
I suspect this is due to the argMultiMap grouping keywords for the same field together, causing this difference (confirmed)
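For reference, the behaviour described above amounts to roughly the following (an illustrative sketch using java.util.function.Predicate, not the project's actual classes):

import java.util.List;
import java.util.function.Predicate;

public class FindPredicates {
    // OR within one field: a contact matches if any keyword for that field matches.
    static <T> Predicate<T> anyOf(List<Predicate<T>> keywordPredicates) {
        return keywordPredicates.stream().reduce(x -> false, Predicate::or);
    }

    // AND across different fields: every field's combined predicate must match.
    static <T> Predicate<T> allOf(List<Predicate<T>> fieldPredicates) {
        return fieldPredicates.stream().reduce(x -> true, Predicate::and);
    }
}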
Proposed changes:
Enforce 1 keyword per field except for tag (keep the AND)
Get all contacts that fulfill any predicate (keep the OR)
Closed, we have decided to keep this functionality
|
2025-04-01T06:36:40.333421 | 2018-04-13T09:11:28 | 314028162 | {
"authors": [
"AaronJackson",
"aggarwal-himanshu"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:84",
"repo": "AaronJackson/vrn",
"url": "https://github.com/AaronJackson/vrn/issues/71"
} | gharchive/issue | Mapping pixels to voxels
Hi Aaron,
Thanks for sharing the code. I would like to know how I can map pixels in the scaled image to voxels in the raw file. For example, if I localize a face feature, let's say the nose, in the jpg, how can I know which part of the raw file corresponds to that region? I am planning to interpolate 2 raw files for specific features like the nose to simulate aesthetic surgery results. Do you have any tips? Thanks.
A pre-processing step crops the image. The volume is in direct correspondence with this image. If you load the raw file using readvol and take sum(vol,3) you will see that it is in correspondence.
Hi Aaron,
Can you tell me what the values at a particular voxel in the raw file signify? Earlier I thought there were only two values, 0 and 255, indicating whether the voxel is filled or not. Am I correct? I see multiple different values
Never mind. I figured
Some isosurface functions use it for smoothing. Matlab does a very nice job of this.
|
2025-04-01T06:36:40.366806 | 2022-07-25T14:27:46 | 1316924320 | {
"authors": [
"aloysbaillet",
"glevner",
"jfpanisset"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:86",
"repo": "AcademySoftwareFoundation/aswf-docker",
"url": "https://github.com/AcademySoftwareFoundation/aswf-docker/issues/155"
} | gharchive/issue | install packages to a different root directory
We need to install packages to a directory other than /usr/local, because we need to be able to access multiple VFX reference platforms without having to switch between different Docker images.
The shell scripts that build each package allow you to specify any location you want, which is a good start. But adapting the Dockerfiles to build and install all the packages to a different location is a huge headache. (For me, anyway.)
Has anybody else out there been confronted with this problem? And perhaps found a solution?
You can use the Conan packages to install the available packages anywhere. Building multiple parallel VFX platforms in the same image would be very hard without Conan. Unfortunately not all VFX packages are available yet in Conan.
Thanks for that, Aloys. When you say "the Conan packages", do you mean one can use the Conan recipes provided by ASWF to do this? Or do you mean to use packages from Conan Center?
The direction in which the containers are evolving is that packages will all be built as Conan packages, and then assembled into runnable containers: we're not quite there yet, but getting there.
Starting with VFX 2023, there's a script in scripts/common/install_conanpackages.sh which installs all the Conan packages called for by a container, and if you look in python/aswfdocker/data/ci-image-dockerfile.jinja2 you will find the template for the code installing those packages in /usr/local:
{% if name != "common" %}
RUN --mount=type=cache,target=/opt/conan_home/d \
--mount=type=bind,rw,target=/opt/conan_home/.conan,source=packages/conan/settings \
/tmp/install_conanpackages.sh /usr/local vfx${ASWF_VFXPLATFORM_VERSION}
{% else %}
RUN --mount=type=cache,target=/opt/conan_home/d \
--mount=type=bind,rw,target=/opt/conan_home/.conan,source=packages/conan/settings \
/tmp/install_conanpackages.sh /usr/local ci_common${CI_COMMON_VERSION}
{% endif %}
So you could change the destination based on the ASWF_VFXPLATFORM_VERSION variable, then:
aswfdocker dockergen
which regenerates the ci-*/Dockerfile files, and rebuild your own containers with a versioned installation destination.
Of course that only handles the Conan packages... It would definitely be better if the aswfdocker command had an option to specify the installation prefix for custom builds.
|
2025-04-01T06:36:40.372965 | 2021-03-10T02:13:27 | 826954335 | {
"authors": [
"cary-ilm",
"kdt3rd"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:87",
"repo": "AcademySoftwareFoundation/openexr",
"url": "https://github.com/AcademySoftwareFoundation/openexr/pull/957"
} | gharchive/pull-request | Clean up cmake/config
Remove unused cmake/JoinPaths.cmake
Remove unused cmake/OpenEXRLibraryDefine.cmake
Clean up and reformat comments
Signed-off-by: Cary Phillips<EMAIL_ADDRESS>
I think I may have botched the merge when resolving the conflicts here. Can someone confirm that the squash and merge here will still leave a linear history?
I noticed that the have large stack support was set after the config file was generated while I was making the changes for symbol visibility configure options, so that should be addressed. There are a number of other cleanups I would like to see eventually (not needed prior to 3.0 release), such as I don't believe we need IexConfig.h or IlmThreadConfig.h any more...
This PR removes two unused files and cleans up some comments, no need to go into the 3.0.0 release. It has conflicts that need to be resolved anyway, it may be better to start from scratch, I'll close it and open another later.
|
2025-04-01T06:36:40.374453 | 2020-11-11T14:12:57 | 740797085 | {
"authors": [
"jmertic"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:88",
"repo": "AcademySoftwareFoundation/tac",
"url": "https://github.com/AcademySoftwareFoundation/tac/pull/207"
} | gharchive/pull-request | Create overview of communication tools and calendar processes and options for ASWF projects
Signed-off-by: John Mertic<EMAIL_ADDRESS>
Agreed @jfpanisset - feel free to make the edits in there.
|
2025-04-01T06:36:40.441319 | 2017-03-30T16:10:43 | 218249270 | {
"authors": [
"RobertNorthard",
"stephankfolkes"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:89",
"repo": "Accenture/adop-docker-compose",
"url": "https://github.com/Accenture/adop-docker-compose/pull/206"
} | gharchive/pull-request | Update to instructions
This is to update the instructions to inform users of the requirement to open port 80 to complete the stack installation, and instruction of how to achieve this.
Thanks @stephankfolkes for the contribution. However, this has been added to the readme already (https://github.com/Accenture/adop-docker-compose).
|
2025-04-01T06:36:40.442347 | 2022-08-10T21:50:22 | 1335240470 | {
"authors": [
"jenlampton"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:90",
"repo": "Accessible360/accessible-slick",
"url": "https://github.com/Accessible360/accessible-slick/issues/77"
} | gharchive/issue | Improve Pause and Play controls
The aria labels for slide controls could be clearer by using "Pause Carousel" instead of "Pause" and "Play Carousel" instead of "Play".
Because this looked like it qualified as one of the simpler updates, I went ahead and created a Pull Request.
|
2025-04-01T06:36:40.446730 | 2024-06-25T21:02:22 | 2373655107 | {
"authors": [
"Achie72"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:91",
"repo": "Achie72/druid-dash-2",
"url": "https://github.com/Achie72/druid-dash-2/pull/2"
} | gharchive/pull-request | Add Day3 stuff
Contains everything from the last day, before I almost lost it all
Looks like this won't work, gonna need a force push because PICO being pico
|
2025-04-01T06:36:40.453067 | 2018-11-03T21:47:06 | 377096221 | {
"authors": [
"PeterOrneholm",
"viktorvan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:92",
"repo": "ActiveLogin/ActiveLogin.Identity",
"url": "https://github.com/ActiveLogin/ActiveLogin.Identity/pull/40"
} | gharchive/pull-request | Fix some typos and simplify age calculation a bit.
I think using DayOfYear makes the code a little easier to read compared to checking both month and day.
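For illustration, the DayOfYear approach reads roughly like this (a Java sketch of the idea, not the library's actual implementation; it glosses over the Feb 29 edge case):

import java.time.LocalDate;

public class AgeCalculator {
    static int age(LocalDate birthDate, LocalDate today) {
        int age = today.getYear() - birthDate.getYear();
        if (today.getDayOfYear() < birthDate.getDayOfYear()) {
            age--; // birthday not reached yet this year
        }
        return age;
    }
}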
Thank you, much nicer!
|
2025-04-01T06:36:40.656761 | 2017-11-11T15:17:06 | 273152159 | {
"authors": [
"BobGneu",
"htw5295",
"ilexp"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:93",
"repo": "AdamsLair/duality",
"url": "https://github.com/AdamsLair/duality/issues/589"
} | gharchive/issue | Create a Docs Repository
Summary
As discussed in issue #355, we should consider moving away from the GitHub Wiki for documentation in the long term. One alternative would be to create a new AdamsLair/duality-docs repo that hosts all documentation.
Analysis
The GitHub Wiki lacks some features and also has some disadvantages from the management side:
Pages need to have unique names and folder structures are ignored.
Users can't upload images and other media.
Lack of multi-version support (crucial for v3.0).
Lack of multi-language support.
No branches for bigger edits, no PRs or reviews.
Either everyone can edit, or only people with direct push access.
History not available on a global level.
All of these problems are solved by moving docs to a new AdamsLair/duality-docs repository.
The repository's main readme file acts as the home page for documentation.
A proper directory structure needs to be defined.
Anticipate support for multiple versions co-existing.
Anticipate support for multiple languages co-existing.
Maybe something like Pages/en/vX/...?
Investigate the possibility of turning on the github pages feature for the master branch and generating a nice docs website from the markdown files.
This issue is up for grabs. Let me know if you're interested in getting some work done on this. Ideally, do the first prototypes in a public repo in your own account, which we can later either transfer or re-create in the AdamsLair organization.
I think creating new repository is a best for management.
As things stand the duality project is actually pretty heavy, due to binaries being in the repo or otherwise, so we ought to put the documentation in a separate repo for that reason if nothing else.
Yep. And even if it was only for keeping it tidy, I'd still agree 100%.
Assuming that route is taken, I can start transferring our documentation into a repo and put together a gh-pages structure that seems intuitive and then we can transfer the repo over to the AdamsLair space. It seems like this would be pretty easy to do, I've transferred repos previously and it's seamless.
Sounds good! GitHub also allows you to configure the repo so all the docs can be on the regular master branch, so we don't have to use this awkward gh-pages naming convention.
gh-pages will generate the documentation pages as we commit/merge to master on the documentation repo, which means we can focus on creating content using markdown and create a custom theme over time to ensure that it ultimately conforms to the Duality branded color and stylistic choices.
👍
Overall, however, we ought to be strict about making our contributions in markdown, as anything else only makes maintenance more complicated.
Yep, agree completely. We should limit the docs to markdown for maintenance reasons, and also to strictly separate content from design / layout.
Please let me know if there are other elements you would like to have me look into.
I think it would be a good thing to have a first "prototype", and then we can take it from there and talk about how to proceed.
Sounds good. I'll get it going this evening. Expect a link when you pop on tomorrow.
I have cloned the wiki repo and shoved it into my repo for testing.
The site is currently published at https://bobgneu.github.io/duality-documentation
Got the index page into place. Gonna take a break for the evening to do some thinking about organization.
I have cloned the wiki repo and shoved it into my repo for testing.
Okay, let's iterate on this. 👍
First thing that we need to improve vs. the raw wiki export is directory structure. The root folder shouldn't have any content, and we should anticipate docs for multiple Duality versions and, potentially, in multiple languages. Multi-language is not a priority for now, but it only costs us another directory step, so we might as well include it.
I'd currently go with a directory structure like this:
Pages/en/v2/...
Except for the welcome page, which could remain in root. Not yet sure about how to organize folders inside /v2/....
Second point, is there any way to get the footer and sidebar back?
The template can be modified. I can look into that if you would like.
I reorganized the pages as requested and took the liberty of wrapping my mind around the template system, and its quite straight forward.
{% include footer.html %}
the include statement references a file within the _layouts directory, and explicitly drops its content into the template. By default the default.html template is used, where the {{content}} block is replaced by the rendered markdown.
It is rudimentary, but works.
There is the ability for each md file to include configuration in its header, as well as some customization fields as we see fit.
I tried to include markdown, but the contents are not rendered, and instead just copied in explicitly, even with an md extension.
WRT Organizing the documentation, a good first step would be to classify them based on the perspective of a new developer. Each document could be bucketed based on a Low/Medium/High experience score that we can now derive from a value in the header of the md file, tied to a badge or something in the top right of each page. Some level of flagging of Tutorials would also be beneficial, as when I was starting up the first steps were pretty daunting and I was looking for tutorials to help bridge the gaps when kicking off my first project.
For Inspiration: https://github.com/pages-themes
Re-organized folder structure looks good. Also, thanks for your insight into the templating - seems like we could provide a completely custom template file, complete with footer and potentially even an auto-generated sidebar. Sounds very promising!
WRT Organizing the documentation, a good first step would be to classify them based on the perspective of a new developer. Each document could be bucketed based on a Low/Medium/High experience score that we can now derive from a value in the header of the md file, tied to a badge or something in the top right of each page. Some level of flagging of Tutorials would also be beneficial, as when I was starting up the first steps were pretty daunting and I was looking for tutorials to help bridge the gaps when kicking off my first project.
Good idea, but would defer this to some later point, when we already have all the docs moved and published. The docs repo will then also have its own GitHub project and issue list, so we'll have a good place to keep track of ideas like this too.
For now, here's a list of the things that I think need to be tackled so we can do a full docs switch, not necessarily in order:
Move img and en from root into a Pages subfolder, so we don't clutter the root directory with content specific folders as we add more later on.
Figure out how images and links work, and how they can be linked with a relative URL.
Fix the images and links in all pages.
Decide on a good base theme / template to use and customize.
Integrate the base theme.
Figure out which special templates are required, for example for the home page
Adjust the default template with footer support.
Adjust the default template with sidebar support.
Transfer the repository to the AdamsLair organization account.
Consider renaming the repo to duality-docs.
Just before release, pull all the latest Wiki pages into the new repo again to not miss recent changes. Make sure to fix images and links as done before.
@BobGneu Feel free to add any points that you have on your radar, or address any of them. This issue is somewhat big, so I think it makes sense to at some point just make the cut and turn it into a team effort. In that case, just let me know that you're ready for the switch and we'll do the repo transfer.
Let's do the transfer. The repo can be renamed upon transfer and from that point things will be more manageable.
Images are referenced as with standard html, relative or absolute paths are interchangeable. I already did the work to validate this on the home page. It should be pretty straight forward to correct the other images. Once we get the repo transferred we can use the issues system to track the notes above.
I'll be off for a week, but will get back to you as soon as I'm back 👍
One side note, in order to transfer ownership I will need the permission relating to creating repositories within the AdamsLair space.
https://help.github.com/articles/transferring-a-repository-owned-by-your-personal-account/
Repo renamed.
Moved img and en into pages sub directory.
In the future, when making bulleted lists you can make them into checkboxes to simplify and track updates. All checkboxes are tracked in the first post of a PR and Issue. They show up on the issue listing.
Images can be referenced using markdown similar to the following
Inline Relative / Exact


Deferred Relative
![Debug Game Break][DebugGameBreak]
![Debug Game Break][DebugGameBreakDirect]
[DebugGameBreak]: ../../img/GettingStarted/RunGameButton.png
[DebugGameBreakDirect]: {{site.baseurl}}/pages/img/GettingStarted/RunGameButton.png
Alternatively, we can make image references.
<img src="{{site.baseurl}}/pages/img/GettingStarted/RunGameButton.png" />
In terms of the layout, Having a thin layout is not going to work with many of the code samples, as anything deeper than about 30 characters is going to require scrolling or wordwrap is going to be a mess, stretching things out.
In looking at a few of the other similar sites ~ 740 - 800px of width seems to provide enough space for code examples.
We can create a template with a header, side menu and footer. Given that we have full HTML access we can even position the footer and headers at the top and bottom of any given window. No need for JQuery or anything. Mock up the layout you would like to see and I'll give it a go.
Repo renamed.
Moved img and en into pages sub directory.
👍
Images can be referenced using markdown similar to the following
I think we should stick to the markdown way. I don't have a clear favorite among the variants, but would vaguely prefer relative inline paths.
In terms of the layout, Having a thin layout is not going to work with many of the code samples, as anything deeper than about 30 characters is going to require scrolling or wordwrap is going to be a mess, stretching things out.
In looking at a few of the other similar sites ~ 740 - 800px of width seems to provide enough space for code examples, though its still pretty tight.
We can create a template with a header, side menu and footer. Given that we have full HTML access we can even position the footer and headers at the top and bottom of any given window. No need for JQuery or anything. Mock up the layout you would like to see and I'll give it a go.
Do we have a proper css file to work with? Might as well go for a responsive design and use the full width up to a max value for big screens, and adjust the sidebar for small screens below a min width. Could turn it into a "site header" instead in those cases.
Let's do the transfer.
Great, let's do it. I think the easiest way would be to transfer it to me, and I'll forward it to the AdamsLair org.
Initiated the transfer to you.
Transferred 👍 Here's the new repo link.
ToDo
Set up labels on the new docs repo, probably similar to the ones in the main Duality repo.
Transfer all remaining ToDo items into issues in the new docs repo, but keep this one open until first release.
Fix the images and links in all pages.
Decide on a good base theme / template to use and customize.
Integrate the base theme.
Figure out which special templates are required, for example for the home page
Adjust the default template with footer support.
Adjust the default template with sidebar support.
Transfer the repository to the AdamsLair organization account.
Consider renaming the repo to duality-docs.
Just before release, pull all the latest Wiki pages into the new repo again to not miss recent changes. Make sure to fix images and links as done before.
Labels Created.
images ticket created.
https://github.com/AdamsLair/duality-docs/issues/2
Theme ticket created
https://github.com/AdamsLair/duality-docs/issues/3
I summed up the template based notes together in
AdamsLair/duality-docs#3 and opened up a ticket for pulling in the latest from the wiki. https://github.com/AdamsLair/duality-docs/issues/4
I elaborated as best I could, Please feel free to edit them further.
Nice work on the issues! The template one is actually not what I meant, but still a good idea the way you read it. Updated ToDo overview:
ToDo
Adjust labels to match naming (where appropriate) and colors from the main repo and follow a consistent color scheme overall.
Transfer remaining ToDo items into issues in the new docs repo, but keep this one open until first release.
Fix the remaining images and links on pages that were not yet cleaned up.
Decide on a good base theme / template to use, then integrate it to start iterating on.
As part of this, figure out which special jekyll page templates are required, for example for the home page.
Adjust the default template with footer support.
Adjust the default template with sidebar support.
Just before release, pull all the latest Wiki pages into the new repo again to not miss recent changes. Make sure to fix images and links as done before.
Note that we both had been added as collaborators accidentally during the transfer. I removed us again, so access rights are now again managed via teams, but this also means that you won't be able to adjust the labels, since you're no longer a project admin. I'll pick up that one.
Progress
Adjusted labels to use similar color-based categorization as in the main repo, but removed and renamed labels where applicable.
Immediate ToDo
Transfer remaining ToDo items into issues in the new docs repo, but keep this one open until first release.
Fix the remaining images and links on pages that were not yet cleaned up.
Decide on a good base theme / template to use, then integrate it to start iterating on.
As part of this, figure out which special jekyll page templates are required, for example for the home page.
Adjust the default template with footer support.
Adjust the default template with sidebar support.
Just before release, pull all the latest Wiki pages into the new repo again to not miss recent changes. Make sure to fix images and links as done before.
Created a First Release milestone on the docs repo, which contains all issues that need to be addressed in order to release.
cc @AdamsLair/duality-contributors for everyone who would be interested in joining the docs transfer with a PR or two.
Moved all remaining docs issues from the main repo to the docs repo.
Closing this, as all open work have been moved to the new docs repo and first release milestone.
|
2025-04-01T06:36:40.770736 | 2023-07-25T01:50:12 | 1819411650 | {
"authors": [
"ameshkov",
"jputting"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:94",
"repo": "AdguardTeam/CoreLibs",
"url": "https://github.com/AdguardTeam/CoreLibs/issues/1783"
} | gharchive/issue | Filtering Log Still Not Fixed After 5 Years
The design of Adguard's Filtering Log doesn't recognise when parts of a webpage are blocked by Cosmetic/Element hiding rules; these rules can be in a subscribed-to Filter List or a Custom User Created Rule.
This badly designed Filtering Log makes it extremely difficult to determine what is blocking parts of a webpage.
This should NOT be the case, ALL BLOCKED ITEMS should be reported.
This is in fact a Technical Issue which is created by the badly designed Filtering Logs.
Adguard have stubbornly refused to acknowledge there is a problem created by the Filtering Logs.
This was raised 5 years ago on the Github Forums
Here's the link: https://github.com/AdguardTeam/CoreLibs/issues/180
The Adguard Filtering Log is still NOT fixed after 5 years.
It is absolutely disgusting and very shameful conduct that this was raised on Github 5 very, very long years ago and Adguard have done absolutely NOTHING TO FIX THEIR FILTERING LOGS.
With various other ad blockers such as ublock Origin, the filtering logs do clearly show when a Cosmetic or User/Custom rule is in effect.
When Blocked Items are NOT REPORTED AT ALL, do you understand that this makes discovering the source of a webpage problem an extremely time consuming, difficult and painstaking process.
To discover the source of the problem I literally had to spend several hours of my valuable time disabling each and every single Adguard function one at a time and then reloading the webpage until I discovered the source of the problem.
It is absolutely disgusting and quite shameful that after 5 very, very long years Adguard has still done absolutely NOTHING about this.
Addressed it here: https://github.com/AdguardTeam/CoreLibs/issues/1784#issuecomment-1649284401
|
2025-04-01T06:36:40.838154 | 2017-11-06T11:01:38 | 271435409 | {
"authors": [
"Sc0rpic0m",
"davidjgonzalez"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:95",
"repo": "Adobe-Consulting-Services/acs-aem-commons",
"url": "https://github.com/Adobe-Consulting-Services/acs-aem-commons/issues/1164"
} | gharchive/issue | HTTPCache : JCR storage handler
Hi! I am quite new here, but would like to take part in this awesome project.
I wanted to contribute by developing a JCR storage handler for the HTTP cache.
Just to prevent me from doing work that's already done, is somebody already working on this?
My idea was the following:
-Have the storage work under 1 root node (configurable through OSGi config)
-Have the bucket nodes under this root node (like a hashmap), derived from the hashcode of the cache key.
-Have the bucket nodes go a few levels deep, since you don't want all buckets under 1 node; that might cause storage performance issues. So for instance, if you have the hashcode 123456078910:
rootNode / 123456 / 078910 / entrynode1
The breakpoint at which to split can then be made configurable through OSGi config.
To retrieve a node, it would work like a hashmap: loop over all the nodes under a bucket and perform equals on the cache key. The cache key would have to extend Serializable.
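A minimal sketch of how the bucket path could be derived from the cache key's hashcode (class name, root path, and split size are illustrative assumptions):

public class BucketPathBuilder {
    // E.g. hashcode 123456078910 with segment length 6 -> rootPath + "/123456/078910"
    static String buildBucketPath(String rootPath, long hashCode, int segmentLength) {
        String digits = String.format("%012d", Math.abs(hashCode));
        StringBuilder path = new StringBuilder(rootPath);
        for (int i = 0; i < digits.length(); i += segmentLength) {
            path.append('/').append(digits, i, Math.min(i + segmentLength, digits.length()));
        }
        return path.toString();
    }
}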
What do you guys think?
@Sc0rpic0m no one is working on this!
Its been a while since i looked at the code; but sounds like a good approach! Thoughts around clearing the cache? Would the cache contents be deny all and only accessible via a service user?
@Sivaramvt thoughts on the approach?
Cheers for the reply!
Clearing the cache would be as follows:
Expiry:
1: Put an expiry in (through OSGI config) as epoch timestamp or calendar object as property.
2: The cache store also implements a scheduler, and in the run method, fires off a service that performs a query on the root node, targeting nodes whose expiry time has passed.
3: delete these nodes.
Regular flush:
Same as a regular hashmap: just fetch the node and delete it.
Just gotta think of an efficient way to delete the bucket nodes that don't contain contents after a cleanup or regular flush.
For cache contents: yes, would be deny all and only accessible via a service user indeed.
You can look at my fork here as well:
https://github.com/Sc0rpic0m/acs-aem-commons/tree/feature/httpcache-jcr-memstore/bundle/src/main/java/com/adobe/acs/commons/httpcache/store/jcr/impl
Of course it's just WIP.
@Sc0rpic0m could you make a PR to /develop and just put in the PR's title [REVIEW ONLY] HTTP Cache JCR Store implementation - its a bit easier to review and leave comments in the context of a PR. We can always close the PR. Thanks!
|
2025-04-01T06:36:40.844606 | 2019-05-06T11:40:32 | 440657920 | {
"authors": [
"amargheriti89",
"joerghoh"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:96",
"repo": "Adobe-Consulting-Services/acs-aem-commons",
"url": "https://github.com/Adobe-Consulting-Services/acs-aem-commons/issues/1878"
} | gharchive/issue | Dispatcher flush rules - doesn't manage multiple occurrence of the same path
Required Information
[ ] AEM Version, including Service Packs, Cumulative Fix Packs, etc: AEM 6.4.2
[ ] ACS AEM Commons Version: ACS-AEM-Commons 4.0.0
[ ] Reproducible on Latest? yes
Expected Behavior
By configuring the "ACS AEM Commons - Dispatcher Flush Rules" service with the following values:
-/content/we-retail/ca/en=/content/we-retail/ca/fr
-/content/we-retail/ca/en=/content/we-retail/us/es
-/content/we-retail/us/en=/content/we-retail/us/es
the expected behaviour is that after publishing the page /content/we-retails/ca/en also the paths:
/content/we-retail/ca/fr
/content/we-retail/us/es
are flushed.
Actual Behavior
In the current version, after configuring the ACS AEM Commons - Dispatcher Flush Rules service as described above, only one of the paths that should be flushed after replication of the /content/we-retail/ca/en page is actually flushed
Steps to Reproduce
You can reproduce the issue with the following steps:
Configure the ACS AEM Commons - Dispatcher Flush Rules configuration with 2 occurrences of the same path in order to flush 2 different paths (e.g. as the attached image)
Replicate the path which need to trigger the flush of the other
Check the flush agent into the publish instance (e.g. as the attached image)
Links
Links to related assets, e.g. content packages containing test components
Hm, can you try with this configuration:
-/content/we-retail/ca/en=/content/we-retail/ca/fr&/content/we-retail/us/es
-/content/we-retail/us/en=/content/we-retail/us/es
|
2025-04-01T06:36:40.845911 | 2017-11-04T15:06:21 | 271198586 | {
"authors": [
"davidjgonzalez"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:97",
"repo": "Adobe-Marketing-Cloud/asset-share-commons",
"url": "https://github.com/Adobe-Marketing-Cloud/asset-share-commons/pull/38"
} | gharchive/pull-request | #18 - Asset, Config and PagePredicate models now leverage the modelCache so they are not reinitialized for every component on the page that uses them.
@godanny86 would be good to have an extra sanity check that the sample content works and i didn't miss testing any of the components.
@godanny86 should be fixed
|
2025-04-01T06:36:40.851145 | 2020-10-23T13:40:04 | 728227882 | {
"authors": [
"kwin"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:98",
"repo": "AdobeDocs/cloudmanager-api-docs",
"url": "https://github.com/AdobeDocs/cloudmanager-api-docs/issues/9"
} | gharchive/issue | Fix swagger spec for response type of all requests returning binary responses
The swagger spec e.g. for getStepLogs (https://github.com/AdobeDocs/cloudmanager-api-docs/blob/5f2281c1abd493768d81f54ffab245cc02c02e1d/swagger-specs/api.yaml#L774) currently does not define any response type, therefore swagger generates a client method which returns void. Instead there should be a binary response defined as outlined in https://swagger.io/docs/specification/2-0/describing-responses/#response-that-returns-a-file.
Actually it seems that always JSON is returned like this
{"redirect":"https://cm0pl0va80stor0prd.file.core.windows.net/909da636-9119-40db-b11c-f8d23003f15e/deploy/step484607.log?sig=2qQOgXo3zAGwwnaih9sJvBd0GVcCzKw8ne3BrBAOKNk%3D&se=2020-10-23T14%3A53%3A26Z&sv=2018-03-28&rsct=application%2Foctet-stream&rscd=attachment%3B%20filename%3Ddeploy%2Fstep484607.log&sp=r&sr=f"}
|
2025-04-01T06:36:40.852731 | 2021-07-12T17:13:27 | 942269435 | {
"authors": [
"Alicesnk",
"chetanmeh"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:99",
"repo": "AdobeDocs/journey-optimizer.en",
"url": "https://github.com/AdobeDocs/journey-optimizer.en/issues/2"
} | gharchive/issue | Decision Rules for Offer used in AJO should only refer Profile attributes
Issue in ./help/using/offers/offer-library/creating-decision-rules.md
Decision rules for Offer which are supposed to be later included in AJO should only refer to profile attributes. They should not be using properties from xEvent. If such properties are used the offer validation would fail during Message Publishing step
Captured in DOCAC-6995
|
2025-04-01T06:36:40.856279 | 2020-06-21T05:53:31 | 642501357 | {
"authors": [
"mohsinhundekar",
"shashank132"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:100",
"repo": "AdobeXD/xd-to-flutter-plugin",
"url": "https://github.com/AdobeXD/xd-to-flutter-plugin/issues/56"
} | gharchive/issue | Hi, it would be great and really help all flutter developers if they can convert the components from xd to widgets like BUTTON , TEXTFIELD, LISTVIEW, ANIMATIONS like bounce effect etc..
Yes, it would be better if the "XD to Flutter" plugin could convert text fields (so that the user can click on a text field and type into it), buttons, and screen redirection after 5 seconds or a specific time duration.
|
2025-04-01T06:36:40.858813 | 2018-12-14T07:34:11 | 390997868 | {
"authors": [
"ashryanbeats",
"yoshikinoko"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:101",
"repo": "AdobeXD/xdpm",
"url": "https://github.com/AdobeXD/xdpm/issues/9"
} | gharchive/issue | Request: Validating the menus (uiEntryPoints)
Currently (v1.1.2), xdpm checks if the uiEntryPoints exists or not. However, it would be better to check the "structure" of uiEntryPoints.
For example, to check the nesting level of submenus, and required fields.
https://github.com/AdobeXD/xdpm/blob/02698548e351162c89679379a072c50c5822f106/lib/validate.js#L90-L93
xdpm helped save a lot of my development time, and I believe that this tool accelerates the speed of developing plugins for many developers.
Thanks,
Agreed. I think this is something we want to do, but we need to make the time to do it. It won't make it in time for today's update, but I want to acknowledge that it's an idea we should work on.
Close the issue.
Already resolved with #23 (and PR of #24).
|
2025-04-01T06:36:40.861583 | 2021-03-02T13:50:37 | 820037825 | {
"authors": [
"M-Davies",
"sxa"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:102",
"repo": "AdoptOpenJDK/openjdk-build",
"url": "https://github.com/AdoptOpenJDK/openjdk-build/issues/2508"
} | gharchive/issue | make-adopt-build-farm.sh does not handle mistyped names
What are you trying to do? Run make-adopt-build-farm with an alternate VARIANT set
Expected behaviour: If I mistype the VARIANT value it aborts
Observed behaviour: If I mistype VARIANT it defaults to hotspot
Any other comments:
Is that not what we want? When running without params set, it informs you that it is defaulting to Hotspot (like the other params):
(base) ➜ build-farm git:(master) ./make-adopt-build-farm.sh
ARCHITECTURE not defined - assuming x64
TARGET_OS not defined - assuming you want Darwin
JAVA_TO_BUILD not defined - defaulting to jdk11u
VARIANT not defined - assuming hotspot
FILENAME not defined - assuming jdk11u-hotspot.tar.gz
BUILD TYPE:
VERSION: jdk11u
ARCHITECTURE x64
VARIANT: hotspot
OS: darwin
SCM_REF:
Detecting boot jdk for: jdk11u
Found build version: 11
Required boot JDK version: 10
[ERROR] No local file detected at /Users/morgan/Documents/Repos/openjdk-build/build-farm/platform-specific-configurations/darwin.sh and PLATFORM_CONFIG_LOCATION is not set. Please set PLATFORM_CONFIG_LOCATION to a repository path of a platform config file (e.g. AdoptOpenJDK/openjdk-build/master/build-farm/platform-specific-configurations).
Would you prefer to induce a "hard" failure where the script will fail if it does not detect one or more of these params?
|
2025-04-01T06:36:40.866638 | 2021-03-22T11:15:57 | 837613608 | {
"authors": [
"sxa"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:103",
"repo": "AdoptOpenJDK/openjdk-infrastructure",
"url": "https://github.com/AdoptOpenJDK/openjdk-infrastructure/pull/2067"
} | gharchive/pull-request | github: Use correct file for AIX playbook
Required now that aix.yml no longer exists now that https://github.com/AdoptOpenJDK/openjdk-infrastructure/pull/2053 has been merged
Signed-off-by: Stewart X Addison<EMAIL_ADDRESS>
Checklist
[x] commit message has one of the standard prefixes
[ ] FAQ.md updated if appropriate
[ ] other documentation is changed or added (if applicable)
[ ] playbook changes run through VPC or QPC (if you have access)
[ ] for inventory.yml changes, bastillion/nagios/jenkins updated accordingly
https://github.com/AdoptOpenJDK/openjdk-infrastructure/pull/2068 has run the github checks using this PR and https://github.com/AdoptOpenJDK/openjdk-infrastructure/pull/2051 - linter seems happy so even though no linter checks have been done above I believe it's sufficient to prove that this fix is ok and can be approved+merged.
|
2025-04-01T06:36:40.868226 | 2019-05-16T13:27:07 | 444955764 | {
"authors": [
"jeevan264"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:104",
"repo": "Adoxio/xRM-Portals-Community-Edition",
"url": "https://github.com/Adoxio/xRM-Portals-Community-Edition/issues/111"
} | gharchive/issue | Email address must be specified in order to enable portal access
Hi,
I get the below error when trying to register a new user on the portal with both redeem invitation and register. Redeem works fine if i update the details under Web Authentication section ahead and redeem the invitation code.
"Email address must be specified in order to enable portal access"
Thanks
Further investigation revealed that the issue was caused by one of our internal plug-ins. Thanks
|
2025-04-01T06:36:40.871745 | 2017-10-15T21:42:28 | 265606916 | {
"authors": [
"amervitz"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:105",
"repo": "Adoxio/xRM-Portals-Community-Edition",
"url": "https://github.com/Adoxio/xRM-Portals-Community-Edition/issues/38"
} | gharchive/issue | Entity Lists not rendering Date Only fields properly
Configure a Date and Time field with the behavior of Date Only:
Set the field's value in the Dynamics 365 web client:
The Entity List's rendering of the field isn't following the expected behavior of displaying the value without a time zone conversion:
To consistently reproduce, set the local operating system's timezone to UTC-08:00:
Fixed by commit https://github.com/Adoxio/xRM-Portals-Community-Edition/commit/9b59d339f98e5e75ee222fe2b5185d4226e42a63.
|
2025-04-01T06:36:40.882105 | 2024-10-21T13:34:06 | 2602580057 | {
"authors": [
"Adriwin06",
"achillebourgault"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:106",
"repo": "Adriwin06/Ultimate-CommonUI-Menu-System",
"url": "https://github.com/Adriwin06/Ultimate-CommonUI-Menu-System/pull/20"
} | gharchive/pull-request | BP_FrontEndMenuCamera use instance rotation
BP_FrontEndMenuCamera now use the camera's current rotation when moving, instead of setting it to 0, 0, 0.
This enhancement allows the camera to maintain its intended orientation during transitions, improving the overall user experience.
I trust you I can't try it right now I'm not home 😉
|
2025-04-01T06:36:40.905336 | 2023-04-23T16:57:16 | 1680093638 | {
"authors": [
"Aedial",
"cwyuu"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:107",
"repo": "Aedial/novelai-api",
"url": "https://github.com/Aedial/novelai-api/issues/17"
} | gharchive/issue | how to make the generated text always end with a complete sentence?
I find that when I generate text on the web page, the output always ends with a complete sentence (ending with a "."), but when I use the API to generate it, it always ends with an incomplete sentence. I have changed "global_settings.generate_until_sentence = True" in the API, because I find that the request in the web page has "True" here but the API defaults to "false". But I found it didn't work. So I would like to know: how can I make the API generate text that always ends with a complete sentence? Can anyone answer my question? Thanks
Translated with www.DeepL.com/Translator (free version)
I see no bug here. Setting the generate_until_sentence to True does reflect on the request sent to the server when fed to the high_level.generate or high_level.generate_stream.
Note that it will continue sentence only if a period is actually found in the 20 tokens after the end. It means it is affected by context, preset, biases, bans, etc.
If you see an issue, it is likely incomplete copy of the settings on your side. For a deterministic comparison, set the top_k to 1 on both, and you should see the exact same content if both settings are the same.
Thanks for the answer! My problem is solved. I double-checked the "preset" parameter in the API and the parameter in the web request and found the difference between them. Finally I found that it was the "repetition_penalty" that was affecting the output. I used to think it had no effect. When I set the "repetition_penalty" from the default "2.25" to "1.148125", the output text ends with a complete sentence.
Translated with www.DeepL.com/Translator (free version)
It seems there is an obscure scaling done on repetition penalty, adjusting the value following 0.525*(X - 1)/7 + 1 (with X previous rep pen - formula extracted from minified JavaScript code). Seems kind of weird for it to exist frontend-side and not backend-side.
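Expressed as code, that scaling is a direct transcription of the quoted formula (a sketch; uiValue is assumed to be the repetition penalty entered in the UI):

public class RepetitionPenaltyScaling {
    // Mirrors the frontend-side adjustment described above: 0.525 * (X - 1) / 7 + 1
    static double scale(double uiValue) {
        return 0.525 * (uiValue - 1.0) / 7.0 + 1.0;
    }
}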
A fix is coming along with new sanity tests checking compliancy instead of just "nothing is broken".
|
2025-04-01T06:36:40.961881 | 2024-08-20T18:32:27 | 2476279143 | {
"authors": [
"AeroX2",
"bezmi",
"maehw"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:108",
"repo": "AeroX2/brother-cart-emulator",
"url": "https://github.com/AeroX2/brother-cart-emulator/issues/3"
} | gharchive/issue | Write of image binary file using PED Basic and cracker.py with radare2 fails
Hi @AeroX2 !
Thanks for your efforts in making the embroidery card binary images writable to files!
I've tried to get the toolchain running on my machine but have failed so far. Maybe you could give me some guidance here?
Versions:
PED-Basic v1.07 (the about dialog outputs a copyright message "2002 - 2005")
CardIO.dll seems to be version <IP_ADDRESS> from 22.09.2005
Python 3.12.5
radare2-5.9.4-w64
Windows 10
I don't really think that you had a different version of PED-Basic or the CardIO.dll as both seem pretty old - but you never know.
I somehow guess that I am running into Windows user rights issues. I had installed PED-Basic as the local admin and tried to execute pelite.exe via python cracker.py as normal user. Next, I tried to modify all access rights to all files in a way that a normal user should have full access. I then even copied everything into C:\Users\User\Documents\PED-Basic and it still does not work.
I'll share snippets from the log of python cracker.py, especially those that indicate warnings or errors:
WARN: Relocs has not been applied. Please use `-e bin.relocs.apply=true` or `-e bin.cache=true` next time
ERROR: Cannot debug file (pelite.exe) with permissions set to 0x7Reopening the original file in read-only mode.
INFO: Spawned new process with pid 13068, tid = 3036
INFO: File dbg://C:\\Users\\User\\Documents\\PED-Basic\\pelite.exe reopened in read-write mode
WARN: Relocs has not been applied. Please use `-e bin.relocs.apply=true` or `-e bin.cache=true` next time
13068
(13068) loading library at 0x00007FFFD20B0000 (C:\Windows\System32\ntdll.dll) ntdll.dll
...
WARN: Relocs has not been applied. Please use `-e bin.relocs.apply=true` or `-e bin.cache=true` next time
[Relocations]
vaddr paddr type ntype name
---------------------------------------
0x0003bda4 0x00036048 SET_32 3 CardIO.dll_public: void __thiscall CCardIO::constructor(int)
...
596 relocations
w \x90 @ 4273898
w \x90 @ 4273899
w \x90 @ 4273900
w \x90 @ 4273901
w \x90 @ 4273902
w \x90 @ 4273903
w \x90 @ 4310977
w \x90 @ 4310978
w \xeb @ 4275697
w \xeb @ 24027
ERROR: Cannot write. Use `omf`, `io.cache` or reopen the file in rw with `oo+`
w \xeb @ 24099
ERROR: Cannot write. Use `omf`, `io.cache` or reopen the file in rw with `oo+`
w \xeb @ 24174
ERROR: Cannot write. Use `omf`, `io.cache` or reopen the file in rw with `oo+`
WARN: base addr should not be larger than the breakpoint address
WARN: Cannot set breakpoint outside maps. Use dbg.bpinmaps to false
INFO: Continue until 0x00006ad5 using 1 bpsize
I guess that the ERROR: Cannot write. messages are what really causes trouble here. The file image.bin, however, is created with 64 kBytes but only contains 0xFF (which would be an empty flash EEPROM/memory). Is there anything I should inspect?
Any hints are very appreciated!
M.
PS: I've also tried to share my findings in the EEVblog forum thread https://www.eevblog.com/forum/reviews/brother-(possibly-also-bernina)-embroidery-machine-memory-cards/?all
Edit: Reinstalling PED Basic in C:\PED-Basic and running python cracker.py from cmd.exe with admin rights also did not help. So the caching part seems to be related to radare2 but I don't really know what to modify in the Python script.
Edit: Trying to use r = r2pipe.open('pelite.exe', ['-e', 'bin.cache=true', '-w']) gave me:
C:\PED-Basic>python cracker.py
ERROR: Cannot debug file (pelite.exe) with permissions set to 0x7Reopening the original file in read-only mode.
INFO: Spawned new process with pid 7076, tid = 6352
INFO: File dbg://C:\\PED-Basic\\pelite.exe reopened in read-write mode
ERROR: bin.relocs and io.cache should not be used with the current io plugin
7076
(7076) loading library at 0x00007FFFD20B0000 (C:\Windows\System32\ntdll.dll) ntdll.dll
...
ERROR: bin.relocs and io.cache should not be used with the current io plugin
Traceback (most recent call last):
File "C:\PED-Basic\cracker.py", line 35, in <module>
cardio_addr = int(re.findall(r"0x([0-9A-F]+)", cardios[-1])[0],16)
~~~~~~~^^^^
IndexError: list index out of range
Edit: I've given up for now. Some files or folders (especially the radare ones) seem partially write-protected. And even when I remove the protection as admin, it comes right back. Also: rolling back to radare2-5.8.2-w64, which may have been the version you used (released Jan 23, 2023), gives me different warnings/errors - but also does not work overall:
C:\PED-Basic>python cracker.py
WARN: invalid type
WARN: invalid type
WARN: invalid type
WARN: invalid type
WARN: invalid type
WARN: invalid type
WARN: invalid type
WARN: invalid type
ERROR: Cannot debug file (pelite.exe) with permissions set to 0x7Reopening the original file in read-only mode.
ERROR: Parse error @ line 30 (Invalid register type)
ERROR: Parse error @ line 30 (Invalid register type)
INFO: Spawned new process with pid 2732, tid = 2264
ERROR: Parse error @ line 30 (Invalid register type)
ERROR: Parse error @ line 30 (Invalid register type)
ERROR: Parse error @ line 30 (Invalid register type)
ERROR: Parse error @ line 30 (Invalid register type)
ERROR: Parse error @ line 30 (Invalid register type)
ERROR: Parse error @ line 30 (Invalid register type)
INFO: File dbg://C:\\PED-Basic\\pelite.exe reopened in read-write mode
WARN: invalid type
WARN: invalid type
WARN: invalid type
WARN: invalid type
WARN: invalid type
WARN: invalid type
WARN: invalid type
WARN: invalid type
...
I think the "ERROR: Cannot debug file (pelite.exe) with permissions set to 0x7Reopening the original file in read-only mode." might be the critical part here. I have no clues how to fix this under Windows... :(
Yeah you are right on with radare2, this script is quite old at this point and so hasn't really kept up to date with radare2 hence the issues.
The read write issue is actually because it is trying to write into the wrong addresses since it was unable to find the correct memory address for CardIO.dll.
I've just updated the script and tested it and it seems to work, though I'm testing within a virtual machine with ASLR turned off so your milage may vary.
The other issue I see you might be running into is if the program is still open in the background you also can't write into it, you need to close all instances of cracker.py and the program and then run it and it should hopefully work 🤞
Just tested on a non-VM machine and everything seemed to worked with the updated script
Windows 11
Python - 3.12.0
r2pipe - 1.9.4
PELite 1.07
Thank you very much for your fast reply and fix. Good news: it now also works for me!
I forked your repository and suggest a few minor changes, see #4 . I've done this primarily to help other users understand your code without doing their own research/debugging. Feel free to discard them. 😉
Thanks very much for submitting and publishing your code!
If you have time, I'd be happy if you could answer the following questions (no rush):
The card memory usage indicator doesn't work with the patches. Do you know why? Do you think there is an easy fix? Would be nice to have direct feedback instead of the "0%" bar.
Why is the output binary file 64 kiBytes (65'536 bytes)? If I get it right, the upper technical limit is 512 kiBytes? Do you think this is fixable as well?
What license is your repo (and my contributions)? I think you haven't added a LICENSE file yet. I'd be happy if this was open source (and maybe it kind of already is... never had repos without an explicit license).
🥳 🎉
I forked your repository and suggest a few minor changes, see https://github.com/AeroX2/brother-cart-emulator/pull/4 . I've done this primarily to help other users understand your code without doing their own research/debugging. Feel free to discard them. 😉
Thanks for the PR, definitely will make it easier for future users
The card memory usage indicator doesn't work with the patches. Do you know why? Do you think there is an easy fix? Would be nice to have direct feedback instead of the "0%" bar.
Unlikely to be an easy fix, I suspect the reason for the 0% is because I'm not actually writing to a card, only looking at the memory and taking the data that would be written to the card, the progress bar is likely tied to the actual card writing.
Why is the output binary file 64 kiBytes (65'536 bytes)? If I get it right the upper technial limit is 512 kiBytes? Do you think this is fix-able as well?
Two things: one is that I pick the smallest card size of 64 kB; you can see in the README.md that I pick the address 0x10005E6B to override, and this corresponds with 64 kB. For 512 kB, you'll need address 0x10005E7D (probably).
Second is that pxj 0x10000 in cracker.py is dumping out only 64kb worth of data, so changing those will allow you to dump more.
What license is your repo (and my contributions)? I think you haven't added a LICENSE file yet. I'd be happy if this was open source (and maybe it kind of already is... never had repos without an explicit license).
I'll add a MIT license, that way people are free to use it the way they want
Hi James,
thanks for everything:
Accepting my PR
Keeping the "tested configurations" sections in README and clarifying your setup
Adding the LICENSE file
Being patient, nice and responsive to my requests!
I just got a brother PED-Basic in the mail. I had not planned to buy one, but that one was even kind of affordable. This should make some research easier in case I keep being motivated.
It really seems that every time the usage bar is updated, the PED card reader is accessed (red busy LED turns off for a short moment). The blue color part is the amount of memory already reserved by the PES files "copied to the right side". The cyan color part is the extra amount of memory "copying" the selected PES file on the left would add on top:
Interestingly, the GUI seems to offer write-only access to the card. I have no clue if my card had been written before or erased in a way. But even after writing some of the samples and re-plugging the card... nothing on the right appears without copying PES files from left to right. So, I guess that the card is just completely wiped and the generated binary is written... w/o data being read. I guess that at least some manufacturer/model data from the EEPROM/flash memory should be read to determine the card size - but that's a wild guess.
Summing up: thanks for your help! Rowing this boat together definitely makes it more fun!
Hi again,
I am trying to print the disassembly of functions in pelite.exe and CardIO.dll to better understand the context of the patches/your screenshots so that I can extend cracker.py. So I am asking you to shed some light on the darker spots I do not fully understand yet.
I've pushed a commit on the fork of your repo:
https://github.com/maehw/brother-cart-emulator/commit/cf5b36798d20327e7ab338c64bfde90991501025
When executed, I get the following output:
Printing imported CardIO functions...
signature: void__thiscallCCardIO::constructor(int), address: 0x00436048
32 bit word: 0x10001980
signature: enumCIOError__thiscallCCardIO::ChkCardWriterConnected(int,unsignedchar*,int*), address: 0x0043604c
32 bit word: 0x10001df1
signature: enumCIOError__thiscallCCardIO::Receive(classCObArray*,int,voidconst*,voidconst*), address: 0x00436050
32 bit word: 0x10001ca1
signature: void__thiscallCCardIO::ResetCardID(void), address: 0x00436054
32 bit word: 0x10001940
signature: enumCIOError__thiscallCCardIO::Send(classCObArray&,voidconst*,voidconst*,enumCCardAtrbType*), address: 0x00436058
32 bit word: 0x10001d0f
signature: enumCIOError__thiscallCCardIO::ChkCardVolume(classCObArray&,int&,int&,enumCCardAtrbType*), address: 0x0043605c
32 bit word: 0x10001d80
So it seems that the exported functions are known to r2 here. I used r2 command ii.
As the address values are +4 bytes each, I guess this is rather a function pointer table (32-bit addresses from good ol' 32-bit world?) somewhere in memory where the lib is loaded during runtime.
When having a look at the values at those addresses pxw 4 @ ..., I get those 0x1000____ addresses. Where does this offset come from? I've also spotted it in your code 0x10000000. Unfortunately, I wasn't able to get a dump of the functions there (command pdf @ ...).
What am I missing here?
I'd also like to understand why you chose cracker.py to run until address 0x6ad2/0x6b0e. Without the disassembly I am lacking context here.
Also, I'd like to explore the code area more where the different flash sizes are used.
How did you get the GUI view? Is it another RE tool? Preferably, I'd like to get the disassemblies in the context of running cracker.py.
Your help is very much appreciated!
Cheers
When having a look at the values at those addresses pxw 4 @ ..., I get those 0x1000____ addresses. Where does this offset come from? I'v also spotted it in your code 0x10000000. Unfortunately, I wasn't able to get a dump of the functions there (command pdf @ ...).
Hmm, not sure why pdf might not be working, but I'd probably recommend doing it in Ghidra since that tool is much more friendly. I only used r2pipe because it was the only scriptable debugger that I knew of and, to be honest, it was a bit of a pain to work with.
As for the 0x10000000, that address is the default address Windows uses for any DLLs loaded by a program that aren't rebased to a different address, so in this case CardIO.dll.
https://devblogs.microsoft.com/oldnewthing/20141003-00/?p=43923#:~:text=Since the operating system itself,you start colliding with DLLs.
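In other words, the 0x1000____ values in the import table are just offsets inside the DLL plus that default base. A tiny sketch of the mapping (assuming CardIO.dll really is loaded un-rebased at 0x10000000):

```python
# Assumption: CardIO.dll is loaded at its default, un-rebased image base.
DLL_BASE = 0x10000000

def offset_to_va(offset: int) -> int:
    """Offset as shown in Ghidra/the README -> runtime virtual address."""
    return DLL_BASE + offset

def va_to_offset(va: int) -> int:
    """Runtime virtual address (e.g. from the import table) -> offset inside the DLL."""
    return va - DLL_BASE

print(hex(offset_to_va(0x1980)))      # 0x10001980, matching the constructor entry above
print(hex(va_to_offset(0x10001df1)))  # 0x1df1
```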
I'd also like to understand why you chose cracker.py to run until addresse 0x6ad2/0x6b0e. Without the disassembly I am lacking context here.
Yeah, all good, reverse engineering is quite difficult and you have to do a bit of guesswork. If you look at offset 0x6b11, there is a function which is setting up the Brother copyright header; this isn't writing the embroidery data, which comes later.
But I know that this is the function that writes into the card data memory location, so I can break at this point and extract it with p8j 4 @ rcx (technically I could have also done this at 0x6b0e, but I felt it was just easier to do at this location because I knew the address was in register rcx).
The 0x6b11 function is only called by one other function (offset 0x6ac8), and there are three other functions called here which write the embroidery data. So offset 0x6b0e is the ret instruction and is the first instruction where I know that the embroidery data is all written to memory and I can safely extract it.
https://github.com/user-attachments/assets/c987ba6e-fbaa-4aeb-a996-2f01ca683887
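Put together, the extraction step looks roughly like this (a hedged sketch rather than a line-for-line copy of cracker.py; that the write routine lives in CardIO.dll at its default base and that rcx holds the buffer pointer are assumptions taken from the explanation above):

```python
import r2pipe

DLL_BASE = 0x10000000
RET_OFFSET = 0x6b0e                       # ret of the function that has written the card data

r2 = r2pipe.open("pelite.exe", flags=["-d"])
r2.cmd(f"db {DLL_BASE + RET_OFFSET:#x}")  # break on the ret instruction (assumes default DLL base)
r2.cmd("dc")                              # continue until the embroidery data has been written
pointer_bytes = r2.cmdj("p8j 4 @ rcx")    # 4 bytes at rcx, i.e. the card-buffer pointer (assumption)
print([hex(b) for b in pointer_bytes])
```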
How did you get the GUI view? Is it another RE tool? Prefereably, I'd like to get the disassemblies in the context of running cracker.py
I use a tool called Ghidra (https://ghidra-sre.org/), x64dbg (https://x64dbg.com/), along with retsync (https://github.com/bootleg/ret-sync).
You probably just need Ghidra, which can decompile the program into a C-ish program state but if you want to see what exactly is happening at each state you need to step through it with a debugger (x64dbg) and being able to look back and forth at what is happening with the debugger and Ghidra is where ret-sync comes in.
Hi James / @AeroX2 ,
thanks for your detailed explanations.
This gave me new insights as I am an embedded software developer and not very experienced with application development - at least not on the reverse engineering side of things.
Using Ghidra alone for analysis of the DLL's disassembly did the trick. If I find time and need for setting up the other tools, I might write you again.
In the meantime, I've added support for multiple card sizes - see pull request #5 .
Would be great to get the "progress bar" (memory card usage indicator) feature working as I've seen single PES design pattern files overrunning the default 64 kiBytes (see explanation in my pull request).
HTH / Cheers
Hello again,
I've dived deeper into the disassembled code of both pelite.exe and CardIO.dll.
Unfortunately, I haven't been able to locate the code which calculates/draws the memory card usage indicator. Could you maybe give me some more guidance here to make this happen?
So far, I've "only" used Ghidra for static analysis.
I've mainly used your patches to get some context and the strings I found (many printf format specifiers). I've also seen some calculations which I thought were suspicious - without any luck.
The function at 0x00416361 seems to format the relative and absolute sizes (%3d%% resp. %5s%s X %5s%s) of the selected pattern which are displayed in the lower left corner of the PED-Basic window. I've verified this by replacing single characters in those format specifiers.
I've also found another percentage value being formatted in the function at 0x0041c67b -- it is the relative pattern size displayed on the right window side after copying it to the card (format specifier %d%%).
This was interesting, but it did not lead me in the right direction... because the 100% value is never re-labeled. Only the blue/cyan rects are displayed. This should also happen when a pattern is selected on the left-hand side... but I did not find an entry function (on-item-selection callback?!) where this is hooked.
In addition, I've also read @bezmi's comment https://github.com/AeroX2/brother-cart-emulator/issues/1#issuecomment-1435530216 -- I also see a corrupted string in the output binary file (ÿrother_sewing) - do you have an idea what could cause this issue? Did the corrupt images work with your machine?
Another hint: in the patch descriptions of image-dumper/README.md, you call it "Bypass ChkCardVolume". Strictly speaking the method seems to be called but the result value is ignored and your patches modify the code to make it look like everything turns out as expected. I found this pretty misleading.
This is fun.. and tedious at the same time.
Hi James,
You probably just need Ghidra, which can decompile the program into a C-ish program state but if you want to see what exactly is happening at each state you need to step through it with a debugger (x64dbg) and being able to look back and forth at what is happening with the debugger and Ghidra is where ret-sync comes in.
I'd be interested in a brief description of how to set the three tools up in the synced mode you described. Can you also see the decompiled C code? I can run pelite.exe from x64dbg, but I have no clue how to start a Ghidra session for the dynamic analysis and also no idea what to do with the ret-sync release file (which seems to be a single *.dp64 resp. *.dp32 file).
Have a nice sunday!
This was interesting, but it did not lead me in the right direction.. because the 100% value is never re-labeled. Only the blue/cyan rects are displayed. This should also happen when a pattern is selected on the left hand side.. but I did not find an entry function (on-item-selection-callback?!) where this is hooked.
I suspect this is because it is using the Windows API for progress bars, so there isn't a "100%" label in the program, just a function that advances the ticks of the progress bar, but I haven't dived into this myself.
In addition, I've also read @bezmi's comment https://github.com/AeroX2/brother-cart-emulator/issues/1#issuecomment-1435530216 -- I also see a corrupted string in the output binary file (ÿrother_sewing) - do you have an idea what could cause this issue? Did the corrupt images work with your machine?
I don't think I've seen a ÿrother_sewing, just ÿbrother_sewing, but I suspect it would still work since I don't think the headers are really read by the machine, just the offsets.
I'd be interested in a brief description how to set the three tools up in the sync'ed mode you described.
ret-sync just makes the debugger and Ghidra talk to each other so that the line that you are currently breakpointing on is the same line that is highlighted in Ghidra. This setup makes it a touch easier to reverse engineer but you can still look at the line numbers manually and just match them up.
Can you also see the decompiled C code?
Ghidra shows decompiled C code but it is still heavily obfuscated and not easy to navigate.
I can run pelite.exe from x64dbg but I have no clue how to start a Ghidra session for the dynamic analysis and also no idea what to do with the ret-sync release file (which seems to be a single *.dp64 resp. *.dp32 file).
You can have a look at the ret-sync page for their Ghidra (https://github.com/bootleg/ret-sync?tab=readme-ov-file#ghidra-usage) instructions
(which seems to be a single *.dp64 resp. *.dp32 file).
These are the plugin files that x64dbg uses; you need to drag and drop them into the x64dbg plugin folder.
Hey @maehw, have you tried with the vikant writer workflow from my repo? There is an ipython notebook that shows how to locate the thumbnail data and the python script emulates a vikant writer so that you can create flash images from custom stitch data (you don't need any hardware to do it). I also have notes which I think are detailed enough to be able to replace thumbnails and stitch data in a given vikant/PED file with our own stuff using python directly. Let me know if you want to look at any of those binary files with custom stitch data.
I started with reverse engineering the binaries, but it gets complicated really fast and it was actually more instructive to jump around the card data in python.
Hi @AeroX2 and @bezmi ,
thank you for your replies.
I've dived deeper into CardIO.dll and also USB communication with the brother's card reader/writer.
Actually, it's really the case that "ÿbrother_sewing" (where ÿ is 0xFF) is written into the card's memory (at offset 0x170). The very last write operation (over)writes the single character 'b' at 0x170 (not at 0x100 as mentioned by you, @bezmi , in the linked issue). So maybe the character is wrong in the binary output file as it comes out of cracker.py. This must be some special handling by either the DLL or the standalone application. It actually may prevent the machine from reading the card. I also had seen several string comparisons in the code.
@AeroX2 : I haven't tried the x64dbg + Ghidra + ret-sync combination (yet). But thanks for the instructions!
@bezmi : I couldn't get the Vikant-based workflow running. That's what I wrote in the eevblog.com forum:
Other things I could not get working: the Vikant card emulator (another Python script)... or rather I could not get it working with the "Ultimate Explorer for Brother" (serial port version). Even though I created virtual COM ports with com0com, only the Python side connected... and Vikant's Ultimate Explorer didn't want to open or even find the COM port. So this approach currently won't work for me to write my own card binaries from PES files.
So it seems that I already had issues with getting the virtual COM port working with the software (that was before I got myself a brother PED card reader/writer + re-writeable memory card which works well with brother's PED Basic software). Which version of the Vikant software have you been using? I'd like to have a working toolchain on Linux/MacOS even though the patched PED Basic approach (running under Windows) works - and it works nicely!
Hi all.
New findings & fixes (#7):
I think the card memory sizes have been wrong, see #6.
Writing "ÿrother_sewing" instead of "brother_sewing" will let my machine (brother PE-150) ignore the memory card and even prevent any operation of the machine at all and keep beeping... - so I fixed it in cracker.py data postprocessing. The puzzle about different locations of the string is now also solved for me: it depends on the hoop size of the character is written at 0xC0, 0x100 or 0x170. According to the "code" 0x280 and 0x28E would also be valid locations - but I don't know under what circumstances - haven't seen them being used by PED Basic. The b is the final character which is written to EEPROM/flash - so the machine could also use it to check if the memory write operation had been completed successfully or if it would likely face a corrupt card image.
HTH
Thanks for your help. I'd be okay if this issue was closed. I'd then open new issues for other, smaller subtopics - if you're okay with that.
@maehw I was just using the version from their website: https://vikant-emb.com/downloads I haven't checked to see if it still works though. I can't think of any issues I had other than having to run as administrator to access the com ports. Glad to know that PED Basic worked for you.
|
2025-04-01T06:36:40.981631 | 2021-08-16T18:59:38 | 972015533 | {
"authors": [
"Chris-Schnaufer"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:109",
"repo": "AgPipeline/issues-and-projects",
"url": "https://github.com/AgPipeline/issues-and-projects/issues/531"
} | gharchive/issue | Added Python tests GitHub Actions to Atlana (DPP UI)
Task to do
Add python tests actions for PRs and merges
Reason
Help identify issues before and after code is merged
Result
GitHub actions to test python code are run on PRs and merges
Merged commit: https://github.com/AgPipeline/Atlana/tree/ec1ede3e3b3df4127cc2b90d86fb097782ef0f8c
|
2025-04-01T06:36:40.991977 | 2023-09-30T19:32:49 | 1920375769 | {
"authors": [
"ymw0407"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:110",
"repo": "AgainIoT/Open-Set-Go_server",
"url": "https://github.com/AgainIoT/Open-Set-Go_server/issues/98"
} | gharchive/issue | 🐛 [BUG] - critical security problems from vm2
Browsers
Firefox, Chrome, Safari, Microsoft Edge, Opera
OS
Windows, Linux, Mac
Description
As dependabot told us, vm2 has a fatal problem. Therefore, starting with the vm2 module in question, the <EMAIL_ADDRESS> module must also be replaced/modified.
https://github.com/TooTallNate/proxy-agents/issues/240
https://github.com/TooTallNate/proxy-agents/pull/224
As in the issue and pull request above, the vm2 used in proxy-agent's degenerator module has been removed. Therefore, it seems that reinstalling will solve the problem.
Reproduction URL
https://github.com/AgainIoT/Open-Set-Go_server/security/dependabot/5
Reproduction Steps
https://github.com/AgainIoT/Open-Set-Go_server/security/dependabot/5
Solutions
https://github.com/TooTallNate/proxy-agents/issues/240
https://github.com/TooTallNate/proxy-agents/pull/224
As in the issue and pull request above, the vm2 used in proxy-agent's degenerator module has been removed. Therefore, it seems that reinstalling will solve the problem.
Screenshots
No response
I found a solution in this issue:
https://github.com/nest-modules/mailer/issues/723
|
2025-04-01T06:36:41.014477 | 2023-08-13T03:44:13 | 1848386548 | {
"authors": [
"LoopIssuer",
"xiayangqun"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:111",
"repo": "AgoraIO-Extensions/Agora-Unity-Quickstart",
"url": "https://github.com/AgoraIO-Extensions/Agora-Unity-Quickstart/issues/203"
} | gharchive/issue | Unity Android and iOS (possible) phone overheat
I'm using the Agora SDK (together with the MediaPipe Unity Plugin) and everything works fine. Unfortunately, our client believes that the application overheats the phone (Android and iOS) too quickly (they did not provide data) and that users therefore stop using it.
Is there anything that can be done to make the phones less hot and drain less battery?
Thanks in advance
Reduce the resolution and frame rate of the video, which will reduce power consumption.
|
2025-04-01T06:36:41.016481 | 2020-04-02T06:56:32 | 592402016 | {
"authors": [
"ironynet",
"kadariyaujwal"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:112",
"repo": "AgoraIO/Flutter-SDK",
"url": "https://github.com/AgoraIO/Flutter-SDK/issues/99"
} | gharchive/issue | Add support for setting video encoder configuration.
Hello, can you please add an option to set the video encoder configuration? If it's already available, let me know how.
See the API documentation:
https://pub.dev/documentation/agora_rtc_engine/latest/agora_rtc_engine/AgoraRtcEngine/setVideoEncoderConfiguration.html
https://pub.dev/documentation/agora_rtc_engine/latest/agora_rtc_engine/VideoEncoderConfiguration-class.html
|
2025-04-01T06:36:41.058373 | 2024-03-03T20:02:32 | 2165532970 | {
"authors": [
"AhmedLSayed9",
"FluffyDiscord",
"vasilich6107"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:114",
"repo": "AhmedLSayed9/dropdown_button2",
"url": "https://github.com/AhmedLSayed9/dropdown_button2/issues/244"
} | gharchive/issue | DropDown button 2 latest beta fails with empty dropdown
This code fails with
DropdownButtonHideUnderline(
child: DropdownButton2<String>(
value: null,
items: [],
),
)
RangeError (index): Invalid value: Valid value range is empty: 0
DropdownButton2State.build (package:dropdown_button2/src/dropdown_button2.dart:687:30)
I don't have this error, but it won't open nonetheless. I want it to open, because I am using dropdownSearchData and creating new items when nothing is found.
This no longer occurs in latest beta version.
Feel free to open a new issue if it still exists.
~I don't have this error~ (it does trigger when manually opening using dropdownKey.currentState!.callTap()), but it won't open nonetheless. I want it to open, because I am using dropdownSearchData and creating new items when nothing is found.
Let's discuss this at #257
Let's discuss this at #257
2025-04-01T06:36:41.060642 | 2021-10-04T11:20:44 | 1015052147 | {
"authors": [
"AhmedRaja1",
"agungd3v"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:115",
"repo": "AhmedRaja1/Hacktoberfest",
"url": "https://github.com/AhmedRaja1/Hacktoberfest/pull/256"
} | gharchive/pull-request | add bulk email sender
an API for sending bulk emails using nodejs and express
@agungd3v can you please add some descrtiption!
@all-contributors please add @agungd3v for code
@agungd3v please star the repo!
@AhmedRaja1 I'm done adding description
|
2025-04-01T06:36:41.082841 | 2021-09-16T11:05:55 | 998080956 | {
"authors": [
"SangwonOh",
"yadavendra15"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:116",
"repo": "AirenSoft/OvenLiveKit-Web",
"url": "https://github.com/AirenSoft/OvenLiveKit-Web/issues/1"
} | gharchive/issue | Delay in Start the Streaming of Screen Share
Hi Oven Team,
When I try to start screen-share streaming, it takes some time for the stream to start.
I couldn't find the reason for the delay at the start.
Hoping this issue gets fixed soon.
Screen capture is now stable. Closing this issue. (refer https://github.com/AirenSoft/OvenLiveKit-Web/issues/10)
|
2025-04-01T06:36:41.095921 | 2024-01-26T10:27:56 | 2101967556 | {
"authors": [
"aindriu-aiven",
"muralibasani"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:117",
"repo": "Aiven-Open/klaw",
"url": "https://github.com/Aiven-Open/klaw/pull/2244"
} | gharchive/pull-request | User deletion with admin permissions
Linked issue
Resolves: #2122
What kind of change does this PR introduce?
[ ] Bug fix
[ ] New feature
[X] Refactor
[ ] Docs update
[ ] CI update
What is the current behavior?
Describe the state of the application before this PR. Illustrations appreciated (videos, gifs, screenshots).
Currently it is not possible to delete a user who has superuser permissions.
What is the new behavior?
Describe the state of the application after this PR. Illustrations appreciated (videos, gifs, screenshots).
remove check for superuser permission while deleting
only a user with the FULL_ACCESS_USERS_TEAMS_ROLES permission can delete another user with FULL_ACCESS_USERS_TEAMS_ROLES
send mail to both users (the removed user and the remover) and the Klaw admin
audit log
Other information
Additional changes, explanations of the approach taken, unresolved issues, necessary follow ups, etc.
Requirements (all must be checked before review)
[ ] The pull request title follows our guidelines
[ ] Tests for the changes have been added (if relevant)
[ ] The latest changes from the main branch have been pulled
[ ] pnpm lint has been run successfully
When I delete a superuser I get a "Delete User Request: SUCCESS" message; I think it should just say "Delete User: SUCCESS".
Because of this wording, I initially thought I had to approve the deletion request. It also says this on create as well: "Create User Request: SUCCESS".
|
2025-04-01T06:36:41.099262 | 2021-11-05T14:42:13 | 1045923279 | {
"authors": [
"10lulu",
"Agaloth",
"Ajneb97",
"srbeastman"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:118",
"repo": "Ajneb97/PlayerKits",
"url": "https://github.com/Ajneb97/PlayerKits/issues/1"
} | gharchive/issue | [SUGGESTION] Custom model data for main gui things as arrow
As discussed on Spigot, it's about the back button. I know some people who need the same feature, so I created this to make it easier and less spammy on Spigot :)
Oh thx, I was about to make one as well. We rly need this feature
I started uploading my plugins to github and making them open-source so you can add your own specific features ;)
This is such a good idea!!
|