id (string, length 4 to 10) | text (string, length 4 to 2.14M) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict) |
---|---|---|---|---|---|
826954335 | Clean up cmake/config
Remove unused cmake/JoinPaths.cmake
Remove unused cmake/OpenEXRLibraryDefine.cmake
Clean up and reformat comments
Signed-off-by: Cary Phillips [email protected]
I think I may have botched the merge when resolving the conflicts here. Can someone confirm that the squash and merge here will still leave a linear history?
I noticed that the have large stack support was set after the config file was generated while I was making the changes for symbol visibility configure options, so that should be addressed. There are a number of other cleanups I would like to see eventually (not needed prior to 3.0 release), such as I don't believe we need IexConfig.h or IlmThreadConfig.h any more...
This PR removes two unused files and cleans up some comments, no need to go
into the 3.0.0 release. It has conflicts that need to be resolved anyway,
it may be better to start from scratch, I'll close it and open another
later.
On Mon, Mar 15, 2021 at 3:50 AM Kimball Thurston @.***>
wrote:
I noticed that the have large stack support was set after the config file
was generated while I was making the changes for symbol visibility
configure options, so that should be addressed. There are a number of other
cleanups I would like to see eventually (not needed prior to 3.0 release),
such as I don't believe we need IexConfig.h or IlmThreadConfig.h any more...
—
You are receiving this because you authored the thread.
Reply to this email directly, view it on GitHub
https://github.com/AcademySoftwareFoundation/openexr/pull/957#issuecomment-799319344,
or unsubscribe
https://github.com/notifications/unsubscribe-auth/AFC3DGLWBEDAPDSSYIHKKV3TDXQ7TANCNFSM4Y454MTQ
.
--
Cary Phillips | R&D Supervisor | ILM | San Francisco
| gharchive/pull-request | 2021-03-10T02:13:27 | 2025-04-01T06:36:40.372965 | {
"authors": [
"cary-ilm",
"kdt3rd"
],
"repo": "AcademySoftwareFoundation/openexr",
"url": "https://github.com/AcademySoftwareFoundation/openexr/pull/957",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
740797085 | Create overview of communication tools and calendar processes and options for ASWF projects
Signed-off-by: John Mertic [email protected]
Agreed @jfpanisset - feel free to make the edits in there.
| gharchive/pull-request | 2020-11-11T14:12:57 | 2025-04-01T06:36:40.374453 | {
"authors": [
"jmertic"
],
"repo": "AcademySoftwareFoundation/tac",
"url": "https://github.com/AcademySoftwareFoundation/tac/pull/207",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
218249270 | Update to instructions
This is to update the instructions to inform users of the requirement to open port 80 to complete the stack installation, and instructions on how to achieve this.
Thanks @stephankfolkes for the contribution. However, this has been added to the readme already (https://github.com/Accenture/adop-docker-compose).
| gharchive/pull-request | 2017-03-30T16:10:43 | 2025-04-01T06:36:40.441319 | {
"authors": [
"RobertNorthard",
"stephankfolkes"
],
"repo": "Accenture/adop-docker-compose",
"url": "https://github.com/Accenture/adop-docker-compose/pull/206",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1335240470 | Improve Pause and Play controls
The aria labels for slide controls could be clearer by using "Pause Carousel" instead of "Pause" and "Play Carousel" instead of "Play".
Because this looked like it qualified as one of the simpler updates, I went ahead and created a Pull Request.
| gharchive/issue | 2022-08-10T21:50:22 | 2025-04-01T06:36:40.442347 | {
"authors": [
"jenlampton"
],
"repo": "Accessible360/accessible-slick",
"url": "https://github.com/Accessible360/accessible-slick/issues/77",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1853782141 | feat(ws): Add a WebSocket server option
Requires #12
me omw to lgtm my own code >:)
| gharchive/pull-request | 2023-08-16T19:21:04 | 2025-04-01T06:36:40.445605 | {
"authors": [
"AceLikesGhosts"
],
"repo": "AceLikesGhosts/ytm-rpc",
"url": "https://github.com/AceLikesGhosts/ytm-rpc/pull/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2373655107 | Add Day3 stuff
Contains everything from the last day, before I almost lost it all
Looks like this won't work, gonna need a force push because PICO being pico
| gharchive/pull-request | 2024-06-25T21:02:22 | 2025-04-01T06:36:40.446730 | {
"authors": [
"Achie72"
],
"repo": "Achie72/druid-dash-2",
"url": "https://github.com/Achie72/druid-dash-2/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2646325458 | feat: add login with phone return code
Based on the framework, added phone number verification, returning a fixed verification code, and wrote the middleware that handles the verification code part.
Approved, bro.
| gharchive/pull-request | 2024-11-09T17:08:10 | 2025-04-01T06:36:40.448468 | {
"authors": [
"Goblin-master",
"dbinggo"
],
"repo": "AchoBeta/achobeta-pluto-backend",
"url": "https://github.com/AchoBeta/achobeta-pluto-backend/pull/13",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
377096221 | Fix some typos and simplify age calculation a bit.
I think using DayOfYear makes the code a little easier to read compared to checking both month and day.
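To illustrate the comparison (the library itself is .NET; this Python sketch and its handling of edge cases are illustrative assumptions, not the project's code):

```python
from datetime import date

def age(birth: date, today: date) -> int:
    # Comparing day-of-year replaces separate month/day checks:
    # subtract a year if the birthday has not yet occurred this year.
    # (Sketch only; Feb 29 birthdays need extra care in real code.)
    years = today.year - birth.year
    if today.timetuple().tm_yday < birth.timetuple().tm_yday:
        years -= 1
    return years
```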
Thank you, much nicer!
| gharchive/pull-request | 2018-11-03T21:47:06 | 2025-04-01T06:36:40.453067 | {
"authors": [
"PeterOrneholm",
"viktorvan"
],
"repo": "ActiveLogin/ActiveLogin.Identity",
"url": "https://github.com/ActiveLogin/ActiveLogin.Identity/pull/40",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
467985243 | fix(versions): update Activiti/activiti-cloud-connectors versions into develop
UpdateBot pushed maven dependency: org.activiti.cloud.common:activiti-cloud-service-common-dependencies to: 7.1.43
UpdateBot commands:
updatebot push-version --kind maven org.activiti.cloud.common:activiti-cloud-service-common-dependencies 7.1.43
| gharchive/pull-request | 2019-07-15T07:48:51 | 2025-04-01T06:36:40.460624 | {
"authors": [
"jx-activiti-cloud"
],
"repo": "Activiti/activiti-cloud-connectors",
"url": "https://github.com/Activiti/activiti-cloud-connectors/pull/148",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
366775156 | update org.activiti.cloud.query:activiti-cloud-query-dependencies to 7.0.20
UpdateBot pushed maven dependency: org.activiti.cloud.query:activiti-cloud-query-dependencies to: 7.0.19
UpdateBot commands:
updatebot push-version --kind maven org.activiti.cloud.query:activiti-cloud-query-dependencies 7.0.19
UpdateBot commands:
updatebot push-version --kind maven org.activiti.cloud.query:activiti-cloud-query-dependencies 7.0.20
UpdateBot commands:
updatebot push-version --kind maven org.activiti.cloud.query:activiti-cloud-query-dependencies 7.0.21
UpdateBot commands:
updatebot push-version --kind maven org.activiti.cloud.query:activiti-cloud-query-dependencies 7.0.22
UpdateBot commands:
updatebot push-version --kind maven org.activiti.cloud.query:activiti-cloud-query-dependencies 7.0.22
| gharchive/pull-request | 2018-10-04T12:53:04 | 2025-04-01T06:36:40.464609 | {
"authors": [
"jx-activiti-cloud"
],
"repo": "Activiti/activiti-cloud-dependencies",
"url": "https://github.com/Activiti/activiti-cloud-dependencies/pull/111",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
468705279 | update [\s+tag: (.*)] to 0.0.136
UpdateBot pushed regex: [\s+tag: (.*)] to: 0.0.136
UpdateBot commands:
updatebot push-regex --regex \s+tag: (.*) --value 0.0.136 --exclude --previous-line \s+ repository: activiti/activiti-modeling-app charts/activiti-cloud-modeling/values.yaml
| gharchive/pull-request | 2019-07-16T14:58:27 | 2025-04-01T06:36:40.485349 | {
"authors": [
"jx-activiti-cloud"
],
"repo": "Activiti/activiti-cloud-modeling",
"url": "https://github.com/Activiti/activiti-cloud-modeling/pull/316",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
273152159 | Create a Docs Repository
Summary
As discussed in issue #355, we should consider moving away from the GitHub Wiki for documentation in the long term. One alternative would be to create a new AdamsLair/duality-docs repo that hosts all documentation.
Analysis
The GitHub Wiki lacks some features and also has some disadvantages from the management side:
Pages need to have unique names and folder structures are ignored.
Users can't upload images and other media.
Lack of multi-version support (crucial for v3.0).
Lack of multi-language support.
No branches for bigger edits, no PRs or reviews.
Either everyone can edit, or only people with direct push access.
History not available on a global level.
All of these problems are solved by moving docs to a new AdamsLair/duality-docs repository.
The repository's main readme file acts as the home page for documentation.
A proper directory structure needs to be defined.
Anticipate support for multiple versions co-existing.
Anticipate support for multiple languages co-existing.
Maybe something like Pages/en/vX/...?
Investigate the possibility of turning on the github pages feature for the master branch and generating a nice docs website from the markdown files.
This issue is up for grabs. Let me know if you're interested in getting some work done on this. Ideally, do the first prototypes in a public repo in your own account, which we can later either transfer or re-create in the AdamsLair organization.
I think creating a new repository is best for management.
As things stand the duality project is actually pretty heavy, due to binaries being in the repo or otherwise, so we ought to put the documentation in a separate repo for that reason if nothing else.
Yep. And even if it was only for keeping it tidy, I'd still agree 100%.
Assuming that route is taken, I can start transferring our documentation into a repo and put together a gh-pages structure that seems intuitive and then we can transfer the repo over to the AdamsLair space. It seems like this would be pretty easy to do, I've transferred repos previously and it's seamless.
Sounds good! GitHub also allows you to configure the repo so all the docs can be on the regular master branch, so we don't have to use this awkward gh-pages naming convention.
gh-pages will generate the documentation pages as we commit/merge to master on the documentation repo, which means we can focus on creating content using markdown and create a custom theme over time to ensure that it ultimately conforms to the Duality branded color and stylistic choices.
👍
Overall, however, we ought to be strict about making our contributions in markdown, as anything else only makes maintenance more complicated.
Yep, agree completely. We should limit the docs to markdown for maintenance reasons, and also to strictly separate content from design / layout.
Please let me know if there are other elements you would like to have me look into.
I think it would be a good thing to have a first "prototype", and then we can take it from there and talk about how to proceed.
Sounds good. I'll get it going this evening. Expect a link when you pop on tomorrow.
I have cloned the wiki repo and shoved it into my repo for testing.
The site is currently published at https://bobgneu.github.io/duality-documentation
Got the index page into place. Gonna take a break for the evening to do some thinking about organization.
I have cloned the wiki repo and shoved it into my repo for testing.
Okay, let's iterate on this. 👍
First thing that we need to improve vs. the raw wiki export is directory structure. The root folder shouldn't have any content, and we should anticipate docs for multiple Duality versions and, potentially, in multiple languages. Multi-language is not a priority for now, but it only costs us another directory step, so we might as well include it.
I'd currently go with a directory structure like this:
Pages/en/v2/...
Except for the welcome page, which could remain in root. Not yet sure about how to organize folders inside /v2/....
Second point, is there any way to get the footer and sidebar back?
The template can be modified. I can look into that if you would like.
I reorganized the pages as requested and took the liberty of wrapping my mind around the template system, and it's quite straightforward.
{% include footer.html %}
the include statement references a file within the _layouts directory, and explicitly drops its content into the template. By default the default.html template is used, where the {{content}} block is replaced by the rendered markdown.
It is rudimentary, but works.
There is the ability for each md file to include configuration in its header, as well as some customization fields as we see fit.
I tried to include markdown, but the contents are not rendered, and instead just copied in explicitly, even with an md extension.
WRT Organizing the documentation, a good first step would be to classify them based on the perspective of a new developer. Each document could be bucketed based on a Low/Medium/High experience score that we can now derive from a value in the header of the md file, tied to a badge or something in the top right of each page. Some level of flagging of Tutorials would also be beneficial, as when I was starting up the first steps were pretty daunting and I was looking for tutorials to help bridge the gaps when kicking off my first project.
For Inspiration: https://github.com/pages-themes
Re-organized folder structure looks good. Also, thanks for your insight into the templating - seems like we could provide a completely custom template file, complete with footer and potentially even an auto-generated sidebar. Sounds very promising!
WRT Organizing the documentation, a good first step would be to classify them based on the perspective of a new developer. Each document could be bucketed based on a Low/Medium/High experience score that we can now derive from a value in the header of the md file, tied to a badge or something in the top right of each page. Some level of flagging of Tutorials would also be beneficial, as when I was starting up the first steps were pretty daunting and I was looking for tutorials to help bridge the gaps when kicking off my first project.
Good idea, but would defer this to some later point, when we already have all the docs moved and published. The docs repo will then also have its own GitHub project and issue list, so we'll have a good place to keep track of ideas like this too.
For now, here's a list of the things that I think need to be tackled so we can do a full docs switch, not necessarily in order:
Move img and en from root into a Pages subfolder, so we don't clutter the root directory with content specific folders as we add more later on.
Figure out how images and links work, and how they can be linked with a relative URL.
Fix the images and links in all pages.
Decide on a good base theme / template to use and customize.
Integrate the base theme.
Figure out which special templates are required, for example for the home page
Adjust the default template with footer support.
Adjust the default template with sidebar support.
Transfer the repository to the AdamsLair organization account.
Consider renaming the repo to duality-docs.
Just before release, pull all the latest Wiki pages into the new repo again to not miss recent changes. Make sure to fix images and links as done before.
@BobGneu Feel free to add any points that you have on your radar, or address any of them. This issue is somewhat big, so I think it makes sense to at some point just make the cut and turn it into a team effort. In that case, just let me know that you're ready for the switch and we'll do the repo transfer.
Let's do the transfer. The repo can be renamed upon transfer and from that point things will be more manageable.
Images are referenced as with standard html, relative or absolute paths are interchangeable. I already did the work to validate this on the home page. It should be pretty straight forward to correct the other images. Once we get the repo transferred we can use the issues system to track the notes above.
I'll be off for a week, but will get back to you as soon as I'm back 👍
One side note, in order to transfer ownership I will need the permission relating to creating repositories within the AdamsLair space.
https://help.github.com/articles/transferring-a-repository-owned-by-your-personal-account/
Repo renamed.
Moved img and en into pages sub directory.
In the future, when making bulleted lists you can make them into checkboxes to simplify and track updates. All checkboxes are tracked in the first post of a PR and Issue. They show up on the issue listing.
Images can be referenced using markdown similar to the following
Inline Relative / Exact


Deferred Relative
![Debug Game Break][DebugGameBreak]
![Debug Game Break][DebugGameBreakDirect]
[DebugGameBreak]: ../../img/GettingStarted/RunGameButton.png
[DebugGameBreakDirect]: {{site.baseurl}}/pages/img/GettingStarted/RunGameButton.png
Alternatively, we can make image references.
<img src="{{site.baseurl}}/pages/img/GettingStarted/RunGameButton.png" />
In terms of the layout, Having a thin layout is not going to work with many of the code samples, as anything deeper than about 30 characters is going to require scrolling or wordwrap is going to be a mess, stretching things out.
In looking at a few of the other similar sites ~ 740 - 800px of width seems to provide enough space for code examples.
We can create a template with a header, side menu and footer. Given that we have full HTML access we can even position the footer and headers at the top and bottom of any given window. No need for JQuery or anything. Mock up the layout you would like to see and I'll give it a go.
Repo renamed.
Moved img and en into pages sub directory.
👍
Images can be referenced using markdown similar to the following
I think we should stick to the markdown way. I don't have a clear favorite among the variants, but would vaguely prefer relative inline paths.
In terms of the layout, Having a thin layout is not going to work with many of the code samples, as anything deeper than about 30 characters is going to require scrolling or wordwrap is going to be a mess, stretching things out.
In looking at a few of the other similar sites ~ 740 - 800px of width seems to provide enough space for code examples, though its still pretty tight.
We can create a template with a header, side menu and footer. Given that we have full HTML access we can even position the footer and headers at the top and bottom of any given window. No need for JQuery or anything. Mock up the layout you would like to see and I'll give it a go.
Do we have a proper css file to work with? Might as well go for a responsive design and use the full width up to a max value for big screens, and adjust the sidebar for small screens below a min width. Could turn it into a "site header" instead in those cases.
Let's do the transfer.
Great, let's do it. I think the easiest way would be to transfer it to me, and I'll forward it to the AdamsLair org.
Initiated the transfer to you.
Transferred 👍 Here's the new repo link.
ToDo
Set up labels on the new docs repo, probably similar to the ones in the main Duality repo.
Transfer all remaining ToDo items into issues in the new docs repo, but keep this one open until first release.
Fix the images and links in all pages.
Decide on a good base theme / template to use and customize.
Integrate the base theme.
Figure out which special templates are required, for example for the home page
Adjust the default template with footer support.
Adjust the default template with sidebar support.
Transfer the repository to the AdamsLair organization account.
Consider renaming the repo to duality-docs.
Just before release, pull all the latest Wiki pages into the new repo again to not miss recent changes. Make sure to fix images and links as done before.
Labels Created.
images ticket created.
https://github.com/AdamsLair/duality-docs/issues/2
Theme ticket created
https://github.com/AdamsLair/duality-docs/issues/3
I summed up the template based notes together in
AdamsLair/duality-docs#3 and opened up a ticket for pulling in the latest from the wiki. https://github.com/AdamsLair/duality-docs/issues/4
I elaborated as best I could, Please feel free to edit them further.
Nice work on the issues! The template one is actually not what I meant, but still a good idea the way you read it. Updated ToDo overview:
ToDo
Adjust labels to match naming (where appropriate) and colors from the main repo and follow a consistent color scheme overall.
Transfer remaining ToDo items into issues in the new docs repo, but keep this one open until first release.
Fix the remaining images and links on pages that were not yet cleaned up.
Decide on a good base theme / template to use, then integrate it to start iterating on.
As part of this, figure out which special jekyll page templates are required, for example for the home page.
Adjust the default template with footer support.
Adjust the default template with sidebar support.
Just before release, pull all the latest Wiki pages into the new repo again to not miss recent changes. Make sure to fix images and links as done before.
Note that we both had been added as collaborators accidentally during the transfer. I removed us again, so access rights are now again managed via teams, but this also means that you won't be able to adjust the labels, since you're no longer a project admin. I'll pick up that one.
Progress
Adjusted labels to use similar color-based categorization as in the main repo, but removed and renamed labels where applicable.
Immediate ToDo
Transfer remaining ToDo items into issues in the new docs repo, but keep this one open until first release.
Fix the remaining images and links on pages that were not yet cleaned up.
Decide on a good base theme / template to use, then integrate it to start iterating on.
As part of this, figure out which special jekyll page templates are required, for example for the home page.
Adjust the default template with footer support.
Adjust the default template with sidebar support.
Just before release, pull all the latest Wiki pages into the new repo again to not miss recent changes. Make sure to fix images and links as done before.
Created a First Release milestone on the docs repo, which contains all issues that need to be addressed in order to release.
cc @AdamsLair/duality-contributors for everyone who would be interested in joining the docs transfer with a PR or two.
Moved all remaining docs issues from the main repo to the docs repo.
Closing this, as all open work has been moved to the new docs repo and first release milestone.
| gharchive/issue | 2017-11-11T15:17:06 | 2025-04-01T06:36:40.656761 | {
"authors": [
"BobGneu",
"htw5295",
"ilexp"
],
"repo": "AdamsLair/duality",
"url": "https://github.com/AdamsLair/duality/issues/589",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1819411650 | Filtering Log Still Not Fixed After 5 Years
The design of Adguard's Filtering Log doesn't recognise when parts of a webpage are blocked by Cosmetic/Element hiding rules, whether they come from a subscribed Filter List or a Custom User Created Rule.
This badly designed Filtering Log makes it extremely difficult to determine what is blocking parts of a webpage.
This should NOT be the case, ALL BLOCKED ITEMS should be reported.
This is in fact a Technical Issue which is created by the badly designed Filtering Logs.
Adguard have stubbornly refused to acknowledge there is a problem created by the Filtering Logs.
This was raised 5 years ago on the Github Forums
Here's the link: https://github.com/AdguardTeam/CoreLibs/issues/180
The Adguard Filtering Log is still NOT fixed after 5 years.
It is absolutely disgusting and very shameful conduct that this was raised on Github 5 very, very long years ago and Adguard have done absolutely NOTHING TO FIX THEIR FILTERING LOGS.
With various other ad blockers such as ublock Origin, the filtering logs do clearly show when a Cosmetic or User/Custom rule is in effect.
When Blocked Items are NOT REPORTED AT ALL, do you understand that this makes discovering the source of a webpage problem an extremely time consuming, difficult and painstaking process.
To discover the source of the problem I literally had to spend several hours of my valuable time disabling each and every single Adguard function one at a time and then reloading the webpage until I discovered the source of the problem.
It is absolutely disgusting and quite shameful that after 5 very, very long years Adguard has still done absolutely NOTHING about this.
Addressed it here: https://github.com/AdguardTeam/CoreLibs/issues/1784#issuecomment-1649284401
| gharchive/issue | 2023-07-25T01:50:12 | 2025-04-01T06:36:40.770736 | {
"authors": [
"ameshkov",
"jputting"
],
"repo": "AdguardTeam/CoreLibs",
"url": "https://github.com/AdguardTeam/CoreLibs/issues/1783",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1763458838 | PhoneNumberInput - fix clearing logic
When the clear icon is pressed, only the phone number should be erased; the country code should remain.
The clear icon has been removed from the component, and a default icon has been added for when there is no match with any country code. Agreed with @Eldar-Gyzyev
| gharchive/issue | 2023-06-19T12:39:24 | 2025-04-01T06:36:40.822509 | {
"authors": [
"Parasjona",
"syros"
],
"repo": "AdmiralDS/react-ui",
"url": "https://github.com/AdmiralDS/react-ui/issues/971",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
271435409 | HTTPCache : JCR storage handler
Hi! I am quite new here, but would like to take part of this awesome project.
I wanted to contribute by developing a JCR storage handler for the HTTP cache.
Just to prevent me from doing work that's already done, is somebody already working on this?
My idea was the following:
-Have the storage work under 1 root node (configureable through OSGI config)
-Have then bucket nodes under this root node (like a hashmap) that comes from the hashcode in the cachekey.
-Have the bucket nodes go a few levels deep, since you don't want all buckets under 1 node, as this might cause storage performance issues. So for instance if you have the hashcode 12345678910:
rootNode / 123456 / 078910 / entrynode1
The breakpoint to split can then be configureable through OSGI config.
To retrieve a node, it would be like a hashmap, so loop all the nodes under a bucket and perform equals on the cachekey. The cachekey would have to extend Serializable.
What do you guys think?
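A rough Python sketch of the bucket-path idea described above (the real handler would be Java against the JCR API; the root path, split length, and zero-padding details here are illustrative assumptions):

```python
def bucket_path(root: str, cache_key_hash: int, split: int = 6) -> str:
    # Derive a nested bucket path from the cache key's hashcode so entries
    # are spread over several levels instead of sitting under one huge node.
    digits = str(abs(cache_key_hash))
    parts = [digits[i:i + split] for i in range(0, len(digits), split)]
    return "/".join([root] + parts)

# bucket_path("/var/acs-commons/httpcache", 12345678910)
# -> "/var/acs-commons/httpcache/123456/78910"
# (the example in this thread zero-pads the last chunk to "078910")
```

Retrieval would then walk the entries under that bucket and call equals() on the stored cache key, as described above.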
@Sc0rpic0m no one is working on this!
It's been a while since I looked at the code, but sounds like a good approach! Thoughts around clearing the cache? Would the cache contents be deny all and only accessible via a service user?
@Sivaramvt thoughts on the approach?
Cheers for the reply!
Clearing the cache would be as follows:
Expiry:
1: Put an expiry in (through OSGI config) as epoch timestamp or calendar object as property.
2: The cache store also implement scheduler, and in the run method, fire off a service that performs a query on the root node, targeting nodes with the expiry time expired.
3: delete these nodes.
Regular flush:
Same as a regular hashmap, just fetch the node and delete it.
Just gotta think of an efficient way to delete the bucket nodes that don't contain contents after a cleanup or regular flush.
For cache contents: yes, would be deny all and only accessible via a service user indeed.
You can look at my fork here as well:
https://github.com/Sc0rpic0m/acs-aem-commons/tree/feature/httpcache-jcr-memstore/bundle/src/main/java/com/adobe/acs/commons/httpcache/store/jcr/impl
Of course it's just WIP.
@Sc0rpic0m could you make a PR to /develop and just put in the PR's title [REVIEW ONLY] HTTP Cache JCR Store implementation - its a bit easier to review and leave comments in the context of a PR. We can always close the PR. Thanks!
| gharchive/issue | 2017-11-06T11:01:38 | 2025-04-01T06:36:40.838154 | {
"authors": [
"Sc0rpic0m",
"davidjgonzalez"
],
"repo": "Adobe-Consulting-Services/acs-aem-commons",
"url": "https://github.com/Adobe-Consulting-Services/acs-aem-commons/issues/1164",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
440657920 | Dispatcher flush rules - doesn't manage multiple occurrence of the same path
Required Information
[ ] AEM Version, including Service Packs, Cumulative Fix Packs, etc: AEM 6.4.2
[ ] ACS AEM Commons Version: ACS-AEM-Commons 4.0.0
[ ] Reproducible on Latest? yes
Expected Behavior
By configuring the "ACS AEM Commons - Dispatcher Flush Rules" service with the following values:
-/content/we-retail/ca/en=/content/we-retail/ca/fr
-/content/we-retail/ca/en=/content/we-retail/us/es
-/content/we-retail/us/en=/content/we-retail/us/es
the expected behaviour is that after publishing the page /content/we-retail/ca/en, the following paths:
/content/we-retail/ca/fr
/content/we-retail/us/es
are flushed.
Actual Behavior
In the current version, after configuring the ACS AEM Commons - Dispatcher Flush Rules service as described above, only one of the paths that need to be flushed after the replication of the /content/we-retail/ca/en page is correctly flushed
Steps to Reproduce
You can reproduce the issue with the following steps:
Configure the ACS AEM Commons - Dispatcher Flush Rules configuration with 2 occurrence of the same path in order to flush 2 different paths (e.g. as the attached image)
Replicate the path which need to trigger the flush of the other
Check the flush agent into the publish instance (e.g. as the attached image)
Links
Links to related assets, e.g. content packages containing test components
Hm, can you try with this configuration:
-/content/we-retail/ca/en=/content/we-retail/ca/fr&/content/we-retail/us/es
-/content/we-retail/us/en=/content/we-retail/us/es
| gharchive/issue | 2019-05-06T11:40:32 | 2025-04-01T06:36:40.844606 | {
"authors": [
"amargheriti89",
"joerghoh"
],
"repo": "Adobe-Consulting-Services/acs-aem-commons",
"url": "https://github.com/Adobe-Consulting-Services/acs-aem-commons/issues/1878",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
271198586 | #18 - Asset, Config and PagePredicate models now leverage the modelCache so they are not reinitialized for every component on the page that uses them.
@godanny86 would be good to have an extra sanity check that the sample content works and i didn't miss testing any of the components.
@godanny86 should be fixed
| gharchive/pull-request | 2017-11-04T15:06:21 | 2025-04-01T06:36:40.845911 | {
"authors": [
"davidjgonzalez"
],
"repo": "Adobe-Marketing-Cloud/asset-share-commons",
"url": "https://github.com/Adobe-Marketing-Cloud/asset-share-commons/pull/38",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
728227882 | Fix swagger spec for response type of all requests returning binary responses
The swagger spec e.g. for getStepLogs (https://github.com/AdobeDocs/cloudmanager-api-docs/blob/5f2281c1abd493768d81f54ffab245cc02c02e1d/swagger-specs/api.yaml#L774) currently does not define any response type, therefore swagger generates a client method which returns void. Instead there should be a binary response defined as outlined in https://swagger.io/docs/specification/2-0/describing-responses/#response-that-returns-a-file.
Actually it seems that JSON is always returned, like this:
{"redirect":"https://cm0pl0va80stor0prd.file.core.windows.net/909da636-9119-40db-b11c-f8d23003f15e/deploy/step484607.log?sig=2qQOgXo3zAGwwnaih9sJvBd0GVcCzKw8ne3BrBAOKNk%3D&se=2020-10-23T14%3A53%3A26Z&sv=2018-03-28&rsct=application%2Foctet-stream&rscd=attachment%3B%20filename%3Ddeploy%2Fstep484607.log&sp=r&sr=f"}
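Until the spec declares a proper binary/file response, a caller of the generated client can follow that redirect manually. A hedged Python sketch (the endpoint URL and auth headers are placeholders, not taken from the API docs):

```python
import requests

def download_step_logs(logs_url: str, headers: dict) -> bytes:
    # The API currently answers with JSON containing a "redirect" URL
    # instead of the binary log body, so fetch that URL in a second step.
    redirect_url = requests.get(logs_url, headers=headers).json()["redirect"]
    return requests.get(redirect_url).content
```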
| gharchive/issue | 2020-10-23T13:40:04 | 2025-04-01T06:36:40.851145 | {
"authors": [
"kwin"
],
"repo": "AdobeDocs/cloudmanager-api-docs",
"url": "https://github.com/AdobeDocs/cloudmanager-api-docs/issues/9",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
942269435 | Decision Rules for Offer used in AJO should only refer Profile attributes
Issue in ./help/using/offers/offer-library/creating-decision-rules.md
Decision rules for Offer which are supposed to be later included in AJO should only refer to profile attributes. They should not be using properties from xEvent. If such properties are used the offer validation would fail during Message Publishing step
Captured in DOCAC-6995
| gharchive/issue | 2021-07-12T17:13:27 | 2025-04-01T06:36:40.852731 | {
"authors": [
"Alicesnk",
"chetanmeh"
],
"repo": "AdobeDocs/journey-optimizer.en",
"url": "https://github.com/AdobeDocs/journey-optimizer.en/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
642501357 | Hi, it would be great and really help all flutter developers if they can convert the components from xd to widgets like BUTTON , TEXTFIELD, LISTVIEW, ANIMATIONS like bounce effect etc..
We love to hear your ideas, but we need your help. Please take a few minutes to fill out the information below, and provide a concise, descriptive title.
Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
Describe the solution you'd like
A clear and concise description of what you want to happen.
Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.
Additional context
Add any other context or screenshots about the feature request here.
Yes, it would be better if the "xd to flutter" plugin could convert the text field (so that if the user clicks on the text field they are able to write), buttons, and screen redirection after 5 seconds or a specific time duration.
| gharchive/issue | 2020-06-21T05:53:31 | 2025-04-01T06:36:40.856279 | {
"authors": [
"mohsinhundekar",
"shashank132"
],
"repo": "AdobeXD/xd-to-flutter-plugin",
"url": "https://github.com/AdobeXD/xd-to-flutter-plugin/issues/56",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
390997868 | Request: Validating the menus (uiEntryPoints)
Currently (v1.1.2), xdpm checks if the uiEntryPoints exists or not. However, it would be better to check the "structure" of uiEntryPoints.
For example, to check the nesting level of submenus, and required fields.
https://github.com/AdobeXD/xdpm/blob/02698548e351162c89679379a072c50c5822f106/lib/validate.js#L90-L93
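As a rough illustration of the kind of structural check meant here, a Python sketch follows (xdpm itself is Node.js, and the required field names and nesting limit below are assumptions, not XD's actual manifest schema):

```python
MAX_DEPTH = 2  # assumed maximum submenu nesting level

def validate_entry_points(entry_points, depth=0):
    # Recursively check each uiEntryPoint-like dict for assumed required
    # fields and an allowed submenu nesting level, collecting error messages.
    errors = []
    for ep in entry_points:
        if "menuItems" in ep:
            if depth + 1 > MAX_DEPTH:
                errors.append("submenu nesting too deep")
            else:
                errors.extend(validate_entry_points(ep["menuItems"], depth + 1))
        else:
            for field in ("type", "label", "commandId"):
                if field not in ep:
                    errors.append(f"missing required field '{field}'")
    return errors
```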
xdpm helped save a lot of my development time, and I believe that this tool accelerates the speed of developing plugins for many developers.
Thanks,
Agreed. I think this is something we want to do, but we need to make the time to do it. It won't make it in time for today's update, but I want to acknowledge that it's an idea we should work on.
Close the issue.
Already resolved with #23 (and PR of #24).
| gharchive/issue | 2018-12-14T07:34:11 | 2025-04-01T06:36:40.858813 | {
"authors": [
"ashryanbeats",
"yoshikinoko"
],
"repo": "AdobeXD/xdpm",
"url": "https://github.com/AdobeXD/xdpm/issues/9",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
820037825 | make-adopt-build-farm.sh does not handle mistyped names
What are you trying to do? Run make-adopt-build-farm with an alternate VARIANT set
Expected behaviour: If I mistype the VARIANT value it aborts
Observed behaviour: If I mistype VARIANT it defaults to hotspot
Any other comments:
Is that not what we want? When running without params set, it informs you that it is defaulting to Hotspot (like the other params):
(base) ➜ build-farm git:(master) ./make-adopt-build-farm.sh
ARCHITECTURE not defined - assuming x64
TARGET_OS not defined - assuming you want Darwin
JAVA_TO_BUILD not defined - defaulting to jdk11u
VARIANT not defined - assuming hotspot
FILENAME not defined - assuming jdk11u-hotspot.tar.gz
BUILD TYPE:
VERSION: jdk11u
ARCHITECTURE x64
VARIANT: hotspot
OS: darwin
SCM_REF:
Detecting boot jdk for: jdk11u
Found build version: 11
Required boot JDK version: 10
[ERROR] No local file detected at /Users/morgan/Documents/Repos/openjdk-build/build-farm/platform-specific-configurations/darwin.sh and PLATFORM_CONFIG_LOCATION is not set. Please set PLATFORM_CONFIG_LOCATION to a repository path of a platform config file (e.g. AdoptOpenJDK/openjdk-build/master/build-farm/platform-specific-configurations).
Would you prefer to induce a "hard" failure where the script will fail if it does not detect one or more of these params?
| gharchive/issue | 2021-03-02T13:50:37 | 2025-04-01T06:36:40.861583 | {
"authors": [
"M-Davies",
"sxa"
],
"repo": "AdoptOpenJDK/openjdk-build",
"url": "https://github.com/AdoptOpenJDK/openjdk-build/issues/2508",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
837613608 | github: Use correct file for AIX playbook
Required now that aix.yml no longer exists now that https://github.com/AdoptOpenJDK/openjdk-infrastructure/pull/2053 has been merged
Signed-off-by: Stewart X Addison [email protected]
Checklist
[x] commit message has one of the standard prefixes
[ ] FAQ.md updated if appropriate
[ ] other documentation is changed or added (if applicable)
[ ] playbook changes run through VPC or QPC (if you have access)
[ ] for inventory.yml changes, bastillion/nagios/jenkins updated accordingly
https://github.com/AdoptOpenJDK/openjdk-infrastructure/pull/2068 has run the github checks using this PR and https://github.com/AdoptOpenJDK/openjdk-infrastructure/pull/2051 - linter seems happy so even though no linter checks have been done above I believe it's sufficient to prove that this fix is ok and can be approved+merged.
| gharchive/pull-request | 2021-03-22T11:15:57 | 2025-04-01T06:36:40.866638 | {
"authors": [
"sxa"
],
"repo": "AdoptOpenJDK/openjdk-infrastructure",
"url": "https://github.com/AdoptOpenJDK/openjdk-infrastructure/pull/2067",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
444955764 | Email address must be specified in order to enable portal access
Hi,
I get the below error when trying to register a new user on the portal with both redeem invitation and register. Redeem works fine if I update the details under the Web Authentication section ahead of time and redeem the invitation code.
"Email address must be specified in order to enable portal access"
Thanks
Further investigation revealed that it was an issue with one of the internal plug-ins which was causing the problem. Thanks
| gharchive/issue | 2019-05-16T13:27:07 | 2025-04-01T06:36:40.868226 | {
"authors": [
"jeevan264"
],
"repo": "Adoxio/xRM-Portals-Community-Edition",
"url": "https://github.com/Adoxio/xRM-Portals-Community-Edition/issues/111",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
265606916 | Entity Lists not rendering Date Only fields properly
Configure a Date and Time field with the behavior of Date Only:
Set the field's value in the Dynamics 365 web client:
The Entity List's rendering of the field isn't following the expected behavior of displaying the value without a time zone conversion:
To consistently reproduce, set the local operating system's timezone to UTC-08:00:
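For context on why UTC-08:00 reproduces it, here is a small Python sketch of the arithmetic (not portal code; it just shows how a date-only value treated as midnight UTC shifts to the previous day once a time zone conversion is applied):

```python
from datetime import datetime, timezone, timedelta

stored = datetime(2017, 10, 15, 0, 0, tzinfo=timezone.utc)  # date-only value as midnight UTC
pacific = timezone(timedelta(hours=-8))                      # UTC-08:00
print(stored.astimezone(pacific).date())                     # 2017-10-14, one day early
```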
Fixed by commit https://github.com/Adoxio/xRM-Portals-Community-Edition/commit/9b59d339f98e5e75ee222fe2b5185d4226e42a63.
| gharchive/issue | 2017-10-15T21:42:28 | 2025-04-01T06:36:40.871745 | {
"authors": [
"amervitz"
],
"repo": "Adoxio/xRM-Portals-Community-Edition",
"url": "https://github.com/Adoxio/xRM-Portals-Community-Edition/issues/38",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
145225506 | Missing auth access route to get access token inside SyncMe
:warning:
Done with 9583294
| gharchive/issue | 2016-04-01T16:34:17 | 2025-04-01T06:36:40.880506 | {
"authors": [
"AdrianBZG"
],
"repo": "AdrianBZG/SyncMe",
"url": "https://github.com/AdrianBZG/SyncMe/issues/13",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2602580057 | BP_FrontEndMenuCamera use instance rotation
BP_FrontEndMenuCamera now uses the camera's current rotation when moving, instead of setting it to 0, 0, 0.
This enhancement allows the camera to maintain its intended orientation during transitions, improving the overall user experience.
I trust you I can't try it right now I'm not home 😉
| gharchive/pull-request | 2024-10-21T13:34:06 | 2025-04-01T06:36:40.882105 | {
"authors": [
"Adriwin06",
"achillebourgault"
],
"repo": "Adriwin06/Ultimate-CommonUI-Menu-System",
"url": "https://github.com/Adriwin06/Ultimate-CommonUI-Menu-System/pull/20",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1680093638 | how to make the generated text always end with a complete sentence?
I find that when I generate text on the web page, the endings are all complete sentences, ending with ".", but when I use the API to generate it, it always ends with an incomplete sentence. I have changed "global_settings.generate_until_sentence = True" in the API, because I found that the request from the web page has "True" here but the API defaults to "false". But I found it didn't work. So I would like to know: how can I make the API generate text that always ends with a complete sentence? Can anyone answer my question? Thanks
Translated with www.DeepL.com/Translator (free version)
I see no bug here. Setting the generate_until_sentence to True does reflect on the request sent to the server when fed to the high_level.generate or high_level.generate_stream.
Note that it will continue sentence only if a period is actually found in the 20 tokens after the end. It means it is affected by context, preset, biases, bans, etc.
If you see an issue, it is likely incomplete copy of the settings on your side. For a deterministic comparison, set the top_k to 1 on both, and you should see the exact same content if both settings are the same.
Thanks for the answer! My problem is solved. I double-checked the "preset" parameter in the API and the parameter in the web request and found the difference between them. Finally I found that it was the "repetition_penalty" that was affecting the output. I used to think it had no effect. When I set the "repetition_penalty" from the default "2.25" to "1.148125", the output text ends with a complete sentence.
Translated with www.DeepL.com/Translator (free version)
It seems there is an obscure scaling done on repetition penalty, adjusting the value following 0.525*(X - 1)/7 + 1 (with X previous rep pen - formula extracted from minified JavaScript code). Seems kind of weird for it to exist frontend-side and not backend-side.
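Written out, the scaling quoted above looks like this (a small sketch; the formula is only what was extracted from the minified frontend code, not documented behavior):

```python
def scale_rep_pen(x: float) -> float:
    # Frontend-side adjustment reportedly applied to repetition_penalty
    # before the request reaches the backend.
    return 0.525 * (x - 1) / 7 + 1

print(scale_rep_pen(2.25))  # 1.09375
```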
A fix is coming along with new sanity tests checking compliancy instead of just "nothing is broken".
| gharchive/issue | 2023-04-23T16:57:16 | 2025-04-01T06:36:40.905336 | {
"authors": [
"Aedial",
"cwyuu"
],
"repo": "Aedial/novelai-api",
"url": "https://github.com/Aedial/novelai-api/issues/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2476279143 | Write of image binary file using PED Basic and cracker.py with radare2 fails
Hi @AeroX2 !
Thanks for your efforts in making it possible to write the embroidery card binary images to files!
I've tried to get the toolchain running on my machine but have failed so far. Maybe you could give me some guidance here?
Versions:
PED-Basic v1.07 (the about dialog outputs a copyright message "2002 - 2005")
CardIO.dll seems to be version 3.2.1.1 from 22.09.2005
Python 3.12.5
radare2-5.9.4-w64
Windows 10
I don't really think that you had a different version of PED-Basic or the CardIO.dll as both seem pretty old - but you never know.
I somehow guess that I am running into Windows user rights issues. I had installed PED-Basic as the local admin and tried to execute pelite.exe via python cracker.py as a normal user. Next, I tried to modify all access rights to all files so that a normal user should have full access. I then even copied everything into C:\Users\User\Documents\PED-Basic and it still does not work.
I'll share snippets from the log of python cracker.py, especially those that indicate warnings or errors:
WARN: Relocs has not been applied. Please use `-e bin.relocs.apply=true` or `-e bin.cache=true` next time
ERROR: Cannot debug file (pelite.exe) with permissions set to 0x7Reopening the original file in read-only mode.
INFO: Spawned new process with pid 13068, tid = 3036
INFO: File dbg://C:\\Users\\User\\Documents\\PED-Basic\\pelite.exe reopened in read-write mode
WARN: Relocs has not been applied. Please use `-e bin.relocs.apply=true` or `-e bin.cache=true` next time
13068
(13068) loading library at 0x00007FFFD20B0000 (C:\Windows\System32\ntdll.dll) ntdll.dll
...
WARN: Relocs has not been applied. Please use `-e bin.relocs.apply=true` or `-e bin.cache=true` next time
[Relocations]
vaddr paddr type ntype name
---------------------------------------
0x0003bda4 0x00036048 SET_32 3 CardIO.dll_public: void __thiscall CCardIO::constructor(int)
...
596 relocations
w \x90 @ 4273898
w \x90 @ 4273899
w \x90 @ 4273900
w \x90 @ 4273901
w \x90 @ 4273902
w \x90 @ 4273903
w \x90 @ 4310977
w \x90 @ 4310978
w \xeb @ 4275697
w \xeb @ 24027
ERROR: Cannot write. Use `omf`, `io.cache` or reopen the file in rw with `oo+`
w \xeb @ 24099
ERROR: Cannot write. Use `omf`, `io.cache` or reopen the file in rw with `oo+`
w \xeb @ 24174
ERROR: Cannot write. Use `omf`, `io.cache` or reopen the file in rw with `oo+`
WARN: base addr should not be larger than the breakpoint address
WARN: Cannot set breakpoint outside maps. Use dbg.bpinmaps to false
INFO: Continue until 0x00006ad5 using 1 bpsize
I guess that the ERROR: Cannot write. messages are what is really causing me trouble. The file image.bin, however, is created with 64 kBytes but only contains 0xFF (which would be an empty flash EEPROM/memory). Is there anything I should inspect?
Any hints are very appreciated!
M.
PS: I've also tried to share my findings in the EEVblog forum thread https://www.eevblog.com/forum/reviews/brother-(possibly-also-bernina)-embroidery-machine-memory-cards/?all
Edit: Reinstalling PED Basic in C:\PED-Basic and running python cracker.py from cmd.exe with admin rights also did not help. So the caching part seems to be related to radare2 but I don't really know what to modify in the Python script.
Edit: Trying to use r = r2pipe.open('pelite.exe', ['-e', 'bin.cache=true', '-w']) gave me:
C:\PED-Basic>python cracker.py
ERROR: Cannot debug file (pelite.exe) with permissions set to 0x7Reopening the original file in read-only mode.
INFO: Spawned new process with pid 7076, tid = 6352
INFO: File dbg://C:\\PED-Basic\\pelite.exe reopened in read-write mode
ERROR: bin.relocs and io.cache should not be used with the current io plugin
7076
(7076) loading library at 0x00007FFFD20B0000 (C:\Windows\System32\ntdll.dll) ntdll.dll
...
ERROR: bin.relocs and io.cache should not be used with the current io plugin
Traceback (most recent call last):
File "C:\PED-Basic\cracker.py", line 35, in <module>
cardio_addr = int(re.findall(r"0x([0-9A-F]+)", cardios[-1])[0],16)
~~~~~~~^^^^
IndexError: list index out of range
Edit: I've given up for now. Some files or folders (especially the radare ones) seem partially write-protected. And even when I remove the protection as admin... they come right back again. Also: rolling back to radare2-5.8.2-w64, which may have been the version you used (released Jan 23, 2023), gives me different warnings/errors - but also does not work overall:
C:\PED-Basic>python cracker.py
WARN: invalid type
WARN: invalid type
WARN: invalid type
WARN: invalid type
WARN: invalid type
WARN: invalid type
WARN: invalid type
WARN: invalid type
ERROR: Cannot debug file (pelite.exe) with permissions set to 0x7Reopening the original file in read-only mode.
ERROR: Parse error @ line 30 (Invalid register type)
ERROR: Parse error @ line 30 (Invalid register type)
INFO: Spawned new process with pid 2732, tid = 2264
ERROR: Parse error @ line 30 (Invalid register type)
ERROR: Parse error @ line 30 (Invalid register type)
ERROR: Parse error @ line 30 (Invalid register type)
ERROR: Parse error @ line 30 (Invalid register type)
ERROR: Parse error @ line 30 (Invalid register type)
ERROR: Parse error @ line 30 (Invalid register type)
INFO: File dbg://C:\\PED-Basic\\pelite.exe reopened in read-write mode
WARN: invalid type
WARN: invalid type
WARN: invalid type
WARN: invalid type
WARN: invalid type
WARN: invalid type
WARN: invalid type
WARN: invalid type
...
I think the "ERROR: Cannot debug file (pelite.exe) with permissions set to 0x7Reopening the original file in read-only mode." might be the critical part here. I have no clues how to fix this under Windows... :(
Yeah you are right on with radare2, this script is quite old at this point and so hasn't really kept up to date with radare2 hence the issues.
The read write issue is actually because it is trying to write into the wrong addresses since it was unable to find the correct memory address for CardIO.dll.
I've just updated the script and tested it and it seems to work, though I'm testing within a virtual machine with ASLR turned off so your mileage may vary.
The other issue I see you might be running into is that if the program is still open in the background you also can't write into it; you need to close all instances of cracker.py and the program and then run it, and it should hopefully work 🤞
Just tested on a non-VM machine and everything seemed to work with the updated script
Windows 11
Python - 3.12.0
r2pipe - 1.9.4
PELite 1.07
Thank you very much for your fast reply and fix. Good news: it now also works for me!
I forked your repository and suggest a few minor changes, see #4 . I've done this primarily to help other users understand your code without doing their own research/debugging. Feel free to discard them. 😉
Thanks very much for submitting and publishing your code!
If you have time, I'd be happy if you could answer the following questions (no rush):
The card memory usage indicator doesn't work with the patches. Do you know why? Do you think there is an easy fix? Would be nice to have direct feedback instead of the "0%" bar.
Why is the output binary file 64 kiBytes (65'536 bytes)? If I get it right, the upper technical limit is 512 kiBytes? Do you think this is fixable as well?
What license is your repo (and my contributions)? I think you haven't added a LICENSE file yet. I'd be happy if this was open source (and maybe it kind of already is... never had repos without an explicit license).
🥳 🎉
I forked your repository and suggest a few minor changes, see https://github.com/AeroX2/brother-cart-emulator/pull/4 . I've done this primarily to help other users understand your code without doing their own research/debugging. Feel free to discard them. 😉
Thanks for the PR, definitely will make it easier for future users
The card memory usage indicator doesn't work with the patches. Do you know why? Do you think there is an easy fix? Would be nice to have direct feedback instead of the "0%" bar.
Unlikely to be an easy fix. I suspect the reason for the 0% is that I'm not actually writing to a card, only looking at the memory and taking the data that would be written to the card; the progress bar is likely tied to the actual card writing.
Why is the output binary file 64 kiBytes (65'536 bytes)? If I get it right, the upper technical limit is 512 kiBytes? Do you think this is fixable as well?
Two things: one is that I pick the smallest card size of 64kb; you can see in the README.md that I pick the address 0x10005E6B to override, and this corresponds with 64kb. For 512kb, you'll need address 0x10005E7D (probably).
Second is that pxj 0x10000 in cracker.py is dumping out only 64kb worth of data, so changing those will allow you to dump more.
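A hedged sketch of how those two knobs relate (this is not the actual cracker.py code; the patch address is the one quoted above, and the register and commands follow the earlier discussion as assumptions):

```python
import r2pipe

CARD_SIZE = 0x80000           # 512 KiB instead of the default 0x10000 (64 KiB)
SIZE_PATCH_ADDR = 0x10005E7D  # "probably" the 512 KiB override, per the note above

r = r2pipe.open("pelite.exe", ["-d"])  # open under the r2 debugger, as cracker.py does
# ...patch SIZE_PATCH_ADDR and run to the dump point as cracker.py does, then:
data = r.cmdj(f"pxj {CARD_SIZE} @ rcx")  # rcx assumed to hold the card buffer address
with open("image.bin", "wb") as f:
    f.write(bytes(data))
```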
What license is your repo (and my contributions)? I think you haven't added a LICENSE file yet. I'd be happy if this was open source (and maybe it kind of already is... never had repos without an explicit license).
I'll add a MIT license, that way people are free to use it the way they want
Hi James,
thanks for everything:
Accepting my PR
Keeping the "tested configurations" sections in README and clarifying your setup
Adding the LICENSE file
Being patient, nice and responsive to my requests!
I just got a brother PED-Basic in the mail. I had not planned to buy one, but that one was even kind of affordable. This should make some research easier in case I keep being motivated.
It really seems that every time the usage bar is updated, the PED card reader is accessed (red busy LED turns off for a short moment). The blue color part is the amount of memory already reserved by the PES files "copied to the right side". The cyan color part is the extra amount of memory "copying" the selected PES file on the left would add on top:
Interestingly, the GUI seems to offer write-only access to the card. I have no clue if my card had been written before or erased in a way. But even after writing some of the samples and re-plugging the card... nothing on the right appears without copying PES files from left to right. So, I guess that the card is just completely wiped and the generated binary is written... without data being read. I guess that at least some manufacturer/model data from the EEPROM/flash memory should be read to determine the card size - but that's a wild guess.
Summing up: thanks for your help! Rowing this boat together definitely makes it more fun!
Hi again,
I am trying to print the disassembly of functions in pelite.exe and CardIO.dll to better understand the context of the patches/ your screenshots so that I can extend cracker.py. So I am asking you to shed some light into the darker spots I do not fully understand yet.
I've pushed a commit on the fork of your repo:
https://github.com/maehw/brother-cart-emulator/commit/cf5b36798d20327e7ab338c64bfde90991501025
When executed, I get the following output:
Printing imported CardIO functions...
signature: void__thiscallCCardIO::constructor(int), address: 0x00436048
32 bit word: 0x10001980
signature: enumCIOError__thiscallCCardIO::ChkCardWriterConnected(int,unsignedchar*,int*), address: 0x0043604c
32 bit word: 0x10001df1
signature: enumCIOError__thiscallCCardIO::Receive(classCObArray*,int,voidconst*,voidconst*), address: 0x00436050
32 bit word: 0x10001ca1
signature: void__thiscallCCardIO::ResetCardID(void), address: 0x00436054
32 bit word: 0x10001940
signature: enumCIOError__thiscallCCardIO::Send(classCObArray&,voidconst*,voidconst*,enumCCardAtrbType*), address: 0x00436058
32 bit word: 0x10001d0f
signature: enumCIOError__thiscallCCardIO::ChkCardVolume(classCObArray&,int&,int&,enumCCardAtrbType*), address: 0x0043605c
32 bit word: 0x10001d80
So it seems that the exported functions are known to r2 here. I used r2 command ii.
As the address values are +4 bytes each, I guess this is rather a function pointer table (32-bit addresses from the good ol' 32-bit world?) somewhere in memory where the lib is loaded during runtime.
When having a look at the values at those addresses with pxw 4 @ ..., I get those 0x1000____ addresses. Where does this offset come from? I've also spotted it in your code (0x10000000). Unfortunately, I wasn't able to get a dump of the functions there (command pdf @ ...).
What am I missing here?
I'd also like to understand why you chose cracker.py to run until addresses 0x6ad2/0x6b0e. Without the disassembly I am lacking context here.
Also, I'd like to explore the code area more where the different flash sizes are used.
How did you get the GUI view? Is it another RE tool? Preferably, I'd like to get the disassemblies in the context of running cracker.py.
Your help is very much appreciated!
Cheers
When having a look at the values at those addresses with pxw 4 @ ..., I get those 0x1000____ addresses. Where does this offset come from? I've also spotted it in your code (0x10000000). Unfortunately, I wasn't able to get a dump of the functions there (command pdf @ ...).
Hmm, not sure why pdf might not be working, but I'd probably recommend doing it in Ghidra since that tool is much more friendly. I only used r2pipe because it was the only scriptable debugger that I knew of and, to be honest, it was a bit of a pain to work with.
As for the 0x10000000, that address is the default address Windows uses for any DLL's that are loaded by a program that aren't rebased a different address so in this case the CardIO.dll
https://devblogs.microsoft.com/oldnewthing/20141003-00/?p=43923#:~:text=Since the operating system itself,you start colliding with DLLs.
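To connect the 0x1000xxxx static addresses with what the debugger sees, here is a hedged r2pipe sketch of looking up CardIO.dll's loaded base and rebasing an offset (the command and the parsing are assumptions, loosely following what cracker.py already does with its module list):

```python
import re
import r2pipe

r = r2pipe.open("pelite.exe", ["-d"])       # attach under the r2 debugger
modules = r.cmd("dmm").splitlines()          # list loaded modules/maps
cardio_line = next(m for m in modules if "CardIO.dll" in m)
base = int(re.search(r"0x[0-9a-fA-F]+", cardio_line).group(0), 16)

# The DLL's preferred image base is 0x10000000, so a static Ghidra address
# such as 0x10005E6B becomes base + (0x10005E6B - 0x10000000) at runtime.
runtime_addr = base + (0x10005E6B - 0x10000000)
```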
I'd also like to understand why you chose cracker.py to run until address 0x6ad2/0x6b0e. Without the disassembly I am lacking context here.
Yeah, all good - reverse engineering is quite difficult and you have to do a bit of guesswork. If you look at offset 0x6b11, there is a function which is setting up the Brother copyright header; this isn't writing the embroidery data that comes later.
But I know that this is the function that writes into the card data memory location, so I can break at this point and extract it with p8j 4 @ rcx (technically I could have also done this at 0x6b0e but I felt it was just easier to do at this location because I knew the address was in register rcx)
The 0x6b11 function is only called by one other function (offset 0x6ac8), and there are three other functions called here which write the embroidery data, so offset 0x6b0e is the ret instruction and the first instruction where I know that the embroidery data is all written to memory and I can safely extract it.
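A rough sketch of that breakpoint-and-dump flow with r2pipe; only the 0x6b0e offset and the rcx register come from the discussion above, while the 0x400000 image base, the dump size and the output file name are assumptions:

```python
import r2pipe

# Run pelite.exe under radare2's debugger.
r2 = r2pipe.open("dbg://pelite.exe")

# 0x6b0e is the ret of the function that finishes writing the card image;
# it is assumed here to be an offset from the usual 0x400000 image base.
break_at = 0x400000 + 0x6B0E
r2.cmd(f"db {break_at:#x}")  # set a breakpoint
r2.cmd("dc")                 # continue until it is hit

# Per the comment above, rcx points at the assembled card data at this point.
card_ptr = int(r2.cmd("dr rcx"), 16)
card = bytes(r2.cmdj(f"pxj 65536 @ {card_ptr:#x}"))  # dump 64 KiB as an example

with open("card_dump.bin", "wb") as f:
    f.write(card)
r2.quit()
```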
https://github.com/user-attachments/assets/c987ba6e-fbaa-4aeb-a996-2f01ca683887
How did you get the GUI view? Is it another RE tool? Preferably, I'd like to get the disassemblies in the context of running cracker.py.
I use a tool called Ghidra (https://ghidra-sre.org/), x64dbg (https://x64dbg.com/), along with retsync (https://github.com/bootleg/ret-sync).
You probably just need Ghidra, which can decompile the program into a C-ish program state but if you want to see what exactly is happening at each state you need to step through it with a debugger (x64dbg) and being able to look back and forth at what is happening with the debugger and Ghidra is where ret-sync comes in.
Hi James / @AeroX2 ,
thanks for your detailed explanations.
This gave me new insights as I am an embedded software developer and not very experienced with application development - at least not on the reverse engineering side of things.
Using Ghidra alone for analysis of the DLL's disassembly did the trick. If I find time and need for setting up the other tools, I might write you again.
In the meantime, I've added support for multiple card sizes - see pull request #5 .
Would be great to get the "progress bar" (memory card usage indicator) feature working as I've seen single PES design pattern files overrunning the default 64 kiBytes (see explanation in my pull request).
HTH / Cheers
Hello again,
I've dived deeper into the disassembled code of both pelite.exe and CardIO.dll.
Unfortunately, I haven't been able to locate the code which calculates/draws the memory card usage indicator. Could you maybe give me some more guidance here to make this happen?
So far, I've "only" used Ghidra for static analysis.
I've mainly used your patches to get some context and the strings I found (many printf format specifiers). I've also seen some calculations which I thought were suspicious - without any luck.
The function at 0x00416361 seems to format the relative and absolute sizes (%3d%% resp. %5s%s X %5s%s) of the selected pattern which are displayed in the lower left corner of the PED-Basic window. I've verified this by replacing single characters in those format specifiers.
I've also found another percentage value being formatted in the function at 0x0041c67b -- it is the relative pattern size displayed on the right window side after copying it to the card (format specifier %d%%).
This was interesting, but it did not lead me in the right direction... because the 100% value is never re-labeled. Only the blue/cyan rects are displayed. This should also happen when a pattern is selected on the left-hand side... but I did not find an entry function (on-item-selection callback?!) where this is hooked.
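For what it's worth, a small standalone sketch of the string-scanning approach mentioned above: it simply lists printf-style format specifiers and their file offsets so they can be cross-referenced in Ghidra. The file name is an example and the pattern is a rough heuristic:

```python
import re

# List printf-style format strings in pelite.exe together with their file
# offsets, so they can be cross-referenced in Ghidra to find the functions
# that use them (e.g. "%3d%%" or "%5s%s X %5s%s").
with open("pelite.exe", "rb") as f:
    data = f.read()

pattern = re.compile(rb"[\x20-\x7e]{0,32}%[-0-9.]*[sduxX%][\x20-\x7e]{0,32}")
for match in pattern.finditer(data):
    print(f"0x{match.start():08x}  {match.group().decode('ascii')}")
```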
In addition, I've also read @bezmi's comment https://github.com/AeroX2/brother-cart-emulator/issues/1#issuecomment-1435530216 -- I also see a corrupted string in the output binary file (ÿrother_sewing) - do you have an idea what could cause this issue? Did the corrupt images work with your machine?
Another hint: in the patch descriptions of image-dumper/README.md, you call it "Bypass ChkCardVolume". Strictly speaking, the method seems to be called, but the result value is ignored and your patches modify the code to make it look like everything turns out as expected. I found this pretty misleading.
This is fun.. and tedious at the same time.
Hi James,
You probably just need Ghidra, which can decompile the program into a C-ish program state but if you want to see what exactly is happening at each state you need to step through it with a debugger (x64dbg) and being able to look back and forth at what is happening with the debugger and Ghidra is where ret-sync comes in.
I'd be interested in a brief description of how to set the three tools up in the synced mode you described. Can you also see the decompiled C code? I can run pelite.exe from x64dbg, but I have no clue how to start a Ghidra session for the dynamic analysis and also no idea what to do with the ret-sync release file (which seems to be a single *.dp64 resp. *.dp32 file).
Have a nice sunday!
This was interesting, but it did not lead me in the right direction... because the 100% value is never re-labeled. Only the blue/cyan rects are displayed. This should also happen when a pattern is selected on the left-hand side... but I did not find an entry function (on-item-selection callback?!) where this is hooked.
I suspect this is because it is using the Windows API for progress bars, so there isn't a "100%" label in the program - just a function that is advancing the ticks for the progress bar - but I haven't dived into this myself.
In addition, I've also read @bezmi's comment https://github.com/AeroX2/brother-cart-emulator/issues/1#issuecomment-1435530216 -- I also see a corrupted string in the output binary file (ÿrother_sewing) - do you have an idea what could cause this issue? Did the corrupt images work with your machine?
I don't think I've seen a ÿrother_sewing, just ÿbrother_sewing, but I suspect it would still work since I don't think the headers are really read by the machine - just the offsets.
I'd be interested in a brief description of how to set the three tools up in the synced mode you described.
ret-sync just makes the debugger and Ghidra talk to each other so that the line that you are currently breakpointing on is the same line that is highlighted in Ghidra. This setup makes it a touch easier to reverse engineer but you can still look at the line numbers manually and just match them up.
Can you also see the decompiled C code?
Ghidra shows decompiled C code but it is still heavily obfuscated and not easy to navigate.
I can run pelite.exe from x64dbg but I have no clue how to start a Ghidra session for the dynamic analysis and also no idea what to do with the ret-sync release file (which seems to be a single *.dp64 resp. *.dp32 file).
You can have a look at the ret-sync page for their Ghidra (https://github.com/bootleg/ret-sync?tab=readme-ov-file#ghidra-usage) instructions
(which seems to be a single *.dp64 resp. *.dp32 file).
These are the plugin files that x64dbg uses; you need to drag and drop them into the x64dbg plugin folder.
Hey @maehw, have you tried with the vikant writer workflow from my repo? There is an ipython notebook that shows how to locate the thumbnail data and the python script emulates a vikant writer so that you can create flash images from custom stitch data (you don't need any hardware to do it). I also have notes which I think are detailed enough to be able to replace thumbnails and stitch data in a given vikant/PED file with our own stuff using python directly. Let me know if you want to look at any of those binary files with custom stitch data.
I started with reverse engineering the binaries, but it gets complicated really fast and it was actually more instructive to jump around the card data in python.
Hi @AeroX2 and @bezmi ,
thank you for your replies.
I've dived deeper into CardIO.dll and also USB communication with the brother's card reader/writer.
Actually, it's really the case that "ÿbrother_sewing" (where ÿ is 0xFF) is written into the card's memory (at offset 0x170). The very last write operation (over)writes the single character 'b' at 0x170 (not at 0x100 as mentioned by you, @bezmi , in the linked issue). So maybe the character is wrong in the binary output file as it comes out of cracker.py. This must be some special handling by either the DLL or the standalone application. It actually may prevent the machine from reading the card. I also had seen several string comparisons in the code.
@AeroX2 : I haven't tried the x64dbg + Ghidra + ret-sync combination (yet). But thanks for the instructions!
@bezmi : I couldn't get the Vikant-based workflow running. That's what I wrote in the eevblog.com forum:
Other things I could not get working: the Vikant card emulator (another Python script)... or rather I could not get it working with the "Ultimate Explorer for Brother" (Serial port version). Even though I created virtual COM ports with com0com, only the Python side connected... and Vikant's Ultimate Explorer didn't want to open or even find the COM port. So this approach currently won't work for me to write own card binaries from PES files.
So it seems that I already had issues with getting the virtual COM port working with the software (that was before I got myself a brother PED card reader/writer + re-writeable memory card which works well with brother's PED Basic software). Which version of the Vikant software have you been using? I'd like to have a working toolchain on Linux/MacOS even though the patched PED Basic approach (running under Windows) works - and it works nicely!
Hi all.
New findings & fixes (#7):
I think the card memory sizes have been wrong, see #6.
Writing "ÿrother_sewing" instead of "brother_sewing" will make my machine (brother PE-150) ignore the memory card, prevent any operation of the machine at all and keep it beeping... - so I fixed it in cracker.py data postprocessing. The puzzle about different locations of the string is now also solved for me: it depends on the hoop size whether the character is written at 0xC0, 0x100 or 0x170. According to the "code", 0x280 and 0x28E would also be valid locations - but I don't know under what circumstances - haven't seen them being used by PED Basic. The b is the final character which is written to EEPROM/flash - so the machine could also use it to check if the memory write operation had been completed successfully or if it would likely face a corrupt card image.
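A minimal sketch of that postprocessing fix; the offsets and the marker string come from the findings above, while the file names and the dispatch over all known offsets are illustrative assumptions:

```python
# Restore the 'b' of "brother_sewing" if the dump still has the 0xFF
# placeholder in front of "rother_sewing". Offsets come from the findings
# above (which one is used depends on the hoop size); file names are examples.
KNOWN_OFFSETS = (0xC0, 0x100, 0x170, 0x280, 0x28E)
MARKER = b"rother_sewing"

def fix_brother_header(image: bytearray) -> bytearray:
    for off in KNOWN_OFFSETS:
        if len(image) > off + len(MARKER) and image[off] == 0xFF \
                and image[off + 1 : off + 1 + len(MARKER)] == MARKER:
            image[off] = ord("b")
    return image

with open("card.bin", "rb") as f:
    data = bytearray(f.read())
with open("card_fixed.bin", "wb") as f:
    f.write(fix_brother_header(data))
```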
HTH
Thanks for your help. I'd be okay if this issue was closed. I'd then open new issues for other, smaller subtopics - if you're okay with that.
@maehw I was just using the version from their website: https://vikant-emb.com/downloads I haven't checked to see if it still works though. I can't think of any issues I had other than having to run as administrator to access the com ports. Glad to know that PED Basic worked for you.
| gharchive/issue | 2024-08-20T18:32:27 | 2025-04-01T06:36:40.961881 | {
"authors": [
"AeroX2",
"bezmi",
"maehw"
],
"repo": "AeroX2/brother-cart-emulator",
"url": "https://github.com/AeroX2/brother-cart-emulator/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
972015533 | Added Python tests GitHub Actions to Atlana (DPP UI)
Task to do
Add python tests actions for PRs and merges
Reason
Help identify issues before and after code is merged
Result
GitHub actions to test python code are run on PRs and merges
Merged commit: https://github.com/AgPipeline/Atlana/tree/ec1ede3e3b3df4127cc2b90d86fb097782ef0f8c
| gharchive/issue | 2021-08-16T18:59:38 | 2025-04-01T06:36:40.981631 | {
"authors": [
"Chris-Schnaufer"
],
"repo": "AgPipeline/issues-and-projects",
"url": "https://github.com/AgPipeline/issues-and-projects/issues/531",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1920375769 | 🐛 [BUG] - critical security problems from vm2
Browsers
Firefox, Chrome, Safari, Microsoft Edge, Opera
OS
Windows, Linux, Mac
Description
As dependabot told us, vm2 has a fatal problem. Therefore, starting with the vm2 module in question, the nestjs-modules/mailer@^1.9.1 module must also be replaced/modified.
https://github.com/TooTallNate/proxy-agents/issues/240
https://github.com/TooTallNate/proxy-agents/pull/224
As in Issue and Pull-Request above, vm2 used in proxy-agent's degenerator module has been removed. Therefore, it seems that the re-installation will solve the problem.
Reproduction URL
https://github.com/AgainIoT/Open-Set-Go_server/security/dependabot/5
Reproduction Steps
https://github.com/AgainIoT/Open-Set-Go_server/security/dependabot/5
Solutions
https://github.com/TooTallNate/proxy-agents/issues/240
https://github.com/TooTallNate/proxy-agents/pull/224
As in Issue and Pull-Request above, vm2 used in proxy-agent's degenerator module has been removed. Therefore, it seems that the re-installation will solve the problem.
Screenshots
No response
I found a solution in this issue:
https://github.com/nest-modules/mailer/issues/723
| gharchive/issue | 2023-09-30T19:32:49 | 2025-04-01T06:36:40.991977 | {
"authors": [
"ymw0407"
],
"repo": "AgainIoT/Open-Set-Go_server",
"url": "https://github.com/AgainIoT/Open-Set-Go_server/issues/98",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1848386548 | Unity Android and iOS (possible) phone overheat
I'm using the Agora SDK (together with the MediaPipe Unity Plugin) and everything works fine. Unfortunately, our client believes that the application overheats the phone (Android and iOS) too quickly (they did not provide data) and users therefore stop using it.
Is there anything that can be done to make the phones less hot and drain less battery?
Thanks in advance
Reduce the resolution and frame rate of the video, which will reduce power consumption
| gharchive/issue | 2023-08-13T03:44:13 | 2025-04-01T06:36:41.014477 | {
"authors": [
"LoopIssuer",
"xiayangqun"
],
"repo": "AgoraIO-Extensions/Agora-Unity-Quickstart",
"url": "https://github.com/AgoraIO-Extensions/Agora-Unity-Quickstart/issues/203",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
592402016 | Add support for setting video encoder configuration.
Hello, can you please add an option to set the video encoder configuration? If it's already possible, let me know how.
See the api document.
https://pub.dev/documentation/agora_rtc_engine/latest/agora_rtc_engine/AgoraRtcEngine/setVideoEncoderConfiguration.html
https://pub.dev/documentation/agora_rtc_engine/latest/agora_rtc_engine/VideoEncoderConfiguration-class.html
| gharchive/issue | 2020-04-02T06:56:32 | 2025-04-01T06:36:41.016481 | {
"authors": [
"ironynet",
"kadariyaujwal"
],
"repo": "AgoraIO/Flutter-SDK",
"url": "https://github.com/AgoraIO/Flutter-SDK/issues/99",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
930697437 | 🛑 Mogrib.com is down
In 06438b8, Mogrib.com (https://www.mogrib.com) was down:
HTTP code: 500
Response time: 2873 ms
Resolved: Mogrib.com is back up in 9efa4d2.
| gharchive/issue | 2021-06-26T13:35:35 | 2025-04-01T06:36:41.054964 | {
"authors": [
"AhmadIbrahiim"
],
"repo": "AhmadIbrahiim/Mogrib-Uptime",
"url": "https://github.com/AhmadIbrahiim/Mogrib-Uptime/issues/58",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2165532970 | DropDown button 2 latest beta fails with empty dropdown
This code fails with
DropdownButtonHideUnderline(
  child: DropdownButton2<String>(
    value: null,
    items: [],
  ),
)
RangeError (index): Invalid value: Valid value range is empty: 0
DropdownButton2State.build (package:dropdown_button2/src/dropdown_button2.dart:687:30)
I don't have this error, but it won't open nonetheless. I want it to open, because I am using dropdownSearchData and creating new items when nothing is found.
This no longer occurs in latest beta version.
Feel free to open a new issue if it still exists.
~I dont have this error~ (it does trigger when manually opening using dropdownKey.currentState!.callTap()), but it won't open nonetheless. I want it to open, because I am using dropdownSearchData and creating new items when nothing is found.
Let's discuss this at #257
| gharchive/issue | 2024-03-03T20:02:32 | 2025-04-01T06:36:41.058373 | {
"authors": [
"AhmedLSayed9",
"FluffyDiscord",
"vasilich6107"
],
"repo": "AhmedLSayed9/dropdown_button2",
"url": "https://github.com/AhmedLSayed9/dropdown_button2/issues/244",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1015052147 | add bulk email sender
an API for sending bulk emails using nodejs and express
@agungd3v can you please add some description!
@all-contributors please add @agungd3v for code
@agungd3v please star the repo!
@AhmedRaja1 I'm done adding description
| gharchive/pull-request | 2021-10-04T11:20:44 | 2025-04-01T06:36:41.060642 | {
"authors": [
"AhmedRaja1",
"agungd3v"
],
"repo": "AhmedRaja1/Hacktoberfest",
"url": "https://github.com/AhmedRaja1/Hacktoberfest/pull/256",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
998080956 | Delay in Start the Streaming of Screen Share
Hi Oven Team,
When I try to start the Screen Share streaming, it takes some time for the streaming to start.
I couldn't find the reason for the delay at the start.
Hoping this issue gets fixed soon.
Screen capture is now stable. Closing this issue. (refer https://github.com/AirenSoft/OvenLiveKit-Web/issues/10)
| gharchive/issue | 2021-09-16T11:05:55 | 2025-04-01T06:36:41.082841 | {
"authors": [
"SangwonOh",
"yadavendra15"
],
"repo": "AirenSoft/OvenLiveKit-Web",
"url": "https://github.com/AirenSoft/OvenLiveKit-Web/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2101967556 | User deletion with admin permissions
Linked issue
Resolves: #2122
What kind of change does this PR introduce?
[ ] Bug fix
[ ] New feature
[ X] Refactor
[ ] Docs update
[ ] CI update
What is the current behavior?
Describe the state of the application before this PR. Illustrations appreciated (videos, gifs, screenshots).
Currently it is not possible to delete a user who has super user permissions.
What is the new behavior?
Describe the state of the application after this PR. Illustrations appreciated (videos, gifs, screenshots).
remove check for superuser permission while deleting
only a user with permission FULL_ACCESS_USERS_TEAMS_ROLES can delete another user with FULL_ACCESS_USERS_TEAMS_ROLES
send mail to both the users (removed user and removed by), and klaw admin
audit log
Other information
Additional changes, explanations of the approach taken, unresolved issues, necessary follow ups, etc.
Requirements (all must be checked before review)
[ ] The pull request title follows our guidelines
[ ] Tests for the changes have been added (if relevant)
[ ] The latest changes from the main branch have been pulled
[ ] pnpm lint has been run successfully
When I delete a superuser I get a "Delete User Request: SUCCESS" message; I think it should just say "Delete User: SUCCESS".
As it is, I thought I had to approve the deletion request initially. It also says this on create as well: "Create User Request: SUCCESS".
| gharchive/pull-request | 2024-01-26T10:27:56 | 2025-04-01T06:36:41.095921 | {
"authors": [
"aindriu-aiven",
"muralibasani"
],
"repo": "Aiven-Open/klaw",
"url": "https://github.com/Aiven-Open/klaw/pull/2244",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1045923279 | [SUGGESTION] Custom model data for main gui things as arrow
As talked on spigot, it's about the back button, I know some people that needs the same features so I created that to make it easier and less spammy on spigot :)
Oh thx, i was about to make one as well. We rly need this feature
I started uploading my plugins to github and making them open-source so you can add your own specific features ;)
This is such a good idea!!
| gharchive/issue | 2021-11-05T14:42:13 | 2025-04-01T06:36:41.099262 | {
"authors": [
"10lulu",
"Agaloth",
"Ajneb97",
"srbeastman"
],
"repo": "Ajneb97/PlayerKits",
"url": "https://github.com/Ajneb97/PlayerKits/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1417898950 | Update my ads
Story
As an advertiser I want to be able to update my ads because the price or other values might change during the existence of an ad.
Preconditions
user is logged in
Post-Conditions
changes are saved in the corresponding database tables
Dependencies
This story can be started after Story Nr. 184 has been finished.
DoR
[x] External dependencies are identified or eliminated
[x] Design (at least LoFi?)
[x] Has acceptance criteria
Acceptance Criteria
[ ] AC 1: the user can see an update icon below each of his ads in "settings/my-ads".
[ ] AC 2: the user is able to click on the update icon of one of his ads and is then shown the same view as in the component "create", except that all values are prefilled as they were.
AC 1 could look like this ->
DoD
[ ] Story is tested against acceptance criteria
[ ] Unit test should be passed
[ ] Integration test is done ( if applicable)
[ ] Non-functional requirements are met
[ ] Story ok-ed by Product Owner
[ ] Peer Code Review performed
[ ] Any configuration or build changes documented
[ ] any database changes are documented Database
Changes are pushed to branch 186-update-my-ads
Might be combined with 292
@LanaKast the update functionality is not implemented, so I assume that it will be a big task because a form for collecting the user data should be implemented in the frontend and the backend should provide an endpoint for updating the topic.
| gharchive/issue | 2022-10-21T07:38:59 | 2025-04-01T06:36:41.124291 | {
"authors": [
"Atanasov-AA",
"aschiakros",
"balsih"
],
"repo": "AkrosAG/Akros-Marketplace",
"url": "https://github.com/AkrosAG/Akros-Marketplace/issues/186",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2572717249 | Adding input validations
Hello, I can add proper input validation for the login and register pages.
Pls assign this to me
@Gauravtb2253 currently we are facing some problems in our auth APIs; we are working on it. After it gets fixed, we will take issues related to it.
| gharchive/issue | 2024-10-08T10:01:20 | 2025-04-01T06:36:41.125629 | {
"authors": [
"AkshitLakhera",
"Gauravtb2253"
],
"repo": "AkshitLakhera/PenCraft-Full-Stack-Blogging-Application",
"url": "https://github.com/AkshitLakhera/PenCraft-Full-Stack-Blogging-Application/issues/84",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
386233760 | corrupted pacman.conf in chroot
I assume it happened because of devel, because I don't think I updated anything else 😄
Delete /var/lib/aurbuild
Run aur build -sc
Observe pacman.conf
Contents of /var/lib/aurbuild/x86_64/root/etc/pacman.conf
[options]
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
HoldPkg = pacman
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
HoldPkg = glibc
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Architecture = auto
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
SigLevel = PackageRequired
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
SigLevel = PackageTrustedOnly
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
SigLevel = DatabaseOptional
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
SigLevel = DatabaseTrustedOnly
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
LocalFileSigLevel = PackageOptional
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
LocalFileSigLevel = PackageTrustedOnly
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
[core]
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.neuf.no/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://ftp.myrveln.se/pub/linux/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://archlinux.dynamict.se/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://ftp.acc.umu.se/mirror/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.ubrco.de/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://repo.itmettke.de/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.orbit-os.com/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://archlinux.beccacervello.it/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.pseudoform.org/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://www.mirrorservice.org/sites/ftp.archlinux.org/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.dkm.cz/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirrors.n-ix.net/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.hactar.xyz/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://arch.jensgutermuth.de/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://ftp.sh.cvut.cz/arch/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirrors.niyawe.de/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.cyberbits.eu/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirrors.uni-plovdiv.net/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.sergal.org/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.csclub.uwaterloo.ca/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.hackingand.coffee/arch/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mex.mirror.pkgbuild.com/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirrors.lug.mtu.edu/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://ind.mirror.pkgbuild.com/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.dc02.hackingand.coffee/arch/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirrors.kurnode.com/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.lty.me/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://sgp.mirror.pkgbuild.com/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://ftp.halifax.rwth-aachen.de/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirrors.neusoft.edu.cn/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
[extra]
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.neuf.no/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://ftp.myrveln.se/pub/linux/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://archlinux.dynamict.se/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://ftp.acc.umu.se/mirror/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.ubrco.de/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://repo.itmettke.de/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.orbit-os.com/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://archlinux.beccacervello.it/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.pseudoform.org/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://www.mirrorservice.org/sites/ftp.archlinux.org/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.dkm.cz/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirrors.n-ix.net/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.hactar.xyz/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://arch.jensgutermuth.de/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://ftp.sh.cvut.cz/arch/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirrors.niyawe.de/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.cyberbits.eu/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirrors.uni-plovdiv.net/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.sergal.org/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.csclub.uwaterloo.ca/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.hackingand.coffee/arch/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mex.mirror.pkgbuild.com/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirrors.lug.mtu.edu/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://ind.mirror.pkgbuild.com/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.dc02.hackingand.coffee/arch/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirrors.kurnode.com/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.lty.me/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://sgp.mirror.pkgbuild.com/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://ftp.halifax.rwth-aachen.de/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirrors.neusoft.edu.cn/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
[community]
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.neuf.no/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://ftp.myrveln.se/pub/linux/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://archlinux.dynamict.se/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://ftp.acc.umu.se/mirror/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.ubrco.de/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://repo.itmettke.de/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.orbit-os.com/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://archlinux.beccacervello.it/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.pseudoform.org/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://www.mirrorservice.org/sites/ftp.archlinux.org/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.dkm.cz/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirrors.n-ix.net/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.hactar.xyz/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://arch.jensgutermuth.de/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://ftp.sh.cvut.cz/arch/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirrors.niyawe.de/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.cyberbits.eu/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirrors.uni-plovdiv.net/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.sergal.org/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.csclub.uwaterloo.ca/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.hackingand.coffee/arch/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mex.mirror.pkgbuild.com/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirrors.lug.mtu.edu/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://ind.mirror.pkgbuild.com/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.dc02.hackingand.coffee/arch/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirrors.kurnode.com/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirror.lty.me/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://sgp.mirror.pkgbuild.com/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://ftp.halifax.rwth-aachen.de/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
Server = https://mirrors.neusoft.edu.cn/archlinux/$repo/os/$arch
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
CacheDir = /var/cache/pacman/pkg/ /var/cache/pacman/maximbaz/
[maximbaz]
Usage = All
SigLevel = PackageRequired
SigLevel = PackageTrustedOnly
SigLevel = DatabaseRequired
SigLevel = DatabaseTrustedOnly
Server = file:///var/cache/pacman/maximbaz
aur build -sc is full of these log entries
warning: config file /etc/pacman.conf, line 32: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 33: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 35: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 36: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 38: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 39: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 41: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 42: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 44: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 45: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 47: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 48: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 50: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 51: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 53: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 54: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 56: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 57: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 59: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 60: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 62: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 63: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 65: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 66: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 68: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 69: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 71: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 72: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 74: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 75: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 77: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 78: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 80: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 81: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 83: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 84: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 86: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 87: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 89: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 90: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 92: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 93: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 95: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 96: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 98: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 99: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 101: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 102: directive 'CacheDir' in section 'core' not recognized.
warning: config file /etc/pacman.conf, line 104: directive 'CacheDir' in section 'core' not recognized.
Nice! Exceeded even my expectations of breakage.
| gharchive/issue | 2018-11-30T15:28:57 | 2025-04-01T06:36:41.140554 | {
"authors": [
"AladW",
"maximbaz"
],
"repo": "AladW/aurutils",
"url": "https://github.com/AladW/aurutils/issues/462",
"license": "isc",
"license_type": "permissive",
"license_source": "bigquery"
} |
388673750 | aur-sync: update options in man page
Fixes #474
I think I got ahead of you: https://github.com/AladW/aurutils/commit/ae6cfb56ead53e16991492dcbc4be777844d2a1b
I didn't see a reason for leaving out the short options, so I re-added them rather than changing the man page.
Sure that makes sense!
| gharchive/pull-request | 2018-12-07T14:20:50 | 2025-04-01T06:36:41.142983 | {
"authors": [
"AladW",
"alfunx"
],
"repo": "AladW/aurutils",
"url": "https://github.com/AladW/aurutils/pull/475",
"license": "isc",
"license_type": "permissive",
"license_source": "bigquery"
} |
2418636032 | perf: update to latest dependencies and linting
Alaska Airlines Pull Request
Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.
Resolves: #35
Summary:
Please summarize the scope of the changes you have submitted, what the intent of the work is and anything that describes the before/after state of the project.
Type of change:
Please delete options that are not relevant.
[ ] New capability
[x] Revision of an existing capability
[ ] Infrastructure change (automation, etc.)
[ ] Other (please elaborate)
Checklist:
[x] My update follows the CONTRIBUTING guidelines of this project
[x] I have performed a self-review of my own update
By submitting this Pull Request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
Pull Requests will be evaluated by their quality of update and whether it is consistent with the goals and values of this project. Any submission is to be considered a conversation between the submitter and the maintainers of this project and may require changes to your submission.
Thank you for your submission!
-- Auro Design System Team
:tada: This PR is included in version 2.1.9-beta.2 :tada:
The release is available on:
npm package (@beta dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
| gharchive/pull-request | 2024-07-19T11:05:23 | 2025-04-01T06:36:41.158262 | {
"authors": [
"blackfalcon",
"jason-capsule42"
],
"repo": "AlaskaAirlines/auro-backtotop",
"url": "https://github.com/AlaskaAirlines/auro-backtotop/pull/36",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1812469782 | Introduce ToBits::to_bits_le_into and use it in Vec/[T]
This PR targets the feat/narwhal branch, but it's quite likely it would result in a performance improvement in other use cases as well; it introduces a ToBits::to_bits_le_into method that allows us to avoid a lot of allocations when calling Vec::to_bits on large collections.
In a 1-minute run of test_state_coherence in snarkOS/narwhal, this PR reduces the number of allocations from ~4.2M to ~2M, and temporary allocations from ~2.2M to ~0.7M, as measured with heaptrack.
I've investigated the option of calling to_bits_le directly in BatchHeader::compute_batch_id, but that would require plenty of additional implementations of ToBits, and it wouldn't have the potential to reduce other allocations in snarkVM, so I decided against that.
I'm filing this as a draft until all the tests have run - it's possible that I might need to introduce another impl of ToBits::to_bits_le_into, which should be trivial.
before:
after:
The CI failures appear to be unrelated, so it's ready for review.
I am retargeting this PR to testnet3 since the changes are applicable to all snarkVM constructs that use this.
Rebased against testnet3 and added one additional commit which provides a small further improvement; that one was also found via profiling.
Superseded by https://github.com/AleoHQ/snarkVM/pull/1836; the drive-by commit will be filed as its own PR.
| gharchive/pull-request | 2023-07-19T18:13:49 | 2025-04-01T06:36:41.197645 | {
"authors": [
"howardwu",
"ljedrz"
],
"repo": "AleoHQ/snarkVM",
"url": "https://github.com/AleoHQ/snarkVM/pull/1811",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1727114623 | Question: Does this work generically with mono-repos? Or strictly NX at this stage?
I'm working in turborepo. My solution has been to write a script that overrides the paths in the .json files so I can use the deployment extension, but if this can be configured to work in turborepo that would be amazing.
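For illustration, the kind of path-override script mentioned above might look roughly like this; the directory, JSON key and prefix are hypothetical placeholders, not the actual values used:

```python
import json
from pathlib import Path

# Walk the built function folders and rewrite a path-like field in every
# *.json so the deployment extension finds the right files. DIST_DIR, KEY and
# OLD_PREFIX are made-up placeholders - adjust them to the actual layout.
DIST_DIR = Path("dist/functions")
KEY = "scriptFile"
OLD_PREFIX = "../../"

for json_file in DIST_DIR.rglob("*.json"):
    config = json.loads(json_file.read_text())
    value = config.get(KEY)
    if isinstance(value, str) and value.startswith(OLD_PREFIX):
        config[KEY] = value[len(OLD_PREFIX):]
        json_file.write_text(json.dumps(config, indent=2))
```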
Hi @joelybahh ,
Unfortunately, I'm less familiar with turbo-repo and how it works.
Internally, in the executors, I'm using specific NX tools, so I don't know if this will work on other tools and monorepo frameworks.
@AlexPshul Thanks for the prompt reply!
I might do some digging into the source code in any case; although it's NX-specific, I might be able to extract out the Azure parts and try to apply the same concepts to Turborepo. Thanks again!
| gharchive/issue | 2023-05-26T08:02:28 | 2025-04-01T06:36:41.324015 | {
"authors": [
"AlexPshul",
"joelybahh"
],
"repo": "AlexPshul/nxazure",
"url": "https://github.com/AlexPshul/nxazure/issues/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1020351923 | New samples
New samples in a repository https://github.com/JetBrains/kotlin-web-site
Files:
docs/topics/collection-transformations.md
docs/topics/constructing-collections.md
New samples in a repository https://github.com/JetBrains/kotlin-web-site
Commit: https://github.com/JetBrains/kotlin-web-site/commit/0cf51882694faa08341dc065cabdc631aca1fed8
Files:
docs/topics/jvm/java-to-kotlin-collections-guide.md
| gharchive/pull-request | 2021-10-07T18:47:17 | 2025-04-01T06:36:41.354060 | {
"authors": [
"AlexanderPrendota"
],
"repo": "AlexanderPrendota/kotlin-compiler-server",
"url": "https://github.com/AlexanderPrendota/kotlin-compiler-server/pull/378",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2299009235 | Unplugging and Replugging Laptop Crashes Waybar
Unplugging my laptop, waiting a few seconds, and then replugging my laptop consistently crashes waybar. Even after disabling the battery module, waybar still crashes. This could be related to #2519 and #2662. #2704 may solve this until inotify can possibly be replaced with netlink or upower events. The upower module would actually be perfect if it had format-charging like the battery module has.
Waybar Version: 0.10.3
Output:
[2024-05-15 15:31:26.609] [info] Using configuration file /home/zaheen/.config/waybar/config.jsonc
[2024-05-15 15:31:26.610] [info] Unable to receive desktop appearance: GDBus.Error:org.freedesktop.DBus.Error.UnknownMethod: No such interface “org.freedesktop.portal.Settings” on object at path /org/freedesktop/portal/desktop
[2024-05-15 15:31:26.610] [info] Using CSS file /home/zaheen/.config/waybar/style.css
[2024-05-15 15:31:26.637] [info] Hyprland IPC starting
[2024-05-15 15:31:26.637] [warning] $XDG_RUNTIME_DIR/hypr does not exist, falling back to /tmp/hypr
[2024-05-15 15:31:26.638] [info] Loading persistent workspaces from Waybar config
[2024-05-15 15:31:26.638] [info] Loading persistent workspaces from Hyprland workspace rules
[2024-05-15 15:31:26.672] [info] Loading persistent workspaces from Waybar config
[2024-05-15 15:31:26.672] [info] Loading persistent workspaces from Hyprland workspace rules
[2024-05-15 15:31:27.014] [info] Bar configured (width: 1900, height: 22) for output: eDP-1
[2024-05-15 15:31:27.014] [info] Bar configured (width: 1900, height: 22) for output: HDMI-A-1
[2024-05-15 15:31:27.051] [warning] Requested height: 22 is less than the minimum height: 23 required by the modules
[2024-05-15 15:31:27.051] [info] Bar configured (width: 1900, height: 23) for output: eDP-1
[2024-05-15 15:31:27.085] [warning] Requested height: 22 is less than the minimum height: 23 required by the modules
[2024-05-15 15:31:27.085] [info] Bar configured (width: 1900, height: 23) for output: HDMI-A-1
[2024-05-15 15:31:27.156] [warning] Requested height: 23 is less than the minimum height: 24 required by the modules
[2024-05-15 15:31:27.156] [info] Bar configured (width: 1900, height: 24) for output: eDP-1
[2024-05-15 15:31:27.215] [warning] Requested height: 23 is less than the minimum height: 24 required by the modules
[2024-05-15 15:31:27.215] [info] Bar configured (width: 1900, height: 24) for output: HDMI-A-1
/usr/include/c++/13.2.1/bits/stl_vector.h:1128: constexpr std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::operator[](size_type) [with _Tp = std::tuple<long unsigned int, long unsigned int>; _Alloc = std::allocator<std::tuple<long unsigned int, long unsigned int> >; reference = std::tuple<long unsigned int, long unsigned int>&; size_type = long unsigned int]: Assertion '__n < this->size()' failed.
fish: Job 1, 'waybar' terminated by signal SIGABRT (Abort)
Hi @ZaheenJ, for UPower features it's better to open a feature issue.
Even with the battery and upower modules disabled, replugging the laptop crashed Waybar, which should not happen. I don't have access to the original computer on which the crash occurred, but I can't seem to reproduce the issue on my M1 MacBook on Asahi Linux. I can try debugging more when I have access to the other laptop in about two weeks.
Can't seem to reproduce anymore.
| gharchive/issue | 2024-05-15T23:18:08 | 2025-04-01T06:36:41.359522 | {
"authors": [
"LukashonakV",
"ZaheenJ"
],
"repo": "Alexays/Waybar",
"url": "https://github.com/Alexays/Waybar/issues/3275",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1235643266 | Calendar module: localization issue #1552
Fixed right alignment of week numbers when different locales are used (week starting from SU or MO)
The hardcoded prefix 'W' has been removed so users can supply their own prefix via Pango markup. Examples:
"format-calendar-weeks": "<span color='#99ffdd'><b>W{}</b></span>"
"format-calendar-weeks": "<span color='#99ffdd'><b>WEEK{}</b></span>"
@Alexays can you give a check?
| gharchive/pull-request | 2022-05-13T20:08:29 | 2025-04-01T06:36:41.361421 | {
"authors": [
"LukashonakV",
"tmpm697"
],
"repo": "Alexays/Waybar",
"url": "https://github.com/Alexays/Waybar/pull/1555",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
326150803 | MSYS2 ruby is stalled at 2.4.0-2
I installed ruby in MSYS2 some time ago; it was at version 2.4.0-2. I see that at https://github.com/Alexpux/MSYS2-packages/tree/master/ruby version 2.5.0 is available. I am unable to update to this version with pacman:
$ pacman -Q ruby
ruby 2.4.0-2
$ pacman -S ruby
warning: ruby-2.4.0-2 is up to date -- reinstalling
resolving dependencies...
looking for conflicting packages...
Packages (1) ruby-2.4.0-2
Total Installed Size: 16.16 MiB
Net Upgrade Size: 0.00 MiB
:: Proceed with installation? [Y/n]
It would be nice to get ruby updated to version 2.5.1 too. Thank you!
@OTLabs 32-bit ruby on MSYS/Cygwin crashing:
https://bugs.ruby-lang.org/issues/13999
Ruby 2.6.0 in repo now
| gharchive/issue | 2018-05-24T14:34:52 | 2025-04-01T06:36:41.448524 | {
"authors": [
"Alexpux",
"OTLabs"
],
"repo": "Alexpux/MSYS2-packages",
"url": "https://github.com/Alexpux/MSYS2-packages/issues/1266",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
1698077572 | Tweak FAQ and downloaded files section for participant IDs to allow for future additions
There will be projects released shortly that do not currently have multiple samples that map to the same participant, but we expect future releases to include samples to meet that condition. The investigator has asked us to include participant IDs so that we don't have to make any revisions later.
As such, we're going to need to change the language in this FAQ to cover these cases: https://github.com/AlexsLemonade/scpca-docs/blob/5c66caa36f99b764f9f770e282b5787e318f3029/docs/faq.md#why-do-some-samples-have-missing-participant-ids
Edit: Also need to address this section: https://github.com/AlexsLemonade/scpca-docs/blob/5c66caa36f99b764f9f770e282b5787e318f3029/docs/download_files.md#metadata, so I've updated the title accordingly.
Anyone who implements this should search for all occurrences of "participant" to be sure everything has been addressed.
Closed by #108
| gharchive/issue | 2023-05-05T19:14:44 | 2025-04-01T06:36:41.495167 | {
"authors": [
"allyhawkins",
"jaclyn-taroni"
],
"repo": "AlexsLemonade/scpca-docs",
"url": "https://github.com/AlexsLemonade/scpca-docs/issues/107",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2028954260 | Add version updates before writing metadata
I realized that there was a possibility that when skipping steps in workflows we might inadvertently propagate old version numbers, so I added a function and steps to make sure that the version numbers are correct before we write out any metadata files.
The good news is that we are getting the version info for the output reports and final metadata directly from Nextflow, so those were always correct, but some intermediate scpca-meta.json files might not always have been.
I don't think I care much about this variation, but I can make it uniform...
Yeah, it's small 🥔 . totally up to you.
| gharchive/pull-request | 2023-12-06T16:34:16 | 2025-04-01T06:36:41.497099 | {
"authors": [
"jashapiro",
"sjspielman"
],
"repo": "AlexsLemonade/scpca-nf",
"url": "https://github.com/AlexsLemonade/scpca-nf/pull/606",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1012570616 | Add plots to QC report
Here I am mostly adding the Total vs. gene count plot to the QC report. I also fixed the knee plot and made the session info hidden by default, so closing all three of those issues here.
I was also playing around with #54, which I ended up abandoning, but in that process I added a bit of formatting to the tables, including making them not full-width, and striping the rows.
A few things to focus on that I was thinking about as I did them:
Please also look at all axis labels and legend titles! (Should I make the knee plot use non-scientific notation?)
For consistency among reports, I made the mitochondrial percentage go from 0-100% for all reports. This may make lower numbers harder to see in mostly-good plots (where no cells are above 50%, say). I thought the consistency was more important, but we could tweak the color scale if we think it matters.
The other potentially controversial aesthetic decision I made was to move the figure legends into the figure panels, at least for the knee & UMI/gene plots. In both of these plots, the location of the data is predictable, so in the vast majority of cases the positions I chose will not overlap data. Still, there is a chance they will, so if there is worry, I can move them back out. (when we add the miQC plot, the top right ought to work as a clear space)
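(For reference, the usual way to place a legend inside the panel in ggplot2 looks roughly like the sketch below; the dataset and coordinates are arbitrary illustrations, not the report's actual code.)

```r
library(ggplot2)

# Generic sketch: anchor the legend to the top-right corner inside the
# plot panel, using normalized panel coordinates.
p <- ggplot(mtcars, aes(x = wt, y = mpg, color = factor(cyl))) +
  geom_point() +
  theme(
    legend.position = c(0.98, 0.98),
    legend.justification = c(1, 1)
  )
p
```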
I am attaching an example report (zipped, b/c github), and the two main figures I worked on below, for reference:
closes #39
closes #40
closes #19
closes #55
Adding in the miQC plot as well, which looks like this (not included in the example report above):
Same story here where I moved the legend inside the plot. In most cases the upper right should have no genes, so it ought to be safe, but I can move back outside if that is preferred.
I updated legends and titles as suggested, using sentence case everywhere, and unifying labels.
I moved what I had put as figure captions to alt text because it was largely redundant, as noted, but I thought we should have some kind of figure label beyond the title.
For the knee plot, I chose a more contrasting color scheme, but I didn't want to go to red, because that would imply (to me) that those points failed some QC. So I went with a dark green, and lightened up the grey. I also went with smaller points for the filtered cells. Let me know what you think.
Here is a full report file...
SCPCL000001_qc.html.zip
I don't necessarily think we need the change in size, and I have previously been taught not to double-encode variables where we have both size and color showing whether a cell is considered passed or excluded, but I'll leave that final call up to you.
Yeah, I don't usually do this, but I thought it was worth it in this case, as I think it solves a problem where the two kinds of points overlap.
Yeah, I don't usually do this, but I thought it was worth it in this case, as I think it solves a problem where the two kinds of points overlap.
I would agree, it does help show the separation a lot more. I think for the purposes of this plot it's okay to keep it in.
| gharchive/pull-request | 2021-09-30T20:01:35 | 2025-04-01T06:36:41.505571 | {
"authors": [
"allyhawkins",
"jashapiro"
],
"repo": "AlexsLemonade/scpcaTools",
"url": "https://github.com/AlexsLemonade/scpcaTools/pull/57",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
611234815 | Could you please add the EWelink Smart Wifi Switch Relay Module Timer DC 5 B/12 B/24 B/32 to your component?
Hi Alex,
For your reference:
https://aliexpress.ru/i/33027260480.html
It seems to be a TISHRIC TSR620/DC
Sonoff cloud info about it is:
"1000926297":{"settings":{"opsNotify":0,"opsHistory":1,"alarmNotify":1,"wxAlarmNotify":0,"wxOpsNotify":0,"wxDoorbellNotify":0},"group":"","online":true,"shareUsersInfo":[],"groups":[],"devGroups":[],"_id":"5d84c2daa4285d754675c139","name":"SW_Puerta_Calle","type":"10","deviceid":"1000926297","apikey":"23effe28-d521-4d8f-8a79-e23c97e5c9d9","extra":{"extra":{"uiid":6,"description":"20190516002","brandId":"5735f5f906d9751d4f109629","apmac":"d0:27:01:24:c2:7f","mac":"d0:27:01:24:c2:7e","ui":"单通道开关","modelInfo":"589833ac2f979b623e2f503f","model":"PSF-B01-GL","manufacturer":"郑州市中原区汇诚电子材料经营部","staMac":"60:01:94:D5:BC:41","chipid":"00D5BC41"},"_id":"5cdd2fbb211f3b10753dee66"},"params":{"bindInfos":{"alexa":["621f7a3a-6015-449f-a4ae-9d7403cdc5bc_26ca1996a20e8bd63617ab272d4eeede1d2d8e32"]},"sledOnline":"on","switch":"off","fwVersion":"3.4.0","rssi":-86,"staMac":"60:01:94:D5:BC:41","startup":"off","init":1,"pulse":"off","pulseWidth":500,"version":8},"createdAt":"2019-09-20T12:15:22.938Z","__v":0,"onlineTime":"2020-05-01T07:15:33.008Z","ip":"88.0.108.2","location":"","offlineTime":"2020-05-01T07:14:51.754Z","tags":{"m_c9d9_jime":"on"},"devicekey":"621f7a3a-6015-449f-a4ae-9d7403cdc5bc","deviceUrl":"","brandName":"New Smart ","showBrand":true,"brandLogoUrl":"","productModel":"G1","devConfig":{},"uiid":6},
Alternatively, if you give me some hints I can look at the code and try a contribution on my own. I looked at your code and it looks pretty well organized and done. The only issue is that my Russian is really poor, not more than 10-15 words after 10 years of travelling frequently to Moscow for work.
Apologies, it works fine. My fault.
| gharchive/issue | 2020-05-02T17:18:15 | 2025-04-01T06:36:41.514984 | {
"authors": [
"barto64"
],
"repo": "AlexxIT/SonoffLAN",
"url": "https://github.com/AlexxIT/SonoffLAN/issues/88",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1795168103 | Added add_suffix! and add_prefix! functions to SystemStructure to easily change names, and corresponding set_sname!/set_snames!, etc to StockFlow
Edits to SystemStructure may require further documentation. Currently writing tests for it, though seemed to work in informal tests so far.
Specifically, these change stock, flow, sum variable, dynamic variable and parameter names for feet, stock flows, and open stock flows.
I think the changes are done on the wrong branch, this should be done on a branch based off 7e65d31
| gharchive/pull-request | 2023-07-08T23:48:22 | 2025-04-01T06:36:41.575491 | {
"authors": [
"neonWhiteout"
],
"repo": "AlgebraicJulia/StockFlow.jl",
"url": "https://github.com/AlgebraicJulia/StockFlow.jl/pull/35",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
340545474 | how to configure this in laravel ?
What will be the next step after adding the trait to the model?
go to
http://localhost:8000/odata.svc
http://localhost:8000/odata.svc/$metadata
http://localhost:8000/odata.svc/ModelName
please let me know if it works for you because i'm getting this error
A non well formed numeric value encountered
I have to declare this path in my web.php file, right? How can I use all those functions like $count, $top, etc.?
Have you exposed the model with the trait?
The routes are defined by the route service provider.
@donmbelembe where are you encountering that error
http://localhost:8000/odata.svc/$metadata this URL is working for me, but when I write ModelName it redirects back to a login page
I'm also new to Odata
so i don't know, just trying out
this link: http://localhost:8000/odata.svc/$metadata
produce this error _http://localhost:8000/odata.svc/$metadata_
when I write ModelName it redirecting to Home page
@c-harris yeah and when trying to access http://localhost:8000/odata.svc i'm getting A non well formed numeric value encountered on \vendor\symfony\http-foundation\Response.php
Do we need to add Class 'Symfony\Component\Yaml\Yaml' this manually in the composer
I have checked in composer.json and this class is already there; still I'm getting the same error on this link: http://localhost:8000/odata.svc/$metadata
I've installed composer require symfony/yaml
i'm not getting this error anymore Class 'Symfony\Component\Yaml\Yaml' not found
but still my model redirecting to /home and
http://localhost:8000/odata.svc/$metadata & http://localhost:8000/odata.svc gives this error A non well formed numeric value encountered on \vendor\symfony\http-foundation\Response.php
I'm also getting the same error with laravel 5.6
@donmbelembe , @dsv4890 , can you try again, please? I've rolled up @NoelDeMartin 's pull request that fixed at least one underlying issue.
thank you ,
but unfortunately, after updating to the latest version and setting the trait on the User Model, I'm getting the following error
php artisan package:discover
In MetadataProvider.php line 140:
Undefined index: User
@donmbelembe , thanks for your feedback.
Blast. I'm working on a pull request at the moment that might get you out of a jam (#171 ) - if I give you the details, are you able to update with them and see if pain persists?
of course @CyberiaResurrection
Here we go - you'll need to tweak your project's composer.json file somewhat:
1 - Add the following section:
"repositories": [ { "type": "vcs", "url": "https://github.com/CyberiaResurrection/podata-laravel.git", "no-api": true } ],
2 - Change your podata-laravel dependency line as follows:
"algo-web/podata-laravel": "dev-CleanUpAsserts as dev-master",
After that, you'll need to run composer update. Could you let me know how that goes?
I'm still arguing with getting the test suite working under Laravel 5.6.
Thanks dude, I followed your steps; now http://localhost:8000/ and http://localhost:8000/odata.svc/$metadata are returning a good response. However, when trying to access my model like this http://localhost:8000/odata.svc/ModelName it's redirecting to the home page
@donmbelembe , thanks heaps for your feedback and confirming a certain wombat (ie, @CyberiaResurrection) had not mucked that particular bit up.
This may be a really dumb question - crack up laughing if you feel the need - but are you logged into your project before browsing to odata.svc/ModelName ?
@CyberiaResurrection should we perhaps consider disabling auth by default as it is not detailed in odata itself?
I direct my learned colleague's attention to the README.
@donmbelembe , can you try adding the following line to your project's .env file?
APP_DISABLE_AUTH=true
@CyberiaResurrection About your question: when I'm not logged in it redirects to the login page, and when I'm already logged in it redirects to /home. Actually, my home page has auth middleware
The trait is already set in my model but http://localhost:8000/odata.svc/ModelName is redirecting to /home
@CyberiaResurrection even with APP_DISABLE_AUTH=true i'm getting the same result
@donmbelembe , that's a new problem. APP_DISABLE_AUTH should do what it says on the tin - thanks for rumbling a spot where I've outsmarted myself.
@donmbelembe , I've pushed out what I hope is a fix for APP_DISABLE_AUTH - does that get you out of trouble?
I think It works now.
So http://localhost:8000/odata.svc/User returns an XML with a 404 status code but http://localhost:8000/odata.svc/Users returns data, so this means we must call the model in plural?
Maybe I should learn more about Odata, I'm testing first
@donmbelembe , thanks for your continued feedback and patience.
We followed Laravel convention and thus pluralised the model names to use as endpoints. That's an implementation choice in POData-Laravel, not anything intrinsic to OData itself.
Okey. Thanks to you also for your help
Thanks for your bug reports. As you've confirmed that your issue is fixed in that branch, I'll close this issue.
@donmbelembe , I've rolled those changes up, so you'll need to undo those changes I asked you to make in your project's composer.json file.
| gharchive/issue | 2018-07-12T08:50:51 | 2025-04-01T06:36:41.605612 | {
"authors": [
"CyberiaResurrection",
"c-harris",
"donmbelembe",
"dsv4890"
],
"repo": "Algo-Web/POData-Laravel",
"url": "https://github.com/Algo-Web/POData-Laravel/issues/170",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1227320630 | Add torchvision
Could be useful for multimedia!
closing wontfix
| gharchive/issue | 2022-05-06T01:16:47 | 2025-04-01T06:36:41.607441 | {
"authors": [
"znmeb"
],
"repo": "AlgoCompSynth/AlgoCompSynth-One",
"url": "https://github.com/AlgoCompSynth/AlgoCompSynth-One/issues/41",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
782261220 | Add one person to the group
Please merge.
One person was added to the group and I added their link to the group. Thank you.
Hi, you cannot touch the main README
Hi, you cannot touch the main README
| gharchive/pull-request | 2021-01-08T16:54:27 | 2025-04-01T06:36:41.622708 | {
"authors": [
"FATEMEHVAKILI",
"saharzeinivand"
],
"repo": "AliRazavi-edu/PNU_3991",
"url": "https://github.com/AliRazavi-edu/PNU_3991/pull/661",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
676505131 | Problem with docker build
I'm struggling to get the proper linux environment to work for this, so I've resorted to using docker. I eventually got the program to work with the provided docker build instructions about a week or so ago on WSL2, but it doesn't seem to work for me any more.
docker build -t first-order-model . fails for me on this step: full log
Collecting opencv-python (from face-alignment==1.1.0)
Downloading https://files.pythonhosted.org/packages/a1/d6/8422797e35f8814b1d9842530566a949d9b5850a466321a6c1d5a99055ee/opencv-python-4.3.0.38.tar.gz (88.0MB)
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-build-wuc0x2vh/opencv-python/setup.py", line 9, in <module>
import skbuild
ModuleNotFoundError: No module named 'skbuild'
Any ideas to get this working again?
Running on a fresh WSL2 Ubuntu build following the instructions here. Building with a fresh clone of this repo as well.
Thank you!
Hi, I have exactly the same problem after downloading OpenVino toolkit for linux with FPGA support version 2020.4.287.
The same issue is being solved in this thread, but I do not know how to download the changed files.
https://github.com/OpenVisualCloud/Dockerfiles/issues/549
Hi, this problem is described in an issue in the official opencv-python repo and in their FAQ.
Submitted a pull request to fix it
Submitted a pull request to fix it
This worked, thank you
Edit the Dockerfile and add RUN pip3 install scikit-build==0.11.1 before the last command, roughly like this:
FROM nvcr.io/nvidia/cuda:10.0-cudnn7-runtime-ubuntu18.04
RUN DEBIAN_FRONTEND=noninteractive apt-get -qq update \
&& DEBIAN_FRONTEND=noninteractive apt-get -qqy install python3-pip ffmpeg git less nano libsm6 libxext6 libxrender-dev \
&& rm -rf /var/lib/apt/lists/*
COPY . /app/
WORKDIR /app
RUN pip3 install scikit-build==0.11.1
RUN pip3 install setuptools
RUN pip3 install \
https://download.pytorch.org/whl/cu100/torch-1.0.0-cp36-cp36m-linux_x86_64.whl \
git+https://github.com/1adrianb/face-alignment \
-r requirements.txt
| gharchive/issue | 2020-08-11T01:03:02 | 2025-04-01T06:36:41.628780 | {
"authors": [
"Grandmother",
"ishc3ice",
"koles289",
"lobomfz",
"mgleed"
],
"repo": "AliaksandrSiarohin/first-order-model",
"url": "https://github.com/AliaksandrSiarohin/first-order-model/issues/226",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
281400490 | Ability to sort column either by number or letter
Hi! Is there any way to add 2 options to the header of a column (which has alphanumeric data) that allow the user to either sort by number or by letter? Thank you for a job well done!
@rikkirabz sorry for the late reply. Currently, it's hard to achieve your requirement in react-bootstrap-table, but I'll consider enhancing it in react-bootstrap-table2 when I have time.
Thank you :)
| gharchive/issue | 2017-12-12T14:19:43 | 2025-04-01T06:36:41.732801 | {
"authors": [
"AllenFang",
"rikkirabz"
],
"repo": "AllenFang/react-bootstrap-table",
"url": "https://github.com/AllenFang/react-bootstrap-table/issues/1805",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1055660086 | worker heap memory use high
Alluxio Version:
alluxio 2.6.1
Describe the bug
The Alluxio worker uses memory as follows:
This only shows heap memory, and only the Alluxio worker's usage, without the job worker.
worker jvm setting:
ALLUXIO_WORKER_JAVA_OPTS: ' -Dalluxio.worker.rpc.port=30062 -Dalluxio.worker.web.port=30063
-Dalluxio.worker.container.hostname=${ALLUXIO_WORKER_CONTAINER_HOSTNAME} -Dalluxio.worker.memory.size=70Gi
-Dalluxio.worker.hostname=${ALLUXIO_WORKER_HOSTNAME} -Xmx20g -Xms20g -XX:MaxDirectMemorySize=50g'
And the metaspaceSize show as follows:
jinfo -flag MaxMetaspaceSize 1
-XX:MaxMetaspaceSize=18446744073709547520
jinfo -flag MetaspaceSize 1
-XX:MetaspaceSize=21807104
To Reproduce
Spark writes to Alluxio
Expected behavior
can this be smaller?
6 Spark jobs write to Alluxio; each Spark job has 20 executors and 4 CPU * 8 memory per executor
@lilyzhoupeijie are you able to get one or more heap dump when the worker memory usage is high?
@yuzhu any suggestions?
K8S calculates memory usage as process memory + process page cache and will issue an OOM kill when more than 100GB is used
The process memory itself is small
K8S doesn't have a way to clear the page cache
One option is to have the worker process spawn a new process that cleans the page cache
Another manual option is to clean the process page cache
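(As an illustration of the manual approach, the generic Linux way to drop the page cache is shown below; this is not Alluxio-specific, needs root, and affects the whole node, so treat it as a sketch only.)

```sh
# Flush dirty pages, then ask the kernel to drop the clean page cache.
sync
echo 1 > /proc/sys/vm/drop_caches
```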
Closing because of the K8S constraint
| gharchive/issue | 2021-11-17T03:22:04 | 2025-04-01T06:36:41.778694 | {
"authors": [
"LuQQiu",
"lilyzhoupeijie"
],
"repo": "Alluxio/alluxio",
"url": "https://github.com/Alluxio/alluxio/issues/14531",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1041329901 | Fix BlockMasterIntegrityIntegrationTest for worker stream register
This test fails when the worker registers with a stream, due to how the operations are rearranged for the streaming registration.
Although this is on the same codepath for unary RPC register, this does not affect the correctness.
@HelloHorizon fyi
Codecov Report
Merging #14363 (eed1910) into master (7f30116) will decrease coverage by 25.06%.
The diff coverage is 0.00%.
@@ Coverage Diff @@
## master #14363 +/- ##
=============================================
- Coverage 41.96% 16.90% -25.07%
+ Complexity 9341 2698 -6643
=============================================
Files 1492 1492
Lines 87257 87258 +1
Branches 10417 10417
=============================================
- Hits 36621 14753 -21868
- Misses 47667 71266 +23599
+ Partials 2969 1239 -1730
Impacted Files | Coverage Δ
.../java/alluxio/master/block/DefaultBlockMaster.java | 40.62% <0.00%> (-35.40%) :arrow_down:
...mon/src/main/java/alluxio/shell/CommandReturn.java | 0.00% <0.00%> (-100.00%) :arrow_down:
...mon/src/main/java/alluxio/util/ExceptionUtils.java | 0.00% <0.00%> (-100.00%) :arrow_down:
...n/src/main/java/alluxio/wire/AlluxioProxyInfo.java | 0.00% <0.00%> (-100.00%) :arrow_down:
.../src/main/java/alluxio/wire/AlluxioMasterInfo.java | 0.00% <0.00%> (-100.00%) :arrow_down:
...src/main/java/alluxio/job/meta/JobIdGenerator.java | 0.00% <0.00%> (-100.00%) :arrow_down:
...n/src/main/java/alluxio/stress/BaseParameters.java | 0.00% <0.00%> (-100.00%) :arrow_down:
...src/main/java/alluxio/client/UnderStorageType.java | 0.00% <0.00%> (-100.00%) :arrow_down:
...main/java/alluxio/underfs/options/ListOptions.java | 0.00% <0.00%> (-100.00%) :arrow_down:
...main/java/alluxio/worker/block/io/BlockReader.java | 0.00% <0.00%> (-100.00%) :arrow_down:
... and 684 more
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 7f30116...eed1910. Read the comment docs.
alluxio-bot, merge this please
alluxio-bot, cherry-pick this to branch-2.7 please
alluxio-bot, cherry-pick this to branch-2.7 please
| gharchive/pull-request | 2021-11-01T15:59:11 | 2025-04-01T06:36:41.797331 | {
"authors": [
"Xenorith",
"codecov-commenter",
"ggezer",
"jiacheliu3"
],
"repo": "Alluxio/alluxio",
"url": "https://github.com/Alluxio/alluxio/pull/14363",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
316398406 | [SMALLFIX] Concurrency test speed up
100ms sleep is okay on my local machine.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/Alluxio-Pull-Request-Builder/19332/
Test PASSed.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/Alluxio-Pull-Request-Builder/19367/
Test PASSed.
| gharchive/pull-request | 2018-04-20T20:15:57 | 2025-04-01T06:36:41.800899 | {
"authors": [
"AmplabJenkins",
"calvinjia"
],
"repo": "Alluxio/alluxio",
"url": "https://github.com/Alluxio/alluxio/pull/7183",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
797811731 | Design and implement default InventoryView
In the fxgl-trade module we have the Inventory API; we now need to provide a default InventoryView with some reasonable settings, such as width and height, and a default look.
Most of the time, developers will produce their own views using Inventory.
We should consider whether the "sort" functionality is the responsibility of the view or the model. Example: sort by name...
Something like the inventory in Zephyria
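(A rough sketch of what such a default view could look like, written against plain JavaFX rather than FXGL's actual API; the class name, sizes, and styling below are assumptions for illustration only.)

```java
import javafx.geometry.Insets;
import javafx.scene.control.ListView;
import javafx.scene.layout.VBox;

// Hypothetical default inventory view: fixed size, simple list of item names.
public class DefaultInventoryView extends VBox {

    private static final double DEFAULT_WIDTH = 250;
    private static final double DEFAULT_HEIGHT = 400;

    private final ListView<String> itemList = new ListView<>();

    public DefaultInventoryView() {
        setPrefSize(DEFAULT_WIDTH, DEFAULT_HEIGHT);
        setPadding(new Insets(10));
        setStyle("-fx-background-color: rgba(0, 0, 0, 0.6);");
        getChildren().add(itemList);
    }

    // Sorting by name lives in the view here, but it could equally belong
    // to the Inventory model, as discussed above.
    public void setItems(java.util.List<String> itemNames) {
        itemList.getItems().setAll(itemNames);
        itemList.getItems().sort(String::compareTo);
    }
}
```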
| gharchive/issue | 2021-01-31T21:05:01 | 2025-04-01T06:36:41.804896 | {
"authors": [
"AlmasB"
],
"repo": "AlmasB/FXGL",
"url": "https://github.com/AlmasB/FXGL/issues/957",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
654115024 | Improve Smart QR Code Scanner [Fiji]
It is a bit of a complex task.
Zeplin:
When scanning ETH Address
a) Send to this Address https://app.zeplin.io/project/5d088205bff2d15de6a4397b/dashboard?seid=5e6f641b060131115f1bc1c6
b) Add to Address Book https://app.zeplin.io/project/5d088205bff2d15de6a4397b/dashboard?seid=5e6f648d82a344183b52c43c
c) Watch Wallet https://app.zeplin.io/project/5d088205bff2d15de6a4397b/dashboard?seid=5e6f64c5fae46e11eefb46ae
When scanning Payment Request
https://app.zeplin.io/project/5d088205bff2d15de6a4397b/dashboard?seid=5e6f6531bc13e3160fe28da9
When scanning to add custom token
https://app.zeplin.io/project/5d088205bff2d15de6a4397b/dashboard?seid=5e6f63483eea5014c74bec75
When scanned QR is not ETH
https://app.zeplin.io/project/5d088205bff2d15de6a4397b/dashboard?seid=5e6f65e9599980122923f129
When QR is a link (to open in browser)
https://app.zeplin.io/project/5d088205bff2d15de6a4397b/dashboard?seid=5e6f6503de850113ba370a97
Flow (from iOS)
https://drive.google.com/a/alphawallet.com/file/d/14ge8G3pIM513myKZbDrUyjA2bvRGzFXR/view?usp=sharing
When scanning ETH Address - Add to Contacts
When scanning ETH Address - Watch Wallet
When scanning ETH Address - Send to the Address
When scanning Payment Request
When scanning to Add Custom Token
When scanned QR is not ETH
When QR is a link (to open in browser)
Which ones of these aren't working?
@tomekalphawallet could you clarify the 'add custom token' flow? Currently we access that through the + button on the wallet screen. Does this issue replace that behaviour, and if so do we need to remove that plus button? Or do you still get to the scan via clicking the plus, then clicking the scan button on the 'add custom token' activity?
It does not replace that behaviour, but goes in parallel. So if you want to add custom token you can:
a) launch a qr code scanner, scan a smart contract qr, hit "Add Token" to confirm adding prefilled form (symbol, balance, contract address, token's name)
b) you can add a custom token by: hit add/hide token in the wallet tab, hit "+", and enter details.
Makes sense?
--
Also, can I ask you to update the + button? I got feedback that it is not very visible. We can make it more prominent. The new one is the same size (24x24), so it should be easy to replace.
https://zpl.io/2yAgxNo
This icon is used:
Change Wallet
Address Book
Add/Hide Tokens
@ChintanRathod I applied Tomek's suggestion to display an error within the QR scanner and resume scanning, rather than fold back to home and show scan error (as I mistakenly directed you to do), so no need to apply that fix now.
Addressbook integration is still being designed, so you can just add this as a hidden option that doesn't go anywhere for now.
@mpaschenko This is a very old issue, but still waiting for implementation. Please check out how this works in iOS.
Run QR Code camera (top right in the wallet tab) and scan the above QR in iOS.
iOS
@hboon The only task left from Fiji.
Bump: @JamesSmartCell @mpaschenko
Hi @colourfreak, can you clarify: in the graphic above, underneath 'When scanning Payment Request', the ActionSheet looks like it pops up directly after the scan rather than going to the send screen. I like this (I have seen cases where people needed to implement something this simple at conferences, and failed terribly). Is this your intention?
Also @colourfreak when scanning a pure address, then clicking on 'send' it defaults to Ethereum Mainnet. It would be good to be able to pick which network we want to send to, although this isn't critical.
Hi @colourfreak, can you clarify: in the graphic above, underneath 'When scanning Payment Request', the ActionSheet looks like it pops up directly after the scan rather than going to the send screen. I like this (I have seen cases where people needed to implement something this simple at conferences, and failed terribly). Is this your intention?
@JamesSmartCell I remember that Weiwu wanted me to add this. I was not sure about the usage. However, I have seen something similar at a conference. So you had a printed QR code (almost like a banknote) and someone can scan it and get paid with Lightning Network on Bitcoin.
| gharchive/issue | 2020-07-09T14:40:23 | 2025-04-01T06:36:41.823375 | {
"authors": [
"JamesSmartCell",
"colourfreak",
"hboon",
"tomekalphawallet"
],
"repo": "AlphaWallet/alpha-wallet-android",
"url": "https://github.com/AlphaWallet/alpha-wallet-android/issues/1504",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
665772810 | allow nonce of 0?
Feedback from a user:
Question - I’m trying to specify a nonce of 0 to replace a pending tx of the same nonce (0 - first transfer from wallet). But Alpha Wallet is telling me the nonce must be a positive integer - it won’t let me specify 0.
I verified that this is a bug - 0 should be allowed; a nonce of 0 is a perfectly valid value.
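(For illustration, the kind of check implied here looks roughly like the following; this is hypothetical code, not AlphaWallet's actual validation logic.)

```java
public final class NonceValidator {
    // Hypothetical validation: a custom nonce is valid when it is a
    // non-negative integer, so 0 (the first transaction from a wallet) passes.
    public static boolean isValidNonce(String input) {
        try {
            long nonce = Long.parseLong(input.trim());
            return nonce >= 0; // a "> 0" check would wrongly reject 0
        } catch (NumberFormatException e) {
            return false;
        }
    }
}
```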
Can you try to reproduce this problem by:
Install a fresh AlphaWallet and generate a new address
Send some test-ether to the new address
Send it out
Speed up
See if you can capture the error in full and paste here.
Your Ethereum address starts with a nonce of 0 and increases by 1 with each transaction that's confirmed.
I think this can be a hotfix release candidate. I don't know how to fix it but shouldn't be difficult to figure out (by any one including chintan)
Looks like this issue is from a previous build. Nonce control is not available in the latest Play Store version of AlphaWallet, as the current version includes the SpeedUp function.
| gharchive/issue | 2020-07-26T12:39:15 | 2025-04-01T06:36:41.827108 | {
"authors": [
"AW-STJ",
"colourful-land"
],
"repo": "AlphaWallet/alpha-wallet-android",
"url": "https://github.com/AlphaWallet/alpha-wallet-android/issues/1533",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1052753299 | Add thin gray lines behind the chart with edge values #3418
Closes #3418
Force pushed after rebase master
Also see if you can add the labels on the right?
but this PR is already displaying them, on the right side of the chart
Oh, what was I thinking. Hah!. Thanks
| gharchive/pull-request | 2021-11-13T19:27:28 | 2025-04-01T06:36:41.829058 | {
"authors": [
"hboon",
"vladyslav-iosdev"
],
"repo": "AlphaWallet/alpha-wallet-ios",
"url": "https://github.com/AlphaWallet/alpha-wallet-ios/pull/3428",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
672781096 | How do I use HashMapStrategy? Documentation unclear, has no examples.
Usage of HashMapStrategy is unclear from the documentation. I need to randomly create hashmaps that meet certain conditions. From what I can see, I'm supposed to supply it with 2 other strategies to generate the keys and the values, but I'm not exactly sure how I'm supposed to make the SizeRange. Should I even be using the hash_map function?
Related documentation:
https://docs.rs/proptest/0.10.0/proptest/collection/fn.hash_map.html
https://docs.rs/proptest/0.10.0/proptest/collection/struct.HashMapStrategy.html
HashMapStrategy is literally a thin wrapper around a strategy for creating Vec<(Key, Value)> up to the specified size and filtered to not go below the minimum.
So if you e.g. use hash_map(key_strat, val_strat, 5..10), then what happens is that a vec((key_strat, val_strat), 5..10) is created, and then this is mapped using vec.into_iter().collect(), giving you a HashMap of 5..10 elements with keys drawn from key_strat and values from val_strat.
Note that most RangeXYZ types implement Into<SizeRange>, which is what hash_map accepts, so you can use the normal Rust range syntax.
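(For illustration, a minimal test using hash_map might look like the sketch below; the key/value strategies and size range are arbitrary assumptions, not taken from the thread.)

```rust
use proptest::collection::hash_map;
use proptest::prelude::*;

proptest! {
    #[test]
    fn generates_maps_in_range(map in hash_map(0u32..100, ".{1,8}", 5..10usize)) {
        // `map` is a HashMap<u32, String> with 5..10 entries:
        // keys drawn from 0..100, values matching the regex ".{1,8}".
        prop_assert!(map.len() >= 5 && map.len() < 10);
    }
}
```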
I hope that answers your question.
Ah, I see. Can I use prop_filter on it? or do I just have to ignore values I can't use?
Sure, you can use .prop_filter on the key_strat, value_strat, or the final hash_map(...) strategy. Doing so on the latter will give you a &HashMap<K, V> which you can apply a predicate to. You can also ignore irrelevant maps in each test itself as well, but that will scale less well if the strategy is used a lot. If you can create a correct key/value/map by construction, then that is preferable to filtering (see the book / docs on prop_filter for elaboration).
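(A sketch of filtering at the different levels mentioned; both predicates are invented purely for illustration.)

```rust
use std::collections::HashMap;

use proptest::collection::hash_map;
use proptest::prelude::*;

// Strategy for maps whose keys are even and where at least one value is
// longer than 3 characters.
fn filtered_map() -> impl Strategy<Value = HashMap<u32, String>> {
    let even_keys = (0u32..1000).prop_filter("keys must be even", |k| k % 2 == 0);
    hash_map(even_keys, ".{0,8}", 1..20usize)
        .prop_filter("need one long value", |m| m.values().any(|v| v.len() > 3))
}
```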
Thanks for your help! Closing...
| gharchive/issue | 2020-08-04T13:10:57 | 2025-04-01T06:36:41.836228 | {
"authors": [
"Centril",
"dyc3"
],
"repo": "AltSysrq/proptest",
"url": "https://github.com/AltSysrq/proptest/issues/204",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2687974620 | Add clipboard copy to formatted SQL display
fix #659
tested manually, all works
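(The PR body doesn't show the change itself; as a generic sketch, copying formatted SQL to the clipboard from a browser-based plugin usually looks roughly like the hypothetical helper below, which is not the plugin's actual code.)

```typescript
// Hypothetical helper: copy the formatted SQL text to the clipboard,
// falling back to a temporary textarea for older browsers.
export async function copyFormattedSql(sql: string): Promise<void> {
  if (navigator.clipboard && navigator.clipboard.writeText) {
    await navigator.clipboard.writeText(sql);
    return;
  }
  const textarea = document.createElement('textarea');
  textarea.value = sql;
  document.body.appendChild(textarea);
  textarea.select();
  document.execCommand('copy'); // deprecated, but a common fallback
  document.body.removeChild(textarea);
}
```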
| gharchive/pull-request | 2024-11-24T16:37:59 | 2025-04-01T06:36:41.852367 | {
"authors": [
"Slach",
"lunaticusgreen"
],
"repo": "Altinity/clickhouse-grafana",
"url": "https://github.com/Altinity/clickhouse-grafana/pull/671",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1858632643 | sudden disappearance of CHI pods
After running ClickHouse for a couple of months, the CHI pod has disappeared on its own.
@hodgesrm had advised adding stop: "no" as a fix. However, I had already resolved it by changing the shard count and reverting, to trigger re-creation of the CHI pod (changing the state of the CHI out of Completed forces pod re-creation).
Nevertheless, we have been unable to find the root cause of the issue, so it could re-appear in the future.
CHI definition:
apiVersion: clickhouse.altinity.com/v1
kind: ClickHouseInstallation
metadata:
name: my-release-clickhouse
namespace: default
labels:
helm.sh/chart: clickhouse-23.12.2
app.kubernetes.io/name: clickhouse
app.kubernetes.io/instance: my-release
app.kubernetes.io/component: clickhouse
app.kubernetes.io/version: "22.8.8"
app.kubernetes.io/managed-by: Helm
spec:
defaults:
templates:
dataVolumeClaimTemplate: data-volumeclaim-template
serviceTemplate: service-template
configuration:
users:
admin/password: redacted
admin/networks/ip:
- "10.0.0.0/8"
- "100.64.0.0/10"
- "172.16.0.0/12"
- "192.0.0.0/24"
- "198.18.0.0/15"
- "192.168.0.0/16"
admin/profile: default
admin/quota: default
profiles:
default/allow_experimental_window_functions: "1"
default/allow_nondeterministic_mutations: "1"
clusters:
- name: "cluster"
templates:
podTemplate: pod-template
layout:
replicasCount: 1
shardsCount: 1
settings:
format_schema_path: /etc/clickhouse-server/config.d/
prometheus/endpoint: /metrics
prometheus/port: 9363
user_defined_executable_functions_config: /etc/clickhouse-server/functions/custom-functions.xml
user_scripts_path: /var/lib/clickhouse/user_scripts/
files:
events.proto: |
syntax = "proto3";
message Event {
string uuid = 1;
string event = 2;
string properties = 3;
string timestamp = 4;
uint64 team_id = 5;
string distinct_id = 6;
string created_at = 7;
string elements_chain = 8;
}
zookeeper:
nodes:
- host: my-release-zookeeper-0.my-release-zookeeper-headless
port: 2181
templates:
podTemplates:
- name: pod-template
metadata:
labels:
helm.sh/chart: clickhouse-23.12.2
app.kubernetes.io/name: clickhouse
app.kubernetes.io/instance: my-release
app.kubernetes.io/component: clickhouse
app.kubernetes.io/version: "22.8.8"
app.kubernetes.io/managed-by: Helm
annotations: {}
podDistribution:
- topologyKey: kubernetes.io/hostname
type: ReplicaAntiAffinity
spec:
serviceAccountName: my-release-clickhouse
priorityClassName: ""
securityContext:
fsGroup: 101
runAsGroup: 101
runAsUser: 101
volumes:
- name: shared-binary-volume
emptyDir: {}
- name: custom-functions-volume
configMap:
name: my-release-clickhouse-custom-functions
initContainers: []
containers:
- name: clickhouse
image: docker.io/clickhouse/clickhouse-server:22.8.8-alpine
imagePullPolicy: IfNotPresent
command:
- /bin/bash
- -c
- /usr/bin/clickhouse-server --config-file=/etc/clickhouse-server/config.xml
ports:
- name: http
containerPort: 8123
- name: client
containerPort: 9000
- name: interserver
containerPort: 9009
volumeMounts:
- name: data-volumeclaim-template
mountPath: /var/lib/clickhouse
- name: shared-binary-volume
mountPath: /var/lib/clickhouse/user_scripts
- name: custom-functions-volume
mountPath: /etc/clickhouse-server/functions
resources:
requests:
cpu: 100m
memory: 200Mi
serviceTemplates:
- name: service-template
generateName: my-release-clickhouse
metadata:
labels:
helm.sh/chart: clickhouse-23.12.2
app.kubernetes.io/name: clickhouse
app.kubernetes.io/instance: my-release
app.kubernetes.io/component: clickhouse
app.kubernetes.io/version: "22.8.8"
app.kubernetes.io/managed-by: Helm
spec:
type: ClusterIP
ports:
- name: http
port: 8123
nodePort: null
- name: tcp
port: 9000
nodePort: null
volumeClaimTemplates:
- name: data-volumeclaim-template
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Gi
storageClassName: standard
Context: Slack Message
Is your kind: ClickHouseInstallation managed by the same Helm chart with clickhouse-operator?
Could you share the output of the following commands?
kubectl get deploy --all-namespaces -l app=clickhouse-operator
kubectl get pod --all-namespaces -l clickhouse.altinity.com/chi
@Slach Yes, it is managed by the same chart with clickhouse-operator.
kubectl get deploy --all-namespaces -l app.kubernetes.io/component=operator
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
default my-release-clickhouse-operator 1/1 1 1 60d
test my-release-clickhouse-operator 1/1 1 1 38d
kubectl get pod --all-namespaces -l clickhouse.altinity.com/chi
NAMESPACE NAME READY STATUS RESTARTS AGE
default chi-my-release-clickhouse-cluster-0-0-0 1/1 Running 0 3d21h
test chi-my-release-clickhouse-cluster-0-0-0 1/1 Running 0 4d19h
could you share operator logs from test namespace?
kubectl logs -n test -l app.kubernetes.io/component=operator --since=7d
I was getting the error below:
error: invalid argument "7d" for "--since" flag: time: unknown unit "d" in duration "7d"
Hence, replaced 7d with 168h:
$ kubectl logs -n test -l app.kubernetes.io/component=operator --since=168h
Defaulted container "my-release-clickhouse-operator" out of: my-release-clickhouse-operator, my-release-clickhouse-metrics-exporter
E0822 10:51:33.116468 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.124.0.1:443/api/v1/namespaces/test/services?resourceVersion=101836934": dial tcp 10.124.0.1:443: i/o timeout
I0822 10:51:38.125884 1 trace.go:205] Trace[301511361]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (22-Aug-2023 10:51:08.124) (total time: 30001ms):
Trace[301511361]: [30.00114905s] [30.00114905s] END
E0822 10:51:38.125902 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://10.124.0.1:443/apis/apps/v1/namespaces/test/statefulsets?resourceVersion=101836529": dial tcp 10.124.0.1:443: i/o timeout
I0822 10:51:38.355293 1 trace.go:205] Trace[2060365486]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (22-Aug-2023 10:51:08.354) (total time: 30001ms):
Trace[2060365486]: [30.001212232s] [30.001212232s] END
E0822 10:51:38.355312 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.124.0.1:443/api/v1/namespaces/test/configmaps?resourceVersion=101837009": dial tcp 10.124.0.1:443: i/o timeout
I0822 10:51:42.173930 1 trace.go:205] Trace[250496038]: "Reflector ListAndWatch" name:pkg/client/informers/externalversions/factory.go:117 (22-Aug-2023 10:51:12.172) (total time: 30001ms):
Trace[250496038]: [30.001189387s] [30.001189387s] END
E0822 10:51:42.173949 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:117: Failed to watch *v1.ClickHouseOperatorConfiguration: failed to list *v1.ClickHouseOperatorConfiguration: Get "https://10.124.0.1:443/apis/clickhouse.altinity.com/v1/namespaces/test/clickhouseoperatorconfigurations?resourceVersion=101836611": dial tcp 10.124.0.1:443: i/o timeout
$ kubectl logs -n default -l app.kubernetes.io/component=operator --since=168h
Defaulted container "my-release-clickhouse-operator" out of: my-release-clickhouse-operator, my-release-clickhouse-metrics-exporter
E0822 10:51:38.591455 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.124.0.1:443/api/v1/namespaces/default/services?resourceVersion=101836934": dial tcp 10.124.0.1:443: i/o timeout
I0822 10:51:41.314893 1 trace.go:205] Trace[1051692860]: "Reflector ListAndWatch" name:pkg/client/informers/externalversions/factory.go:117 (22-Aug-2023 10:51:11.313) (total time: 30001ms):
Trace[1051692860]: [30.001164853s] [30.001164853s] END
E0822 10:51:41.314912 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:117: Failed to watch *v1.ClickHouseOperatorConfiguration: failed to list *v1.ClickHouseOperatorConfiguration: Get "https://10.124.0.1:443/apis/clickhouse.altinity.com/v1/namespaces/default/clickhouseoperatorconfigurations?resourceVersion=101836881": dial tcp 10.124.0.1:443: i/o timeout
I0822 10:51:41.768145 1 trace.go:205] Trace[523315261]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (22-Aug-2023 10:51:11.767) (total time: 30000ms):
Trace[523315261]: [30.000906333s] [30.000906333s] END
E0822 10:51:41.768163 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.124.0.1:443/api/v1/namespaces/default/configmaps?resourceVersion=101837009": dial tcp 10.124.0.1:443: i/o timeout
I0822 10:51:42.946368 1 trace.go:205] Trace[838394852]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (22-Aug-2023 10:51:12.945) (total time: 30000ms):
Trace[838394852]: [30.000528326s] [30.000528326s] END
E0822 10:51:42.946386 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.124.0.1:443/api/v1/namespaces/default/pods?resourceVersion=101836916": dial tcp 10.124.0.1:443: i/o timeout
looks like nothing happened on 22 Aug
Could you share?
kubectl logs -n default -l app.kubernetes.io/component=operator -c clickhouse-operator
kubectl logs -n test -l app.kubernetes.io/component=operator -c clickhouse-operator
Sure, here are the logs:
$ kubectl logs -n default -l app.kubernetes.io/component=operator -c my-release-clickhouse-operator
I0823 18:03:06.328611 1 worker-reconciler.go:111] worker-reconciler.go:52:reconcileCHI():end:default/my-release-clickhouse
I0823 18:03:06.328632 1 worker.go:414] worker.go:380:updateCHI():end:default/my-release-clickhouse
E0823 18:03:49.676857 1 connection.go:190] Exec():FAILED Exec(http://clickhouse_operator:***@chi-my-release-clickhouse-cluster-0-0.default.svc.cluster.local:8123/) context deadline exceeded for SQL: SYSTEM DROP DNS CACHE
W0823 18:03:49.712800 1 retry.go:52] exec():chi-my-release-clickhouse-cluster-0-0.default.svc.cluster.local:FAILED single try. No retries will be made for Applying sqls
I0823 18:03:49.801384 1 worker.go:339] default/my-release-clickhouse/5d89fdfb-a136-430a-a8a5-310c07edef5b:IPs of the CHI-1 [10.120.1.155]
I0823 18:03:49.812084 1 worker.go:343] default/my-release-clickhouse/13b958a0-3cfd-46a6-a3ce-368a46cc97df:Update users IPS-1
I0823 18:03:49.821067 1 worker.go:1089] updateConfigMap():default/my-release-clickhouse/13b958a0-3cfd-46a6-a3ce-368a46cc97df:Update ConfigMap default/chi-my-release-clickhouse-common-usersd
I0823 18:03:49.903031 1 worker.go:339] default/my-release-clickhouse/88d992b4-a76a-4a2b-af19-2cc81d64548a:IPs of the CHI-1 [10.120.1.155]
I0823 18:03:49.910434 1 worker.go:343] default/my-release-clickhouse/5892dd12-16a8-4e6f-baf6-e5ec8776277d:Update users IPS-1
I0823 18:03:49.914824 1 worker.go:1089] updateConfigMap():default/my-release-clickhouse/5892dd12-16a8-4e6f-baf6-e5ec8776277d:Update ConfigMap default/chi-my-release-clickhouse-common-usersd
$ kubectl logs -n test -l app.kubernetes.io/component=operator -c my-release-clickhouse-operator
I0823 17:13:58.953558 1 worker-deleter.go:64] worker-deleter.go:64:dropReplicas():start:test/my-release-clickhouse/f3f569fa-3d23-4b03-a914-71c7bf07cf7d:drop replicas based on AP
I0823 17:13:58.953588 1 worker-deleter.go:81] worker-deleter.go:81:dropReplicas():end:test/my-release-clickhouse/f3f569fa-3d23-4b03-a914-71c7bf07cf7d:processed replicas: 0
I0823 17:13:58.953612 1 worker.go:573] includeStopped():test/my-release-clickhouse/f3f569fa-3d23-4b03-a914-71c7bf07cf7d:add CHI to monitoring
I0823 17:13:59.355282 1 controller.go:609] OK update watch (test/my-release-clickhouse): {"namespace":"test","name":"my-release-clickhouse","clusters":[{"name":"cluster","hosts":[{"name":"0-0","hostname":"chi-my-release-clickhouse-cluster-0-0.test.svc.cluster.local","tcpPort":9000,"httpPort":8123}]}]}
I0823 17:13:59.360534 1 worker.go:540] test/my-release-clickhouse:all IP addresses are in place
I0823 17:13:59.526393 1 worker.go:611] test/my-release-clickhouse/7de65ad0-12b1-4823-8657-ca1e4984bb8f:IPs of the CHI-2 [10.120.1.133]
I0823 17:13:59.534938 1 worker.go:615] test/my-release-clickhouse/789d4f97-3cee-499d-9e78-631399e02bea:Update users IPS-2
I0823 17:13:59.748918 1 worker.go:636] finalizeReconcileAndMarkCompleted():test/my-release-clickhouse/f3f569fa-3d23-4b03-a914-71c7bf07cf7d:reconcile completed successfully, task id: f3f569fa-3d23-4b03-a914-71c7bf07cf7d
I0823 17:14:00.146065 1 worker-reconciler.go:111] worker-reconciler.go:52:reconcileCHI():end:test/my-release-clickhouse
I0823 17:14:00.146085 1 worker.go:414] worker.go:380:updateCHI():end:test/my-release-clickhouse
=( these are not the full logs; this is only the log for 23 Aug
@Slach This is all there is. :(
Closing it as it is not reproducible. Could be something related to Helm
@alex-zaitsev This is happening more often recently when trying to increase the ClickHouse PVC size in the ClickHouse installation. Not able to pinpoint the root cause of the issue.
Operator logs:
I0729 03:49:26.917146 1 worker.go:1089] updateConfigMap():my-release/my-release-clickhouse/58053693-2b8a-4d1d-b5ed-e0432448d655:Update ConfigMap my-release/chi-my-release-clickhouse-deploy-confd-cluster-0-0
I0729 03:49:27.928385 1 cluster.go:84] Run query on: chi-my-release-clickhouse-cluster-0-0.my-release.svc.cluster.local of [chi-my-release-clickhouse-cluster-0-0.my-release.svc.cluster.local]
I0729 03:49:28.033592 1 worker-reconciler.go:292] reconcileHostStatefulSet():Reconcile host 0-0. ClickHouse version: 24.1.2.5
I0729 03:49:28.033769 1 worker.go:134] shouldForceRestartHost():Force restart is not required. Host: 0-0
I0729 03:49:28.033802 1 worker-reconciler.go:304] reconcileHostStatefulSet():Reconcile host 0-0. Reconcile StatefulSet
I0729 03:49:28.033959 1 creator.go:589] getPodTemplate():my-release/my-release-clickhouse/58053693-2b8a-4d1d-b5ed-e0432448d655:statefulSet chi-my-release-clickhouse-cluster-0-0 use custom template: pod-template
W0729 03:49:28.034289 1 creator.go:994] containerAppendVolumeMount():my-release/my-release-clickhouse/58053693-2b8a-4d1d-b5ed-e0432448d655:container.Name:clickhouse volumeMount.Name:data-volumeclaim-template already used
I0729 03:49:28.036205 1 worker.go:1290] getStatefulSetStatus():my-release/chi-my-release-clickhouse-cluster-0-0:cur and new StatefulSets ARE DIFFERENT based on labels. StatefulSet reconcile is required for: my-release/chi-my-release-clickhouse-cluster-0-0
I0729 03:49:28.036374 1 worker.go:1431] updateStatefulSet():Update StatefulSet(my-release/chi-my-release-clickhouse-cluster-0-0) - started
I0729 03:49:28.348502 1 worker.go:1401] waitConfigMapPropagation():Wait for ConfigMap propagation for 9.003953254s 996.046746ms/10s
E0729 03:49:37.360361 1 creator.go:76] updateStatefulSet():StatefulSet update failed. err: StatefulSet.apps "chi-my-release-clickhouse-cluster-0-0" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'ordinals', 'template', 'updateStrategy', 'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are forbidden
E0729 03:49:37.361923 1 creator.go:100] updateStatefulSet():NOT EQUAL: AP item start -------------------------
modified spec items: 24
ap item path [0]:'.Template.Spec.Volumes[1].VolumeSource.ConfigMap.DefaultMode'
ap item value[0]:'nil'
ap item path [1]:'.Template.Spec.Containers[0].LivenessProbe.SuccessThreshold'
ap item value[1]:'0'
ap item path [2]:'.Template.Spec.Containers[0].ReadinessProbe.Handler.HTTPGet.Scheme'
ap item value[2]:'""'
ap item path [3]:'.Template.Spec.SchedulerName'
ap item value[3]:'""'
ap item path [4]:'.Template.Spec.Containers[0].LivenessProbe.Handler.HTTPGet.Scheme'
--
I0729 03:49:42.476651 1 poller.go:213] pollStatefulSet():my-release/chi-my-release-clickhouse-cluster-0-0:OK :ObservedGeneration:2 Replicas:1 ReadyReplicas:1 CurrentReplicas:0 UpdatedReplicas:0 CurrentRevision:chi-my-release-clickhouse-cluster-0-0-8f9b7b45b UpdateRevision:chi-my-release-clickhouse-cluster-0-0-8f9b7b45b
W0729 03:50:27.487009 1 reflector.go:436] pkg/client/informers/externalversions/factory.go:117: watch of *v1.ClickHouseInstallation ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0729 03:50:27.487049 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.StatefulSet ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0729 03:50:27.487052 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Endpoints ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0729 03:50:27.487012 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0729 03:50:27.487096 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0729 03:50:27.487127 1 reflector.go:436] pkg/client/informers/externalversions/factory.go:117: watch of *v1.ClickHouseInstallationTemplate ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0729 03:50:27.487145 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0729 03:50:27.487294 1 reflector.go:436] pkg/client/informers/externalversions/factory.go:117: watch of *v1.ClickHouseOperatorConfiguration ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I0729 03:50:27.488331 1 labeler.go:292] deleteLabelReadyPod():FAIL get pod for host 'my-release/0-0' err: Get "https://10.92.0.1:443/api/v1/namespaces/my-release/pods/chi-my-release-clickhouse-cluster-0-0-0": http2: client connection lost
E0729 03:50:57.489774 1 poller.go:237] pollStatefulSet():my-release/chi-my-release-clickhouse-cluster-0-0:my-release/chi-my-release-clickhouse-cluster-0-0 Get() FAILED
I0729 03:50:58.323884 1 trace.go:205] Trace[1632088119]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (29-Jul-2024 03:50:28.323) (total time: 30000ms):
Trace[1632088119]: [30.000791007s] [30.000791007s] END
E0729 03:50:58.323928 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.92.0.1:443/api/v1/namespaces/my-release/pods?resourceVersion=409134742": dial tcp 10.92.0.1:443: i/o timeout
I0729 03:50:58.387047 1 trace.go:205] Trace[1562350316]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (29-Jul-2024 03:50:28.386) (total time: 30000ms):
Trace[1562350316]: [30.000679957s] [30.000679957s] END
E0729 03:50:58.387074 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://10.92.0.1:443/apis/apps/v1/namespaces/my-release/statefulsets?resourceVersion=409134694": dial tcp 10.92.0.1:443: i/o timeout
I0729 03:50:58.482514 1 trace.go:205] Trace[1523333257]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (29-Jul-2024 03:50:28.481) (total time: 30000ms):
Trace[1523333257]: [30.000761918s] [30.000761918s] END
E0729 03:50:58.482587 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.92.0.1:443/api/v1/namespaces/my-release/configmaps?resourceVersion=409134621": dial tcp 10.92.0.1:443: i/o timeout
I0729 03:50:58.627969 1 trace.go:205] Trace[328066567]: "Reflector ListAndWatch" name:pkg/client/informers/externalversions/factory.go:117 (29-Jul-2024 03:50:28.626) (total time: 30001ms):
Trace[328066567]: [30.001690277s] [30.001690277s] END
E0729 03:50:58.627999 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:117: Failed to watch *v1.ClickHouseOperatorConfiguration: failed to list *v1.ClickHouseOperatorConfiguration: Get "https://10.92.0.1:443/apis/clickhouse.altinity.com/v1/namespaces/my-release/clickhouseoperatorconfigurations?resourceVersion=409134300": dial tcp 10.92.0.1:443: i/o timeout
I0729 03:50:58.680316 1 trace.go:205] Trace[1867447287]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (29-Jul-2024 03:50:28.679) (total time: 30000ms):
Trace[1867447287]: [30.000683297s] [30.000683297s] END
E0729 03:50:58.680433 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.92.0.1:443/api/v1/namespaces/my-release/services?resourceVersion=409134696": dial tcp 10.92.0.1:443: i/o timeout
I0729 03:50:58.870657 1 trace.go:205] Trace[321750039]: "Reflector ListAndWatch" name:pkg/client/informers/externalversions/factory.go:117 (29-Jul-2024 03:50:28.869) (total time: 30000ms):
Trace[321750039]: [30.000749817s] [30.000749817s] END
E0729 03:50:58.870684 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:117: Failed to watch *v1.ClickHouseInstallation: failed to list *v1.ClickHouseInstallation: Get "https://10.92.0.1:443/apis/clickhouse.altinity.com/v1/namespaces/my-release/clickhouseinstallations?resourceVersion=409134690": dial tcp 10.92.0.1:443: i/o timeout
I0729 03:50:59.038154 1 trace.go:205] Trace[345492881]: "Reflector ListAndWatch" name:pkg/client/informers/externalversions/factory.go:117 (29-Jul-2024 03:50:29.037) (total time: 30000ms):
Trace[345492881]: [30.000824067s] [30.000824067s] END
E0729 03:50:59.038264 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:117: Failed to watch *v1.ClickHouseInstallationTemplate: failed to list *v1.ClickHouseInstallationTemplate: Get "https://10.92.0.1:443/apis/clickhouse.altinity.com/v1/namespaces/my-release/clickhouseinstallationtemplates?resourceVersion=409134495": dial tcp 10.92.0.1:443: i/o timeout
I0729 03:50:59.057219 1 trace.go:205] Trace[1113844881]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (29-Jul-2024 03:50:29.056) (total time: 30000ms):
Trace[1113844881]: [30.000738938s] [30.000738938s] END
E0729 03:50:59.057261 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://10.92.0.1:443/api/v1/namespaces/my-release/endpoints?resourceVersion=409134706": dial tcp 10.92.0.1:443: i/o timeout
@alex-zaitsev @Slach can you please help here?
Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://10.92.0.1:443/api/v1/namespaces/my-release/endpoints?resourceVersion=409134706": dial tcp 10.92.0.1:443: i/o timeout
means something wrong with your kubernetes API server
| gharchive/issue | 2023-08-21T05:39:18 | 2025-04-01T06:36:41.868641 | {
"authors": [
"Slach",
"alex-zaitsev",
"prashant-shahi"
],
"repo": "Altinity/clickhouse-operator",
"url": "https://github.com/Altinity/clickhouse-operator/issues/1221",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2032200620 | One of the 3 shards does not start
Hello, one of the 3 shards does not start. How can I get it working again?
Error:
2023.12.08 08:23:12.191260 [ 1 ] {} <Error> Application: Caught exception while loading metadata: Code: 231, e.displayText() = DB::Exception: The local set of parts of table default.outcomes_hourly_local doesn't look like the set of parts in ZooKeeper: 1.42 million rows of 2.33 million total rows in filesystem are suspicious. There are 4397 unexpected parts with 1421422 rows (494 of them is not just-written with 1414815 rows), 0 missing parts (with 0 blocks).: Cannot attach table `default`.`outcomes_hourly_local` from metadata file /var/lib/clickhouse/metadata/default/outcomes_hourly_local.sql from query ATTACH TABLE default.outcomes_hourly_local (`org_id` UInt64, `project_id` UInt64, `key_id` UInt64, `timestamp` DateTime, `category` UInt8, `outcome` UInt8, `reason` LowCardinality(String), `quantity` UInt64, `times_seen` UInt64, `bytes_received` UInt64) ENGINE = ReplicatedSummingMergeTree('/clickhouse/tables/outcomes/{shard}/default/outcomes_hourly_local', '{replica}') PARTITION BY toMonday(timestamp) PRIMARY KEY (org_id, project_id, key_id, outcome, reason, timestamp) ORDER BY (org_id, project_id, key_id, outcome, reason, timestamp, category) TTL timestamp + toIntervalDay(90) SETTINGS index_granularity = 256: while loading database `default` from path /var/lib/clickhouse/metadata/default, Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x8feddda in /usr/bin/clickhouse
1. DB::Exception::Exception<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, unsigned long, unsigned long&, unsigned long&, unsigned long&, unsigned long, unsigned long const&>(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&, unsigned long&&, unsigned long&, unsigned long&, unsigned long&, unsigned long&&, unsigned long const&) @ 0x10af187d in /usr/bin/clickhouse
2. DB::StorageReplicatedMergeTree::checkParts(bool) @ 0x10ae5115 in /usr/bin/clickhouse
3. DB::StorageReplicatedMergeTree::StorageReplicatedMergeTree(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, DB::StorageID const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::StorageInMemoryMetadata const&, std::__1::shared_ptr<DB::Context>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::MergeTreeData::MergingParams const&, std::__1::unique_ptr<DB::MergeTreeSettings, std::__1::default_delete<DB::MergeTreeSettings> >, bool, bool) @ 0x10ad92c6 in /usr/bin/clickhouse
4. ? @ 0x10fa1077 in /usr/bin/clickhouse
5. DB::StorageFactory::get(DB::ASTCreateQuery const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, std::__1::shared_ptr<DB::Context>, DB::ColumnsDescription const&, DB::ConstraintsDescription const&, bool) const @ 0x10a3e3a1 in /usr/bin/clickhouse
6. DB::createTableFromAST(DB::ASTCreateQuery, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool) @ 0xff95b05 in /usr/bin/clickhouse
7. ? @ 0xff93c53 in /usr/bin/clickhouse
8. ? @ 0xff94c3f in /usr/bin/clickhouse
9. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x9031718 in /usr/bin/clickhouse
10. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...)::'lambda'()::operator()() @ 0x90332bf in /usr/bin/clickhouse
11. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x902e9ff in /usr/bin/clickhouse
12. ? @ 0x90322e3 in /usr/bin/clickhouse
13. start_thread @ 0x8609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
14. clone @ 0x11f163 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
(version 21.8.15.7)
2023.12.08 08:23:13.578866 [ 1 ] {} <Error> Application: DB::Exception: The local set of parts of table default.outcomes_hourly_local doesn't look like the set of parts in ZooKeeper: 1.42 million rows of 2.33 million total rows in filesystem are suspicious. There are 4397 unexpected parts with 1421422 rows (494 of them is not just-written with 1414815 rows), 0 missing parts (with 0 blocks).: Cannot attach table `default`.`outcomes_hourly_local` from metadata file /var/lib/clickhouse/metadata/default/outcomes_hourly_local.sql from query ATTACH TABLE default.outcomes_hourly_local (`org_id` UInt64, `project_id` UInt64, `key_id` UInt64, `timestamp` DateTime, `category` UInt8, `outcome` UInt8, `reason` LowCardinality(String), `quantity` UInt64, `times_seen` UInt64, `bytes_received` UInt64) ENGINE = ReplicatedSummingMergeTree('/clickhouse/tables/outcomes/{shard}/default/outcomes_hourly_local', '{replica}') PARTITION BY toMonday(timestamp) PRIMARY KEY (org_id, project_id, key_id, outcome, reason, timestamp) ORDER BY (org_id, project_id, key_id, outcome, reason, timestamp, category) TTL timestamp + toIntervalDay(90) SETTINGS index_granularity = 256: while loading database `default` from path /var/lib/clickhouse/metadata/default
and subscribe to https://github.com/ClickHouse/ClickHouse/issues/37664
| gharchive/issue | 2023-12-08T08:32:41 | 2025-04-01T06:36:41.875294 | {
"authors": [
"Slach",
"kiper-prog"
],
"repo": "Altinity/clickhouse-operator",
"url": "https://github.com/Altinity/clickhouse-operator/issues/1288",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1673164638 | Update _index.en.md
Description
Minor corrections to text.
Your vs You.
Functionality misspelled.
| gharchive/pull-request | 2023-04-18T13:42:27 | 2025-04-01T06:36:41.880676 | {
"authors": [
"borgethommesen"
],
"repo": "Altinn/altinn-studio-docs",
"url": "https://github.com/Altinn/altinn-studio-docs/pull/896",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2494149417 | chore: Move Telemetry class files to a separate /Telemetry folder
Description
The Telemetry partial class consists of 27 files, which were all located at root level in Core/Features/.
This change simply moves them to a /Telemetry subfolder, purely for organisational purposes. The namespace remains unchanged.
Related Issue(s)
N/A
Verification
[x] Your code builds clean without any errors or warnings
[x] Manual testing done (required)
[ ] Relevant automated test added (if you find this hard, leave it and we'll help out)
[x] All tests run green
Documentation
[ ] User documentation is updated with a separate linked PR in altinn-studio-docs. (if applicable)
Quality gate failed
😂
| gharchive/pull-request | 2024-08-29T11:17:08 | 2025-04-01T06:36:41.890260 | {
"authors": [
"danielskovli"
],
"repo": "Altinn/app-lib-dotnet",
"url": "https://github.com/Altinn/app-lib-dotnet/pull/738",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
249172551 | Small mistype (incorrect link) in README
Hi, I noticed the link to (this repository) was a hyperlink to another Github repo, which didn't contain the Alve OS source, so I corrected the link that is to be used :)
Thank you :P
| gharchive/pull-request | 2017-08-09T21:50:06 | 2025-04-01T06:36:41.900600 | {
"authors": [
"Arawn-Davies",
"valentinbreiz"
],
"repo": "Alve-OS/Alve-Operating-System",
"url": "https://github.com/Alve-OS/Alve-Operating-System/pull/15",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
655472428 | unknown argument: '-color-diagnostics'
When I run swift build --destination /tmp/cross-toolchain/arm64v8-ubuntu-bionic-destination.json
I get the following error message:
<unknown>:0: error: unknown argument: '-color-diagnostics'
Does anyone know how to fix this issue?
If I just run swift build then everything works correctly, so it's only an issue when trying to use the cross-toolchain.
I've tried this on two macs, one using Xcode 12 and the other using Xcode 11.
Try out this fork, it should be up2date: https://github.com/CSCIX65G/SwiftCrossCompilers
Thanks for the suggestion @helje5, but I get the same exact issue with SwiftCrossCompilers.
I'd still file it over there, the fork is AFAIK actively maintained. I guess the color diag option was probably new in some Swift 5.x, maybe they b0rked something else. Invoking it w/ -v is usually the way to start investigating the issue and see what breaks when and where.
Once there is more info, it might be also worth filing a bug against SPM at bugs.swift.org.
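(For reference, with the destination file from above the verbose invocation would look something like:
swift build -v --destination /tmp/cross-toolchain/arm64v8-ubuntu-bionic-destination.json
which should show the exact compiler invocations and where the unknown flag gets injected.)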
| gharchive/issue | 2020-07-12T20:39:30 | 2025-04-01T06:36:41.908712 | {
"authors": [
"helje5",
"meech-ward"
],
"repo": "AlwaysRightInstitute/swift-mac2arm-x-compile-toolchain",
"url": "https://github.com/AlwaysRightInstitute/swift-mac2arm-x-compile-toolchain/issues/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1665924297 | Setting color and transparency_group for all entities in model
We currently have outline_recursive, which will apply the outline to all child entities. We could do something similar (color_recursive and transparency_group_recursive). Or we could try to figure out some more general solution to this.
@philpax Well the user would then have to somehow wait until the model is loaded and then run those.
Made a discussion about two-entity queries which could be the form of a possible 'more general solution': https://github.com/AmbientRun/Ambient/discussions/600
| gharchive/issue | 2023-04-13T08:03:20 | 2025-04-01T06:36:41.912851 | {
"authors": [
"FredrikNoren",
"droqen"
],
"repo": "AmbientRun/Ambient",
"url": "https://github.com/AmbientRun/Ambient/issues/299",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2406119794 | [Bug]:tests not working
What happened?
shit
How to reproduce the bug
shit
Package Version
1.1.1
PHP Version
8.2.3
Which operating systems does with happen with?
macOS
Notes
shit
shit has been fixed
| gharchive/issue | 2024-07-12T18:17:26 | 2025-04-01T06:36:41.920466 | {
"authors": [
"AmirVahedix"
],
"repo": "AmirVahedix/weight-conversion",
"url": "https://github.com/AmirVahedix/weight-conversion/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
59183532 | unexpected that remove also modifies bindings
heya, :rabbit2:
i'm trying to render, remove, and re-render a view, however on the re-render the bindings don't work as expected. i realize now this is due to remove modifying the bindings, but i wonder if there is a better way to do this.
if we want to unlisten and delete bindings in remove, is it possible to provide a function that re-listens and re-creates the bindings? or another option is we move that unlisten and delete bindings to another function that maybe can be run by default in remove but there is an option to not run it.
cheers, :tea:
This is the updated ref: https://github.com/AmpersandJS/ampersand-view/blob/master/ampersand-view.js#L152-L168
I don't see any reason to leave the bindings if the element has been removed, and you stop listening to events.
If you have a problem with rendering a view that was removed, I'd love to see the use-case and a simple implementation (test-case) of it failing... If the usage is correct there must be some side effect we haven't seen
It looks like a problem I just found with view switcher ;-/
Switching a view calls view remove, which also removes all related event and model bindings. I agree with @pgilad that this behaviour makes sense, so it should be performed.
The problem is that switching back to such a previously removed view does not restore the bindings - it just calls view render.
How can I restore all bindings (automatically) in such a case?
This is likely a problem with ampersand-view-switcher using the view.rendered property to determine whether or not it needs to call view.render() on the new view.
The view.rendered property will be true if view.el is defined. view.el will be defined even after view.remove() is called. This means that when we are "re-rendering" a view with View Switcher, the view's element will be placed in the DOM, but view.render() will not be called again, so I think the events are torn down, and never set back up again.
Pretty sure this should be moved to ampersand-view-switcher.
Issue: https://github.com/AmpersandJS/ampersand-view-switcher/issues/25
@ahdinosaur @pgilad,
i agree, this causes issues. in ampersand-form-manager-view i cycle between forms, meaning I render, remove, and re-render them. i reinitialize bindings and subviews on each render manually. the fix may be as easy as pulling out the binding and subview initialization to a helper. then, on render, test if they are set. if not, re-init them.
I have the same problem as @ahdinosaur. I want to be able to reuse a view which has been removed. I use the view switcher, but I could have the same problem with any other way of switching views which calls the remove method of the view.
Here is an example
import View from "ampersand-view";
import ViewSwitcher from "ampersand-view-switcher";
var switcher = new ViewSwitcher(document.querySelector("main"));
var V1 = View.extend({
session: {
text: {
type: "string",
default: "bonjour"
}
},
bindings: {
text: ""
},
template: `<div>default text v1</div>`
});
var V2 = View.extend({
template: `<div>default text v2</div>`
});
window.views = [new V1(), new V2()];
var n = 1;
document.addEventListener("click", () => {
n++;
switcher.set(views[n % 2]);
});
In this example, the bindings do not work after one view switch.
A workaround for this could be to create a new instance of the view each time but I don't find this satisfying.
And there is already a good solution for the DOM events: just call delegateEvents.
A simple createBindings function would also be enough for this need.
Don't you think?
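Something like this rough sketch is what I have in mind (createBindings does not exist in ampersand-view today - it is just a placeholder for whatever would re-create the declarative bindings that remove tears down, while delegateEvents is the existing call mentioned above for the DOM events):
var ReusableView = View.extend({
    render: function () {
        // normal rendering first
        View.prototype.render.apply(this, arguments);
        // remove() unlistens and deletes bindings, so restore them here
        this.delegateEvents();   // re-attach DOM event handlers
        this.createBindings();   // hypothetical helper that re-creates this.bindings
        return this;
    }
});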
| gharchive/issue | 2015-02-27T03:11:58 | 2025-04-01T06:36:41.940930 | {
"authors": [
"ahdinosaur",
"cdaringe",
"doubleface",
"kahnjw",
"mst7555",
"pgilad"
],
"repo": "AmpersandJS/ampersand-view",
"url": "https://github.com/AmpersandJS/ampersand-view/issues/105",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1620616190 | Re-word manual session_id sentence
Reword the last sentence under 'Sessions' to reflect usage of HTTP APIs for setting session IDs manually
Amplitude Developer Docs PR
Description
Reword the last sentence under 'Sessions' to reflect usage of HTTP APIs for setting session IDs manually
Deadline
When do these changes need to be live on the site?
Change type
[ ] Doc bug fix. Fixes #[insert issue number]. Amplitude contributors include Jira issue number.
[X] Doc update.
[ ] New documentation.
[ ] Non-documentation related fix or update.
PR checklist:
[ ] My documentation follows the style guidelines of this project.
[ ] I previewed my documentation on a local server using mkdocs serve.
[ ] Running mkdocs serve didn't generate any failures.
[X] I have performed a self-review of my own documentation.
@amplitude-dev-docs
@markfoo do we still need this PR?
Closing this in preparation of migrating the repository.
| gharchive/pull-request | 2023-03-13T01:53:46 | 2025-04-01T06:36:41.949092 | {
"authors": [
"kevinpagtakhan",
"markfoo",
"markzegarelli"
],
"repo": "Amplitude-Developer-Docs/amplitude-dev-center",
"url": "https://github.com/Amplitude-Developer-Docs/amplitude-dev-center/pull/651",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2158781517 | Add theme support to Experiment model and components
This pull request adds theme support per Experiment. It gives the admin user the ability to create and select a theme for an experiment, in which they can configure a logo image, background image, heading font, and body font.
Right now, this means the user can configure images and fonts by inputting image and Google Font URLs. Later, we can expand this functionality by providing upload features.
Resolves #805
Screenshots preview
Theme configuration add/edit
Themes admin overview
Select theme in Experiment
Should / could we also set the favicon through the admin interface?
Should / could we also set the favicon through the admin interface?
That sounds like a good idea. I prefer not to have scope creep in this PR so I've created issue #819 for it.
I like it! The only thing that's not clear to me: what would happen if a user set example.com/not-a-font.jpg in the font form fields? Silent fail?
It's possible to enter an incorrect font or image url that will silently fail in the frontend. However, thanks to the preview in the Django interface that shouldn't happen as you can see when it fails.
| gharchive/pull-request | 2024-02-28T11:50:31 | 2025-04-01T06:36:41.952644 | {
"authors": [
"BeritJanssen",
"drikusroor"
],
"repo": "Amsterdam-Music-Lab/MUSCLE",
"url": "https://github.com/Amsterdam-Music-Lab/MUSCLE/pull/810",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
911169931 | ReferenceError: sth is not defined
I've wrote sth like this:
### regenerate invoice
@invoiceId = 21060216492001061854
PUT {{host}}/v1/electronicInvoice/regenerate?invoiceId={{invoiceId}}
the 'host' variable is defined in an env file, and I defined invoiceId in the current region. When I attempt to send an http request, I get errors. The trace follows:
ReferenceError: invoiceId is not defined
at Object.userJS (d:\apis\gtc\einvoice.http:12:92)
at Object.<anonymous> (c:\Users\jingc\.vscode-oss\extensions\anweber.vscode-httpyac-2.12.4\dist\extension.js:292:37177)
at Generator.next (<anonymous>)
at c:\Users\jingc\.vscode-oss\extensions\anweber.vscode-httpyac-2.12.4\dist\extension.js:292:36182
at new Promise (<anonymous>)
at s (c:\Users\jingc\.vscode-oss\extensions\anweber.vscode-httpyac-2.12.4\dist\extension.js:292:35927)
at Object.g (c:\Users\jingc\.vscode-oss\extensions\anweber.vscode-httpyac-2.12.4\dist\extension.js:292:36461)
at t.JavascriptVariableReplacer.<anonymous> (c:\Users\jingc\.vscode-oss\extensions\anweber.vscode-httpyac-2.12.4\dist\extension.js:292:140016)
at Generator.next (<anonymous>)
at s (c:\Users\jingc\.vscode-oss\extensions\anweber.vscode-httpyac-2.12.4\dist\extension.js:292:139204)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:93:5)
I could not reproduce the error right now. I guess the error also disappeared after reopening the file on your PC. Can you remember the order of the input? E.g. first the RequestLine then the Variable, or the other way around?
Reopening the file or restarting VSCodium doesn't solve the problem. I tried rewriting the code with the input variable first, and it failed again.
Then I tried with a new http file, and it worked.
Now I know how to reproduce the error:
### first req
GET http://sth/sth
### second
@sth = abc
GET http://sth/{{sth}}
I get the error now too. Interesting. I will have a look at it tonight.
The cause is the text in the delimiter `### second`; then the delimiter no longer works correctly.
The following works
###
# @name second
The change can be tested with release 2.12.5.
It's still broken. Even when I remove all text after delimiter
###
@vara = 3
GET {{host}}/{{vara}} HTTP/1.1
###
@varb = 2
GET {{host}}/{{varb}} HTTP/1.1
the first variable vara works fine while varb does not.
Unfortunately, I won't get around to it today. I suspect it is the missing blank line between the first request and the ###. But that is just a guess.
The error is as expected. The missing blank line between the first request GET {{host}}/{{vara}} and the ### is causing this. The outline view of vscode gives a hint to the error: @varb = 2 is added to the request with vara.
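So on the current release the workaround is simply to keep a blank line before the next delimiter, e.g.:
###
@vara = 3
GET {{host}}/{{vara}} HTTP/1.1

###
@varb = 2
GET {{host}}/{{varb}} HTTP/1.1
With the blank line after the first request, @varb = 2 is parsed into the second request as expected.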
Maybe I can optimize away the currently needed blank line. I will have a look at it
I have removed the required blank line. Now it is also possible to create the requests identically to Kibana. Nice:-)
It works :)
Again, thanks for your work and patience on this project, hoping this project can be discovered by more people.
Thx for using my extension:-)
| gharchive/issue | 2021-06-04T06:33:23 | 2025-04-01T06:36:41.969528 | {
"authors": [
"AnWeber",
"dragondove"
],
"repo": "AnWeber/vscode-httpyac",
"url": "https://github.com/AnWeber/vscode-httpyac/issues/40",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2635696506 | first build
{package} {version} {:snowflake:}
Destination channel: {Snowflake | defaults}
Links
{ticket_number}
Upstream repository
Upstream changelog/diff
Relevant dependency PRs:
...
Explanation of changes:
...
Linter check found the following problems:
ERROR conda.cli.main_run:execute(125): `conda run conda-lint /tmp/abs_609kgkq8iw/clone` failed. (See above for error)
The following problems have been found:
===== WARNINGS =====
clone/recipe/meta.yaml:15: avoid_noarch: noarch: python packages should be avoided
clone/recipe/meta.yaml:39: missing_description: The recipe is missing a description
===== ERRORS =====
clone/recipe/meta.yaml:39: missing_dev_url: The recipe is missing a dev_url
clone/recipe/meta.yaml:39: missing_license_family: The recipe is missing the about/license_family key.
clone/recipe/meta.yaml:39: missing_documentation: The recipe is missing a doc_url or doc_source_url
===== Final Report: =====
3 Errors and 2 Warnings were found
Linter check found the following problems:
ERROR conda.cli.main_run:execute(125): `conda run conda-lint /tmp/abs_01z0cvojnf/clone` failed. (See above for error)
The following problems have been found:
===== WARNINGS =====
clone/recipe/meta.yaml:15: avoid_noarch: noarch: python packages should be avoided
clone/recipe/meta.yaml:41: missing_description: The recipe is missing a description
===== ERRORS =====
clone/recipe/meta.yaml:41: missing_license_family: The recipe is missing the about/license_family key.
===== Final Report: =====
1 Error and 2 Warnings were found
| gharchive/pull-request | 2024-11-05T14:55:21 | 2025-04-01T06:36:41.976788 | {
"authors": [
"anaconda-pkg-build",
"anaobi"
],
"repo": "AnacondaRecipes/haystack-experimental-feedstock",
"url": "https://github.com/AnacondaRecipes/haystack-experimental-feedstock/pull/1",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2232254086 | build 0.20.18 :snowflake:
polars 0.20.18 :snowflake:
Destination channel: Snowflake
Links
PKG-4133
Upstream repository
Upstream changelog/diff
Relevant dependency PRs:
rust-activation-feedstock
Explanation of changes:
Used the nightly version of Rust to use experimental features.
Linter check found the following problems:
ERROR conda.cli.main_run:execute(124): `conda run conda-lint /tmp/abs_aexmwhkhpb/clone` failed. (See above for error)
The following problems have been found:
===== WARNINGS =====
clone/recipe/meta.yaml:57: missing_description: The recipe is missing a description
===== ERRORS =====
clone/recipe/meta.yaml:15: patch_unnecessary: patch should not be in build when source/patches is not set
clone/recipe/meta.yaml:31: missing_wheel: For pypi packages, wheel should be present in the host section
clone/recipe/build.sh:19: pip_install_args: pip install should be run with --no-deps and --no-build-isolation.
===== Final Report: =====
3 Errors and 1 Warning were found
Linter check found the following problems:
ERROR conda.cli.main_run:execute(124): `conda run conda-lint /tmp/abs_fe6pjnvth2/clone` failed. (See above for error)
The following problems have been found:
===== WARNINGS =====
clone/recipe/meta.yaml:55: missing_description: The recipe is missing a description
===== ERRORS =====
clone/recipe/meta.yaml:29: missing_wheel: For pypi packages, wheel should be present in the host section
clone/recipe/build.sh:19: pip_install_args: pip install should be run with --no-deps and --no-build-isolation.
===== Final Report: =====
2 Errors and 1 Warning were found
Linter check found the following problems:
ERROR conda.cli.main_run:execute(124): `conda run conda-lint /tmp/abs_b3t1kudisi/clone` failed. (See above for error)
The following problems have been found:
===== ERRORS =====
clone/recipe/meta.yaml:30: missing_wheel: For pypi packages, wheel should be present in the host section
===== Final Report: =====
1 Error and 0 Warnings were found
Linter check found the following problems:
Traceback (most recent call last):
File "/opt/conda/envs/linter/lib/python3.12/site-packages/anaconda_linter/lint/__init__.py", line 834, in lint_file
recipe = _recipe.Recipe.from_file(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/linter/lib/python3.12/site-packages/percy/render/recipe.py", line 243, in from_file
raise exc from exc
File "/opt/conda/envs/linter/lib/python3.12/site-packages/percy/render/recipe.py", line 234, in from_file
recipe._load_from_string(text.read())
File "/opt/conda/envs/linter/lib/python3.12/site-packages/percy/render/recipe.py", line 170, in _load_from_string
self.render()
File "/opt/conda/envs/linter/lib/python3.12/site-packages/percy/render/recipe.py", line 290, in render
self.meta = renderer_utils.render(self.recipe_dir, self.dump(), self.selector_dict, self.renderer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/linter/lib/python3.12/site-packages/percy/render/_renderer.py", line 234, in render
raise JinjaRenderFailure(recipe_dir, message=exc.message, line=exc.lineno) from exc
percy.render.exceptions.JinjaRenderFailure: (PosixPath('/tmp/abs_4elg8j0fvo/clone/recipe'), "expected token 'end of statement block', got '+' (at line 42)")
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/envs/linter/lib/python3.12/site-packages/anaconda_linter/run.py", line 159, in main
sys.exit(prime())
^^^^^^^
File "/opt/conda/envs/linter/lib/python3.12/site-packages/anaconda_linter/run.py", line 138, in prime
result = linter.lint(recipes, subdir, args.variant_config_files, args.exclusive_config_files, args.fix)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/linter/lib/python3.12/site-packages/anaconda_linter/lint/init.py", line 735, in lint
msgs = self.lint_file(
^^^^^^^^^^^^^^^
File "/opt/conda/envs/linter/lib/python3.12/site-packages/anaconda_linter/lint/init.py", line 851, in lint_file
recipe = _recipe.Recipe(recipe_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/linter/lib/python3.12/site-packages/percy/render/recipe.py", line 107, in init
self.recipe_dir = recipe_file.parent
^^^^^^^^^^^^^^^^^^
AttributeError: 'str' object has no attribute 'parent'
ERROR conda.cli.main_run:execute(124): conda run conda-lint /tmp/abs_4elg8j0fvo/clone failed. (See above for error)
Linter check found the following problems:
Traceback (most recent call last):
File "/opt/conda/envs/linter/lib/python3.12/site-packages/anaconda_linter/lint/__init__.py", line 834, in lint_file
recipe = _recipe.Recipe.from_file(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/linter/lib/python3.12/site-packages/percy/render/recipe.py", line 243, in from_file
raise exc from exc
File "/opt/conda/envs/linter/lib/python3.12/site-packages/percy/render/recipe.py", line 234, in from_file
recipe._load_from_string(text.read())
File "/opt/conda/envs/linter/lib/python3.12/site-packages/percy/render/recipe.py", line 170, in _load_from_string
self.render()
File "/opt/conda/envs/linter/lib/python3.12/site-packages/percy/render/recipe.py", line 290, in render
self.meta = renderer_utils.render(self.recipe_dir, self.dump(), self.selector_dict, self.renderer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/linter/lib/python3.12/site-packages/percy/render/_renderer.py", line 234, in render
raise JinjaRenderFailure(recipe_dir, message=exc.message, line=exc.lineno) from exc
percy.render.exceptions.JinjaRenderFailure: (PosixPath('/tmp/abs_34bssbgmb8/clone/recipe'), "expected token 'end of statement block', got '+' (at line 42)")
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/envs/linter/lib/python3.12/site-packages/anaconda_linter/run.py", line 159, in main
sys.exit(prime())
^^^^^^^
File "/opt/conda/envs/linter/lib/python3.12/site-packages/anaconda_linter/run.py", line 138, in prime
result = linter.lint(recipes, subdir, args.variant_config_files, args.exclusive_config_files, args.fix)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/linter/lib/python3.12/site-packages/anaconda_linter/lint/init.py", line 735, in lint
msgs = self.lint_file(
^^^^^^^^^^^^^^^
File "/opt/conda/envs/linter/lib/python3.12/site-packages/anaconda_linter/lint/init.py", line 851, in lint_file
recipe = _recipe.Recipe(recipe_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/linter/lib/python3.12/site-packages/percy/render/recipe.py", line 107, in init
self.recipe_dir = recipe_file.parent
^^^^^^^^^^^^^^^^^^
AttributeError: 'str' object has no attribute 'parent'
ERROR conda.cli.main_run:execute(124): conda run conda-lint /tmp/abs_34bssbgmb8/clone failed. (See above for error)
@boldorider4 You can skip these three tests on osx because they require missing *.avro files:
FAILED py-polars/tests/unit/io/test_iceberg.py::test_scan_iceberg_plain - pol...
FAILED py-polars/tests/unit/io/test_iceberg.py::test_scan_iceberg_filter_on_partition
FAILED py-polars/tests/unit/io/test_iceberg.py::test_scan_iceberg_filter_on_column
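If the recipe invokes pytest directly, one way to do that (just a sketch - adjust the paths to however the test section actually runs pytest) is to deselect them on osx:
pytest py-polars/tests \
  --deselect py-polars/tests/unit/io/test_iceberg.py::test_scan_iceberg_plain \
  --deselect py-polars/tests/unit/io/test_iceberg.py::test_scan_iceberg_filter_on_partition \
  --deselect py-polars/tests/unit/io/test_iceberg.py::test_scan_iceberg_filter_on_column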
Linter check found the following problems:
ERROR conda.cli.main_run:execute(124): `conda run conda-lint /tmp/abs_02bpls3vze/clone` failed. (See above for error)
The following problems have been found:
===== ERRORS =====
clone/recipe/meta.yaml:34: missing_wheel: For pypi packages, wheel should be present in the host section
===== Final Report: =====
1 Error and 0 Warnings were found
Linter check found the following problems:
ERROR conda.cli.main_run:execute(124): `conda run conda-lint /tmp/abs_a7gl_ihyrn/clone` failed. (See above for error)
The following problems have been found:
===== ERRORS =====
clone/recipe/meta.yaml:33: missing_wheel: For pypi packages, wheel should be present in the host section
===== Final Report: =====
1 Error and 0 Warnings were found
@boldorider4 The only test that failed is
FAILED py-polars/tests/unit/test_polars_import.py::test_polars_import
> raise RuntimeError(msg)
E RuntimeError: measuring import timings failed
...
E Traceback (most recent call last):
E File "<string>", line 1, in <module>
E ModuleNotFoundError: No module named 'polars'
There was an upstream issue https://github.com/pola-rs/polars/issues/14442. You can skip the test or patch it. It's up to you
An error on osx:
File "/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_92ywvsgg57/croot/polars_1713342610693/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placeho/lib/python3.10/zipfile.py", line 1378, in _RealGetContents
LookupError: unknown encoding: cp437
Is there something wrong with Python?
thank you @skupr-anaconda, I noticed it myself, I just need to skip it on all platforms.
Linter check found the following problems:
ERROR conda.cli.main_run:execute(124): `conda run conda-lint /tmp/abs_41_loda37d/clone` failed. (See above for error)
The following problems have been found:
===== ERRORS =====
clone/recipe/meta.yaml:33: missing_wheel: For pypi packages, wheel should be present in the host section
===== Final Report: =====
1 Error and 0 Warnings were found
Linter check found the following problems:
ERROR conda.cli.main_run:execute(124): `conda run conda-lint /tmp/abs_36ilziddxh/clone` failed. (See above for error)
The following problems have been found:
===== ERRORS =====
clone/recipe/meta.yaml:35: missing_wheel: For pypi packages, wheel should be present in the host section
===== Final Report: =====
1 Error and 0 Warnings were found
Linter check found the following problems:
ERROR conda.cli.main_run:execute(124): `conda run conda-lint /tmp/abs_22w1rvzyd8/clone` failed. (See above for error)
The following problems have been found:
===== WARNINGS =====
clone/recipe/meta.yaml:33: host_section_needs_exact_pinnings: Linked libraries host should have exact version pinnings.
===== Final Report: =====
0 Errors and 1 Warning were found
Linter check found the following problems:
ERROR conda.cli.main_run:execute(124): `conda run conda-lint /tmp/abs_2b9me42f5u/clone` failed. (See above for error)
The following problems have been found:
===== WARNINGS =====
clone/recipe/meta.yaml:33: host_section_needs_exact_pinnings: Linked libraries host should have exact version pinnings.
===== Final Report: =====
0 Errors and 1 Warning were found
| gharchive/pull-request | 2024-04-08T22:50:02 | 2025-04-01T06:36:42.008201 | {
"authors": [
"anaconda-pkg-build",
"boldorider4",
"skupr-anaconda"
],
"repo": "AnacondaRecipes/polars-feedstock",
"url": "https://github.com/AnacondaRecipes/polars-feedstock/pull/3",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1160554334 | Login page overflows
The login page has a small overflow when a presumably too large keyboard is present
Duplicate #151
| gharchive/issue | 2022-03-06T08:49:56 | 2025-04-01T06:36:42.013103 | {
"authors": [
"fremartini",
"marfavi"
],
"repo": "AnalogIO/coffeecard_app",
"url": "https://github.com/AnalogIO/coffeecard_app/issues/178",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
279560753 | Camera.SetView() gives inaccurate orientation using direction and up vectors
Description
The view can be set with the method Camera.SetView() using direction and up vectors. Most of the time the behavior is correct, but at some angles setView() does not behave as expected. The issue has been identified to occur when using a direction vector near Cartesian3.UNIT_Y. In fact, this occurs when pointing a little under the middle of the earth (between 0 and ~-2.56 radians). In the range of 0 to -2.56, the view is exactly the same as if the angle was positive (a pitch rotation of -2.56 radians down gives a pitch rotation of 2.56 radians up). As soon as the rotation goes to ~-2.565 radians, the rotation goes down as expected.
Expected behavior
Using Camera.SetView with direction and up vectors points to the direction vector and the up vector is directly up.
Steps to reproduce
The issue is a little obscure. Here is a small sandcastle project to demonstrate the issue.
Use keys 1, 2, 3, 4 to switch from views
key 1 : set direction vector to vector a
key 2 : set direction vector to vector aPrime
key 3 : set direction vector to vector b
key 4 : set direction vector to vector bPrime
Alternate view by using key 1 and 2.
Notice that the view changes as expected and the view is mirrored since both vectors have only the z component opposite
Alternate view by using key 3 and 4.
Notice that the view does not change even if vectors b and bPrime have their z component opposed
//Start of example
var viewer = new Cesium.Viewer('cesiumContainer');
var scene = viewer.scene;
var canvas = viewer.canvas;
canvas.setAttribute('tabindex', '0');
canvas.onclick = function() {
canvas.focus();
};
var direction = Cesium.Cartesian3.UNIT_Y.clone();
var up = Cesium.Cartesian3.UNIT_Z.clone();
// initial position and orientation
viewer.camera.position = new Cesium.Cartesian3(0, -20000000, 0);
viewer.camera.setView(
{
orientation:
{
direction: direction,
up: up
}});
// setting view by switching from a to aPrime as direction changes view as expected
var a = new Cesium.Cartesian3(0, 0.9989980940754456, 0.044752743308394995);
var aPrime = new Cesium.Cartesian3(0, 0.9989980940754456, -0.044752743308394995);
// setting view by switching from b to bPrime does not change the view
var b = new Cesium.Cartesian3(0, 0.9993908270190958, 0.03489949670250097);
var bPrime = new Cesium.Cartesian3(0, 0.9993908270190958, -0.03489949670250097);
var ninetyDegreeRotation = Cesium.Matrix3.fromRotationX(Math.PI/2);
document.addEventListener('keydown', function(e) {
var camera = viewer.camera;
// setting direction vector to use
switch (e.keyCode)
{
case '1'.charCodeAt(0):
direction = a;
console.log("direction = a : " + direction);
break;
case '2'.charCodeAt(0):
direction = aPrime;
console.log("direction = aPrime : " + direction);
break;
case '3'.charCodeAt(0):
direction = b;
console.log("direction = b : " + direction);
break;
case '4'.charCodeAt(0):
direction = bPrime;
console.log("direction = bPrime : " + direction); // when using bPrime as direction, the view is the same as b.
break;
default:
break;
}
up = Cesium.Matrix3.multiplyByVector(ninetyDegreeRotation, direction, up);
camera.setView(
{orientation:
{
direction: direction,
up: up
}});
}, false);
// End of example
@DannyLebel thanks for the detailed report and code example. If there's anything you can do to narrow this down or contribute a fix before someone gets to it, we'd appreciate it.
Sorry, I do not have a fix. I thought the issue was in our project so I investigated it until I figured it was in Cesium itself. My report describes what I know of the issue. If you have any questions I will be glad to try to answer.
| gharchive/issue | 2017-12-05T22:53:32 | 2025-04-01T06:36:42.034426 | {
"authors": [
"DannyLebel",
"pjcozzi"
],
"repo": "AnalyticalGraphicsInc/cesium",
"url": "https://github.com/AnalyticalGraphicsInc/cesium/issues/6032",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
296875256 | PointGraphics heightReference crash
When attempting to clamp a Cesium.CustomDataSource entity to ground:
{
id: id,
name: name,
position: cartesian,
description: desciption,
point: {
pixelSize: 6,
color: color,
outlineColor: Cesium.Color.WHITE,
outlineWidth: 2,
heightReference: 1,
},
label: {
text: name,
horizontalOrigin : Cesium.HorizontalOrigin.CENTER,
verticalOrigin : Cesium.VerticalOrigin.BOTTOM,
font: '12pt roboto',
style: Cesium.LabelStyle.FILL_AND_OUTLINE,
outlineWidth: 4,
pixelOffset: new Cesium.Cartesian2(0, -9),
disableDepthTestDistance: Number.POSITIVE_INFINITY,
heightReference: 1,
    }
}
Including the "heightReference" on the point crashes Cesium. Binding to GeoJsonDataSource or CzmlDataSource works, but there is no way to clamp the custom PointGraphics object to the terrain without crashing Cesium.
Chrome Version 63.0.3239.132 (Official Build) (64-bit)
Cesium 1.37
@jmack2424 can you please paste a complete code example that reproduces the crash in Sandcastle? https://cesiumjs.org/Cesium/Build/Apps/Sandcastle/
Thanks
Hannah, we found our issue, and forcing it to work in the Sandcastle helped us to find it. =)
Our custom data points were being serialized, which broke on the load call, because the Cesium references in the custom object could not be resolved.
Thanks @jmack2424, glad you were able to figure it out =)
| gharchive/issue | 2018-02-13T20:31:27 | 2025-04-01T06:36:42.039401 | {
"authors": [
"hpinkos",
"jmack2424"
],
"repo": "AnalyticalGraphicsInc/cesium",
"url": "https://github.com/AnalyticalGraphicsInc/cesium/issues/6213",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
465668048 | How can I set clipping planes to primitive object?
Thanks a lot!
There's a few code examples on Sandcastles showing this, here's one:
https://cesiumjs.org/Cesium/Apps/Sandcastle/index.html?src=3D Tiles Clipping Planes.html
Please keep general questions like this on the Cesium forum: https://groups.google.com/forum/#!forum/cesium-dev. Feel free to post a follow up question there!
| gharchive/issue | 2019-07-09T09:19:23 | 2025-04-01T06:36:42.041293 | {
"authors": [
"OmarShehata",
"TNMoOn"
],
"repo": "AnalyticalGraphicsInc/cesium",
"url": "https://github.com/AnalyticalGraphicsInc/cesium/issues/7988",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
174058540 | 3D Tiles Transform Tests
Tests for https://github.com/AnalyticalGraphicsInc/cesium/pull/4130
Sorry about the branch confusion. This is merging into pnts-updates instead of 3d-tiles-transform mainly because of all the file renaming that happened in #4228.
I reworked ModelInstanceCollection extensively to simplify it and fix some bugs related to its interaction with shadows and derived commands. The functionality is the same though.
Any typed array memory concerns with ModelInstanceCollection? No because everything is converted to a separate data structure?
Other than these comments, code and tests look good. Did you run coverage?
Any typed array memory concerns with ModelInstanceCollection? No because everything is converted to a separate data structure?
Yeah that's correct.
Did you run coverage?
Yeah coverage is mostly solid.
I'm curious what you think about the inline comments.
Updated. I squashed some commits because of the change to z-up and EAST_NORTH_UP.
Code looks OK.
I'm curious what you think about the inline comments.
What comments?
What comments?
The ones you already looked at before.
Thanks @lasalvavida for the updated tiles. @pjcozzi this is ready to merge now.
Tests and Sandcastle example are good!
| gharchive/pull-request | 2016-08-30T15:59:29 | 2025-04-01T06:36:42.046327 | {
"authors": [
"lilleyse",
"pjcozzi"
],
"repo": "AnalyticalGraphicsInc/cesium",
"url": "https://github.com/AnalyticalGraphicsInc/cesium/pull/4256",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
364191380 | Fix draco model in Edge
Fixes #7079
@hpinkos can you review?
Thanks for the pull request @ggetz!
:heavy_check_mark: Signed CLA found.
Reviewers, don't forget to make sure that:
[ ] Cesium Viewer works.
[ ] Works in 2D/CV.
[ ] Works (or fails gracefully) in IE11.
I am a bot who helps you make Cesium awesome! Contributions to my configuration are welcome.
:earth_africa: :earth_americas: :earth_asia:
Since this only happens in master, and not any released version, should we remove the mention from CHANGES.md ?
Looks good to me!
Travis is failing because of https://github.com/AnalyticalGraphicsInc/cesium/issues/7076, but tests are passing here and locally.
| gharchive/pull-request | 2018-09-26T20:13:27 | 2025-04-01T06:36:42.050908 | {
"authors": [
"OmarShehata",
"cesium-concierge",
"ggetz",
"lilleyse"
],
"repo": "AnalyticalGraphicsInc/cesium",
"url": "https://github.com/AnalyticalGraphicsInc/cesium/pull/7083",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1920681409 | CORS Error is there while fetching data on Main Section.
I think this is a major issue cuz starters and newcomers wont be able to see anything.
@LynxSumit yes it is expected, have you installed CORS extension from chrome? please install and try sometime browser wont allow us to fetch API's from external sites
@LynxSumit Hope you're able to access the site now? are we good to close this issue
sure i will review once you made PR
| gharchive/issue | 2023-10-01T10:08:40 | 2025-04-01T06:36:42.052991 | {
"authors": [
"Anandsg",
"LynxSumit"
],
"repo": "Anandsg/Hungry-hero",
"url": "https://github.com/Anandsg/Hungry-hero/issues/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1196955812 | Looking for Cargo.toml in a python project
Hi, thanks for this amazing project
I tried following the instructions on the readme and opened a Python file.
message on status bar:
could not find `Cargo.toml` in `path` or any parent directory
It also asked me to add rls to my rustup toolchain, so it looks like this is not considering the filetype.
Hi! I updated the messaging for this. Since we now have default values for the LSP configuration, it is unnecessary to show such messages in the status bar. It is now only logged.
| gharchive/issue | 2022-04-08T07:38:07 | 2025-04-01T06:36:42.058677 | {
"authors": [
"AndCake",
"rochacbruno"
],
"repo": "AndCake/micro-plugin-lsp",
"url": "https://github.com/AndCake/micro-plugin-lsp/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1787052495 | textarea-autosize conflict with custom key bind
I want a function like this: when I use shift + enter, I can wrap lines and the textarea can auto-adjust its height;
when I use the enter key, I can send the message.
BUT, when I bind the enter key to the textarea (with this plugin), I found the enter key was used for wrapping lines by default.
Does the plugin support this feature? I understand this is a more general scenario.
thanks very much
@joeylin in case it helps I think I figured this out. You can just do an e.preventDefault(). My resulting code looks something like this:
<textarea
onKeyDown={(e) => {
if (!e.shiftKey && e.key === 'Enter') {
e.preventDefault();
handleSendMessage();
}
}}
/>
| gharchive/issue | 2023-07-04T02:51:48 | 2025-04-01T06:36:42.061944 | {
"authors": [
"ifightcrime",
"joeylin"
],
"repo": "Andarist/react-textarea-autosize",
"url": "https://github.com/Andarist/react-textarea-autosize/issues/380",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1873444882 | Fix non-method error
I get this error when trying to use the prepare feature:
error TS roblox-ts: Attempted to assign non-method where method was expected.
18 prepare: () => {
~~~~~~~~~~~~~~~~
19 return {
~~~~~~~~~~~~~~~~~~~~~~~~
...
23 };
~~~~~~~~~~~~~~~~~~
24 },
I had to convert it to a method instead:
prepare() {
return {
payload: {
promptId: HttpService.GenerateGUID(),
},
};
},
But this compiles incorrectly:
prepare = function(self)
return {
payload = {
promptId = HttpService:GenerateGUID(),
},
}
end,
The self shouldn't be there.
This PR fixes that by changing some types to use a property instead of a method
Looks good to me.
| gharchive/pull-request | 2023-08-30T11:05:10 | 2025-04-01T06:36:42.080085 | {
"authors": [
"AndreRojasMartinsson",
"Scyfren"
],
"repo": "AndreRojasMartinsson/roduxutils",
"url": "https://github.com/AndreRojasMartinsson/roduxutils/pull/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1566822768 | core dumped
Arch Linux 6.1.8
Wayland 1.21.0
Sway 1.8
❯ waycorner
[wayland-client error] Attempted to dispatch unknown opcode 0 for wl_shm, aborting.
zsh: IOT instruction (core dumped) waycorner
It worked some weeks ago. Now no more :)
Something I am missing here?
Same error here
$ waycorner
[wayland-client error] Attempted to dispatch unknown opcode 0 for wl_shm, aborting.
Abandon (core dumped)
not really any info to find here:
~
❯ export RUST_LOG=trace
~
❯ waycorner
[2023-02-07T05:10:47Z DEBUG waycorner::config] Replacing ~/ with $HOME/
[2023-02-07T05:10:47Z INFO waycorner::config] Using config: /home/salkin/.config/waycorner/config.toml
[wayland-client error] Attempted to dispatch unknown opcode 0 for wl_shm, aborting.
zsh: IOT instruction (core dumped) waycorner
~
❯ export RUST_LOG=debug
~
❮ waycorner
[2023-02-07T05:11:07Z DEBUG waycorner::config] Replacing ~/ with $HOME/
[2023-02-07T05:11:07Z INFO waycorner::config] Using config: /home/salkin/.config/waycorner/config.toml
[wayland-client error] Attempted to dispatch unknown opcode 0 for wl_shm, aborting.
zsh: IOT instruction (core dumped) waycorner
~
❯ export RUST_LOG=info
~
❮ waycorner
[2023-02-07T05:11:19Z INFO waycorner::config] Using config: /home/salkin/.config/waycorner/config.toml
[wayland-client error] Attempted to dispatch unknown opcode 0 for wl_shm, aborting.
zsh: IOT instruction (core dumped) waycorner
~
❯ export RUST_LOG=warn
~
❮ waycorner
[wayland-client error] Attempted to dispatch unknown opcode 0 for wl_shm, aborting.
zsh: IOT instruction (core dumped) waycorner
~
❯ export RUST_LOG=error
~
❮ waycorner
[wayland-client error] Attempted to dispatch unknown opcode 0 for wl_shm, aborting.
zsh: IOT instruction (core dumped) waycorner
coredumpctl info
Is this a (g)libc versioning issue?
PID: 29687 (waycorner)
UID: 1000 (salkin)
GID: 1000 (salkin)
Signal: 6 (ABRT)
Timestamp: Tue 2023-02-07 06:11:44 CET (8min ago)
Command Line: waycorner
Executable: /usr/bin/waycorner
Control Group: /user.slice/user-1000.slice/session-1.scope
Unit: session-1.scope
Slice: user-1000.slice
Session: 1
Owner UID: 1000 (salkin)
Boot ID: ~~..ada3..~~
Machine ID: ~~..e5f8..~~
Hostname: ~~abcd~~
Storage: /var/lib/systemd/coredump/core.waycorner.1000.0f213c497c54456ea0db3b20e619ada3.29687.1675746704000000.zst (present)
Size on Disk: 115.8K
Message: Process 29687 (waycorner) of user 1000 dumped core.
Stack trace of thread 29687:
#0 0x00007f91d480464c n/a (libc.so.6 + 0x8864c)
#1 0x00007f91d47b4938 raise (libc.so.6 + 0x38938)
#2 0x00007f91d479e53d abort (libc.so.6 + 0x2253d)
#3 0x000055d41c8fb803 n/a (waycorner + 0x111803)
#4 0x00007f91d4ae7d65 n/a (libwayland-client.so + 0x7d65)
#5 0x00007f91d4ae7ffc wl_display_dispatch_queue_pending (libwayland-client.so + 0x7ffc)
#6 0x00007f91d4aeac10 wl_display_roundtrip_queue (libwayland-client.so + 0xac10)
#7 0x000055d41c85454f n/a (waycorner + 0x6a54f)
#8 0x000055d41c86560f n/a (waycorner + 0x7b60f)
#9 0x000055d41c87a6c2 n/a (waycorner + 0x906c2)
#10 0x000055d41c8684ed n/a (waycorner + 0x7e4ed)
#11 0x000055d41c86aa73 n/a (waycorner + 0x80a73)
#12 0x000055d41c8478b9 n/a (waycorner + 0x5d8b9)
#13 0x000055d41c9ae96f n/a (waycorner + 0x1c496f)
#14 0x000055d41c868f48 n/a (waycorner + 0x7ef48)
#15 0x00007f91d479f290 n/a (libc.so.6 + 0x23290)
#16 0x00007f91d479f34a __libc_start_main (libc.so.6 + 0x2334a)
#17 0x000055d41c832c55 n/a (waycorner + 0x48c55)
ELF object binary architecture: AMD x86-64
#15 fixes this.
The smithay-client-toolkit, wayland-client, and wayland-protocols crates were out of date.
| gharchive/issue | 2023-02-01T21:02:11 | 2025-04-01T06:36:42.084784 | {
"authors": [
"Okanda",
"jhpaques",
"salkin-mada"
],
"repo": "AndreasBackx/waycorner",
"url": "https://github.com/AndreasBackx/waycorner/issues/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
341322186 | Example with code
Hi @AndreiMisiukevich
could you translate your XAML example in CSHARP?
Thanks
Alessandro
Hi.
Sure, I will do it and let you know
Thanks for feedback
wow, you are fast. Thanks!
Readme is updated =)
Thanks I take a look
| gharchive/issue | 2018-07-15T13:44:20 | 2025-04-01T06:36:42.087328 | {
"authors": [
"AndreiMisiukevich",
"acaliaro"
],
"repo": "AndreiMisiukevich/ContextMenu",
"url": "https://github.com/AndreiMisiukevich/ContextMenu/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2696212587 | Update VoiceChat.cs
Fixed stuttering issues with Voice Activation, optimized allocation, and threading.
Also added a function for getting the mic input volume
| gharchive/pull-request | 2024-11-26T21:50:02 | 2025-04-01T06:36:42.088140 | {
"authors": [
"noleakk"
],
"repo": "AndrejStojkovic/FishNet-Voice",
"url": "https://github.com/AndrejStojkovic/FishNet-Voice/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
31766685 | Rotation
Hello,
This script works fine with no rotation, but with rotate=1 in the kernel it freezes and does not work... any idea? Thanks.
Hi Andrew,
Revisiting the raspi2png on rotated screen. Were you able to get any update on solving the rotated screen problem?
Thanks a lot.
Regards,
Chandra
On Sun, Oct 5, 2014 at 3:28 AM, Andrew Duncan [email protected]
wrote:
Hi Sorry for the delay. I have been busy on other things. No the change
only related to compiling in the math library. I will try to get back to
your issue shortly. Again sorry for the delay.
Hi Chandra,
Sorry for the very long delay. Other things came up and then I completely forgot. I am sorry!
As you will see above I have finally managed to reproduce the issue and I have raised an issue on the firmware.
The good news is I think I have a work around. Try using the -b option of omxplayer. The option adds a black layer underneath the playing video. As your video is full screen, you won't see it. However, from my experiments by adding the extra layer, the video is no longer distorted. So try
omxplayer -b yourvideo.mpg
Hi Chandra,
I am sorry that this took so long (on my part). The Raspberry PI firmware has now been fixed (you will need to use rpi-update for now to get the latest firmware). Once the bug was reported on the Raspberry Pi firmware the fix was made very quickly. Once again sorry that it took so long for me to raise the issue. I hope that it now works for you.
Andrew
Hi Andrew,
Thanks a lot. It works perfectly. My sincere gratitude to you for addressing this problem.
One suggestion for raspi2png - a setting for resolution (to reduce the
file size).
Thanks again.
With kind regards,
Chandra
Hi Chandra,
Thanks for your patience.
I will have a think about resolution ... so just to be clear, are you wanting to reduce the file size or have a smaller (fewer-pixel) image? Reducing the file size would be possible if I created a raspi2jpg program, which is possible. Reducing the number of pixels is pretty trivial and could be implemented by specifying the desired width (and/or height) on the command line. Let me know.
With regards to this issue, are you happy to close it now?
Thanks,
Andrew
Hi Andrew,
We are delighted with the solution you have provided and you may close the issue. I have tested it a few times.
Yes, we would like to have a reduced file size as an option to raspi2png.
Currently I am using a program to create a lower-resolution jpg file (from snapshot.png) to reduce the file size.
Thanks.
Regards,
Chandra
Hi Andrew,
Thanks for all your help, I have been using raspi2png quite a bit.
Right now it has been painful to reduce the file size for copying the image to the cloud - if you could kindly provide support for raspi2jpg or a way to reduce the number of pixels, that would be great.
Thanks.
Regards,
Chandra
Hi Chandra,
I am still struggling to understand what you require. You can reduce the number of pixels in the snapshot using either the --width or --height command line options.
For reduced resolution, use --width and/or --height to specify the dimensions of the snapshot. If you just specify one, the other is calculated from the aspect ratio of the screen.
Unfortunately, at the moment that probably won't work for you, as there is an issue open about this feature on rotated displays and a related issue for the Raspberry Pi firmware.
Could you please explain why using the --width and/or --height options is not a solution for you.
Andrew
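(For example, a sketch of the usage described above; the default output name snapshot.png is taken from earlier in this thread:
raspi2png --width 640   # height is calculated from the screen's aspect ratio; output goes to snapshot.png
)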
Hi Chandra.
The bug that prevented resizing on rotated screens is now addressed in the latest firmware release.
Andrew
Hi Andrew,
Thanks a lot. I will test it out this weekend.
With kind regards,
Chandra
I have installed and reinstalled this on my Pi 2. I have successfully taken two screenshots. Then it always fails with raspi2png: vc_dispmanx_snapshot() failed. Does anybody have any idea what's happening? Why does it work one second, but not the next?
Follow-up: if I delete the directory and then completely reinstall, sometimes it will work and sometimes it won't. Maddening. Any help appreciated.
OK I will see if I can reproduce this. It would be good if you opened a new issue rather than adding to a closed and probably unrelated issue.
Sorry about that. I will. Just was glad to find some people talking about the program and got excited I guess. Thanks.
No problem. I am the author of the program.
Think I figured out the problem. When scraping in RetroPie on a Raspberry Pi, while doing the final step (choosing the metadata), raspi2png always fails. Today I can't get it to fail otherwise, so maybe it has something to do with the scrape utility. So far today raspi2png is working like a champ for me.
Thanks for the program. I will mention it in the credits of the book I'm writing; it's been indispensable now that I've got it running.
Matt
| gharchive/issue | 2014-04-17T21:59:27 | 2025-04-01T06:36:42.116245 | {
"authors": [
"AndrewFromMelbourne",
"chandra50",
"matttheorbiter",
"michaelhanin"
],
"repo": "AndrewFromMelbourne/raspi2png",
"url": "https://github.com/AndrewFromMelbourne/raspi2png/issues/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1974023083 | Feature Request: Session Continuation and OpenAPI Specification Import
Currently, closing the browser results in a loss of the generated OpenAPI specification data. To enhance usability and continuity, could we consider the following features?
Implementing a session save feature that allows users to pause and resume their recording sessions at a later time.
Providing an option to upload a JSON file of a previously downloaded OpenAPI specification, enabling users to continue recording from where they left off.
Great suggestion @colin6-work, I have things set up to make this possible. I'll implement this in the next version.
This is now in release v1.2.0.
| gharchive/issue | 2023-11-02T11:12:56 | 2025-04-01T06:36:42.119137 | {
"authors": [
"AndrewWalsh",
"colin6-work"
],
"repo": "AndrewWalsh/openapi-devtools",
"url": "https://github.com/AndrewWalsh/openapi-devtools/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2647311678 | Feature X
Created feature x.
Everything OK.
| gharchive/pull-request | 2024-11-10T14:42:38 | 2025-04-01T06:36:42.133299 | {
"authors": [
"Andy-Wall"
],
"repo": "Andy-Wall/my-rag-project",
"url": "https://github.com/Andy-Wall/my-rag-project/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1959825265 | Newspaper not scraping full content or picking up random sentences in article.
Issue by shashank7596
Thu Apr 9 16:05:27 2020
Originally opened as https://github.com/codelucas/newspaper/issues/799
Hello everyone.
There are a few URLs, especially for one particular site, where newspaper is not scraping the full content for some reason. It randomly picks up some sentences in the article. Can anyone help fix this issue? The newspaper scraper is very crucial in our project right now, and for some websites it's not working. For different articles in the same domain it behaves differently.
Below are some example URLs:
https://www.clinicaltrials.gov/ct2/show/study/NCT00034216?cond=breast+cancer&lupd_s=03%2F26%2F2020&lupd_d=14
https://www.clinicaltrials.gov/ct2/show/study/NCT04335006?cond=triple+negative+breast+cancer
https://www.clinicaltrials.gov/ct2/show/study/NCT04338269
https://www.clinicaltrials.gov/ct2/show/NCT04332653?cond=breast+cancer&lupd_s=03%2F25%2F2020&lupd_d=14
Newspaper is not suited for scraping research papers or studies.
| gharchive/issue | 2023-10-24T18:20:28 | 2025-04-01T06:36:42.139698 | {
"authors": [
"AndyTheFactory"
],
"repo": "AndyTheFactory/newspaper4k",
"url": "https://github.com/AndyTheFactory/newspaper4k/issues/449",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |