id (string, 4-10 chars) | text (string, 4-2.14M chars) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
457567305 | Save Logs button is functional even if device is not connected
The Save Logs button is functional even if no device is detected. The button should be disabled when there is no connected device, or an error message should pop up saying "Please connect your device first" when the button is clicked. Let's implement whichever is the easiest/fastest to do.
Steps to reproduce:
Install build: https://www.dropbox.com/s/1l1hxinlwni6xhn/Android Test Tool.zip?dl=0
Launch app with no device connected
Click Save Logs button
I disabled the click functionality when no device is connected. Implemented in 76b2610
| gharchive/issue | 2019-06-18T16:00:56 | 2025-04-01T06:36:42.142337 | {
"authors": [
"Andychochocho",
"oantila"
],
"repo": "Andychochocho/android-testing-tool",
"url": "https://github.com/Andychochocho/android-testing-tool/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2219942155 | Add ignore linter rules to autogenerated files
Please add "ignore_for_file" statements to all autogenerated files that will always be overwritten, like main_flavor.dart and flavors.dart.
For example:
"always_use_package_imports"
"prefer_relative_imports"
There are likely more that are of use.
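As an illustration, such a header could look like the following. This is a sketch, not flutter_flavorizr's actual generated output, and the exact rule list would depend on the project's analysis_options:

```dart
// GENERATED CODE - DO NOT MODIFY BY HAND
//
// Suppress lints that conflict with project-level rules in this
// regenerated file:
// ignore_for_file: always_use_package_imports, prefer_relative_imports
```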
Questions and bugs
If you need help with the use of the library or you just want to request new features, please use the Discussions section of the repository. Issues opened as questions will be automatically closed.
| gharchive/issue | 2024-04-02T08:55:26 | 2025-04-01T06:36:42.154228 | {
"authors": [
"AngeloAvv",
"YukiAttano"
],
"repo": "AngeloAvv/flutter_flavorizr",
"url": "https://github.com/AngeloAvv/flutter_flavorizr/issues/251",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
818330467 | API Date Filters
There should exist a Date Filter in the API Filter framework that uses the Range Filter strategy described in #121. All of our non-Pivot Models use the standard created_at and updated_at timestamps, which allow us to perform queries on objects created or updated within given ranges. Currently, this can only be accomplished by sorting results on these fields.
This has been implemented through filter conditions in commit f21f41e3a65d77f6a77cc92d9902fe13ba3f8528, date validation in commit eb0991a642ed12e3739679a74046c918bc034ca7, and greater precision timestamps in 6192b9e31ff8c139e99ff0bf45b4a701a15b4cf5.
Adopted date format (to match resource attributes output format)
YYYY-MM-DDTHH:mm:ss.SSSSSS
All date components except for year are optional.
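As a rough illustration (not the animethemes-server implementation, which is likely PHP), the rule that every component after the year is optional can be expressed as one nested-optional pattern, where each level is only allowed if the previous one is present:

```python
import re

# Hypothetical validator for the adopted filter date format
# YYYY-MM-DDTHH:mm:ss.SSSSSS with everything after the year optional.
DATE_RE = re.compile(
    r"^\d{4}"        # year (required)
    r"(-\d{2}"       # -MM
    r"(-\d{2}"       # -DD
    r"(T\d{2}"       # THH
    r"(:\d{2}"       # :mm
    r"(:\d{2}"       # :ss
    r"(\.\d{1,6})?"  # .SSSSSS (1-6 fractional digits)
    r")?)?)?)?)?$"
)

def is_valid_filter_date(value: str) -> bool:
    """Return True if the string matches the range-filter date format."""
    return DATE_RE.match(value) is not None

print(is_valid_filter_date("2021"))                        # True
print(is_valid_filter_date("2021-02-28T22:33:17.123456"))  # True
print(is_valid_filter_date("02-28"))                       # False: year required
```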
| gharchive/issue | 2021-02-28T22:33:17 | 2025-04-01T06:36:42.160222 | {
"authors": [
"paranarimasu"
],
"repo": "AnimeThemes/animethemes-server",
"url": "https://github.com/AnimeThemes/animethemes-server/issues/123",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2291275646 | Add Topics
In GSSoC'24, GitHub Topics will help the discoverability of your project.
I see that you already have great topics on your repository!
I would recommend adding topics for the tools you used to build it, like "vs-code" or "ghdesktop", to improve your discoverability.
If you are happy with the topics you have, feel free to close this issue. 👍
Please assign this issue to me.
| gharchive/issue | 2024-05-12T10:41:59 | 2025-04-01T06:36:42.161682 | {
"authors": [
"DARSHANITRIPATHI",
"Princegupta101"
],
"repo": "Anishkagupta04/RAPIDOC-HEALTHCARE-WEBSITE-",
"url": "https://github.com/Anishkagupta04/RAPIDOC-HEALTHCARE-WEBSITE-/issues/53",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2051900790 | how to use command line
Hi, can you please guide me on how to use this from the command line (I don't want the GUI)?
The command-line feature was removed when the GUI version was published. I just made a fork to re-implement support for command-line parameters.
Thank you! Can you please share the link? I also want to contribute.
@codingbeast I haven't committed anything yet; I will share it as soon as I find some time to do so.
| gharchive/issue | 2023-12-21T07:52:15 | 2025-04-01T06:36:42.163128 | {
"authors": [
"ChrisTG742",
"codingbeast"
],
"repo": "Anjok07/ultimatevocalremovergui",
"url": "https://github.com/Anjok07/ultimatevocalremovergui/issues/1042",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1058951130 | 🛑 AnnAngela.cn is down
In d316837, AnnAngela.cn (https://ping.annangela.cn) was down:
HTTP code: 0
Response time: 0 ms
Resolved: AnnAngela.cn is back up in 7c307a7.
| gharchive/issue | 2021-11-19T22:01:45 | 2025-04-01T06:36:42.169375 | {
"authors": [
"AnnAngela"
],
"repo": "AnnAngela/annangela.cn-monitor",
"url": "https://github.com/AnnAngela/annangela.cn-monitor/issues/261",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
[BUG] Random connections to Discord voice channels
Describe the problem
Connecting to a Discord voice channel works only randomly. I noticed a pattern: right after rebooting the router it always connects with 100% certainty (the route is established and I can talk), but after a while it becomes completely random. It may fail to establish a connection at all, or it may work without problems on any server in any region.
P.S. Interestingly, the same thing happens with instagram.com: it randomly loads or fails to load. Note that I have a VPN configured on the router (so the phone can use all the perks of the home network, so to speak), and through it Instagram always loads without any problems, which is even stranger, in my opinion.
Router model
Keenetic Ultra (KN-1811)
Installed version: 4.1.7
ISP
ATHome (eth3)
Run the following commands and attach their output
opkg info nfqws-keenetic
Package: nfqws-keenetic
Version: 2.4.1
Depends: iptables, busybox
Conflicts: tpws-keenetic
Status: install user installed
Section: net
Architecture: aarch64-3.10
Size: 121302
Filename: nfqws-keenetic_2.4.1_aarch64-3.10.ipk
Conffiles:
/opt/etc/nfqws/nfqws.conf b49e99e9938987df7d24e87d8b87f69cf1f39c3ddf0368462c92bf36e8bbe3a5
/opt/etc/nfqws/user.list 45dc2adaa172b86d73369c6ed12a8a0e648b851b66293b11514c3b1d4bd3fce6
/opt/etc/nfqws/auto.list e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
/opt/etc/nfqws/exclude.list 31fbbe06a06e48f9047db40300750b7f11c203a7434fb1a3151f33518ccd9805
Description: NFQWS service
Installed-Time: 1728571688
/opt/etc/init.d/S51nfqws restart
Stopping NFQWS service...
we have 3 user defined desync profile(s) and default low priority profile 0
Loading hostlist /opt/etc/nfqws/auto.list
loading plain text list
Loaded 6 hosts from /opt/etc/nfqws/auto.list
Loading hostlist /opt/etc/nfqws/user.list
loading plain text list
Loaded 137655 hosts from /opt/etc/nfqws/user.list
Loading hostlist /opt/etc/nfqws/auto.list
loading plain text list
Loaded 6 hosts from /opt/etc/nfqws/auto.list
Loading hostlist /opt/etc/nfqws/user.list
loading plain text list
Loaded 137655 hosts from /opt/etc/nfqws/user.list
Loading hostlist /opt/etc/nfqws/exclude.list
loading plain text list
Loaded 26 hosts from /opt/etc/nfqws/exclude.list
Loading hostlist /opt/etc/nfqws/exclude.list
loading plain text list
Loaded 26 hosts from /opt/etc/nfqws/exclude.list
Started NFQWS service
cat /opt/etc/nfqws/nfqws.conf
# Provider network interface, e.g. eth3
# You can specify multiple interfaces separated by space, e.g. ISP_INTERFACE="eth3"
ISP_INTERFACE="eth3"
# All arguments here: https://github.com/bol-van/zapret (search for `nfqws` on the page)
# HTTP(S) strategy
NFQWS_ARGS="--filter-udp=50000-65535 --dpi-desync=fake,tamper --dpi-desync-any-protocol --dpi-desync-cutoff=d4 --dpi-desync-repeats=6 --new --dpi-desync=fake,split2 --dpi-desync-autottl=2 --dpi-desync-split-pos=1 --dpi-desync-fooling=md5sig,badseq --dpi-desync-fake-tls=/opt/etc/nfqws/tls_clienthello.bin --dpi-desync=fake,split --dpi-desync-split-pos=7 --dpi-desync-ttl=12 --dpi-desync-fooling=md5sig,badsum"
# QUIC strategy
NFQWS_ARGS_QUIC="--dpi-desync=fake --dpi-desync-repeats=6 --dpi-desync-cutoff=d4 --dpi-desync-fooling=badsum --dpi-desync-fake-quic=/opt/etc/nfqws/quic_initial.bin"
# auto - automatically detects blocked resources and adds them to the auto.list
NFQWS_EXTRA_ARGS="--hostlist=/opt/etc/nfqws/user.list --hostlist-auto=/opt/etc/nfqws/auto.list --hostlist-auto-debug=/opt/var/log/nfqws.log --hostlist-exclude=/opt/etc/nfqws/exclude.list"
# IPv6 support
IPV6_ENABLED=0
# HTTP support
HTTP_ENABLED=0
# QUIC support
QUIC_ENABLED=1
UDP_PORTS=443,50000:65535
# Syslog logging level (0 - silent, 1 - debug)
LOG_LEVEL=0
NFQUEUE_NUM=200
USER=nobody
CONFIG_VERSION=3
ps | grep nfqws
4656 nobody 33396 S /opt/usr/bin/nfqws --daemon --pidfile=/opt/var/run/nfqws.pid --user=nobody --qnum=200 --filt
4770 root 5976 S grep nfqws
iptables-save | grep 200
-A POSTROUTING -o eth3 -p udp -m multiport --dports 443,50000:65535 -m connbytes --connbytes 1:8 --connbytes-mode packets --connbytes-dir original -m mark ! --mark 0x40000000/0x40000000 -j NFQUEUE --queue-num 200 --queue-bypass
-A POSTROUTING -o eth3 -p tcp -m tcp --dport 443 -m connbytes --connbytes 1:8 --connbytes-mode packets --connbytes-dir original -m mark ! --mark 0x40000000/0x40000000 -j NFQUEUE --queue-num 200 --queue-bypass
-A _NDM_IPSEC_FORWARD_MANGLE -d 172.20.8.0/23 -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1200
-A _NDM_IPSEC_FORWARD_MANGLE -s 172.20.8.0/23 -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1200
sysctl net.netfilter.nf_conntrack_checksum
net.netfilter.nf_conntrack_checksum = 0
The same nonsense with Instagram: the site won't open from a browser on the computer, but loads fine from the app on phones (instagram.com won't load in the phone browser either). Everything else works perfectly. Where should I dig?
Same router here. Are the Discord servers listed in user.list?
discord-attachments-uploads-prd.storage.googleapis.com
discord-activities.com
discord.design
discord.gifts
discord.store
discord.tools
discordactivities.com
discordmerch.com
discordpartygames.com
discordsays.com
discordsez.com
discord.com
discordapp.net
discordapp.com
discord.gg
discord.app
discord.media
discordcdn.com
discord.dev
discord.new
discord.gift
discordstatus.com
dis.gd
discord.co
| gharchive/issue | 2024-10-10T20:34:51 | 2025-04-01T06:36:42.193690 | {
"authors": [
"EasyKill33",
"FdFilosof",
"shurshick"
],
"repo": "Anonym-tsk/nfqws-keenetic",
"url": "https://github.com/Anonym-tsk/nfqws-keenetic/issues/90",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
354054884 | displaycameras.conf.1920x1200
Any way we can get a default configuration file for monitors with a resolution of 1920x1200? Maybe a 6 camera setup.
16x9 feeds won't fit cleanly on a 16x10 monitor. 16x10 windows will distort the feeds but leave no black bars on the sides of each window. Which would you prefer (stretch or bars)?
You the man! Stretch.
https://github.com/Anonymousdog/displaycameras/blob/master/example_layouts/layout.conf.1920x1200.6on4x4 rotates six feeds through a 2x2 matrix. 3x2 or 2x3 would be very distorted. Recommend feeds no larger than the window size, 960x600.
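The recommendation comes down to simple aspect arithmetic; as a quick sketch (variable names are mine, not the project's):

```python
# A 1920x1200 monitor split into a 2x2 grid gives 960x600 (16:10) windows.
# Fitting a 16:9 feed into one without distortion leaves horizontal bars:
win_w, win_h = 1920 // 2, 1200 // 2   # window size: 960 x 600
feed_h = win_w * 9 // 16              # height of an undistorted 16:9 feed
bar = (win_h - feed_h) // 2           # black bar above and below the feed
print(win_w, win_h, feed_h, bar)      # 960 600 540 30
```

So "stretch" trades a 60-pixel total vertical distortion per window for bar-free output.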
| gharchive/issue | 2018-08-26T00:55:35 | 2025-04-01T06:36:42.197093 | {
"authors": [
"Anonymousdog",
"MechX2"
],
"repo": "Anonymousdog/displaycameras",
"url": "https://github.com/Anonymousdog/displaycameras/issues/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
587124499 | Dummy bug for checking reports
will be closed shortly
Checking the addition of comment on an issue.
Checking closing an issue (final check hopefully)
| gharchive/issue | 2020-03-24T17:04:15 | 2025-04-01T06:36:42.198139 | {
"authors": [
"epsilon-0"
],
"repo": "AnsiMail/AnsiMail",
"url": "https://github.com/AnsiMail/AnsiMail/issues/17",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
} |
2460856033 | [Offline Support] Pre-packaging Docker Image for Re-Ranker with Pre-Downloaded Libraries in strict proxy environments
Issue: Pre-packaging Docker Image for Re-Ranker with Pre-Downloaded Libraries
Description:
The current setup for the Re-Ranker application involves downloading libraries and packages at runtime when the Docker container is initiated. This approach introduces several challenges, particularly in environments where internet access is restricted or where applications are subject to strict network proxies.
In our deployment environment, all network traffic is routed through proxies, and internet access may be limited or heavily controlled. As a result, the application faces delays, failures, or interruptions when attempting to download required libraries and packages at runtime. This dependency on real-time downloading can lead to instability and unreliability, especially when running the application in environments with such network constraints.
Proposal:
To mitigate these issues, the Flash Rank Re-Ranker Docker image should be pre-packaged with all necessary libraries, dependencies, and language model (LLM) packages already downloaded and included. By doing so, the application will be fully self-contained and will not need to access external resources when the container is started. This ensures that the application can run smoothly and reliably, even in environments with strict network controls or limited internet access.
Benefits:
Improved Reliability: The application will no longer be dependent on external network conditions to function correctly.
Faster Startup Times: Since all dependencies are pre-downloaded, the container can start up more quickly without waiting for network downloads.
Network Independence: The application can run in offline environments or environments with restricted internet access.
Consistency: Ensures that the same versions of libraries and packages are used across different environments, reducing the risk of version conflicts or incompatibilities.
This approach will significantly enhance the robustness and reliability of the Flash Rank Re-Ranker application in production environments.
Hey!
I'm not quite sure that this issue is within scope of what we'd like to do here. rerankers is purposefully a very light, thin library so that it can be slotted in anywhere, and building docker images with other packages for various usecases isn't what we want to do for it at all, since it's simple enough for users to do so and only include the packages that they personally care about.
Is there any fundamental issue with the library that makes it difficult for you to build your own Docker images including it? If so, I'd be more than happy to take a look at what's causing the issues!
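For readers hitting the same constraint, a user-side image along the lines the maintainer suggests might look like this. Everything here is an assumption (the extras name, the `Reranker('flashrank')` call, and the env vars), not an official image; substitute the model and extras you actually use:

```dockerfile
# User-side sketch of a self-contained, offline-capable image.
FROM python:3.11-slim

# Resolve all Python dependencies at build time, behind whatever proxy
# the build host has, so nothing is fetched at container start.
RUN pip install --no-cache-dir "rerankers[flashrank]"

# Trigger the model download once during the build so the weights are
# baked into an image layer.
RUN python -c "from rerankers import Reranker; Reranker('flashrank')"

# Tell common ML libraries to never hit the network at runtime.
ENV HF_HUB_OFFLINE=1 TRANSFORMERS_OFFLINE=1
```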
| gharchive/issue | 2024-08-12T12:25:01 | 2025-04-01T06:36:42.206171 | {
"authors": [
"bclavie",
"khanzzirfan"
],
"repo": "AnswerDotAI/rerankers",
"url": "https://github.com/AnswerDotAI/rerankers/issues/24",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
UI fixes - font styles, responsiveness, text alignment, image resizing; aligned the arrow image perfectly at the middle of the page for both mobile and desktop.
I have devoted 2 hours to this; please merge this request under hacktoberfest-accepted. Thank you!!
Thanks for your efforts.
| gharchive/pull-request | 2021-10-03T19:09:22 | 2025-04-01T06:36:42.321512 | {
"authors": [
"AkhilChoubey",
"Anveshreddy18-collab"
],
"repo": "Anveshreddy18-collab/Anveshreddy18-collab.github.io",
"url": "https://github.com/Anveshreddy18-collab/Anveshreddy18-collab.github.io/pull/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
771220609 | Open ImageEditorController after image selection.
Hi, thanks for this lib.
Is it possible to open the editor controller right after an image is picked, just pushing it, instead of dismissing the picker and then presenting ImageEditorController in the completion handler?
Thanks.
Run the Example project and open the Avatar Picker controller.
Is this what you expected?
Similar to that, but because we dismiss the picker controller and then present the editor controller, even without animation the main view controller is visible for a few milliseconds. It would be nice to have an option to push the EditorController right after image selection, with a simple flag such as openEditorAfterSelection.
Thanks
Also, here are two screenshots: on the editor screen I can see Chinese text even though my app uses English localization, and on the picker screen the Cancel button is not visible.
I have updated the Avatar Picker code and fixed the issue where the main view controller is visible for a few milliseconds. Please download the newest code on master.
We will do this feature like that:
Add flag in PickerOptionsInfo default is false.
Tap photo in AssetPickerViewController, check disable rules, push editor if pass check.
User tap "Done" in Editor, will back to Picker and select this photo.
Picker will dismiss immediately if limit is 1.
User tap "Cancel" in Editor, will back to Picker and unselect this photo.
Show live camera in cell instead of static camera image, in picker collectionView.
We will not do this feature, because the app would request photo, camera and microphone permissions when the user first opens the picker. We should request a permission when the user needs the feature instead of requesting all permissions the first time.
Yep, this would be great. I checked the example app as well, and right now it shows without any lag. But because I also use the camera in the picker, I can't open the Editor right after an image is picked, since it may have been taken with the camera and already edited. I will wait for this feature to be released. I tried something similar in my fork but faced a few issues while opening the Editor.
Thanks.
I am sorry, will it be available within the upcoming 2 weeks? I need to publish my app in a few weeks and wanted to know whether this option will be available by then. Thanks.
We will finish this feature in the next 2 weeks, but we may not release a new version. You should use SPM as a dependency, or fork this repo and release it to CocoaPods.
Thanks a lot, I am using SPM
We have finished this feature, but we will not release a new version; you can use SPM to download the latest code.
You can see the sample code on AvatarPickerController.
Works as expected, thanks a lot.
| gharchive/issue | 2020-12-18T22:56:32 | 2025-04-01T06:36:42.338122 | {
"authors": [
"Narek1994",
"RayJiang16"
],
"repo": "AnyImageProject/AnyImageKit",
"url": "https://github.com/AnyImageProject/AnyImageKit/issues/59",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2749435308 | Virus
So, basically every piece of security software on the planet flags the release as a virus. What's going on with this? I had to load it in a VM to stop Windows Defender from automatically deleting it.
https://nuitka.net/user-documentation/common-issue-solutions.html#id3
I hope this can help!
| gharchive/issue | 2024-12-19T07:02:59 | 2025-04-01T06:36:42.340579 | {
"authors": [
"AnyaCoder",
"Igrium"
],
"repo": "AnyaCoder/fish-speech-gui",
"url": "https://github.com/AnyaCoder/fish-speech-gui/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1523046849 | feat: change address arg to addressType
What I did
changed the address arg to an AddressType
made a test for it
fixes: #Ape-368
How I did it
How to verify it
Checklist
[ ] All changes are completed
[ ] New test cases have been added
[ ] Documentation has been updated
This one seems to be failing tests; trying to figure out why now.
| gharchive/pull-request | 2023-01-06T19:36:41 | 2025-04-01T06:36:42.360495 | {
"authors": [
"Ninjagod1251",
"sabotagebeats"
],
"repo": "ApeWorX/ape",
"url": "https://github.com/ApeWorX/ape/pull/1211",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2557805018 | feat: added draft of structured errors
What I did
Refactored the except CompilerError as e: block in compile_project into a structured error response for failed tasks caused by CompilerError. This introduces a new Pydantic model to format the error and ensure it's compatible with frontend systems via FastAPI.
fixes: #50
How I did it
Added a CompilerErrorModel class using Pydantic, which includes fields like message, line, column, and errorType.
Updated the logic within the compile_project function to handle and format CompilerError in line with the new model.
How to verify it
Trigger a compilation task that raises a CompilerError by submitting a Vyper contract with a syntax error.
Check the response from /exceptions/{task_id} to verify that the error details are returned in the correct format, as described in the issue.
Ensure that the response structure matches the frontend expectations (JSON format with specific error fields).
Checklist
[] All changes are completed
[] New test cases have been added for error handling and proper formatting of CompilerError.
[ ] Documentation has been updated to reflect the changes in /exceptions/{task_id} endpoint behavior.
What still needs to be checked and verified
Refactored the /exceptions/{task_id} endpoint to check specifically for tasks that failed due to a CompilerError and return a structured JSON response for the error.
I'm still using pdb to make sure the object can parse the error, but it formats well so far.
Also, do we need to change get_task_exceptions to handle the new format?
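As a rough stand-in for the model described above (the four field names come from the issue text; the dataclass, the use of Python's SyntaxError in place of Vyper's CompilerError, and every other name are assumptions):

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class CompilerErrorModel:
    """Stand-in for the Pydantic model; fields mirror the issue text."""
    message: str
    line: Optional[int] = None
    column: Optional[int] = None
    errorType: str = "CompilerError"

def format_compiler_error(exc: SyntaxError) -> dict:
    """Convert a caught compile error into the structured JSON payload."""
    return asdict(CompilerErrorModel(
        message=str(exc.msg or exc),
        line=exc.lineno,
        column=exc.offset,
    ))

# Demonstrate with Python's own compiler standing in for Vyper's:
try:
    compile("def f(:\n    pass", "<contract>", "exec")
except SyntaxError as err:
    payload = format_compiler_error(err)
print(payload["errorType"], payload["line"])
```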
| gharchive/pull-request | 2024-09-30T23:09:54 | 2025-04-01T06:36:42.365995 | {
"authors": [
"Ninjagod1251"
],
"repo": "ApeWorX/hosted-compiler",
"url": "https://github.com/ApeWorX/hosted-compiler/pull/53",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
build error
Hello!
When I installed Apollo and ran sudo ./apollo.sh build, I got the following errors. How can I solve this?
System information
OS Platform and Distribution (e.g., Linux Ubuntu 18.04):
Ubuntu 18.04
Apollo installed from (source or binary):
Apollo version (3.5, 5.0, 5.5, 6.0):
Apollo v6.0
Output of apollo.sh config if on master branch:
(02:11:13) WARNING: Download from https://github.com/bazelbuild/rules_swift/releases/download/0.12.1/rules_swift.0.12.1.tar.gz failed: class java.io.IOException connect timed out
(02:11:13) ERROR: An error occurred during the fetch of repository 'build_bazel_rules_swift':
Traceback (most recent call last):
File "/apollo/.cache/bazel/540135163923dd7d5820f3ee4b306b32/external/bazel_tools/tools/build_defs/repo/http.bzl", line 111, column 45, in _http_archive_impl
download_info = ctx.download_and_extract(
Error in download_and_extract: java.io.IOException: Error downloading [https://github.com/bazelbuild/rules_swift/releases/download/0.12.1/rules_swift.0.12.1.tar.gz] to /apollo/.cache/bazel/540135163923dd7d5820f3ee4b306b32/external/build_bazel_rules_swift/temp11293974715659360049/rules_swift.0.12.1.tar.gz: connect timed out
(02:11:13) ERROR: /apollo/modules/v2x/proto/BUILD:95:16: //modules/v2x/proto:_v2x_service_car_to_obu_cc_grpc_grpc_codegen depends on @com_github_grpc_grpc//src/compiler:grpc_cpp_plugin in repository @com_github_grpc_grpc which failed to fetch. no such package '@build_bazel_rules_swift//swift': java.io.IOException: Error downloading [https://github.com/bazelbuild/rules_swift/releases/download/0.12.1/rules_swift.0.12.1.tar.gz] to /apollo/.cache/bazel/540135163923dd7d5820f3ee4b306b32/external/build_bazel_rules_swift/temp11293974715659360049/rules_swift.0.12.1.tar.gz: connect timed out
(02:11:13) ERROR: Analysis of target '//modules/v2x/proto:_v2x_service_car_to_obu_cc_grpc_grpc_codegen' failed; build aborted: Analysis failed
(02:11:13) INFO: Elapsed time: 75.534s
(02:11:13) INFO: 0 processes.
(02:11:13) FAILED: Build did NOT complete successfully (0 packages loaded, 0 targets configured)
currently loading: @com_github_grpc_grpc//src/compiler ... (3 packages)
Steps to reproduce the issue:
Please use bullet points and include as much details as possible:
Supporting materials (screenshots, command lines, code/script snippets):
@Yuanzhuo-Liu How did you solve this?
@Yuanzhuo-Liu Hello, may I ask how you solved this?
I just ran it again and this time it worked, so it seems worth retrying a few times; it was probably a network issue.
+1. Manually downloading the file and copying it to the correct path can fix a temporary network problem.
Adding a proxy setup to dev_into.sh can solve it:
-e HTTP_PROXY="http://[yourproxyip]:[port]" \
-e HTTPS_PROXY="http://[yourproxyip]:[port]" \
@OrangeSoda1 Have you solved it now?
@Meta-YZ How did you solve it in the end? Please help!
This problem has been troubling me for days; please help!
Sorry guys, just saw this. I didn't solve the problem.
Just seeing this now; surprisingly, so many people have the same problem. To reply to everyone at once: as I recall from looking into it, the cause of the error seemed to be that my graphics card was too old, so I gave up...
Change the DNS settings on your computer. From the screenshot, the build is downloading files from GitHub but gets "connect timed out", so the connection to GitHub is failing.
| gharchive/issue | 2021-09-16T02:17:15 | 2025-04-01T06:36:42.386375 | {
"authors": [
"HuangCongQing",
"Meta-YZ",
"OrangeSoda1",
"POPCharlyn",
"PhoenixZhong",
"c0y0h",
"chasingw"
],
"repo": "ApolloAuto/apollo",
"url": "https://github.com/ApolloAuto/apollo/issues/14105",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1348478839 | Planning very sensitive to vehicle vibrations, frequently fails.
Using Apollo 7.0:
When driving autonomously, the vehicle very frequently stops because a planning cycle fails due to the Piecewise Jerk Nonlinear Speed Optimizer determining that the QP problem is infeasible. I have determined that this happens when the vehicle shakes, presumably since the IMU velocity/acceleration values jump around during shaking. I can't figure out why I am having this issue now when I never did before, any insight into what might be causing this is greatly appreciated.
Maybe the current initial acceleration exceeds the maximum acceleration because of vibrations, which leads to the Ipopt solve failing.
Hello! I have encountered the same issue; have you solved it?
I added some functions that specifically checked for infeasible starting conditions (either initial acceleration/velocity out of range, or the initial velocity plus acceleration combined with the maximum jerk not being enough to slow down without going over the max speed) and modifies the inputs to what can be feasible if so. It's a bit hacky, but it works. I found that just setting the initial acceleration to always be 0 for the solver made it a lot better too, without really affecting the behavior too much.
Hope this helps.
Well, thanks for your reply. I'd like to try the second way first. May I ask how to set the initial acceleration to always be 0?
I believe I just set s_ddot_init_ to 0.
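The workaround described above amounts to projecting the measured initial state back into the optimizer's bounds before solving. A loose Python sketch of the idea (names and bounds are illustrative, not Apollo's actual code):

```python
def clamp_initial_state(v0, a0, v_max, a_max):
    """Project a noisy IMU-derived initial state into the QP's bounds.

    Vibration can push the reported acceleration (and occasionally the
    speed) outside the limits handed to the speed optimizer, which makes
    the problem infeasible before a single iteration runs.
    """
    a0 = max(-a_max, min(a_max, a0))   # keep initial acceleration in bounds
    v0 = max(0.0, min(v_max, v0))      # speed must stay in [0, v_max]
    return v0, a0

# Example: a vibration spike reports 9.8 m/s^2; clamp it to the 2 m/s^2
# bound (or, as in the comment above, simply zero it out entirely).
print(clamp_initial_state(5.0, 9.8, 20.0, 2.0))  # (5.0, 2.0)
```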
| gharchive/issue | 2022-08-23T19:58:47 | 2025-04-01T06:36:42.392727 | {
"authors": [
"josh-wende",
"qwetqwe",
"zhanglonggao"
],
"repo": "ApolloAuto/apollo",
"url": "https://github.com/ApolloAuto/apollo/issues/14568",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
343487585 | Failed on compiling velodyne
The newly added blocking_queue.h has a problem: in the take function, the argument std::vector<T>* v leads to the error:
./modules/common/util/blocking_queue.h:98:8: error: member reference base type 'std::vector *' is not a structure or union
We fixed this issue at https://github.com/ApolloAuto/apollo/commit/f12c3891fe29479e530860b6c6304515c752fc80.
| gharchive/issue | 2018-07-23T04:56:14 | 2025-04-01T06:36:42.394711 | {
"authors": [
"VigiZhang",
"lianglia-apollo"
],
"repo": "ApolloAuto/apollo",
"url": "https://github.com/ApolloAuto/apollo/issues/5093",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
353891511 | utilize proto file in c++
To whom it may concern,
We are trying to use Apollo protobuf topics in C++. To include the headers for the protobuf messages we used the following:
#include <modules/canbus/proto/chassis.pb.h>
#include <modules/localization/proto/localization.pb.h>
#include <modules/perception/proto/perception_obstacle.pb.h>
#include <string>
#include <math.h>
When I run the cmake file for C++, it returns the following error:
hlchen@in_dev_docker:/apollo/catkin_ws/src/DSRC/build$ make
[ 1%] Building CXX object CMakeFiles/openAV_DSRC.dir/bsm/CID_BSM_Receiver.cpp.o
In file included from /apollo/catkin_ws/src/DSRC/bsm/CID_BSM_Receiver.cpp:23:0:
/apollo/catkin_ws/src/DSRC/bsm/dsrc_apollo.h:2:45: fatal error: modules/canbus/proto/chassis.pb.h: No such file or directory
#include <modules/canbus/proto/chassis.pb.h>
^
compilation terminated.
make[2]: *** [CMakeFiles/openAV_DSRC.dir/bsm/CID_BSM_Receiver.cpp.o] Error 1
make[1]: *** [CMakeFiles/openAV_DSRC.dir/all] Error 2
make: *** [all] Error 2
By my understanding this shouldn't happen: the protobuf documentation says that when using a .proto file from C++, you just replace .proto with .pb.h in the include and it should work.
Does anyone have any suggestions?
Thank you!
bazel build is the only method we currently support. For general questions on how to use protobuf, I suggest posting them in the protobuf community.
Hi lianglia,
Thanks for the reply! Does that mean I need to build Apollo first and then build my code?
Thank you!
My suggestion is to use bazel to build your added files. You can find examples in other folders of how defined protos are used as dependency libraries, for example https://github.com/ApolloAuto/apollo/blob/master/modules/planning/BUILD.
My guess: the paths to the header files are not configured properly in your CMakeLists.txt. Configure the cmake build in verbose mode and check the "-I" flags in the console output; you can figure it out from there.
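To make the CMake advice concrete, a minimal CMakeLists.txt might look like the sketch below. The include paths are guesses that must match your checkout (Bazel places generated headers under its output tree, e.g. bazel-genfiles):

```cmake
cmake_minimum_required(VERSION 3.5)
project(openAV_DSRC)

find_package(Protobuf REQUIRED)

# The include is written as <modules/canbus/proto/chassis.pb.h>, so the
# search path must point at the roots of those trees, not at the proto
# directories themselves. Adjust /apollo/bazel-genfiles to wherever your
# generated *.pb.h files actually live.
include_directories(/apollo /apollo/bazel-genfiles ${Protobuf_INCLUDE_DIRS})

add_executable(openAV_DSRC bsm/CID_BSM_Receiver.cpp)
target_link_libraries(openAV_DSRC ${Protobuf_LIBRARIES})
```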
Closing this issue as it appears to be resolved. Feel free to reopen if you have additional questions. Thanks!
| gharchive/issue | 2018-08-24T18:53:01 | 2025-04-01T06:36:42.399030 | {
"authors": [
"hlchen1043",
"jilinzhou",
"lianglia-apollo",
"natashadsouza"
],
"repo": "ApolloAuto/apollo",
"url": "https://github.com/ApolloAuto/apollo/issues/5488",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
358234694 | Fail to open config.pb.txt when running yolo_camera_detector_test
System information
**OS Platform and Distribution **: Host: Linux Ubuntu 16.04/Docker: 14.04
**Apollo installed from **: docker
**Apollo version **: 3.0
I am trying to run the yolo_camera_detector_test in module/perception/obstacle/camera/detector/yolo_camera_detector after building it with Bazel to see my own image result, but I got the following error:
_[NVBLAS] NVBLAS_CONFIG_FILE environment variable is NOT set : relying on default config filename 'nvblas.conf'
[NVBLAS] Cannot open default config file 'nvblas.conf'
[NVBLAS] Config parsed
[NVBLAS] CPU Blas library need to be provided
Running main() from gmock_main.cc
[==========] Running 3 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 3 tests from YoloCameraDetectorTest
[ RUN ] YoloCameraDetectorTest.model_init_test
WARNING: Logging before InitGoogleLogging() is written to STDERR
E0907 15:41:44.330113 5446 file.h:97] Failed to open file modules/perception/model/camera/yolo_camera_detector_config.pb.txt in text mode.
E0907 15:41:44.330178 5446 file.h:140] Failed to open file modules/perception/model/camera/yolo_camera_detector_config.pb.txt in binary mode.
F0907 15:41:44.330260 5446 yolo_camera_detector.cc:41] Check failed: GetProtoFromFile(FLAGS_yolo_camera_detector_config, &config_)
*** Check failure stack trace: ***
Aborted (core dumped)_
It looks like there is a problem with yolo_camera_detector_config.pb.txt, but I checked that file and it seems normal. Could somebody help me with this issue? Or if anyone knows a better way to run the perception module with my own data, please share it with me. Thanks!
You can set the path of nvblas.conf by using export NVBLAS_CONFIG_FILE=/usr/local/cuda. If you cannot find the file in /usr/local/cuda, you may need to create one first.
Hi muleisheng, thanks for your reply. I created the nvblas.conf, now the first 3 lines of warning became:
_[NVBLAS] NVBLAS_CONFIG_FILE environment variable is set to '/usr/local/cuda'
[NVBLAS] Config parsed
[NVBLAS] CPU Blas library need to be provided_
I put
NVBLAS_CPU_BLAS_LIB /usr/lib/libopenblas.so
in the nvblas.conf, but it still says 'CPU Blas library need to be provided', and the rest of the error messages are still the same. Could you share with me if you have any idea what is going on? Thank you very much!
I am closing this, as the rest of the problem seems to be because the file path is wrong.
@sopXx, hey, how did you fix this problem? I have met the same problem; can you share your method here?
@xinwf export NVBLAS_CONFIG_FILE=/usr/local/cuda
@HouYuu, I have done this and it seems to work, but the other problem, "CPU Blas library need to be provided", still exists. How do I solve it?
I have never seen a problem like that; you should post your detailed operation steps and system version information.
detailed info is at this issue, #6013
If I set NVBLAS_CONFIG_FILE=/usr/local/cuda, the log says the config is parsed, but the CPU BLAS library problem still exists. If I set NVBLAS_CONFIG_FILE=/usr/local/cuda/nvblas.conf, it doesn't generate such a log: no warnings about the CPU, and it looks like everything is OK. But when I search this problem in the issues, they all answer it by setting NVBLAS_CONFIG_FILE=/usr/local/cuda, so I used that common setting to stay consistent with all of you. I don't know which is right.
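For reference, NVBLAS reads a plain-text config file, and NVBLAS_CONFIG_FILE should point at the file itself (e.g. /usr/local/cuda/nvblas.conf), not at its directory. A minimal nvblas.conf sketch (the CPU BLAS library path is an assumption; it is system-dependent):

```
# nvblas.conf
NVBLAS_CPU_BLAS_LIB /usr/lib/libopenblas.so
NVBLAS_LOGFILE /tmp/nvblas.log
NVBLAS_GPU_LIST ALL
```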
@xinwf Have you tried to use the released version of Apollo 2.5 or 3.0?
@HouYuu, no, I always use develop version from 2.5 to 3.0
| gharchive/issue | 2018-09-07T22:51:54 | 2025-04-01T06:36:42.407762 | {
"authors": [
"HouYuu",
"XiranBai",
"muleisheng",
"sopXx",
"xinwf"
],
"repo": "ApolloAuto/apollo",
"url": "https://github.com/ApolloAuto/apollo/issues/5706",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
366898429 | How to create HD map
Hey,
Can I create HD map by myself?
Hi, have you found the method yet ?
I am interested as well in creating my own map
Hello, as far as I know there are currently 3 ways to create an HD map yourself: 1. Drive a car yourself to record pose, lidar and other data, and submit it to the Apollo map cloud service (to enable the cloud service, please contact Apollo). 2. Use RoadRunner: import the pose/GIS path and the point cloud, create the 3D scene, and export an HD map in Apollo format; see the RoadRunner docs for details. 3. Use the open-source LGSVL Unity project: build on Unity's open-source 3D simulation map creation, which supports pose and point-cloud import and outputs HD maps in Apollo format. (This reply comes from the Apollo developer community issue contest: https://apollo.auto/developer/questiondata_cn.html?target=577)
| gharchive/issue | 2018-10-04T17:43:10 | 2025-04-01T06:36:42.410382 | {
"authors": [
"LitoNeo",
"TZVIh",
"apollodevcommunity",
"hect1995"
],
"repo": "ApolloAuto/apollo",
"url": "https://github.com/ApolloAuto/apollo/issues/5830",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1965132426 | Perfera with BRP069C4x is not shown as "online"
My 3 Perfera indoor air-conditioning units with the BRP069C4x gateway are shown as "offline", and the "cloudConnected" object is set to (null). However, the devices can still be controlled via the adapter (0.3.1 from GitHub).
All 3 devices show "connected true" in the log file; however, the state "cloudConnected" is neither added nor set to an initial value during adapter start-up, unlike e.g. daylightSavingTimeEnabled:
...
2023-10-27 13:23:07.603 - debug: daikin-cloud.0 (346772) Added object ---c41.gateway.daylightSavingTimeEnabled (state) with initial value = true
...
2023-10-27 13:23:13.680 - debug: daikin-cloud.0 (346772) Set initial state value: ---c41.gateway.daylightSavingTimeEnabled = true
The extract of the DebugLog is attached.
Daikin Log.txt
I don't know what happened over the weekend, but all 3 devices are green now (online) and cloudConnected=true. Firmware is still the same...
| gharchive/issue | 2023-10-27T09:31:14 | 2025-04-01T06:36:42.413386 | {
"authors": [
"fu-zhou"
],
"repo": "Apollon77/ioBroker.daikin-cloud",
"url": "https://github.com/Apollon77/ioBroker.daikin-cloud/issues/205",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
253605159 | Reviews welcome - GUVNOR-3193 Change Project from a folder in a repo to a repository
Please do not merge.
This PR does the following:
Introduces the Project and Module classes. The old Project that was the Maven project is now called Module, just to avoid confusion with the new Project. The new Project is the active project that the workbench is presenting in the UI, or the Project that we can later build (these changes still only support building single Modules). The Project has an active branch, and branch changes require a recreation of the Project. (We can change this later, but after spending months with the code it is quite clear that keeping this step will save us from an extra month of code changes and bug fixes.)
Introduces a Branch class. This is again to make our code a bit easier and safer to use. We used to just store the branches as strings, when a branch is a location in a repository that has a name and a path.
Removes asset-mgmt, this is no longer used
Renames org.guvnor.common.services.project.model.Repository to org.guvnor.common.services.project.model.MavenRepository so it is easier to know the difference between MavenRepository (used where we have dependencies) and Repository (a Git repository that stores code).
Changes the REST API. We now hide repositories and access projects directly.
Structural changes in RepositoryService and ConfiguredRepositories to fix fired events (See GUVNOR-3555)
Just using the build servers to run the tests, since they take a few hours locally.
https://issues.jboss.org/browse/AF-648
The related PRs:
https://github.com/kiegroup/kie-soup/pull/7
https://github.com/AppFormer/uberfire/pull/814
https://github.com/kiegroup/drools/pull/1402
https://github.com/kiegroup/kie-wb-common/pull/1015
https://github.com/kiegroup/drools-wb/pull/566
https://github.com/kiegroup/jbpm-designer/pull/643
https://github.com/kiegroup/jbpm-wb/pull/832
https://github.com/kiegroup/kie-wb-distributions/pull/573
Build finished. 3508 tests run, 7 skipped, 0 failed.
Can you please move this PR for https://github.com/kiegroup/appformer ?
| gharchive/pull-request | 2017-08-29T10:29:51 | 2025-04-01T06:36:42.443341 | {
"authors": [
"Rikkola",
"ederign",
"kie-ci"
],
"repo": "AppFormer/uberfire",
"url": "https://github.com/AppFormer/uberfire/pull/814",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
511476959 | Remove cassandra backend for PullQueues (with master)
This pull request is #3177 with master merged and conflicts resolved.
Daily build no. 6926
Daily build no. 6982
Demo is that the Taskqueue-e2e-Test test passes and logs show postgres is used:
2019-11-06 19:43:15,718 INFO appscale_taskqueue.py:347 Starting TaskQueue server on port 50002
2019-11-06 19:43:15,720 INFO appscale_taskqueue.py:296 TaskQueue server registered at /appscale/tasks/servers/10.10.9.74:50002
2019-11-06 19:43:15,727 INFO queue_manager.py:59 Updating queues for test-project
2019-11-06 19:43:15,758 INFO pg_connection_wrapper.py:42 Establishing new connection to Postgres server
2019-11-06 19:43:15,773 INFO queue.py:769 Ensuring "appscale_test-project" schema is created
2019-11-06 19:43:15,776 INFO queue.py:782 Ensuring "appscale_test-project.queues" table is created
2019-11-06 19:43:15,801 INFO queue_manager.py:59 Updating queues for test-project
| gharchive/pull-request | 2019-10-23T17:46:23 | 2025-04-01T06:36:42.450641 | {
"authors": [
"sjones4"
],
"repo": "AppScale/appscale",
"url": "https://github.com/AppScale/appscale/pull/3194",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
857670274 | adeum-config: open failed: ENOENT (No such file or directory)
There is an error on AD initialization for release and debug mode on Android
Plugin version is 1.0.0
adeum-config- open failed- ENOENT (No such file or directory).log
Hi @dolpheen ,
is this error causing any trouble? This error comes from the underlying android SDK and is not related to the flutter plugin, so I can not fix this from here.
Hi @dolpheen ,
is this error causing any trouble? This error comes from the underlying android SDK and is not related to the flutter plugin, so I can not fix this from here.
Hi, @svrnm
Looks like everything is OK))
I will try to check AD SDK issues.
| gharchive/issue | 2021-04-14T08:43:47 | 2025-04-01T06:36:42.453619 | {
"authors": [
"dolpheen",
"svrnm"
],
"repo": "Appdynamics/flutter-plugin",
"url": "https://github.com/Appdynamics/flutter-plugin/issues/43",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1021806058 | Ubuntu install still failing on farmers in 1.2.90
The same "MAC check failed" error that is described in Issue #16 is still happening trying to run apple 1.2.90 on a farmer in Ubuntu. Installs fine on harvesters, but I still have to run 1.2.30 for my farmer.
Actually, wow, even 1.2.30 is failing now.
Same problem in Debian - "MAC check failed". 1.2.30 works ok.
Okay, I figured out what that was. Because the apple init is failing, the target addresses it set in the config.yaml were null. When I fixed that, I was able to start with 1.2.30.
I'm having the same issue installing 1.2.90 on ubuntu 20.04. Going to try to install 1.2.30 first.
| gharchive/issue | 2021-10-09T21:08:02 | 2025-04-01T06:36:42.464525 | {
"authors": [
"Motko222",
"Qwinn1",
"ariegrossman"
],
"repo": "Apple-Network/apple-blockchain",
"url": "https://github.com/Apple-Network/apple-blockchain/issues/18",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
324974912 | Confusion on implementation of 'get_mixture_coef()'
https://github.com/AppliedDataSciencePartners/WorldModels/blob/36ebabf24783991dfe6f86fa5e25c2bee141db77/rnn/arch.py#L29
This line is from the function get_mixture_coef(). I wonder if pi should have shape [-1, rollout_length, GAUSSIAN_MIXTURES, 1] to match the definition of GMM in MDN. In this way, SketchRNN's implementation is a special case of this one when Z_DIM=2.
Am I right? I still cannot draw a final conclusion by myself and hope to discuss it with you.
The pis for each z dimension need to be independent, I believe, hence the inclusion of Z_DIM in the final dimension.
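To make the two layouts concrete, here is a small Python sketch comparing how many mixture weights each shape implies per timestep (the constants are assumed values for illustration, not taken from the repo's config):

```python
# Illustrative only: compare the two candidate shapes for the mixture
# weights pi in get_mixture_coef(). The constants are assumed values.
GAUSSIAN_MIXTURES = 5
Z_DIM = 32  # SketchRNN would correspond to Z_DIM = 2

# Shape used in arch.py, [-1, rollout_length, GAUSSIAN_MIXTURES, Z_DIM]:
# an independent set of mixture weights for every z dimension.
weights_independent = GAUSSIAN_MIXTURES * Z_DIM

# Shape proposed in the question, [-1, rollout_length, GAUSSIAN_MIXTURES, 1]:
# one set of mixture weights shared (broadcast) across all z dimensions.
weights_shared = GAUSSIAN_MIXTURES * 1

print(weights_independent, weights_shared)  # 160 5
```

Under the first layout each z dimension gets its own mixture; under the second, the pis would be tied across dimensions, which is the SketchRNN-style special case the question describes.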
| gharchive/issue | 2018-05-21T16:16:11 | 2025-04-01T06:36:42.469343 | {
"authors": [
"davidADSP",
"xueeinstein"
],
"repo": "AppliedDataSciencePartners/WorldModels",
"url": "https://github.com/AppliedDataSciencePartners/WorldModels/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1731160159 | Setup Flipper
Asana ticket: None
Now that the site is live, we want to be careful rolling out new features.
Introduce Flipper, an open source library that provides simple feature flags for Ruby.
I'd like to use it on next PRs for this task: https://app.asana.com/0/1203289004376659/1203289004376689/f
@sarahraqueld Besides linter issues, looks good!
| gharchive/pull-request | 2023-05-29T19:26:32 | 2025-04-01T06:36:42.473737 | {
"authors": [
"FerPerales",
"sarahraqueld"
],
"repo": "ApprenticeshipStandardsDotOrg/ApprenticeshipStandardsDotOrg",
"url": "https://github.com/ApprenticeshipStandardsDotOrg/ApprenticeshipStandardsDotOrg/pull/253",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1328263649 | Migrate to capacitor v4
Upgraded plugin to use latest capacitor v4.0.1 core and bumped minimum sdk versions to match the new specs.
Hi, how can I use this in my project? I've just upgraded to Capacitor 4 but the main appsflyer-capacitor-plugin plugin is failing. I'd like to use this pull which I've installed using:
npm i --save-dev AppsFlyerSDK/appsflyer-capacitor-plugin#pull/30/head --force
It adds it in the devDependencies section of package.json so when I do npm run build, I'm getting:
Cannot find module 'appsflyer-capacitor-plugin' or its corresponding type declarations.
I get the same if I uninstall and use:
npm i --save AppsFlyerSDK/appsflyer-capacitor-plugin#pull/30/head --force
Please help! Stuck!
Thanks :)
I've also installed with:
npm i appsflyer-capacitor-plugin@github:Ejobs/appsflyer-capacitor-plugin
But, again, when I do npm run build I'm getting:
Cannot find module 'appsflyer-capacitor-plugin' or its corresponding type declarations.
Just to clarify, the files are downloaded in the node_modules folder.
Here's my app's package.json:
{
"name": "QuizSwipe",
"private": true,
"version": "1.0.18",
"description": "QuizSwipe",
"license": "MIT",
"files": [
"dist/"
],
"scripts": {
"build": "stencil build",
"start": "stencil build --dev --watch --serve",
"test": "stencil test --spec --e2e",
"test.watch": "stencil test --spec --e2e --watch",
"generate": "stencil generate",
"clean": "npx rimraf www"
},
"devDependencies": {
"@capacitor/cli": "^4.0.0",
"@ionic/core": "^5.0.7",
"@stencil/core": "2.10.0",
"@stencil/store": "^1.3.0",
"@types/node": "^18.11.2"
},
"dependencies": {
"@awesome-cordova-plugins/in-app-purchase-2": "^5.45.0",
"@capacitor-community/admob": "^3.3.0",
"@capacitor-community/firebase-analytics": "^1.0.1",
"@capacitor-community/native-audio": "^4.0.0-0",
"@capacitor/android": "^4.0.0",
"@capacitor/app": "^4.0.0",
"@capacitor/app-launcher": "^4.0.0",
"@capacitor/browser": "^4.0.0",
"@capacitor/clipboard": "^4.0.0",
"@capacitor/core": "^4.0.0",
"@capacitor/device": "^4.0.0",
"@capacitor/ios": "^4.0.0",
"@capacitor/local-notifications": "^4.0.0",
"@capacitor/network": "^4.0.0",
"@capacitor/preferences": "^4.0.1",
"@capacitor/push-notifications": "^4.0.0",
"@capacitor/share": "^4.0.0",
"@capacitor/splash-screen": "^4.0.0",
"@ionic-native/core": "^5.26.0",
"@ionic-native/screenshot": "^5.31.1",
"@sparkfabrik/capacitor-plugin-idfa": "github:AE1NS/capacitor-plugin-idfa",
"appsflyer-capacitor-plugin": "github:Ejobs/appsflyer-capacitor-plugin",
"capacitor-rate-app": "^3.0.0",
"com.darktalker.cordova.screenshot": "^0.1.6",
"cordova-plugin-device": "^2.0.3",
"cordova-plugin-purchase": "^11.0.0",
"cordova-support-android-plugin": "^1.0.2",
"cordova-support-google-services": "^1.4.0",
"es6-promise-plugin": "^4.2.2",
"jetifier": "^1.6.6",
"ts-md5": "^1.2.11",
"tslib": "^1.11.1",
"web-social-share": "^6.4.1"
},
"postinstall": "jetifier"
}
I've been struggling with this for days now, could someone help me out?
Thanks,
Sean
@MateiNenciu @sawaca96
How can I install this for use in my project?
See my package.json above.
There is no dist folder and I'm getting:
"Cannot find module 'appsflyer-capacitor-plugin' or its corresponding type declarations." when I do npm run build
Help! :)
I'm still use capacitor v3
@pazlavi Any update on this? Looks like it's not been released yet.
Why modifying / accepting an already opened MR from August, when you can have your clients waiting till December to do it yourself ^
Sorry for the late reply. You can either build that project and generate the dist folder, or you could do something like
import {AppsFlyer} from 'appsflyer-capacitor-plugin/src' like I did, till the maintainer decides to care about users that are upgrading.
Thanks for the info.
I tried importing from the src folder as you suggested with the plugin installed as per my package.json as above but I was getting build errors:
Error: Unexpected token (Note that you need plugins to import files that are not JavaScript)
So I uninstalled the AppsFlyer plugin and checked your PR out as a submodule. Installed and built to get the dist folder, then installed from my local build. Same error:
[ ERROR ] Rollup: Parse Error: ./submodules/appsflyer-capacitor-plugin/src/index.ts:3:12
Unexpected token (Note that you need plugins to import files that
are not JavaScript)
L3: import type { AppsFlyerPlugin } from './definitions';
L4: const AppsFlyer = registerPlugin<AppsFlyerPlugin>('AppsFlyerPlugin', {
[11:17.2] build failed in 11.08 s
What am I doing wrong? ;)
Here's my latest package.json:
{
"name": "QuizSwipe",
"private": true,
"version": "1.0.18",
"description": "QuizSwipe",
"license": "MIT",
"files": [
"dist/"
],
"scripts": {
"build": "stencil build",
"start": "stencil build --dev --watch --serve",
"test": "stencil test --spec --e2e",
"test.watch": "stencil test --spec --e2e --watch",
"generate": "stencil generate",
"clean": "npx rimraf www"
},
"devDependencies": {
"@capacitor/cli": "^4.0.0",
"@ionic/core": "^5.0.7",
"@stencil/core": "2.10.0",
"@stencil/store": "^1.3.0",
"@types/node": "^18.11.2"
},
"dependencies": {
"@awesome-cordova-plugins/in-app-purchase-2": "^5.45.0",
"@capacitor-community/admob": "4.0.0",
"@capacitor-community/firebase-analytics": "^1.0.1",
"@capacitor-community/native-audio": "^4.0.0-0",
"@capacitor/android": "^4.0.0",
"@capacitor/app": "^4.0.0",
"@capacitor/app-launcher": "^4.0.0",
"@capacitor/browser": "^4.0.0",
"@capacitor/clipboard": "^4.0.0",
"@capacitor/core": "^4.0.0",
"@capacitor/device": "^4.0.0",
"@capacitor/ios": "^4.0.0",
"@capacitor/local-notifications": "^4.0.0",
"@capacitor/network": "^4.0.0",
"@capacitor/preferences": "^4.0.1",
"@capacitor/push-notifications": "^4.0.0",
"@capacitor/share": "^4.0.0",
"@capacitor/splash-screen": "^4.0.0",
"@ionic-native/core": "^5.26.0",
"@ionic-native/screenshot": "^5.31.1",
"@sparkfabrik/capacitor-plugin-idfa": "github:AE1NS/capacitor-plugin-idfa",
"appsflyer-capacitor-plugin": "file:submodules/appsflyer-capacitor-plugin",
"capacitor-plugin-android-post-notifications-permission": "file:submodules/capacitor-plugin-android-post-notifications-permission",
"capacitor-rate-app": "^3.0.0",
"com.darktalker.cordova.screenshot": "^0.1.6",
"cordova-plugin-device": "^2.0.3",
"cordova-plugin-purchase": "^11.0.0",
"cordova-support-android-plugin": "^1.0.2",
"cordova-support-google-services": "^1.4.0",
"es6-promise-plugin": "^4.2.2",
"jetifier": "^1.6.6",
"ts-md5": "^1.2.11",
"tslib": "^1.11.1",
"web-social-share": "^6.4.1"
},
"postinstall": "jetifier"
}
We just released v6.9.2, which supports Capacitor v4.
If you still wish to use Capacitor v3, please check this page.
Thanks for the update :)
I installed it and sync-ed but now my app won't build :(
I'm getting:
[ ERROR ] Rollup: Parse Error: ./node_modules/appsflyer-capacitor-plugin/src/index.ts:3:12
Unexpected token (Note that you need plugins to import files that are not JavaScript)
L3: import type { AppsFlyerPlugin } from './definitions';
L4: const AppsFlyer = registerPlugin<AppsFlyerPlugin>('AppsFlyerPlugin', {
[03:04.1] build failed in 12.42 s
Here's my package.json:
{
"name": "QuizSwipe",
"private": true,
"version": "1.0.18",
"description": "QuizSwipe",
"license": "MIT",
"files": [
"dist/"
],
"scripts": {
"build": "stencil build",
"start": "stencil build --dev --watch --serve",
"test": "stencil test --spec --e2e",
"test.watch": "stencil test --spec --e2e --watch",
"generate": "stencil generate",
"clean": "npx rimraf www"
},
"devDependencies": {
"@capacitor/cli": "^4.0.0",
"@ionic/core": "^5.0.7",
"@stencil/core": "2.10.0",
"@stencil/store": "^1.3.0",
"@types/node": "^18.11.2"
},
"dependencies": {
"@awesome-cordova-plugins/in-app-purchase-2": "^5.45.0",
"@capacitor-community/admob": "4.0.0",
"@capacitor-community/firebase-analytics": "^1.0.1",
"@capacitor-community/native-audio": "^4.0.0-0",
"@capacitor/android": "^4.0.0",
"@capacitor/app": "^4.0.0",
"@capacitor/app-launcher": "^4.0.0",
"@capacitor/browser": "^4.0.0",
"@capacitor/clipboard": "^4.0.0",
"@capacitor/core": "^4.0.0",
"@capacitor/device": "^4.0.0",
"@capacitor/ios": "^4.0.0",
"@capacitor/local-notifications": "^4.0.0",
"@capacitor/network": "^4.0.0",
"@capacitor/preferences": "^4.0.1",
"@capacitor/push-notifications": "^4.0.0",
"@capacitor/share": "^4.0.0",
"@capacitor/splash-screen": "^4.0.0",
"@ionic-native/core": "^5.26.0",
"@ionic-native/screenshot": "^5.31.1",
"@sparkfabrik/capacitor-plugin-idfa": "github:AE1NS/capacitor-plugin-idfa",
"appsflyer-capacitor-plugin": "^6.9.2",
"capacitor-plugin-android-post-notifications-permission": "file:submodules/capacitor-plugin-android-post-notifications-permission",
"capacitor-rate-app": "^3.0.0",
"com.darktalker.cordova.screenshot": "^0.1.6",
"cordova-plugin-device": "^2.0.3",
"cordova-plugin-purchase": "^11.0.0",
"cordova-support-android-plugin": "^1.0.2",
"cordova-support-google-services": "^1.4.0",
"es6-promise-plugin": "^4.2.2",
"jetifier": "^1.6.6",
"ts-md5": "^1.2.11",
"tslib": "^1.11.1",
"web-social-share": "^6.4.1"
},
"postinstall": "jetifier"
}
To get my version of TypeScript I did:
$ npx tsc -v
Which gives:
Version 4.1.6
Ionic:
Ionic CLI : 6.19.0 (/Users/seanwilson/.nvm/versions/node/v15.1.0/lib/node_modules/@ionic/cli)
Capacitor:
Capacitor CLI : 4.4.0
@capacitor/android : 4.4.0
@capacitor/core : 4.4.0
@capacitor/ios : 4.4.0
Utility:
cordova-res (update available: 0.15.4) : 0.15.3
native-run : 1.7.1
System:
NodeJS : v15.1.0 (/Users/seanwilson/.nvm/versions/node/v15.1.0/bin/node)
npm : 7.5.2
OS : macOS Catalina
| gharchive/pull-request | 2022-08-04T08:32:30 | 2025-04-01T06:36:42.518084 | {
"authors": [
"MateiNenciu",
"pazlavi",
"sawaca96",
"undergroundcreative"
],
"repo": "AppsFlyerSDK/appsflyer-capacitor-plugin",
"url": "https://github.com/AppsFlyerSDK/appsflyer-capacitor-plugin/pull/30",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2584279805 | Colon space (: ) inside title or event body breaks entire plugin
Describe the bug
If you type a colon followed by a space in any event variable, the entire plugin breaks and ceases to display any timeline.
To Reproduce
This works:
%%aat-inline-event
aat-event-start-date: 54
aat-event-end-date: true
aat-render-enabled: true
aat-event-title: This Works.
timelines: [inline-events]
%%
This breaks the plugin:
%%aat-inline-event
aat-event-start-date: 54
aat-event-end-date: true
aat-render-enabled: true
aat-event-title: This: Does Not.
timelines: [inline-events]
%%
If you have a colon followed by a space in aat-event-title, or aat-event-body, the whole plugin ceases to display any timelines, not just the timeline with the offending character combination.
I don't know if this is something that can be accounted for, but sometimes it's nice to have a colon in a note title. This may be a limitation of how things get parsed in Obsidian, but I wanted to point it out. Took me hours to figure out why my timelines completely disappeared on me! Lol.
Did you try:
%%aat-inline-event
aat-event-start-date: 54
aat-event-end-date: true
aat-render-enabled: true
aat-event-title: "This: Does Not."
timelines: [inline-events]
%%
Did you try:
%%aat-inline-event
aat-event-start-date: 54
aat-event-end-date: true
aat-render-enabled: true
aat-event-title: "This: Does Not."
timelines: [inline-events]
%%
Not OP, but I just tested your solution and still get the broken timeline. It seems to be specifically when the colon has a space immediately after it. Putting a colon anywhere else, including with a space before it, works fine.
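That symptom is consistent with a parser that splits each line on every ": " occurrence instead of only the first one. A minimal Python sketch of this assumed failure mode (this is an illustration, not the plugin's actual parsing code):

```python
# Hypothetical illustration of the ": " bug -- not the plugin's real parser.
line = "aat-event-title: This: Does Not."

# Splitting on every ": " yields three pieces, so key/value pairing breaks:
parts = line.split(": ")
print(parts)  # ['aat-event-title', 'This', 'Does Not.']

# Splitting only on the first ": " keeps the value intact:
key, value = line.split(": ", 1)
print(key)    # aat-event-title
print(value)  # This: Does Not.
```

If that is indeed the cause, splitting with a maxsplit of 1 (or quoting support) would make colons inside values safe.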
Hi again, @Panthon13 @thompa2
The wrapping with " seems to work on my end :thinking:. Feel free to provide a reproduction vault and post it as a .zip here.
https://github.com/user-attachments/assets/07f296c9-4b65-40da-81fb-d4651f4edba1
| gharchive/issue | 2024-10-13T21:22:13 | 2025-04-01T06:36:42.548511 | {
"authors": [
"April-Gras",
"Panthon13",
"thompa2"
],
"repo": "April-Gras/obsidian-auto-timelines",
"url": "https://github.com/April-Gras/obsidian-auto-timelines/issues/190",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1808011659 | Unable to disable request timeout
Previously it was possible to disable the timeout for the HTTP client by setting request_timeout to None, but after https://github.com/ArangoDB-Community/python-arango/pull/257 it 'has' to be an int or float.
So the timeout can no longer be easily disabled (it can of course be set very high, but that means having to set it to something instead of clearly nothing/unlimited).
Please allow None as a value for request_timeout.
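For comparison, the Python standard library itself uses None to mean "no timeout": socket.settimeout(None) puts the socket into blocking mode. A minimal sketch of the requested signature (the helper below is hypothetical, not python-arango's actual API):

```python
import socket
from typing import Optional, Union

# Hypothetical helper showing the requested type:
# request_timeout: Optional[Union[int, float]], where None disables the timeout.
def open_socket(request_timeout: Optional[Union[int, float]] = 60):
    s = socket.socket()
    s.settimeout(request_timeout)  # None -> blocking mode, i.e. no timeout
    return s

s = open_socket(request_timeout=None)
print(s.gettimeout())  # None
s.close()
```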
Hey @153957, #265 should fix the mypy issue you are seeing
| gharchive/issue | 2023-07-17T15:14:45 | 2025-04-01T06:36:42.575437 | {
"authors": [
"153957",
"aMahanna"
],
"repo": "ArangoDB-Community/python-arango",
"url": "https://github.com/ArangoDB-Community/python-arango/issues/261",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1476886986 | Add the ability to search by most used asset
I think this would be a beneficial addition.
On some sites I re-use the same asset multiple times. This filter provides quick access to the most commonly used assets (see screenshot for example):
Looks great, thanks! ✨
Released in 2.1.2
| gharchive/pull-request | 2022-12-05T16:04:34 | 2025-04-01T06:36:42.576946 | {
"authors": [
"Aratramba",
"benmawla"
],
"repo": "Aratramba/sanity-plugin-media-library",
"url": "https://github.com/Aratramba/sanity-plugin-media-library/pull/43",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1903162469 | [General] Rebuild the async version
Are you in the correct repository?
[x] Yes
Request
Rebuild the async version, as there are too many errors that I found during testing. I will spend a few days changing the code and fixing it.
I realized that the rebuilding didn't take as long as I expected, so I changed the target date to today
Nevermind, it still needs to be rebuilt
I feel really dumb, I forgot to use await in my testing program
| gharchive/issue | 2023-09-19T14:47:46 | 2025-04-01T06:36:42.580257 | {
"authors": [
"Arcader717"
],
"repo": "Arcader717/Async-DisOAuth2",
"url": "https://github.com/Arcader717/Async-DisOAuth2/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2756619139 | System.IO.DirectoryNotFoundException: Could not find a part of the path 'C:\Actor\Pack\
Whenever I try to use the tool it gives me this error. I don't know what I am doing wrong
Please help
Make sure you have set your game path in the settings.
Now it's giving me this error
Sorry for the trouble
Ah, now that may be an error on my end, I'll look into it.
Okay thank you
Ah, I see the issue, change the BaseActorName to nothing. That field is for a vanilla actor name to use when building the custom one.
Hey I'm really sorry to keep bothering you but that didn't seem to work. I'm still getting the same error but now it's just saying tis:
I'm really sorry for the trouble
I must have made a mistake when checking the field. Please try restarting the app and avoid filling out that field.
Tried that and it just keeps defaulting to "TwnObj_HunterHouseBed_A_01" whenever I run it for some reason
I am very sorry for the trouble
That is expected behaviour, however, if you update to 2.0.1 you can clear it without an issue.
It's still defaulting but now it's saying this
"System.IO.FileNotFoundException: The actor 'TwnObj_HunterHouseBed_A_01' is not a vanilla actor. Please change the 'Base Actor' field."
Even when I clear it and run it again it just defaults back again
It should default to that because it automatically looks for the best match. However, that actor does exist, so the issue is with your game dump. Possibly the path is wrong, or else the dump is incomplete.
I have the actor in my game directory. What should the path look like so I can verify if I have it right?
Sorry again for the trouble
I have the actor in my game directory. What should the path look like so I can verify if I have it right?
Sorry again for the trouble
It depends on where it's stored on your computer, but it should end in content (or the folder containing Actor, but not the Actor folder itself).
There we go! Finally! I had the game directory folder wrong I think.
Thank you very much for the help. Sorry it ended up being so complicated
No worries, I'm glad you got it sorted.
| gharchive/issue | 2024-12-23T19:35:52 | 2025-04-01T06:36:42.588669 | {
"authors": [
"AdmilZhao",
"ArchLeaders"
],
"repo": "ArchLeaders/HavokActorTool",
"url": "https://github.com/ArchLeaders/HavokActorTool/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
238727545 | it does not generate the redirector.googlevideo.com..
I checked the cached files in the cache folder and the link is
https://doc-08-7c-docs.googleusercontent.com/docs/securesc/ha0ro937gcuc7l7deffksulhg5h7mbp1/kkkkq0kcmslfojlfrlojiq058bpmf1e1/1498528800000/1340883420501142032
It is not redirector.googlevideo.com/videoplayback.
How can I fix it?
@donguyenha, it really doesn't generate redirector.googlevideo.com. It only focuses on the direct download using uc=
@threathgfx, how can I generate a link with redirector.googlevideo.com?
For example:
https://redirector.googlevideo.com/videoplayback?id=49ce579a8ac5005b&itag=22&source=webdrive&requiressl=yes&ttl=transient&pl=33&ei=OnNTWeaMJpPIqAWgx5WoBg&mime=video/mp4&lmt=1498589208203180&ip=2600:3c01::f03c:91ff:fe60:c14e&ipbits=0&expire=1498655610&sparams=api,ei,expire,id,ip,ipbits,itag,lmt,mime,mm,mn,ms,mv,pl,requiressl,source,ttl&signature=0E1E8B5519026E7AC7BDCB89C55F2A305B6D50B7.7657FCDC57659A4F06FB0C9A03E502539C79074C&api=B4D9C084A100269C74C2292DDF010&cms_redirect=yes&mm=31&mn=sn-n4v7sn7y&ms=au&mt=1498641098&mv=m&key=cms1&app=storage
That's currently being discussed, as we don't know yet. Nobody has made it public if they do know.
Except they have. Refer to youtube-dl; it's public.
| gharchive/issue | 2017-06-27T03:25:17 | 2025-04-01T06:36:42.615660 | {
"authors": [
"CB49",
"chopraaa",
"donguyenha",
"threathgfx"
],
"repo": "ArdiArtani/Google-Drive-Player-Script",
"url": "https://github.com/ArdiArtani/Google-Drive-Player-Script/issues/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
309762006 | opencv files, version question
Hi,
Does the example that you sent me last time (Linux) require a client-side opencv installation, or are all the necessary opencv files included? I am getting the following error when executing:
./Test: error while loading shared libraries: libopencv_core.so.2.4: cannot open shared object file: No such file or directory
If we are to install OpenCV, do we do that with 3.0+, or match the 2.4 above?
you need to install the opencv binary library by sudo apt-get install libopencv-dev
I did that and manually checked /usr/lib and /usr/local/lib, and there is no libopencv_core.so.2.4.
I did a system search for libopencv_core.so.2.4 as well, with negative results.
The system did install
a/usr/lib/x86_64-linux-gnu/libopencv_core.so.3.1.0
/usr/lib/x86_64-linux-gnu/libopencv_core.so
/usr/lib/x86_64-linux-gnu/libopencv_core.so.3.1
What next?
Should I uninstall 3.1.0 and replace it with the older distro OpenCV 2.4?
I really could use the help here, I really want to move forward here.
I think you need to recompile the source code to run the ./Test example. If you link the source code with opencv 3, it won't require 2.4 version.
Awesome, that makes sense.
What compiler/linker did you use? I plan on using the Linux GCC toolchain unless you think it better to use something else.
There are comments at the top of the cpp file about how to compile the code, something like this: g++ MT9V022_demo.cpp -o Test $(pkg-config --cflags --libs opencv) -lArduCamLib -lpthread -lusb-1.0 -L. -I. -std=gnu++11
So you need g++ to compile the code.
Things are going OK recompiling the source code. It appears the opencv library files were not installed with the original sudo command:
you need to install the opencv binary library by sudo apt-get install libopencv-dev
I can link to the header files but not to the library files - they don't exist. I've done a thorough search and manually checked /usr.
Should I reinstall opencv?
The documentation is quite good, thanks for checking anyway. My only problem rightn ow is my opencv libraries have literally disappeared. If I can find them I can move on.
Hi Lee, me again. A lot of progress. Can you please tell me what to fix?
directory>./Test
Device found = 1
ArduCam_open successful
create capture thread successfully
create display thread successfully
frame available
No protocol specified
Unable to init server: Could not connect: Connection refused
(Display Image:1991): Gtk-WARNING **: cannot open display: :0
I have never seen "No protocol specified" and "Unable to init server: Could not connect: Connection refused" errors. Did you modify our example?
| gharchive/issue | 2018-03-29T13:49:11 | 2025-04-01T06:36:42.636660 | {
"authors": [
"ArduCAM",
"coldstart01"
],
"repo": "ArduCAM/Arduino",
"url": "https://github.com/ArduCAM/Arduino/issues/313",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
583308285 | Publishing the package
We need to publish this package as a public Julia package to finalize this development cycle.
Done!
| gharchive/issue | 2020-03-17T21:24:55 | 2025-04-01T06:36:42.725265 | {
"authors": [
"kibaekkim",
"yim0331"
],
"repo": "Argonne-National-Laboratory/MaximinOPF.jl",
"url": "https://github.com/Argonne-National-Laboratory/MaximinOPF.jl/issues/22",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
294364109 | Add a message on the sending dialog when using a Ledger
To avoid problems such as https://cointelegraph.com/news/newly-discovered-vulnerability-in-all-ledger-hardware-wallets-puts-user-funds-at-risk
There is another issue: the receiving address might be tampered with. So we need to display the receiving address on the device to be sure it has not been altered.
If I remember correctly, the Ledger app already allows this: you need to set a switch in the request to display the requested address on the device.
https://github.com/ArkEcosystem/ark-desktop/blob/master/LedgerArk.js#L11
@alexbarnsley might need to confirm with ledger on their slack
see here https://github.com/ArkEcosystem/ark-ledger/blob/master/src/main.c#L752
| gharchive/pull-request | 2018-02-05T11:43:37 | 2025-04-01T06:36:42.771637 | {
"authors": [
"fix",
"j-a-m-l"
],
"repo": "ArkEcosystem/ark-desktop",
"url": "https://github.com/ArkEcosystem/ark-desktop/pull/555",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
Error on launching the software
Traceback (most recent call last):
  File "webview_ui.py", line 4, in <module>
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "PyInstaller\loader\pyimod02_importers.py", line 385, in exec_module
  File "server.py", line 3, in <module>
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "PyInstaller\loader\pyimod02_importers.py", line 385, in exec_module
  File "arknights_mower\__init__.py", line 9, in <module>
  File "pathlib.py", line 1181, in resolve
  File "pathlib.py", line 206, in resolve
OSError: [WinError 1] 函数不正确。: 'R:\Temp\_MEI60362\arknights_mower\__init__'
(The Windows error message 函数不正确 means "Incorrect function.")
I am using a RAM disk. The system's cache/temp directory points to the RAM disk (drive R:).
Which version are you running?
| gharchive/issue | 2023-09-13T20:01:58 | 2025-04-01T06:36:42.788582 | {
"authors": [
"NULLlnull",
"ZhaoZuohong"
],
"repo": "ArkMowers/arknights-mower",
"url": "https://github.com/ArkMowers/arknights-mower/issues/284",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2452527042 | The link for sharing the network weights is invalid. Could the author please share it again?
Hi myzyuzhoumu,
The Google Drive link for the network weights is correct. I am able to download the weights without issue. Here are the weights.
Can you share the error message or a screenshot?
https://github.com/Arka-Bhowmik/mri_triage_normal/tree/main/output
| gharchive/issue | 2024-08-07T05:47:59 | 2025-04-01T06:36:42.790448 | {
"authors": [
"Arka-Bhowmik",
"MYZyuzhoumu"
],
"repo": "Arka-Bhowmik/mri_triage_normal",
"url": "https://github.com/Arka-Bhowmik/mri_triage_normal/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
296732004 | Make backupDirectory unique for each served single file TiddlyWiki
Given that I'm completely new both to node.js and TiddlyWiki, please forgive me if I say something odd.
I'm still experimenting to see what kind of beast I've got (syntax, macros, widgets and mostly its philosophy, which I don't yet completely grasp), and I found it really useful to make one or more copies of a master wiki file and apply different changes to the different copies just to see the differences, then apply what I like most to the master copy.
Then I delete the copies and make them anew from the master with the same names (so I don't have to change settings.json) and restart the cycle.
The problem is that when I delete a copy, the single shared backupDirectory remains polluted with backups that are no longer usable and must be deleted manually.
Would it be possible to have the backupDirectory relative to the tree so that each single file instance can have its own?
This would help in at least two ways: first, with this usage pattern, I could delete both the instance and its backups with a single action; second, having each backupDirectory next to its file instance makes it easy to back them up together.
Thanks,
Gabriele
It is possible that in the future I could allow a backup folder to be
specified for each tree item. I will keep that in mind. Currently it is not
possible.
I would like to mention that you can specify a folder in the tree instead
of just a file, as it seems you have done, which then allows you to serve
anything in that folder.
Thanks, I tried using a folder, but I see the behavior there is completely different and, as far as I understand, backupDirectory is not applicable there.
I plan to use TiddlyWiki as a companion notebook to JabRef and Zotero for literature research, and it will end up containing a huge amount of data scattered across many different wikis, so before filling it with real data I want to be sure I have explored every single facet of its behavior.
Now I'm stress-testing single file wikis, then I will do the same on folder wikis. I need to be sure it's rock solid and that I don't risk losing data under any circumstances.
Anyhow thanks for your attention,
Gabriele
Actually, I'm referring to folders containing multiple single file wikis. If you specify a folder path instead of a file path in settings.json it will serve all the single file wikis AND data folders found in that folder. So you can have multiple single file wikis in the folder and just specify the whole folder instead of each individual wiki. You can also have multiple data folders (any folder containing a tiddlywiki.info file) in the folder. Of course, you can have both in the same folder.
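For illustration only, a minimal settings.json along these lines might look like the snippet below. Treat the exact schema as an assumption (the key names are taken from this discussion, not verified against the code); check the TiddlyServer documentation for your version:

```json
{
  "tree": "./wikis",
  "backupDirectory": "./backups"
}
```

With a folder path in tree, every single file wiki and data folder inside ./wikis would be served, and backups would land next to the installation-relative ./backups directory.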
Wow! That's powerful! I hadn't realized that.
As I said in my first post, I don't yet really grasp the "philosophy" behind this object. All the wikis I have tried are PHP-based and a lot more rigid in structuring data, but this, as Ruston said, is something completely different. What fascinated me from the beginning is the apparent simplicity with which, once you have your data, you can build multiple "knowledge" paths through them, and I'm still waiting for the magic "click" on this.
Anyhow, in the meantime I solved another small problem. I don't like products that mix installation files with runtime byproducts, but fortunately, with a quick look at the code, I saw that settings.json can be passed as an argument to the server. So now I have a wiki root dir with settings.json using only relative paths, the backupDirectory, and all the wiki folders, and I can move the server installation to a read-only zone.
Thanks,
Gabriele
A backup folder may now be specified for each tree item in version 2.1. The documentation I am writing will include instructions for this.
| gharchive/issue | 2018-02-13T13:13:01 | 2025-04-01T06:36:42.800606 | {
"authors": [
"Arlen22",
"garu57"
],
"repo": "Arlen22/TiddlyServer",
"url": "https://github.com/Arlen22/TiddlyServer/issues/35",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2409856866 | 🛑 Armada Yazılım - Proje is down
In 35fff96, Armada Yazılım - Proje (https://proje.armadayazilim.com/) was down:
HTTP code: 521
Response time: 177 ms
Resolved: Armada Yazılım - Proje is back up in 8e18543 after 20 minutes.
| gharchive/issue | 2024-07-16T00:07:31 | 2025-04-01T06:36:42.809838 | {
"authors": [
"onuralpszr"
],
"repo": "ArmadaSoftware/status",
"url": "https://github.com/ArmadaSoftware/status/issues/253",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
209865910 | Multisensor may not work with all sensors
Multisensor has only been tested on Proove sensors; it needs to be validated by someone with sensors from other manufacturers.
v0.2.10 brings a new devices interface, hub.devices(), that supports filtering by capabilities such as TEMPERATURE. It's still crude and doesn't really support the way this was built at the start, so this issue may become moot as we move towards a more granular way of operating on a set of devices. In fact, the whole legacy multisensor module may go away in favor of more generalized tools.
Also, I got myself some Develco moisture sensors that also do temperature sensing, so now I can actually test pulling data by capabilities across different manufacturers.
Multisensor is being deprecated with v0.3 in favor of capability based device access. The capabilities are already there (and improved in the current devel branch), so nothing stops you from using them already now.
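The capability-based access pattern described above can be illustrated with a small mock. This is not the real python-cozify API, just the filtering idea: callers ask for a capability instead of a sensor model, so the same code works across manufacturers.

```python
# Hypothetical device records; real cozify devices carry much richer state.
DEVICES = [
    {"name": "livingroom-multi", "capabilities": {"TEMPERATURE", "HUMIDITY"}},
    {"name": "bathroom-moisture", "capabilities": {"MOISTURE", "TEMPERATURE"}},
    {"name": "hallway-bulb", "capabilities": {"BRIGHTNESS"}},
]

def devices_with(capability, registry):
    """Return every device advertising the given capability."""
    return [d for d in registry if capability in d["capabilities"]]

print([d["name"] for d in devices_with("TEMPERATURE", DEVICES)])
# -> ['livingroom-multi', 'bathroom-moisture']
```

Note how the Proove multisensor and the Develco moisture sensor both show up in a TEMPERATURE query without either being special-cased.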
| gharchive/issue | 2017-02-23T19:53:52 | 2025-04-01T06:36:42.837321 | {
"authors": [
"Artanicus"
],
"repo": "Artanicus/python-cozify",
"url": "https://github.com/Artanicus/python-cozify/issues/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
797891681 | Add .signature class to the <cite> element
While the signature class works on <cite> elements as far as applying basic text styles goes, it doesn't render on the right like it does with <div>s. Using <cite> for quotes seems to fall more in line with how many in the Obsidian community appear to use them, and at the moment their positioning is a bit intrusive for the writing it's associated with.
Actual behavior:
I haven't tried anything yet due to a lack of time, though I think setting display: block; should allow for it to be positioned more easily.
I didn't even know about the option. I'll play around with it.
Oh, the "incorrect" font isn't because of issues with the theme; the screenshots were just to show the positioning. I changed the font using a second class with .signature; if I take it off, it defaults back to Edwardian.
Will be coming in v6; I already have it working on my working build.
| gharchive/issue | 2021-02-01T02:18:33 | 2025-04-01T06:36:42.846266 | {
"authors": [
"ArtexJay",
"Jexle"
],
"repo": "ArtexJay/Obsidian-CyberGlow",
"url": "https://github.com/ArtexJay/Obsidian-CyberGlow/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1836680853 | Quick clarification of projected 2d points of smpl
Hello @Arthur151, I have a quick query to clarify.
Now that we have the 3D SMPL joints, I want to project them onto the corresponding 2D image.
I am aware that we have pj2d, but I wish to do the conversion myself.
So I checked the repo and saw a perspective projection function.
Q1. is this the function used for the conversion of 3d smpl joints to 2d ?? or there's something else.
def perspective_projection_ROMP(points, rotation, translation, focal_length,
                                camera_center):
    """
    This function computes the perspective projection of a set of points.
    Input:
        points (bs, N, 3): 3D points
        rotation (bs, 3, 3): Camera rotation
        translation (bs, 3): Camera translation
        focal_length (bs,) or scalar: Focal length
        camera_center (bs, 2): Camera center
    """
    batch_size = points.shape[0]
    K = torch.zeros([batch_size, 3, 3], device=points.device)
    K[:, 0, 0] = focal_length
    K[:, 1, 1] = focal_length
    K[:, 2, 2] = 1.0
    K[:, :-1, -1] = camera_center

    # Transform points
    points = torch.einsum("bij,bkj->bki", rotation, points)
    points = points + translation.unsqueeze(1)

    # Apply perspective distortion
    projected_points = points / points[:, :, -1].unsqueeze(-1)

    # Apply camera intrinsics
    projected_points = torch.einsum("bij,bkj->bki", K, projected_points)

    return projected_points[:, :, :-1]
Q2. We only require the 'smpl_joints' for such plotting right? or do we need the 'smpl_poses'?
Q3. For plotting the skeleton, can I use the following function after I have generated the 2D SMPL joints?
https://github.com/Arthur151/ROMP/blob/5cf8068297e8700701748c58d98428d8b6bcea91/trace/lib/utils/vis_utils.py#L246
and this is the skeleton_tree for smpl I suppose
smpl24_connMat = np.array([0,1, 0,2, 0,3, 1,4,4,7,7,10, 2,5,5,8,8,11, 3,6,6,9,9,12,12,15, 12,13,13,16,16,18,18,20,20,22, 12,14,14,17,17,19,19,21,21,23]).reshape(-1, 2)
Q4. Lastly, for plotting the 2D joints on the image (not the skeleton), do you recommend any function in ROMP, or can we just use a matplotlib scatter plot to draw the joints on the images?
I really need to know these. Thank you @Arthur151
@Dipankar1997161
Sorry for the late reply!
Q1. Yes, this function is written to perform the 2D projection of SMPL's 3D joints.
Q2. Actually, it just implements a normal perspective projection, so the 3D joints are required. SMPL theta poses are not needed.
Q3. Yes, this could be very convenient.
Q4. Any functions you like to plot the 2D joints should be fine.
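As a sanity check for Q1, the same projection math can be reproduced in plain NumPy for a single unbatched skeleton. The rotation here is identity, the translation zero, and the focal length and camera center are placeholder values, not ROMP's actual camera parameters:

```python
import numpy as np

def perspective_project(points, R, t, focal_length, camera_center):
    """Project (N, 3) camera-frame points to (N, 2) pixel coordinates."""
    K = np.array([[focal_length, 0.0, camera_center[0]],
                  [0.0, focal_length, camera_center[1]],
                  [0.0, 0.0, 1.0]])
    cam = points @ R.T + t       # rigid transform (rotation then translation)
    cam = cam / cam[:, 2:3]      # perspective divide by depth
    pix = cam @ K.T              # apply camera intrinsics
    return pix[:, :2]

joints3d = np.array([[0.0, 0.0, 5.0],     # a joint on the optical axis
                     [0.1, -0.2, 5.0]])
pts2d = perspective_project(joints3d, np.eye(3), np.zeros(3),
                            focal_length=443.4, camera_center=(256.0, 256.0))
print(pts2d[0])  # -> [256. 256.]
```

A point on the optical axis must land exactly on the camera center, which gives a quick correctness check. The resulting 2D points can then be drawn with matplotlib's scatter (Q4), or connected with the smpl24_connMat pairs quoted above to draw the skeleton (Q3).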
| gharchive/issue | 2023-08-04T12:43:14 | 2025-04-01T06:36:42.851475 | {
"authors": [
"Arthur151",
"Dipankar1997161"
],
"repo": "Arthur151/ROMP",
"url": "https://github.com/Arthur151/ROMP/issues/477",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
257953896 | Does not work!
Colouring a Row in a table does not work.
Same here...
Hi any fix for this?
Same here, it seems that this library is no longer maintained :/
background-color doesn't work on td either
| gharchive/issue | 2017-09-15T07:44:12 | 2025-04-01T06:36:42.853079 | {
"authors": [
"PBascones",
"Smurf-IV",
"mikeobrien",
"sadiqkhoja",
"vishalmane"
],
"repo": "ArthurHub/HTML-Renderer",
"url": "https://github.com/ArthurHub/HTML-Renderer/issues/103",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
Fix: Scrollbar is always shown
Show the scrollbar only when needed, i.e. when there is not enough vertical space for everything.
Fixes #572
Coverage remained the same at 43.081% when pulling e26351adb9e6015b5a04ad9c6abac07e72f799f5 on bjornreppen:scroll into 633ab64b777c0c009932d469907b1b4ce6cdec03 on Artsdatabanken:master.
| gharchive/pull-request | 2018-08-22T20:01:36 | 2025-04-01T06:36:42.855794 | {
"authors": [
"bjornreppen",
"coveralls"
],
"repo": "Artsdatabanken/ratatouille",
"url": "https://github.com/Artsdatabanken/ratatouille/pull/574",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1565091751 | 🛑 API - Spain (ES) is down
In d95d38c, API - Spain (ES) ($URL_API) was down:
HTTP code: 0
Response time: 0 ms
Resolved: API - Spain (ES) is back up in 590f7f8.
| gharchive/issue | 2023-01-31T22:28:30 | 2025-04-01T06:36:42.858295 | {
"authors": [
"ArturiaPendragon"
],
"repo": "ArturiaPendragon/uptime-status",
"url": "https://github.com/ArturiaPendragon/uptime-status/issues/399",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1646150094 | example does not compile
error[E0599]: no method named into_make_service found for struct Router<CsrfConfig, _> in the current scope
Ahh, I'll update the example to the newest Axum.
It seems that into_make_service() is only defined on Router<(), _> while here we have Router<CsrfConfig, _>
Hmm, it should take CsrfConfig just fine. I am wondering if the error is from somewhere else, as Axum errors don't always tell you what the real issue is.
You can try to make a State Struct instead like
use axum::extract::FromRef;

#[derive(Clone)]
pub struct SystemState {
    pub odbc: axum_odbc::ODBCConnectionManager,
    pub flash_config: axum_flash::Config,
    pub csrf: axum_csrf::CsrfConfig,
}

impl SystemState {
    pub fn new(
        odbc: axum_odbc::ODBCConnectionManager,
        flash_config: axum_flash::Config,
        csrf: axum_csrf::CsrfConfig,
    ) -> Self {
        Self {
            odbc,
            flash_config,
            csrf,
        }
    }
}

impl FromRef<SystemState> for axum_csrf::CsrfConfig {
    fn from_ref(input: &SystemState) -> Self {
        input.csrf.clone()
    }
}
I got it. The thing is, .with_state(...) should go last in the chain, after all the .route(...) calls. Precisely as in your example. I thought it would not matter... Now it all works. Thank you!
| gharchive/issue | 2023-03-29T16:30:26 | 2025-04-01T06:36:42.874262 | {
"authors": [
"amkhlv",
"genusistimelord"
],
"repo": "AscendingCreations/AxumCSRF",
"url": "https://github.com/AscendingCreations/AxumCSRF/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
176308682 | even import only (from pods) crashes OpenGL view
I have an OpenGL ES 2.0 view which I wanted to overlay with KVNProgress. Every single time I opened the GLView twice, the app crashed with gpus_ReturnGuiltyForHardwareRestart ... even with only the KVNProgress module imported via CocoaPods and no KVNProgress being shown...
Hello @mattem86, thanks for using this library!
Are you sure this crash is from KVNProgress itself? Does the crash disappear when you remove the KVNProgress import?
Yep, the crash disappears if I do not import KVNProgress within the ViewController which contains the GLView. I didn't believe that either, so I double-checked :) But I will try it once again tomorrow...
So the view of your view controller is a GLView?
Not exactly; the view in my view controller has a subview which holds a GLKView and is a GLKViewDelegate.
So if you remove the "import KVNProgress" statement, the app does not crash anymore?
Which version of CocoaPods are you using? (pod --version)
Sorry, I was busy last week ... I use pod 1.0.0 ... but something changed in the meantime: the app no longer crashes when I import KVNProgress in that particular ViewController... maybe some other pod was interfering? I have to try again...!
Ha! It is the view controller which I segue off from to reach the new view controller with the GLView. So, if I import (just import!) KVNProgress in my ViewController A and I segue to my ViewController B (which contains my GLView), then the crash appears the second time I segue. Weird.
Ok, it seems to have been a bug in the GLView framework I was using ... I tried again yesterday and everything went well after updating that framework!
| gharchive/issue | 2016-09-12T07:15:59 | 2025-04-01T06:36:43.422518 | {
"authors": [
"kevin-hirsch",
"mattem86"
],
"repo": "AssistoLab/KVNProgress",
"url": "https://github.com/AssistoLab/KVNProgress/issues/88",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
370479418 | Received parameter in client is not equal to that sent in server
If I call a server function continually, sometimes the client will lose one. Then when I send one more, the client will get the last lost one.
Is this a potential issue?
Any chance you could post some code showing what issue you are experiencing?
I think I made a mistake in my own code.
I added 2 server functions with the same name but different numbers of parameters; these 2 functions are triggered at the same time. I only handled one function in my client code. It works well after deleting one of the functions on the server.
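The failure mode described here can be sketched with a hypothetical Python mock of a name-keyed handler registry. This is not JSON-RPC.NET's actual dispatch code, just one way duplicate method names can misbehave:

```python
# Hypothetical handler registry keyed by method name.
registry = {}

def register(name, handler):
    # A plain dict silently overwrites an earlier handler with the same name,
    # so which implementation "wins" depends only on registration order.
    registry[name] = handler

register("echo", lambda a: ("one-arg", a))
register("echo", lambda a, b: ("two-arg", a, b))

print(registry["echo"]("x", "y"))  # -> ('two-arg', 'x', 'y')
```

Calling registry["echo"] with a single argument now raises TypeError, because the one-arg handler is gone. That mirrors why deleting the duplicate server function fixed the client behavior in this issue.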
| gharchive/issue | 2018-10-16T07:43:14 | 2025-04-01T06:36:43.424714 | {
"authors": [
"Astn",
"jdengitw"
],
"repo": "Astn/JSON-RPC.NET",
"url": "https://github.com/Astn/JSON-RPC.NET/issues/99",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
352519301 | Make more comfortable get_page_id behaviour
If get_page_id is executed with a wrong id parameter, don't raise AttributeError: 'NoneType' object has no attribute 'get'.
Thanks!
| gharchive/pull-request | 2018-08-21T12:37:18 | 2025-04-01T06:36:43.427705 | {
"authors": [
"OrangeFlag",
"gonchik"
],
"repo": "AstroMatt/atlassian-python-api",
"url": "https://github.com/AstroMatt/atlassian-python-api/pull/67",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
799379441 | Integration into zfit
I was thinking about the best way of moving this into zfit so that it can be used there.
Since there are a few, I would suggest that we directly move the pure zfit PDFs into zfit. For the TFP ones, we can ask whether they are actually of interest (but this may need some more work, e.g. added tests etc.).
I would propose that either you or I copy them to zfit and add docs and tests. If you're short on time, I can do that, or parts of it, and you're very welcome to add things. If you're interested and want to contribute directly - it's your work and it should fit without a problem - please go ahead and add it to zfit yourself and make a PR; that is also very welcome!
Do you have a preference on how to proceed?
Or would you want to promote this to a real package? I think it would just need more polish, unittests, docs, then for sure...
Thanks for following up with this!
I do not have plans to make this into a real (and maintained) package currently.
There are actually four PDF classes of interest (all located in tf_kde/distribution/kernel_density_estimation_zfit.py).
KernelDensityEstimation, which is the only one that uses TFP Distributions and has some overlap with GaussianKDE1DimV1 in zfit, although it supports multiple kernels
KernelDensityEstimationFFT, which has some overlap with FFTConvPDFV1 in zfit
KernelDensityEstimationISJ, which is based on the Improved Sheather Jones algorithm
KernelDensityEstimationHofmeyr, which I think is not suitable to port to zfit right now, since its TensorFlow implementation is awfully slow compared to the other three methods and its CPP based (a custom TensorFlow Op) implementation is not stable and has to be compiled specifically for different architectures
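For intuition, the core of the plain KernelDensityEstimation method is just a sum of Gaussian bumps centered on the data. Here is a minimal NumPy sketch, not the zfit/TFP implementation, with an arbitrarily chosen fixed bandwidth:

```python
import numpy as np

def gaussian_kde_1d(data, grid, bandwidth):
    """Fixed-bandwidth Gaussian KDE evaluated at each grid point."""
    z = (grid[:, None] - data[None, :]) / bandwidth       # shape (grid, data)
    kernel = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    return kernel.sum(axis=1) / (len(data) * bandwidth)

data = np.array([-1.0, 0.0, 1.0])
grid = np.linspace(-3.0, 3.0, 61)
density = gaussian_kde_1d(data, grid, bandwidth=0.5)

# A valid density: total mass on this grid is ~1, peaked at the sample center.
print(round(float(density.sum() * (grid[1] - grid[0])), 2))  # -> 1.0
```

The FFT and ISJ variants differ in how they evaluate this sum (binned convolution) and how they choose the bandwidth, but the estimated quantity is the same.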
I started to move the three suitable methods into zfit in the following pull request here: https://github.com/zfit/zfit/pull/285
Since I currently do not have the time to extend this with tests and docs all by myself, your help is very welcome. Also, I am not entirely sure about the architecture of zfit and where to put things like utility classes, so please advise me where I have made a suboptimal choice.
| gharchive/issue | 2021-02-02T15:23:18 | 2025-04-01T06:36:43.434524 | {
"authors": [
"AstroViking",
"mayou36"
],
"repo": "AstroViking/tf-kde",
"url": "https://github.com/AstroViking/tf-kde/issues/2",
"license": "WTFPL",
"license_type": "permissive",
"license_source": "github-api"
} |
363770304 | Can this technique be reliably used against Components?
Really interested in the technique being used here, and I am curious whether it could also be extended for use with Components. I wrote a HOC to experiment with the idea, using the same demo code as the README but with Components instead of functions:
import React, { Component, cloneElement } from 'react';
import immutagen from 'immutagen';

const compose = ({ value, next }) => next
  ? cloneElement(value, null, values => compose(next(values)))
  : value;

export default UnyieldableComponent => {
  const generator = immutagen(UnyieldableComponent.prototype.render);

  UnyieldableComponent.prototype.render = function() {
    return compose(generator(this.props));
  };

  return class YieldableComponent extends Component {
    render() {
      return <UnyieldableComponent {...this.props} />;
    }
  };
};

// used as legacy decorator, but can use as HOC as well
@withGeneration
export default class App extends Component {
  * render() {
    console.log('Rendering again!');

    const { loading, data } = yield <Query />;
    const { time } = yield <Time />;

    if (loading) {
      return <h1>Loading</h1>;
    }

    const {
      values,
      touched,
      errors,
      handleChange,
      handleBlur,
      handleSubmit,
      isSubmitting,
    } = yield (
      <WrapFormik
        initialValues={{
          // Use data from other HOCs!
          email: data.user.email,
          password: '',
        }}
        validate={values => {
          // same as above, but feel free to move this into a class method now.
          let errors = {};
          if (!values.email) {
            errors.email = 'Required';
          } else if (
            !/^[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}$/i.test(values.email)
          ) {
            errors.email = 'Invalid email address';
          }
          return errors;
        }}
      />
    );

    return (
      <div className="App">
        <h1>{`Hello, ${data.user.name}`}</h1>
        <h2>The time is {time.toLocaleString()}!</h2>
        <form onSubmit={handleSubmit}>
          <input
            type="email"
            name="email"
            onChange={handleChange}
            onBlur={handleBlur}
            value={values.email}
          />
          {touched.email && errors.email && <div>{errors.email}</div>}
          <input
            type="password"
            name="password"
            onChange={handleChange}
            onBlur={handleBlur}
            value={values.password}
          />
          {touched.password && errors.password && <div>{errors.password}</div>}
          <button type="submit" disabled={isSubmitting}>
            Submit
          </button>
        </form>
      </div>
    );
  }
}
It seemed to work pretty well; do you see any concerns with this approach?
One thing that needs to be ironed out is accessing this within render. I'm sure a call(this) needs to happen somewhere, but I'm not sure where. :smile:
Yeah, this is one of the reasons I always prefer to use functional components.
I fear that it can add more margin for errors for beginners too 🤔
One thing that needs to be ironed out is accessing this within render.
I have solved this and updated the original post by caching the generator at first render with a binding.
| gharchive/issue | 2018-09-25T21:18:10 | 2025-04-01T06:36:43.439062 | {
"authors": [
"eliperelman",
"grsabreu"
],
"repo": "Astrocoders/epitath",
"url": "https://github.com/Astrocoders/epitath/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1013304392 | Finding the norm of a matrix using dynamic memory allocation
I would like to do this using C.
Can you please assign this to me?
You have to make an appropriate folder, or choose a proper directory, to put this C program in.
Link PR if made
https://github.com/Astrodevil/Programming-Basics/pull/289
Here it is, sorry for the delay.
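The computation itself is small: the Frobenius norm is the square root of the sum of squares of every entry. For reference, here it is in Python (the linked PR does it in C with dynamically allocated memory):

```python
import math

def frobenius_norm(matrix):
    """Square root of the sum of squares of every entry."""
    return math.sqrt(sum(x * x for row in matrix for x in row))

print(frobenius_norm([[3, 0], [0, 4]]))  # -> 5.0
```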
| gharchive/issue | 2021-10-01T12:48:06 | 2025-04-01T06:36:43.441042 | {
"authors": [
"Astrodevil",
"ShouryaBrahmastra"
],
"repo": "Astrodevil/Programming-Basics",
"url": "https://github.com/Astrodevil/Programming-Basics/issues/177",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
792736762 | 🛑 CAF-ACE is down
In 1d62529, CAF-ACE (https://caface-rfacace.forces.gc.ca) was down:
HTTP code: 0
Response time: 0 ms
Resolved: CAF-ACE is back up in cce6aef.
| gharchive/issue | 2021-01-24T07:10:17 | 2025-04-01T06:36:43.451066 | {
"authors": [
"Async0x42"
],
"repo": "Async0x42/epic-upptime",
"url": "https://github.com/Async0x42/epic-upptime/issues/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
51980686 | Uncaught Error: spawn EACCES
Uncaught Error: spawn EACCES
Atom Version: 0.158.0
System: Mac OS X 10.10.1
Thrown From: linter package, v0.9.0
Steps To Reproduce
Happens every time I open a .py file; I have linter-pep8 installed.
Stack Trace
At child_process.js:1160
Error: spawn EACCES
at exports._errnoException (util.js:742:11)
at ChildProcess.spawn (child_process.js:1160:11)
at Object.exports.spawn (child_process.js:993:9)
at new BufferedProcess (/Applications/Atom.app/Contents/Resources/app/src/buffered-process.js:47:37)
at LinterPep8.Linter.lintFile (/Users/fabianrios/.atom/packages/linter/lib/linter.coffee:142:19)
at /Users/fabianrios/.atom/packages/linter/lib/linter-view.coffee:138:18
at Array.forEach (native)
at /Users/fabianrios/.atom/packages/linter/lib/linter-view.coffee:137:18
at Object.oncomplete (fs.js:93:15)
/cc @atom/core
I was having this same issue with linter-pep8 and was able to fix it by changing my config from:
'linter-pep8':
'pep8ExecutablePath': '/usr/local/bin'
to:
'linter-pep8':
'pep8ExecutablePath': '/usr/local/bin/pep8'
Hopefully that helps you!
Thanks, that solved the problem!
| gharchive/issue | 2014-12-15T12:32:41 | 2025-04-01T06:36:43.565626 | {
"authors": [
"gatormha",
"yemi"
],
"repo": "AtomLinter/Linter",
"url": "https://github.com/AtomLinter/Linter/issues/292",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1172989629 | change texture of elevator
Describe the solution you'd like
It would be nice if you could change the texture of the elevator to any block
Additional context
The elevator would fit in better with the floors of buildings
I have no plans to enable changing textures because this is the "Quartz" Elevator.
If you really want to change the textures, you can manually modify the model files.
https://minecraft.fandom.com/wiki/Model
quartz-elevator.jar
└── assets
└── quartzelv
└── models
├── block
│ ├── smooth_quartz_elevator.json <- edit
│ └── quartz_elevator.json <- edit
└── item
├── smooth_quartz_elevator.json <- edit
└── quartz_elevator.json <- edit
If you're bothered by particles, you can hide them. (ModMenu > Quartz Elevator > Display particles: Yes -> No)
| gharchive/issue | 2022-03-17T22:31:27 | 2025-04-01T06:36:43.609562 | {
"authors": [
"Aton-Kish",
"FreshDoktor"
],
"repo": "Aton-Kish/quartz-elevator",
"url": "https://github.com/Aton-Kish/quartz-elevator/issues/38",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
892208235 | Update scalatest to 3.2.9
Updates org.scalatest:scalatest from 3.2.3 to 3.2.9.
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Ignore future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "org.scalatest", artifactId = "scalatest" } ]
labels: test-library-update, semver-patch
Superseded by #127.
| gharchive/pull-request | 2021-05-14T20:15:22 | 2025-04-01T06:36:43.612629 | {
"authors": [
"scala-steward"
],
"repo": "Atry/scalajs-all-in-one-template",
"url": "https://github.com/Atry/scalajs-all-in-one-template/pull/121",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
1112260563 | the parameters are only 43196001, instead of 43524961
I ran the default Conddetr-r50, but the number of parameters is different from that in the provided log.
Also, after training for 1 epoch, the eval results are
[0.04369693586567375, 0.12083834673558262, 0.023675111814434113, 0.01864211602467282, 0.052261665895792626, 0.07171156446634068, 0.09023536974930606, 0.18654859799415718, 0.22196121793196433, 0.04610799601904764, 0.21023391350986004, 0.3797766209046455],
which is weaker (about 0.7AP) than that in the provided log
[0.0509964214370242, 0.13292741190993088, 0.030383986414032393, 0.015355903493298791, 0.05914294278060285, 0.08176101640052409, 0.10028554935230335, 0.2012481198582593, 0.23517722389597043, 0.04296950016312112, 0.23670937055006003, 0.40016568706711353].
Hi,
Did you enable the '--no_aux_loss' flag? We use aux_loss in training, and disabling this flag might cause fewer parameters as well as weaker performance. Moreover, the AP in the early training stage is unstable. +/- 0.7 AP at epoch 1 is not informative. Consistent lower performance in training (maybe epoch 1 to epoch 10) might indicate that the training has some problem.
Thanks for your answer.
I did not change the args. And I used aux loss during the training. The args are:
Namespace(aux_loss=True, backbone='resnet50', batch_size=2, bbox_loss_coef=5, clip_max_norm=0.1, cls_loss_coef=2, coco_panoptic_path=None, coco_path='/mnt/lustre/share/DSK/datasets/mscoco2017/', dataset_file='coco', dec_layers=6, device='cuda', dice_loss_coef=1, dilation=False, dim_feedforward=2048, dist_backend='nccl', dist_url='env://', distributed=True, dropout=0.1, enc_layers=6, epochs=50, eval=False, focal_alpha=0.25, frozen_weights=None, giou_loss_coef=2, gpu=0, hidden_dim=256, is_slurm_job=True, lr=0.0001, lr_backbone=1e-05, lr_drop=40, mask_loss_coef=1, masks=False, nheads=8, num_queries=300, num_workers=2, output_dir='output/default', position_embedding='sine', pre_norm=False, rank=0, remove_difficult=False, resume='', seed=42, set_cost_bbox=5, set_cost_class=2, set_cost_giou=2, start_epoch=0, tcp_port='29550', weight_decay=0.0001, world_size=8)
number of params: 43196001
BUT, after 5 epochs of training, the model performs just the same as in the provided logs. So, it might just be performance fluctuation in the early stage caused by random seeds.
I will keep tracking the performance during the training and update the comment if some other problems happen.
I also encountered the problem of inconsistency in the number of parameters.
| gharchive/issue | 2022-01-24T07:17:21 | 2025-04-01T06:36:43.626471 | {
"authors": [
"Cohesion97",
"DeppMeng",
"onepeachbiubiubiu"
],
"repo": "Atten4Vis/ConditionalDETR",
"url": "https://github.com/Atten4Vis/ConditionalDETR/issues/14",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
259313699 | XCode 9 - AudioKit 4 - Objective C
If I am missing something, I apologize, but I am seeing no visible interface for what used to work in the previous version of AudioKit.
AKAudioPlayer *player = [[AKAudioPlayer alloc] initWithFile...
There is no initWithFile now for the AKAudioPlayer object when using Objective C. I am actually getting this error with several objects like AKTimePitch, AKReverb, etc.
Mmh, I wouldn't be surprised if we're missing a few @objc qualifiers in the Swift code. I noticed that for some other classes I was using.
Any ideas how to work around this, or do I need to wait for a new AudioKit version?
We'll likely need to make a new build, I think there's too many of these oversights. Hunting them all down might take some time though.
You could use the AudioKit develop branch and include the .xcodeproj in your project instead of the framework. Then, you could find the offensive classes, add the @objc and make a pull request with the changes. That would be amazingly helpful and get you on the right track the fastest. I can help you through screensharing if you need any help getting set up.
I would be more than happy to help. Is there any way, for the time being, to use the previous version I was using, AudioKit 3.7.1, with Xcode 9? I seem to remember the issue with this was around Swift 4.
Not as far as I know. iOS development is like this, constantly being shoved around by toolset, device, and operating system changes. I can only offer some commiseration!
I will start tonight with the develop branch and then do the pull request with my findings. Thanks again for your help.
I am closing this issue as I was able to compile my app last night using the develop branch xcode project adding @objc where needed. I'll do a pull request this morning, but basically found that several effects needed @objc added for the init so you could instantiate the object in obj-c.
@jldunk Are you close to making a pull request? We've got a few other bugs I'd like to address with a 4.0.1 release
Just uploaded changes.
Jason
| gharchive/issue | 2017-09-20T21:54:10 | 2025-04-01T06:36:43.643223 | {
"authors": [
"aure",
"jldunk",
"megastep"
],
"repo": "AudioKit/AudioKit",
"url": "https://github.com/AudioKit/AudioKit/issues/1054",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
669206832 | Balancer problem, making a PR to see if the issue happens on CI and other people's machines
This simple change, similar to ones that worked this morning, breaks AudioKit.h. Why?
Well I see that it fails to compile because of unknown references. Maybe it has something to do with Taylor's recent changes to make internal APIs less accessible?
No I don't think. I did this to 50 or 60 nodes in a previous commit without issue.
I see the issue now. The balancer hpp file was the last place that AKSoundpipeDSPBase.hpp was imported from a public header file.
I've made the AKSoundpipeDSP.hpp file into a .hpp/.mm pair but still having some issues.
| gharchive/pull-request | 2020-07-30T21:21:05 | 2025-04-01T06:36:43.645733 | {
"authors": [
"aure",
"megastep"
],
"repo": "AudioKit/AudioKit",
"url": "https://github.com/AudioKit/AudioKit/pull/2204",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
364114465 | fix for logout bug
Steps for repro:
on dev.augur.net
sign into account 1 in augur-ui
switch to account 2
sign back in
see it signs you out
locally
in load-account-data switch line 23 to be boolean opposite
do same steps as above
Issue: dev.augur.net doesn't use localstorage for account signed in data
Coverage decreased (-0.01%) to 63.209% when pulling 43591d20ac413cd11660dda4c5f42907d6aaebe9 on dev-logout-bug into 8107792ac6ce6e20fdf05842c7a871ee94b98beb on master.
| gharchive/pull-request | 2018-09-26T16:30:26 | 2025-04-01T06:36:43.668437 | {
"authors": [
"coveralls",
"phoebemirman"
],
"repo": "AugurProject/augur-ui",
"url": "https://github.com/AugurProject/augur-ui/pull/2331",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
415195805 | Order Form confirmation word change
flipping a short position:
closing position
Buying Back
New position
Buying Green, instead of long and short
Flipping a long position:
closing position
Selling Out Red
new position
Selling Red
working correctly in snpk
| gharchive/issue | 2019-02-27T15:54:59 | 2025-04-01T06:36:43.671151 | {
"authors": [
"Chwy5",
"bthaile"
],
"repo": "AugurProject/augur",
"url": "https://github.com/AugurProject/augur/issues/1156",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
96204537 | Holdings issue
Left hand side
Can confirm that I'm getting this as well (using Chrome).
| gharchive/issue | 2015-07-21T01:26:06 | 2025-04-01T06:36:43.672326 | {
"authors": [
"joeykrug",
"tinybike"
],
"repo": "AugurProject/augur",
"url": "https://github.com/AugurProject/augur/issues/133",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
437984029 | Portfolio Open Orders Sorting
The Portfolio open orders section only has one sort method but two views.
Sort by most recently traded market, not by the logged-in user. The most recent market that has a trade should be at the top of the list.
View by Most Recently Traded Market
View by Most Recently Traded Outcome
added fixes to PR
for this view:
should individual orders still be grouped by markets?
| gharchive/issue | 2019-04-27T21:02:56 | 2025-04-01T06:36:43.674475 | {
"authors": [
"bthaile",
"phoebemirman"
],
"repo": "AugurProject/augur",
"url": "https://github.com/AugurProject/augur/issues/1929",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
93040766 | Uncaught TypeError: Cannot read property 'constructor' of null
When using client.augur.net on the main (Olympic) testnet.
Browser gets stuck at the loading screen with this at console:
//one of these:
Uncaught TypeError: Cannot read property 'constructor' of null
augur.run.augur.execute.augur.invoke @ app.js:7003
...
//followed by tons of these:
Uncaught TypeError: this.state.asset.cash.toFixed is not a function
render @ app.js:2147
...
Any suggestions on how to copy a full stack trace from the Chrome console without the rows smashing together?
i just fixed this. mainly an issue with the new geth account management notions changing underneath me. pushing coming soon.
You should be able to use Error().stack to get a stringified stack trace.
Sweet
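For readers following along, tinybike's Error().stack tip can be sketched as follows (a plain JavaScript illustration, not Augur-specific; the function names are made up for the example):

```javascript
// Error().stack captures the current call stack as a single string,
// so it can be logged or copied out of the console in one piece.
function inner() {
  return Error().stack;
}

function outer() {
  return inner();
}

const trace = outer();
console.log(trace); // a multi-line string listing inner, outer, ...
```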
| gharchive/issue | 2015-07-04T19:28:40 | 2025-04-01T06:36:43.677311 | {
"authors": [
"carver",
"scottzer0",
"tinybike"
],
"repo": "AugurProject/augur",
"url": "https://github.com/AugurProject/augur/issues/94",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
903658609 | Add date and time to portfolio cards
Desktop: https://www.figma.com/file/6y4nvjfeVZwzwKcoXB0neq/Simplified-UI?node-id=93%3A17690
Mobile: https://www.figma.com/file/6y4nvjfeVZwzwKcoXB0neq/Simplified-UI?node-id=292%3A5705
Done
| gharchive/issue | 2021-05-27T12:27:42 | 2025-04-01T06:36:43.678934 | {
"authors": [
"matt-bullock",
"pgebheim"
],
"repo": "AugurProject/turbo",
"url": "https://github.com/AugurProject/turbo/issues/648",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
387165980 | Do not expand while selecting cell!
Cells will auto-expand when I select them, but I don't want this feature. What should I do?
I'd like to expand a cell only when the button on that cell is tapped
| gharchive/issue | 2018-12-04T08:12:31 | 2025-04-01T06:36:43.679778 | {
"authors": [
"aimm"
],
"repo": "Augustyniak/RATreeView",
"url": "https://github.com/Augustyniak/RATreeView/issues/262",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
489169775 | ios13 fix
default added
This is an important fix. It helped during the migration to Swift 5; please merge this PR.
| gharchive/pull-request | 2019-09-04T13:43:47 | 2025-04-01T06:36:43.680699 | {
"authors": [
"Brainyoo",
"svetlanama"
],
"repo": "Augustyniak/RATreeView",
"url": "https://github.com/Augustyniak/RATreeView/pull/270",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
323483164 | Is there security?
This looks like it might solve a problem I've had for a few months!
Is there any way to require a Pre-Shared Key to open the web page or ical feed?
I'd like to aggregate all my google calendars into one for sharing to specific people to open in their GCal, but not open it to the world.
i'm interested in this as well.
@DerekFroese @steveb85 Yes! There's several options, depending on what you want to achieve:
An obfuscated URL (i.e. a pre-shared key in a URL), so you can have the feed at /my-shared-key/events.ics
HTTP Basic auth, with a username/password
OAuth (with a bit of work)
What kind of solution were you after?
Personally, I like the obfuscated URL. It will have maximal compatibility, as many consumers of ical feeds are not able to do HTTP Basic Auth much less OAuth.
i.e., you wouldn't be able to add options 2 or 3 to Google Calendar, but you could add option 1.
@DerekFroese I think both obfuscated URL and basic auth will work with Google Calendar.
Obfuscated URL
For the obfuscated URL, find the following line in your config.ru:
run Almanack::Server
Change it to:
SECRET_TOKEN = 'shhhh'
app = Rack::Builder.app do
map("/#{SECRET_TOKEN}") do
run Almanack::Server
end
end
run app
This will mount the calendar (and its feed) under /shhhh. If you want to avoid keeping the secret in your codebase (a good idea), I recommend using an environment variable:
SECRET_TOKEN = ENV.fetch('SECRET_TOKEN') { fail "Couldn't find a SECRET_TOKEN env var" }
Environment variables are available on any unix-y system. On Heroku, you can set this with:
heroku config:set SECRET_TOKEN=shhhh
If you're using the default theme, you'll need to override layout.erb to fix the paths to the stylesheet and JavaScript. (I'll fix this in a future release).
Basic Auth
I believe most calendar apps, including Google Calendar, support basic auth, through use of the optional username and password parts of a URL, i.e. https://username:[email protected]/calendar.ics.
To use Basic Auth, find the following line in your config.ru:
run Almanack::Server
and change it to the following:
USERNAME = 'calendar'
PASSWORD = 'sshhhsecret'
use Rack::Auth::Basic, "My Calendar" do |given_username, given_password|
Rack::Utils.secure_compare(PASSWORD, given_password) && given_username == USERNAME
end
run Almanack::Server
This will protect the application using HTTP Basic Auth. Please serve this over SSL/TLS (i.e. HTTPS) to prevent the password being sent in the clear.
If you want to avoid keeping the secret in your codebase (a good idea), I recommend using an environment variable:
CREDENTIALS = ENV.fetch('CREDENTIALS') { fail "Couldn't find a CREDENTIALS env var" }
USERNAME, PASSWORD = CREDENTIALS.split(':')
This assumes an environment variable called CREDENTIALS in the format username:password.
Environment variables are available on any system. On Heroku, you can set this with:
heroku config:set CREDENTIALS=username:password
Hope that helps!
Hi @DerekFroese. Did this solve your issue? Can I close this issue?
Hi Pete,
Yes, the configurations you listed solve the issue. Thanks!
--
Cheers,
Derek Froese
Great!
I found that all of these changes were overwritten when Heroku is reloaded. Is it possible to have the secret token implemented as a Config Var so it can persist?
Sorry to hear that, @DerekFroese. Can you elaborate? Changes should be made via git and pushed to the Heroku repo to persist between deploys. The above example demonstrates how to do this with a Heroku config environment variable.
Hi Aupajo,
If I understand correctly, I'd have to fork your repo and make my own in order to make changes to the code that persist across Heroku reboots and such. The problem for me is that my repo will become out of sync with your repo and will be an older version. I'm not sure I have the experience to keep my repo in sync with yours to have the latest version.
For my personal needs, it would be nice if the official code allowed for a config variable (set in Heroku) of an authentication token that would, if used, be required in the URL to access the calendar. But I also recognize most others may not need this and it's not fair of me to ask you to write code just for me :).
I apologize for my unfamiliarity; I have some small experience with PHP and web hosting, but Heroku is foreign to me.
Hi @DerekFroese. No you don't need to maintain a fork of this repo.
The installation steps are:
gem install almanack
almanack new my-custom-calendar
This will create a directory called my-custom-calendar, which is small Git repo containing a Gemfile that keeps you in sync with releases of this project, and a configuration file you can customise your Almanack set-up. When you run:
almanack deploy
It will create or update a Heroku app for you.
If you deployed using the “Heroku Deploy” button, then these steps were already performed for you. You can clone your existing Heroku git repository by logging in to Heroku, clicking on “Settings” and finding your “Heroku Git URL”:
You can clone the Heroku repo locally:
git clone https://git.heroku.com/my-almanack-app.git
Make the changes to config.ru, and then commit push them back:
git add config.ru
git commit -m "Add authentication"
git push origin master
| gharchive/issue | 2018-05-16T06:22:41 | 2025-04-01T06:36:43.696504 | {
"authors": [
"Aupajo",
"DerekFroese",
"steveb85"
],
"repo": "Aupajo/almanack",
"url": "https://github.com/Aupajo/almanack/issues/22",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
Lack of documentation
You could have written some docs so people know which dependencies to install 👀
Coming with the next version
| gharchive/issue | 2021-02-04T18:57:05 | 2025-04-01T06:36:43.700712 | {
"authors": [
"AurelieV",
"Lu-Ks"
],
"repo": "AurelieV/purple-orwel-fox",
"url": "https://github.com/AurelieV/purple-orwel-fox/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1508237868 | Feature/issue 226
Crowding distance repaired and notebooks in sync. New notebook by Luke cleaned up for formal integration.
@markcoletti A test is failing:
| gharchive/pull-request | 2022-12-22T17:05:23 | 2025-04-01T06:36:43.704370 | {
"authors": [
"SigmaX",
"markcoletti"
],
"repo": "AureumChaos/LEAP",
"url": "https://github.com/AureumChaos/LEAP/pull/236",
"license": "AFL-3.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1928847811 | Request failed with status code 404
After updating today's games, any search you run with any region the API returns this error, I was making a command yesterday and it was working, then today after updating this week's new games it gave me this
Erro: Error: An error occurred
error: {
"message": "CatalogOffer/offerMappings: Request failed with status code 404",
"locations": [
{}
],
"correlationId": "1792c500-beb8-4854-9d69-43e4a9a30e54",
"serviceResponse": "{\"errorMessage\":\"The item or resource being requested could not be found.\",\"errorCode\":\"errors.com.epicgames.not_found\",\"numericErrorCode\":1004,\"errorStatus\":404}",
"stack": null,
"path": [
"Catalog",
"searchStore",
"elements",
3,
"offerMappings"
]
},{
"message": "CatalogNamespace/mappings: Request failed with status code 404",
"locations": [
{}
],
"correlationId": "1792c500-beb8-4854-9d69-43e4a9a30e54",
"serviceResponse": "{\"errorMessage\":\"The item or resource being requested could not be found.\",\"errorCode\":\"errors.com.epicgames.not_found\",\"numericErrorCode\":1004,\"errorStatus\":404}",
"stack": null,
"path": [
"Catalog",
"searchStore",
"elements",
3,
"catalogNs",
"mappings"
]
}
Same error...
I have just released v4.0.2. Please install the new version.
| gharchive/issue | 2023-10-05T18:38:39 | 2025-04-01T06:36:43.707956 | {
"authors": [
"AuroPick",
"chrom007",
"gelones"
],
"repo": "AuroPick/epic-free-games",
"url": "https://github.com/AuroPick/epic-free-games/issues/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
657733281 | Reaction Event - Context Requests
Would like to request one additional context tag:
[x] <context.group> - Returns the guild Group that the reaction is heard from
Would like to request some revisions:
[x] <context.user> returns the DiscordUser object; can we change this to <context.author> for consistency with other discord events?
[x] <context.emoji_id> returns the literal name of the emoji as opposed to the ID of the emoji; some emojis from different groups can have the same name, whereas the ID is always directly associated with the correct emoji.
Would like to also add meta for this event to be called by our clone of the Meta bot if possible
Lastly, would it be difficult to add a DiscordEmoji object type that could return each of the emoji fields (Excluding the User, which is provided with the user context we have now) listed in the Emoji Object Structure?:
{
"id": "41771983429993937",
"name": "LUL",
"roles": ["41771983429993000", "41771983429993111"],
"user": {
"username": "Luigi",
"discriminator": "0002",
"id": "96008815106887111",
"avatar": "5500909a3274e1812beb4e8de6631111"
},
"require_colons": true,
"managed": false,
"animated": false
}
[x] ID | Emoji IDs would differentiate the Emoji from every other emoji regardless of name, animation or group it originated from
[x] Name | Great for formatted name of the emoji
[ ] Roles | This specifically would return the list of roles that the specified channel has whitelisted via the 'Allow Reactions' and 'Allow External Reactions' permissions
[ ] Require_colons | This may only be beneficial if we could utilize this as opposed to regex to coordinate easier message replacements of emojis
[ ] Managed | Returns if this emoji is implemented by this group or not
[x] Animated | Returns if this emoji is animated; may only be beneficial for the same reason as the require_colons field
[ ] Available | This would return true or false if the server loses access to this emoji due to Server Boost falling to a level that reduces the server's emoji count
Thanks :}
We have everything we need from this currently for the Ticket system; closing and leaving the unmarked items as future potential features we may or may not need.
| gharchive/issue | 2020-07-15T23:48:27 | 2025-04-01T06:36:43.715364 | {
"authors": [
"BehrRiley"
],
"repo": "AuroraInteractive/dDiscordBot",
"url": "https://github.com/AuroraInteractive/dDiscordBot/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1268558984 | #28
Yeah, it's crappy code, so what are you going to do about it? :d
Maybe something else still needs fixing, but I'm not sure
It's not very convenient to check from my phone
:d
😐
?
Is it really that bad? :)
Nah, but I have a couple of thoughts about this feature; I'll sit down to dig into it in an hour
| gharchive/pull-request | 2022-06-12T11:59:55 | 2025-04-01T06:36:43.717574 | {
"authors": [
"IsTopNick",
"JoCat"
],
"repo": "AuroraTeam/LauncherServer",
"url": "https://github.com/AuroraTeam/LauncherServer/pull/42",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
272315716 | Add support for string based properties decorated with the RangeAttribute
Consider the following:
public class MyModel
{
[Required]
[Range(1, 65535)]
public string Port { get; set; }
}
var fixture = new Fixture();
var model = fixture.Create<MyModel>();
In 3.51.0 the first call to Create() worked but subsequent calls threw; now in 4.0.0-rc1 (the version I am using), the first call throws ObjectCreationException, whereas the attribute itself supports this use case.
Originated from https://stackoverflow.com/q/47184400/126014
@jcasale Thanks for raising the issue.
Well, as far as I noticed, AutoFixture wasn't designed to handle the Range attribute applied to non-numeric members gracefully. Therefore, I'm basically not surprised, as we already have the same issue for enums. They should be fixed together and I've already started to investigate that.
I don't have too much experience with annotation attributes, and at first glance it looks a bit weird to apply a numeric range attribute to a property of string type. Could you please point to some documentation describing that such usage is perfectly valid (and what the expected behavior is)? Just want to ensure that we cover valid scenarios :blush:
P.S. :+1: for using AutoFixture v4 :wink:
To be honest, it's a convention I am used to from the ASP arena and I don't know if it's an abuse of the facility or intended. I will check out the docs and if not, the reference source for logic that indicates it's valid and expected and not simply a fragile coincidence.
Reference source implies it will handle casting a string (which it does) and the docs illustrate an example where the value being applied is coerced and, in that case, boxed. I have a workaround and I admit the use case is rare; however, it does appear to be valid, but certainly low priority if any.
@jcasale Thanks for the confirmation. Well, it's indeed seems that string could be a valid type. Also it looks like that Range attribute could support even other types, like DateTime. The question is whether we are going to support that in AutoFixture out-of-the-box as we clearly cannot support all those types.
Probably it makes sense to take the approach where we support this feature partially: recognize the Range attribute and wrap it in a RangedRequest. Later clients could add a customization to handle the RangedRequest for the custom types they have. The benefit is that you will not need to handle the Range attribute directly. Also, we as a project will not need to support the variety of different possible types, as they might be quite rare.
@jcasale What would you say about that plan?
@moodmosaic Your opinion is also appreciated as the topic is quite tricky.
The purpose of supporting data annotations in AutoFixture is to provide a more fine-grained control over the scope of generated values.
However, some of those data annotations have a totally weird API, where you can easily do the wrong thing, as with the RangeAttribute, which has 3 constructor overloads:
RangeAttribute(Double, Double)
RangeAttribute(Int32, Int32)
RangeAttribute(Type, String, String)
And because of that pesky 3rd constructor overload accepting a Type, it is possible to do this:
[Range(1, long.MaxValue)]
public long SomeProperty { get; set; }
1 gets cast into a Double
long.MaxValue gets cast into a Double, resulting in an arithmetic overflow.
So, IMHO, and AFAICT, in this case:
[Range(1, 65535)]
public string Port { get; set; }
we should throw an error. In the error message we should probably tell the user that the right way of controlling the scope of generated strings is by doing this:
[Range(typeof(string), "1", "65535")]
public string Port { get; set; }
And then, we'd have to make sure we support not only strings, but chars, dates, and so on:
[Range(typeof(DateTime),"14-Dec-1984", "14-Dec-2099")]
[Range(typeof(char), "n", "s")]
This is one of the reasons that F# Hedgehog makes this concept explicit, and easier, through the Range type and combinators.
In the error message we should probably tell the user that the right way of controlling the scope of generated strings is by doing this:
[Range(typeof(string), "1", "65535")]
public string Port { get; set; }
That is incorrect; it only works coincidentally, for values where none of the place values exceed the leading place values of the max string, because the comparison is lexicographic rather than numeric. For example, 1 through 6 pass, but 7 through 9 fail; 10 through 65 pass, 66 to 99 fail; 100 through 655 pass, then 656 fails, and so on.
A contrived example:
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;
using System.Reflection;
internal class Program
{
[Range(typeof(string), "1", "65535")]
// [Range(1, 65535)]
public string Port { get; set; }
private static void Main()
{
for (int i = 1; i < 65536; i++)
{
Program program = new Program
{
Port = i.ToString()
};
foreach (var error in program.Validate())
{
Console.WriteLine($"{i}, {error.ErrorMessage}");
}
}
}
}
public static class Extensions
{
public static IEnumerable<ValidationResult> Validate<T>(this T model)
where T : class
{
if (model == null)
{
throw new ArgumentNullException(nameof(model));
}
foreach (PropertyInfo propertyInfo in model.GetType().GetProperties())
{
object[] attributes = propertyInfo.GetCustomAttributes(typeof(ValidationAttribute), false);
if (attributes.Length == 0)
{
continue;
}
ValidationContext validationContext = new ValidationContext(model)
{
DisplayName = propertyInfo.Name
};
if (attributes.OfType<RequiredAttribute>().FirstOrDefault() is RequiredAttribute required)
{
ValidationResult result = required.GetValidationResult(propertyInfo.GetValue(model), validationContext);
if (result != null)
{
yield return result;
yield break;
}
}
foreach (ValidationAttribute attribute in attributes)
{
ValidationResult result = attribute.GetValidationResult(propertyInfo.GetValue(model), validationContext);
if (result == null)
{
continue;
}
yield return result;
}
}
}
}
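The pass/fail pattern above is what lexicographic (character-by-character) string comparison produces. A quick sketch of that ordering, shown in JavaScript purely as an editorial illustration (the helper name is invented; the assumption, consistent with the observed results, is that Range(typeof(string), ...) compares values as strings):

```javascript
// Strings compare character by character, not numerically:
// "7" > "65535" because '7' > '6' at the very first character.
function inStringRange(value, low = "1", high = "65535") {
  return low <= value && value <= high;
}

console.log(inStringRange("6"));   // true  ('6' ties with '6', shorter string sorts first)
console.log(inStringRange("7"));   // false ('7' > '6')
console.log(inStringRange("10"));  // true  ('1' < '6')
console.log(inStringRange("66"));  // false ('6' ties, then '6' > '5')
console.log(inStringRange("656")); // false ('6' and '5' tie, then '6' > '5')
```

This reproduces exactly the 1-6 pass / 7-9 fail / 10-65 pass / 66-99 fail pattern observed with the C# validator above.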
@jcasale Thanks for the sample.
That's why I'd suggest not including this feature in AutoFixture for now, as there might be a whole set of different options depending on the OperandType/MemberType combinations.
In #920 I've introduced the generic RangedRequest in a way that later any specimen builder could decide what to do. The request offers both the OperandType (this is how the Range attribute refers to the type you specified) and MemberData - the type of the member you applied the attribute to.
Later @jcasale could register his own RangedRequest builder to handle strings in the way he wants, without a need to deal with the RangedAttribute directly.
For me that looks like a good trade-off.
@zvirja No objections, I have a workaround and realized some good takeaways from this. In reference to registering my own RangedRequest, off the top of your head do you know of any code exercising the concept that I could review to see how this is accomplished?
any code exercising the concept that I could review
Probably not for now, as #920 is still under review by @moodmosaic. Only after we merge the PR will we know its shape, so I'll be able to show you a demo.
However, the best place to look would be the NumericRangedRequestRelay implementation (if it doesn't change during review), as it's a sample of a builder for numeric types.
@jcasale Feel free to use the NumericRangedRequestRelay or EnumRangedRequestRelay as a sample of such customization.
This API will be available as of our next release, which should happen in the near future.
Closing this one as no further action is required so far. Feel free to ask more questions if you have 😉
| gharchive/issue | 2017-11-08T19:14:12 | 2025-04-01T06:36:43.795946 | {
"authors": [
"jcasale",
"moodmosaic",
"ploeh",
"zvirja"
],
"repo": "AutoFixture/AutoFixture",
"url": "https://github.com/AutoFixture/AutoFixture/issues/919",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
161727259 | Add Support for FakeItEasy 2.1+ (#628)
Went with 2.1 plus, since it's the latest as of now.
I don't know why it fails to a missing assembly.
Thank you for your interest in contributing to AutoFixture! This looks promising :+1: The few comments I have are superficial, and are easy to address :smile:
Thank you for your contribution! It's now live as AutoFixture.AutoFakeItEasy2 3.48.0.
| gharchive/pull-request | 2016-06-22T16:24:15 | 2025-04-01T06:36:43.798496 | {
"authors": [
"caloggins",
"ploeh"
],
"repo": "AutoFixture/AutoFixture",
"url": "https://github.com/AutoFixture/AutoFixture/pull/661",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1795564390 | Feat: general wrapper
Description
a general wrapper that accepts argument-to-field mappings as arguments
Type of change
feat: A new feature
I like the look of this!
Would it be sensible to split up the wrapper into two – one which does the input name mapping, and one which does the output wrapping? Then we could use the two independently – perhaps your inner function needs to return two values (say conditions and experiment_js_code) and returns those as a Result/Delta object, but you still want a mapping. Conversely, perhaps you have a function which uses entirely standard naming for the variables, but you want to wrap the outputs.
hey @younesStrittmatter, do you think you could refactor this so that there's a wrapper function which just does the input field-name mapping? We could include that as an option in the on_state function from #33, but it needs to be an independent wrapper function first so that we can test it really extensively on its own.
I think this is covered by the newest version of the state object – closing this now.
| gharchive/pull-request | 2023-07-09T21:08:33 | 2025-04-01T06:36:43.883408 | {
"authors": [
"hollandjg",
"younesStrittmatter"
],
"repo": "AutoResearch/autora-core",
"url": "https://github.com/AutoResearch/autora-core/pull/26",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
151094535 | Check collection of input words
There should be an option to check a collection of input words, for better automaton validation.
Integrated in the next version
| gharchive/issue | 2016-04-26T10:09:24 | 2025-04-01T06:36:43.884369 | {
"authors": [
"MRisto"
],
"repo": "AutoSimDevelopers/automata-simulation",
"url": "https://github.com/AutoSimDevelopers/automata-simulation/issues/23",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
184735624 | Tried to install in a jupyter docker container, failed
15:45 $ docker run -ti jupyter/jupyterhub /bin/bash
Unable to find image 'jupyter/jupyterhub:latest' locally
latest: Pulling from jupyter/jupyterhub
efd26ecc9548: Pull complete
a3ed95caeb02: Pull complete
298ffe4c3e52: Pull complete
758b472747c8: Pull complete
8b9809a68afc: Pull complete
93b253b5483d: Pull complete
ef8136abb53c: Pull complete
Digest: sha256:8f8cd2b62942b29e84bb99401ec9819d489b0cf0ebece99a42ba05946feeb72f
Status: Downloaded newer image for jupyter/jupyterhub:latest
root@766a9d01a6b4:/srv/jupyterhub# pip install moldesign
Collecting moldesign
Downloading moldesign-0.7.3.tar.gz (20.0MB)
100% |████████████████████████████████| 20.0MB 69kB/s
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-build-jbo8ia1m/moldesign/setup.py", line 70
print 'Thank you for installing the Molecular Design Toolkit!!!'
^
SyntaxError: Missing parentheses in call to 'print'
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-jbo8ia1m/moldesign/
You are using pip version 8.1.1, however version 8.1.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
It would be really great if there was a Dockerfile in the root directory that had the correct versions of everything to run and test, independent of the host setup.
Also tried to install and run on my machine. It installed ok with sudo, but could not run:
/usr/bin/python: No module named moldesign
✘-1 ~/autodesk/cloud-compute-cannon [stream_std_out_err_wip L|✚ 3⚑ 2]
19:26 $ which pip
/usr/local/bin/pip
✔ ~/autodesk/cloud-compute-cannon [stream_std_out_err_wip L|✚ 3⚑ 2]
19:27 $ which python
/usr/bin/python
Thanks @dionjwa .
It doesn't install in the docker image because it's defaulting to Python 3, and MDT is still python 2 only. However, it should have printed out an error message stating that explicitly instead of dying with a syntax error - that's a bug.
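The kind of early guard that would produce a clear message looks roughly like this (an illustrative sketch, not the project's actual setup.py code):

```python
# Sketch of an interpreter-version guard a setup.py could run before
# anything Python-2-only is parsed; names and wording are illustrative.
import sys

def check_python_version(version_info=sys.version_info):
    """Return an error message if the interpreter can't run this package."""
    if version_info[0] != 2:
        return ("moldesign currently requires Python 2; "
                "detected Python %d.%d" % tuple(version_info[:2]))
    return None

# setup.py would call this first and sys.exit(msg) on a non-None result.
msg = check_python_version((3, 9, 0))
print(msg)  # moldesign currently requires Python 2; detected Python 3.9
```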
Agreed, we need a root Dockerfile. Will add it.
Local installation doesn't work: :headdesk: Python + MacOS = sadness. Probably try switching to the Homebrew python install, or, per @hainm in #32, try pip install --user.
FYI @dionjwa - actually, the easiest way to deploy, as long as you're not doing development, is to pull the official docker image:
docker run -p 8888:8888 -it autodesk/moldesign:moldesign_notebook-0.7.3
Fixed with #108
| gharchive/issue | 2016-10-24T02:21:57 | 2025-04-01T06:36:43.888439 | {
"authors": [
"avirshup",
"dionjwa"
],
"repo": "Autodesk/molecular-design-toolkit",
"url": "https://github.com/Autodesk/molecular-design-toolkit/issues/107",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
212161818 | Front-end Widget Catalogue
If the user can add front-end widgets to apps, where are those widgets stored, and how can they be referred to?
Deliverables:
Database of front-end widgets. (These could simply be tagged URLs for iframe widgets).
API to add/remove/edit widgets from the catalogue
Catalogue must support versioning
Policy for users to add their widgets (everything whitelisted or manually curated?)
Front-end Searching and viewing widgets.
Testing out widgets?
This ticket needs to be broken down when specced out more thoroughly.
Can this just be a publicly available GitHub repo where we control final pull requests? That then specifies submission policies, potential license agreements, and versioning.
@dionjwa - I really really like the idea of contributing workfows/apps via pull request to a master repository. This is exactly how Conda Forge manages its contributions. In addition to having a clear, well-understood contribution policy, the whole github infrastructure would also take care of the problems of scaling, community management, etc.
For right now, the priority is definitely on workflow development, so let's keep most of this stuff on the backlog for now.
We will definitely need a widget catalog for workflow/app developers; but as the widgets themselves will just be MST components, this doesn't need to be anything more than a JSON document in the MST repo.
| gharchive/issue | 2017-03-06T15:45:59 | 2025-04-01T06:36:43.892842 | {
"authors": [
"avirshup",
"dionjwa"
],
"repo": "Autodesk/molecular-simulation-tools",
"url": "https://github.com/Autodesk/molecular-simulation-tools/issues/195",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
207748946 | Remove rendering when empty model data set on component
There was a bug where changing the modelData to an empty instance (i.e. { atoms: [], bonds: [] }) didn't remove the previously rendered model. This can quickly be verified in npm run example.
Yup I was able to reproduce the bug. I think this looks good. I'm going to add another test or two just to prove to myself it works and then I think it's good to go.
| gharchive/pull-request | 2017-02-15T09:26:48 | 2025-04-01T06:36:43.894153 | {
"authors": [
"danielholmes",
"justinmc"
],
"repo": "Autodesk/molecule-3d-for-react",
"url": "https://github.com/Autodesk/molecule-3d-for-react/pull/24",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2073772337 | Option to set visibility of answers to comment
What
I would like to be able to set the visibility of answers to „not listed“ in general and be able to set it for every comment before posting it on demand.
Why
Answering to many comments would clutter the timeline of folks just following the website.
The feature of sending answers to comments back to the fediverse is great but may be not usable in cases with a hugh ammount of interactions.
Sometimes if a comment from the fediverse gets big traction one might want to answer public in the timeline.
How
No response
I would like to be able to set the visibility of answers to „not listed“ in general and be able to set it for every comment before posting it on demand.
Ooh, this sounds like something I may want to add to my add-on plugin.
| gharchive/issue | 2024-01-10T07:35:27 | 2025-04-01T06:36:44.122104 | {
"authors": [
"janboddez",
"jaschaurbach"
],
"repo": "Automattic/wordpress-activitypub",
"url": "https://github.com/Automattic/wordpress-activitypub/issues/644",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1983110148 | Parsing binding expression with type cast of attached property fails
Describe the bug
I'm attempting to bind to a property of the adorned element in a template set to the adorner property. To do this with compiled bindings a type cast is required but that fails to parse.
To Reproduce
<TextBox>
  <TextBox.Styles>
    <Style Selector="TextBox">
      <Setter Property="(AdornerLayer.Adorner)">
        <Template>
          <TextBlock Margin="10 20 0 0" Foreground="Red" Text="{Binding $self.((TextBox)(AdornerLayer.AdornedElement)).Text}"/>
        </Template>
      </Setter>
    </Style>
  </TextBox.Styles>
</TextBox>
This fails to compile.
Avalonia error AVLN:0004: Internal compiler error while transforming node XamlX.Ast.XamlAstObjectNode:
Avalonia error AVLN:0004: Avalonia.Data.Core.ExpressionParseException: Expected ')'.
Avalonia error AVLN:0004: at Avalonia.Markup.Parsers.BindingExpressionGrammar.ParseTypeCast(CharacterReader& r, List`1 nodes) in /_/src/Markup/Avalonia.Markup/Markup/Parsers/BindingExpressionGrammar.cs:line 249
Avalonia error AVLN:0004: at Avalonia.Markup.Parsers.BindingExpressionGrammar.Parse(CharacterReader& r) in /_/src/Markup/Avalonia.Markup/Markup/Parsers/BindingExpressionGrammar.cs:line 50
Avalonia error AVLN:0004: at Avalonia.Markup.Xaml.XamlIl.CompilerExtensions.Transformers.AvaloniaXamlIlBindingPathParser.Transform(AstTransformationContext context, IXamlAstNode node) in /_/src/Markup/Avalonia.Markup.Xaml.Loader/CompilerExtensions/Transformers/AvaloniaXamlIlBindingPathParser.cs:line 28
Avalonia error AVLN:0004: at XamlX.Transform.AstTransformationContext.Visitor.Visit(IXamlAstNode node) in /_/src/Markup/Avalonia.Markup.Xaml.Loader/xamlil.github/src/XamlX/Transform/AstTransformationContext.cs:line 58 Line 20, position 60.
With a ReflectionBinding it fails at runtime.
Avalonia.Data.Core.ExpressionParseException
HResult=0x80131500
Message=Expected ')'.
Source=Avalonia.Markup
StackTrace:
at Avalonia.Markup.Parsers.BindingExpressionGrammar.ParseTypeCast(CharacterReader& r, List`1 nodes)
at Avalonia.Markup.Parsers.BindingExpressionGrammar.Parse(CharacterReader& r)
at Avalonia.Markup.Parsers.ExpressionParser.Parse(CharacterReader& r)
at Avalonia.Markup.Parsers.ExpressionObserverBuilder.Parse(String expression, Boolean enableValidation, Func`3 typeResolver, INameScope nameScope)
at Avalonia.Data.Binding.CreateExpressionObserver(AvaloniaObject target, AvaloniaProperty targetProperty, Object anchor, Boolean enableDataValidation)
at Avalonia.Data.BindingBase.Initiate(AvaloniaObject target, AvaloniaProperty targetProperty, Object anchor, Boolean enableDataValidation)
at Avalonia.AvaloniaObjectExtensions.Bind(AvaloniaObject target, AvaloniaProperty property, IBinding binding, Object anchor)
at AvaloniaApplication3.MainWindow.XamlClosure_2.Build(IServiceProvider ) in MainWindow.axaml:line 20
at Avalonia.Markup.Xaml.XamlIl.Runtime.XamlIlRuntimeHelpers.<>c__DisplayClass1_0`1.<DeferredTransformationFactoryV2>b__0(IServiceProvider sp)
at Avalonia.Markup.Xaml.Templates.TemplateContent.Load(Object templateContent)
at Avalonia.Markup.Xaml.Templates.Template.Build()
at Avalonia.Markup.Xaml.Templates.Template.Avalonia.Styling.ITemplate.Build()
at Avalonia.Styling.PropertySetterTemplateInstance.GetValue()
at Avalonia.PropertyStore.EffectiveValue`1.GetValue(IValueEntry entry)
at Avalonia.PropertyStore.EffectiveValue`1.SetAndRaise(ValueStore owner, IValueEntry value, BindingPriority priority)
at Avalonia.PropertyStore.ValueStore.ReevaluateEffectiveValues(IValueEntry changedValueEntry)
at Avalonia.PropertyStore.ValueStore.EndStyling()
at Avalonia.StyledElement.ApplyStyling()
at Avalonia.StyledElement.EndInit()
at AvaloniaApplication3.MainWindow.!XamlIlPopulate(IServiceProvider , MainWindow ) in MainWindow.axaml:line 14
at AvaloniaApplication3.MainWindow.!XamlIlPopulateTrampoline(MainWindow )
at AvaloniaApplication3.MainWindow.InitializeComponent(Boolean loadXaml, Boolean attachDevTools) in Avalonia.Generators\Avalonia.Generators.NameGenerator.AvaloniaNameSourceGenerator\AvaloniaApplication3.MainWindow.g.cs:line 23
at AvaloniaApplication3.MainWindow..ctor() in MainWindow.axaml.cs:line 14
at AvaloniaApplication3.App.OnFrameworkInitializationCompleted() in App.axaml.cs:line 18
at Avalonia.AppBuilder.SetupUnsafe()
at Avalonia.AppBuilder.Setup()
at Avalonia.AppBuilder.SetupWithLifetime(IApplicationLifetime lifetime)
at Avalonia.ClassicDesktopStyleApplicationLifetimeExtensions.StartWithClassicDesktopLifetime(AppBuilder builder, String[] args, ShutdownMode shutdownMode)
at AvaloniaApplication3.Program.Main(String[] args) in Program.cs:line 12
With ReflectionBinding you don't need the type cast so if you instead do {ReflectionBinding $self.(AdornerLayer.AdornedElement).Text} it works.
Expected behavior
No error and successful binding the same as when using the ReflectionBinding without type cast.
Environment:
OS: Windows
Avalonia-Version: 11.0.5
Additional context
It looks like BindingExpressionGrammar.ParseTypeCast expects ParseBeforeMember to parse the identifier, but for attached properties it doesn't; instead it returns State.AttachedProperty.
If I add parsing of an attached property after the call to ParseBeforeMember it looks like it works but I'm not well versed in the code base enough to know if that would be enough.
if (parseMemberBeforeAddCast)
{
    if (!ParseCloseBrace(ref r))
    {
        throw new ExpressionParseException(r.Position, "Expected ')'.");
    }

    result = ParseBeforeMember(ref r, nodes);

    // With this parsing a type cast of an attached property seems to work
    if (result == State.AttachedProperty)
        result = ParseAttachedProperty(ref r, nodes);

    if (r.Peek == '[')
    {
        result = ParseIndexer(ref r, nodes);
    }
}
[Fact]
public async Task Should_Get_Chained_Attached_Property_Value_With_TypeCast()
{
    var expected = new Class1();
    var data = new Class1();
    data.SetValue(Owner.SomethingProperty, new Class1() { Next = expected });

    var target = ExpressionObserverBuilder.Build(data, "((Class1)(Owner.Something)).Next", typeResolver: (ns, name) => name == "Class1" ? typeof(Class1) : _typeResolver(ns, name));
    var result = await target.Take(1);

    Assert.Equal(expected, result);
    Assert.Null(((IAvaloniaObjectDebug)data).GetPropertyChangedSubscribers());
}

private static class Owner
{
    public static readonly AttachedProperty<string> FooProperty =
        AvaloniaProperty.RegisterAttached<Class1, string>(
            "Foo",
            typeof(Owner),
            defaultValue: "foo");

    public static readonly AttachedProperty<AvaloniaObject> SomethingProperty =
        AvaloniaProperty.RegisterAttached<Class1, AvaloniaObject>(
            "Something",
            typeof(Owner));
}
@appel1 great findings and I like that you also directly added a unit test. If you want to, you may try to make a PR out of it.
This may help:
https://github.com/AvaloniaUI/Avalonia/wiki/Debugging-the-XAML-compiler
https://github.com/AvaloniaUI/Avalonia/blob/master/Documentation/build.md
| gharchive/issue | 2023-11-08T09:12:12 | 2025-04-01T06:36:44.282059 | {
"authors": [
"appel1",
"timunie"
],
"repo": "AvaloniaUI/Avalonia",
"url": "https://github.com/AvaloniaUI/Avalonia/issues/13539",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2022138286 | OpenGlControlBase enters OnOpenGlInit with OpenGL errors
Describe the bug
When implementing OnOpenGlInit in an OpenGlControlBase and checking gl.GetError() for any error, there is one, presumably from something in Avalonia's code.
To Reproduce
Steps to reproduce the behavior:
Clone, build and run https://github.com/Dragorn421/DragoStuff/tree/ffe12b1003c3e6cd92419d7fdaab97b3c90c5dc4
Check the debug prints:
GL1.OnOpenGlInit
GL1.CheckError 1280
GL1.OnOpenGlRender
GL1.CheckError OK
GL1.CheckError OK
GL1.OnOpenGlRender
GL1.CheckError OK
GL1.CheckError OK
Notice GL1.CheckError 1280. It is printed by https://github.com/Dragorn421/DragoStuff/blob/ffe12b1003c3e6cd92419d7fdaab97b3c90c5dc4/MyOpenGLControl.cs#L29, before the child control makes any OpenGL call, so it seems to be an error originating from something in Avalonia.
Expected behavior
gl.GetError() should always return GL_NO_ERROR on entering child/user methods like OnOpenGlInit
Environment
OS: Kubuntu 23.10 (Linux, X11)
Avalonia-Version: 11.0.5
1280 means GL_INVALID_ENUM.
Some operation used incorrect parameters, but this is related to the version of OpenGL.
Still in Avalonia 11.0.10
@Dragorn421 if you want to check a newer version, it's worth trying the 11.1 betas
Still in Avalonia 11.1.1
| gharchive/issue | 2023-12-02T17:37:14 | 2025-04-01T06:36:44.287996 | {
"authors": [
"Coloryr",
"Dragorn421",
"timunie"
],
"repo": "AvaloniaUI/Avalonia",
"url": "https://github.com/AvaloniaUI/Avalonia/issues/13807",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
996288008 | Win32: Doesn't receive next key after Alt or F10
Describe the bug
When the Alt or F10 key is pressed, the next keypress is not received.
To Reproduce
Run the following program and press Alt or F10, followed by another key. The key subsequent to Alt or F10 is not registered.
public class MainWindow : Window
{
    public MainWindow()
    {
        this.InitializeComponent();
        this.AttachDevTools();
    }

    protected override void OnKeyDown(KeyEventArgs e)
    {
        System.Diagnostics.Debug.WriteLine("KeyDown " + e.Key);
        base.OnKeyDown(e);
    }
}
Desktop (please complete the following information):
OS: Windows
Version: master
Appears to be caused by our (not) handling of WM_ENTERIDLE?
| gharchive/issue | 2021-09-14T17:56:30 | 2025-04-01T06:36:44.290694 | {
"authors": [
"grokys"
],
"repo": "AvaloniaUI/Avalonia",
"url": "https://github.com/AvaloniaUI/Avalonia/issues/6592",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1688795342 | Correctly remove ContentPresenter's content from its parent host
Correctly remove ContentPresenter's content from its parent host when the content is updated while being detached from the logical tree.
I've updated the existing ContentControlTests.Should_Set_Child_LogicalParent_After_Removing_And_Adding_Back_To_Logical_Tree test with asserts that failed before and now pass with this change. This seems acceptable to me, as it already tested the right thing (but only from the child ⇒ parent side, now from both sides); tell me if you'd prefer a whole new test instead.
Fixes #11149
You can test this PR using the following package version. 11.0.999-cibuild0034002-beta. (feed url: https://pkgs.dev.azure.com/AvaloniaUI/AvaloniaUI/_packaging/avalonia-all/nuget/v3/index.json) [PRBUILDID]
Wow, thanks for finding that - looks like this bug's been in there forever!
Yay, this fixes https://github.com/AvaloniaUI/Avalonia/issues/9940 which was very annoying, thanks!
| gharchive/pull-request | 2023-04-28T16:00:08 | 2025-04-01T06:36:44.294077 | {
"authors": [
"BAndysc",
"MrJul",
"avaloniaui-team",
"grokys"
],
"repo": "AvaloniaUI/Avalonia",
"url": "https://github.com/AvaloniaUI/Avalonia/pull/11178",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
533256418 | Licencing and code usage
Hey !
I'm starting to write an addon aiming to help hunters with tranqshot rotation.
As I'm really new to addon, I used YaHT as a reference and inspiration for the base and announces part.
The addon should grow on his own way past a first basic release.
As our code is really similar and I found no licence there, I have to ask you if you are fine with this.
Repo at https://github.com/Slivo-fr/TranqRotate
Thanks for your time
Added a license.
Thanks <3
| gharchive/issue | 2019-12-05T10:21:04 | 2025-04-01T06:36:44.310855 | {
"authors": [
"Aviana",
"Slivo-fr"
],
"repo": "Aviana/YaHT",
"url": "https://github.com/Aviana/YaHT/issues/41",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
294986809 | 为什么不添加一份答案呢?
我想给这些面试题添加一份答案,找答案的过程也是一种学习,并把它记录下来加深印象。你觉得如何?
https://github.com/Omooo/Android_QA
能做出来更好 前面的基础还可以做一下,后面有点高端的设计底层源码的问题就没那么容易整理了
的确是,然而时间也是能挤出来的,关键在于毅力吧。
| gharchive/issue | 2018-02-07T02:39:32 | 2025-04-01T06:36:44.318857 | {
"authors": [
"AweiLoveAndroid",
"Omooo"
],
"repo": "AweiLoveAndroid/CommonDevKnowledge",
"url": "https://github.com/AweiLoveAndroid/CommonDevKnowledge/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2383197453 | Communication with Service Worker causes memory leak
Issue and Steps to Reproduce
The issue is reproducible on the demo deployment https://icy-glacier-004ab4303.2.azurestaticapps.net/. When you log in and the app stays open for a few minutes, memory usage keeps increasing. This can be checked in the Chrome Dev Tools Memory panel.
Versions
7.22.8
Screenshots
Expected
When the app is open (no actions), memory usage should not increase over time.
Actual
I did a few memory snapshots and noticed that most of the new memory allocations are coming from the MessagePort class. After digging into the oidc-client code, I found this util used for Service Worker communication; it creates a MessageChannel that is never closed after the promise resolves.
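The general pattern for avoiding this kind of leak looks like the sketch below (function and variable names are illustrative, not necessarily the library's actual util): close both ports of the one-shot MessageChannel once the response has been handled, so each channel becomes collectable.

```javascript
// Sketch of a leak-free one-shot request/response helper over MessageChannel.
// `target` would be the Service Worker; here it is anything with postMessage.
function sendMessageAsync(target, message) {
  return new Promise((resolve) => {
    const channel = new MessageChannel();
    channel.port1.onmessage = (event) => {
      // Without these two close() calls, every call leaks a MessagePort pair.
      channel.port1.close();
      channel.port2.close();
      resolve(event.data);
    };
    target.postMessage(message, [channel.port2]);
  });
}
```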
Additional Details
Thank you very much @radk0s
hi @radk0s version 7.22.9 contains your fix!
Great, thanks @guillaume-chervet! Already checked and issue is gone.
| gharchive/issue | 2024-07-01T08:30:43 | 2025-04-01T06:36:44.327309 | {
"authors": [
"guillaume-chervet",
"radk0s"
],
"repo": "AxaFrance/oidc-client",
"url": "https://github.com/AxaFrance/oidc-client/issues/1395",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1611914570 | [BUG]: No Save/Download option with VSCode
Issue description
There is no option to save the rendered mapping with the VSCode extension. The Save and Save As commands do not seem to have any connection with the generated diagram.
Media & Screenshots
No response
Operating system
Version: 1.75.1
Commit: 441438abd1ac652551dbe4d408dfcec8a499b8bf
Date: 2023-02-08T21:34:59.000Z
Electron: 19.1.9
Chromium: 102.0.5005.194
Node.js: 16.14.2
V8: 10.2.154.23-electron.0
OS: Darwin x64 21.6.0
Sandboxed: No
Priority this issue should have
Low (slightly annoying)
Merging under #29
| gharchive/issue | 2023-03-06T17:38:48 | 2025-04-01T06:36:44.330332 | {
"authors": [
"AykutSarac",
"aaron-ballard-530"
],
"repo": "AykutSarac/jsoncrack-vscode",
"url": "https://github.com/AykutSarac/jsoncrack-vscode/issues/24",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
919716855 | Regular Expression -anshyyy
Description
Please include a summary of the change and which issue is fixed. List any dependencies that are required for this change.
Fixes #(issue_no)
Replace issue_no with the issue number which is fixed in this PR
Type of change
Please delete options that are not relevant.
Checklist:
[1 ] My code follows the style guidelines(Clean Code) of this project
[1 ] I have performed a self-review of my own code
[1 ] I have commented my code, particularly in hard-to-understand areas
[1] I have made corresponding changes to the documentation
[1] My changes generate no new warnings
@anshyyy issue number??
@anshyyy any update??
| gharchive/pull-request | 2021-06-13T05:01:05 | 2025-04-01T06:36:44.361993 | {
"authors": [
"Amit366",
"anshyyy"
],
"repo": "Ayush7614/Daily-Coding-DS-ALGO-Practice",
"url": "https://github.com/Ayush7614/Daily-Coding-DS-ALGO-Practice/pull/735",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2164665203 | Fix 31 issues
I have been studying this repository for a long time, and whenever I come across a typo, I fix it.
I discovered 31 typos in a total of 29 files. Then I merged them together.
The code quality is exceptionally high, so I sincerely hope my PR can help make this repository more standardized.
We don't accept typo fixes directly, but thank you for your contribution.
| gharchive/pull-request | 2024-03-02T07:04:15 | 2025-04-01T06:36:44.375217 | {
"authors": [
"charlielye",
"miles-six"
],
"repo": "AztecProtocol/aztec-connect",
"url": "https://github.com/AztecProtocol/aztec-connect/pull/64",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1855460861 | Temporary hack: remove public kernel checks that state updates are in correct order
Do #1616 (and maybe #1617) first
Followup: #1623
Will be closed by https://github.com/AztecProtocol/aztec-packages/pull/1685
| gharchive/issue | 2023-08-17T17:56:56 | 2025-04-01T06:36:44.376555 | {
"authors": [
"dbanks12"
],
"repo": "AztecProtocol/aztec-packages",
"url": "https://github.com/AztecProtocol/aztec-packages/issues/1622",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2223462329 | rollback FunctionAbi isTranspiled changes
Please read contributing guidelines and remove this line.
#5561 👈
master
This stack of pull requests is managed by Graphite. Learn more about stacking.
Join @fcarreiro and the rest of your teammates on Graphite
Benchmark results
Metrics with a significant change:
- note_trial_decrypting_time_in_ms (32): 112 (+218%)
Detailed results
All benchmarks are run on txs on the Benchmarking contract on the repository. Each tx consists of a batch call to create_note and increment_balance, which guarantees that each tx has a private call, a nested private call, a public call, and a nested public call, as well as an emitted private note, an unencrypted log, and public storage read and write.
This benchmark source data is available in JSON format on S3 here.
Values are compared against data from master at commit a581e80d and shown if the difference exceeds 1%.
L2 block published to L1
Each column represents the number of txs on an L2 block published to L1.
| Metric | 8 txs | 32 txs | 64 txs |
| --- | --- | --- | --- |
| l1_rollup_calldata_size_in_bytes | 676 | 676 | 676 |
| l1_rollup_calldata_gas | 6,424 | 6,424 | 6,412 |
| l1_rollup_execution_gas | 585,757 | 585,757 | 585,745 |
| l2_block_processing_time_in_ms | 1,337 (+1%) | 4,792 (-1%) | 9,106 (+1%) |
| note_successful_decrypting_time_in_ms | 248 (+5%) | 601 (-4%) | 1,036 (+3%) |
| note_trial_decrypting_time_in_ms | 53.0 (-50%) | :warning: 112 (+218%) | 32.7 (+27%) |
| l2_block_building_time_in_ms | 11,985 (+1%) | 43,796 (+1%) | 86,804 (+1%) |
| l2_block_rollup_simulation_time_in_ms | 7,182 (+1%) | 24,860 | 47,857 (-1%) |
| l2_block_public_tx_process_time_in_ms | 4,762 (+1%) | 18,807 (+1%) | 38,705 (+3%) |
L2 chain processing
Each column represents the number of blocks on the L2 chain where each block has 16 txs.
| Metric | 5 blocks | 10 blocks |
| --- | --- | --- |
| node_history_sync_time_in_ms | 14,048 (-2%) | 26,406 |
| note_history_successful_decrypting_time_in_ms | 1,272 | 2,470 (+4%) |
| note_history_trial_decrypting_time_in_ms | 93.0 (+29%) | 118 (+15%) |
| node_database_size_in_bytes | 18,657,360 | 35,082,320 (+1%) |
| pxe_database_size_in_bytes | 29,859 | 59,414 |
Circuits stats
Stats on running time and I/O sizes collected for every circuit run across all benchmarks.
| Circuit | circuit_simulation_time_in_ms | circuit_input_size_in_bytes | circuit_output_size_in_bytes |
| --- | --- | --- | --- |
| private-kernel-init | 181 | 44,377 | 26,164 |
| private-kernel-ordering | 162 | 50,830 | 39,325 |
| base-parity | 4,366 | 128 | 311 |
| root-parity | 1,171 (+1%) | 1,244 | 311 |
| base-rollup | 14,514 | 116,608 | 861 |
| root-rollup | 49.9 | 4,359 | 725 |
| private-kernel-inner | 221 (+1%) | 71,744 | 26,164 |
| public-kernel-app-logic | 122 (+2%) | 47,695 | 40,661 |
| public-kernel-tail | 165 (+1%) | 53,372 | 13,269 |
| merge-rollup | 10.1 | 2,568 | 861 |
| public-kernel-teardown | 119 (+1%) | 47,695 | 40,661 |
| public-kernel-setup | 118 (+1%) | 47,695 | 40,661 |
Tree insertion stats
The duration to insert a fixed batch of leaves into each tree type.
| Metric | 1 leaves | 16 leaves | 64 leaves | 128 leaves | 512 leaves | 1024 leaves | 2048 leaves | 4096 leaves | 32 leaves |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| batch_insert_into_append_only_tree_16_depth_ms | 10.1 (+1%) | 16.1 | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| batch_insert_into_append_only_tree_16_depth_hash_count | 16.8 | 31.6 | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| batch_insert_into_append_only_tree_16_depth_hash_ms | 0.587 | 0.496 | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| batch_insert_into_append_only_tree_32_depth_ms | N/A | N/A | 46.5 | 72.7 | 231 (-1%) | 448 | 873 | 1,759 (+1%) | N/A |
| batch_insert_into_append_only_tree_32_depth_hash_count | N/A | N/A | 96.0 | 159 | 543 | 1,055 | 2,079 | 4,127 | N/A |
| batch_insert_into_append_only_tree_32_depth_hash_ms | N/A | N/A | 0.476 | 0.448 | 0.420 (-2%) | 0.418 (+1%) | 0.413 (-1%) | 0.421 (+1%) | N/A |
| batch_insert_into_indexed_tree_20_depth_ms | N/A | N/A | 54.4 | 107 (-1%) | 339 (+1%) | 663 | 1,309 (-2%) | 2,614 | N/A |
| batch_insert_into_indexed_tree_20_depth_hash_count | N/A | N/A | 105 | 207 | 691 | 1,363 | 2,707 | 5,395 | N/A |
| batch_insert_into_indexed_tree_20_depth_hash_ms | N/A | N/A | 0.479 | 0.482 | 0.458 | 0.456 | 0.452 (-2%) | 0.454 | N/A |
| batch_insert_into_indexed_tree_40_depth_ms | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | 61.6 |
| batch_insert_into_indexed_tree_40_depth_hash_count | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | 109 |
| batch_insert_into_indexed_tree_40_depth_hash_ms | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | 0.535 |
Miscellaneous
Transaction sizes based on how many contract classes are registered in the tx.
| Metric | 0 registered classes | 1 registered classes |
| --- | --- | --- |
| tx_size_in_bytes | 40,548 | 501,142 |
Transaction size based on fee payment method
| Metric | native fee payment method | fpc_public fee payment method | fpc_private fee payment method |
| --- | --- | --- | --- |
| tx_with_fee_size_in_bytes | 905 | 1,161 | 1,377 |
Transaction processing duration by data writes.
| Metric | 0 new note hashes | 1 new note hashes | 2 new note hashes |
| --- | --- | --- | --- |
| tx_pxe_processing_time_ms | 1,751 | 1,098 | 5,635 (+2%) |
| Metric | 1 public data writes | 2 public data writes | 3 public data writes | 4 public data writes | 5 public data writes | 8 public data writes |
| --- | --- | --- | --- | --- | --- | --- |
| tx_sequencer_processing_time_ms | 587 (+2%) | 443 | 1,081 | 604 (+1%) | 1,768 | 597 (+1%) |
| gharchive/pull-request | 2024-04-03T17:08:51 | 2025-04-01T06:36:44.430552 | {
"authors": [
"AztecBot",
"fcarreiro"
],
"repo": "AztecProtocol/aztec-packages",
"url": "https://github.com/AztecProtocol/aztec-packages/pull/5561",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2724777773 | 対応アバター追加
Added the bone structure of T.Garden's Momoka-chan to BoneNames.xml
Added the bone structures of Komainu-doori's Nemesis-chan and Lii-chan (as well as Kinako-chan, their shared base body)
| gharchive/pull-request | 2024-12-07T19:17:01 | 2025-04-01T06:36:44.433299 | {
"authors": [
"YTJVDCM"
],
"repo": "Azukimochi/BoneRenamer",
"url": "https://github.com/Azukimochi/BoneRenamer/pull/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1596018812 | Example pipeline (i.e HELLOWORLD pipeline) to use local kubernetes (external silo) PVC data for training
Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
I can't find the syntax to use local k8s data in an external silo for FL training. I followed these steps to create a PVC: https://github.com/Azure-Samples/azure-ml-federated-learning/blob/main/docs/tutorials/read-local-data-in-k8s-silo. How do I modify the HELLOWORLD pipeline to use this local PVC data for training?
Describe the solution you'd like
A clear and concise description of what you want to happen.
Provide example pipeline config files that use local k8s silo data.
Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.
I searched for the syntax both in this repo and the Azure AML K8s repo. Only the method for creating and exposing a PVC is provided; I could not find the YAML syntax for pointing to a local mount path.
Additional context
Add any other context or screenshots about the feature request here.
Hi @amahab !
First of all, thanks for filing an issue. We will address it and provide detailed instructions in the repo on what needs to be done to consume local data.
However, for the time being, please let me provide an answer here to unblock you as fast as possible. The key difference between reading local data vs. consuming data from an Azure ML data asset is that for local data one cannot use a uri_file or uri_folder input parameter like in this config file, for instance. Instead, one needs to use a string input. To implement that, you will need to do 2 things:
have your "read local data" component accept a string input in its spec, and have the component code use this string as needed;
provide the value of the string parameter in the config.yml used to submit the job.
We did have a very basic example for that in the initial version of this PR, but at the last minute we decided to drop it. The good news is, the files are still available in GitHub.
The component spec can be found here; see that string input called local_data_path.
The component code can be found here; see how the input parameter value is being used .
The job config.yml is there, and the associated submission script is over there. They both use the parameter defined in the component spec.
The documentation, at that time, had a section on how to configure and run a test job, that was leveraging the files above.
Hopefully this is enough to unblock you, but if not feel free to ask follow-up questions :)
Hi,
Thanks for the response and for pointing me to a version of this repo that has this example. Upon inspecting the submit script, I notice that pre-processed data is written back to the silo's Azure cloud datastore. Is there an example where all data remains local on the PVC (r/w mount) and only trained model parameters are written to the silo's Azure datastore?
The benefit of an external k8s silo is avoiding costly data movement to the cloud, e.g. a scenario where multiple edge geo sites are generating data. The ability to train on the data locally, without moving it to Azure, and aggregate only model weights via FL is useful.
Appreciate the follow up :-)
No, we don't have that other example currently @amahab .
@amahab as @thomasp-ms mentioned, we don't have another example like that. Overall, you could apply the same kind of guidance for the preprocessing step, although there might be some complications.
One question: how would you expect the interaction between preprocessing and training to work when using local mount points in k8s?
For instance:
do you expect the preprocessing to write the data to a unique path on the local mount (e.g. using a run id as a subfolder), then pass this unique path down to the training step to locate the preprocessed data?
or do you expect the preprocessing to just write in some hardcoded location?
@jfomhover
Preprocessing can write data to a run-id subfolder on local storage and give that path to training.
The whole use case I'm evaluating is that data is generated at the data center/edge locations. These locations train on the data locally without moving it to the cloud; only FL model weights are sent to the Azure workspace for updating the weights. New data will come into the locations at a later time, and the model gets re-trained locally with the updated model sent from the cloud.
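A minimal sketch of that hand-off (hypothetical helper names and file layout; the point is only the per-run subfolder on the local mount and the string path passed between steps):

```python
import os
import uuid


def preprocess(local_root, run_id=None):
    """Write preprocessed data under a unique per-run subfolder; return its path."""
    run_id = run_id or uuid.uuid4().hex
    out_dir = os.path.join(local_root, "preprocessed", run_id)
    os.makedirs(out_dir, exist_ok=True)
    # Placeholder for the real preprocessing; everything stays on the local mount.
    with open(os.path.join(out_dir, "data.txt"), "w") as f:
        f.write("preprocessed records")
    return out_dir  # handed to the training step as a plain string input


def train(preprocessed_dir):
    """Read the locally preprocessed data; only model weights would leave the silo."""
    with open(os.path.join(preprocessed_dir, "data.txt")) as f:
        return f.read()
```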
| gharchive/issue | 2023-02-23T00:05:44 | 2025-04-01T06:36:44.492099 | {
"authors": [
"amahab",
"jfomhover",
"thomasp-ms"
],
"repo": "Azure-Samples/azure-ml-federated-learning",
"url": "https://github.com/Azure-Samples/azure-ml-federated-learning/issues/276",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2229276242 | Use create_all for index creation
This PR uses the already existing create_all call to create indexes, so no additional create call is needed.
Also added missing index to async example.
@nachoalonsoportillo I think you mentioned the missing index in async, this fixes that.
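For context, a minimal sketch of the SQLAlchemy behavior this PR relies on (hypothetical table and index names, not the repo's actual models): an Index attached to a table's MetaData is emitted by the same create_all call that creates the tables, so no separate index-creation call is needed.

```python
from sqlalchemy import (
    Column, Index, Integer, MetaData, Table, create_engine, inspect,
)

metadata = MetaData()
items = Table(
    "items",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("score", Integer),
)
# Registering the index on the table's metadata...
Index("ix_items_score", items.c.score)

engine = create_engine("sqlite://")
# ...means create_all emits both CREATE TABLE and CREATE INDEX in one call.
metadata.create_all(engine)

index_names = [ix["name"] for ix in inspect(engine).get_indexes("items")]
```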
| gharchive/pull-request | 2024-04-06T14:36:08 | 2025-04-01T06:36:44.493577 | {
"authors": [
"pamelafox"
],
"repo": "Azure-Samples/azure-postgres-pgvector-python",
"url": "https://github.com/Azure-Samples/azure-postgres-pgvector-python/pull/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1701583946 | ERROR: deployment failed: error deploying infrastructure: deploying to subscription:
Please provide us with the following information:
This issue is for a: (mark with an x)
- [x] bug report -> please search issues before submitting
- [ ] feature request
- [x] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
Minimal steps to reproduce
running azd up
Any log messages given by the failure
ERROR: deployment failed: error deploying infrastructure: deploying to subscription:
Deployment Error Details:
Conflict: No available instances to satisfy this request. App Service is attempting to increase capacity. Please retry your request later. If urgent, this can be mitigated by deploying this to a new resource group.
No available instances to satisfy this request. App Service is attempting to increase capacity. Please retry your request later. If urgent, this can be mitigated by deploying this to a new resource group.
Expected/desired behavior
the app service plan is expected to deploy with no error
OS and Version?
Windows 11
Versions
22H2
Mention any other details that might be useful
Nothing else to mention; the error details above provide all the info. Also, I tried creating a new resource group just as recommended, but I got the same error.
Thanks! We'll be in touch soon.
I am receiving the same error when trying to deploy to East US. Any update on this issue?
| gharchive/issue | 2023-05-09T08:16:15 | 2025-04-01T06:36:44.500968 | {
"authors": [
"FunkyDialUpDude",
"IhebGhazala"
],
"repo": "Azure-Samples/azure-search-openai-demo",
"url": "https://github.com/Azure-Samples/azure-search-openai-demo/issues/184",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |